1. Gao J, Fang N, Xu Y. Application of Artificial Intelligence in Retinopathy of Prematurity From 2010 to 2023: A Bibliometric Analysis. Health Sci Rep 2025;8:e70718. PMID: 40256143; PMCID: PMC12007426; DOI: 10.1002/hsr2.70718.
Abstract
Background and Aims Retinopathy of prematurity (ROP) remains a leading cause of childhood blindness worldwide. In recent years, artificial intelligence (AI) has emerged as a powerful tool for the screening and management of ROP. This study aimed to investigate the evolving and longitudinal publication patterns related to AI in ROP using bibliometric methodologies. Methods We conducted a descriptive analysis of AI in ROP documents retrieved from the Web of Science database up to September 10, 2023. Data analysis and visualization were performed using Bibliometrix and VOSviewer, covering publications, journals, authors, institutions, countries, collaboration networks, keywords, and trending topics. Results Our analysis of 188 publications on AI in ROP revealed an average of 7.62 authors per document and a notable increase in annual publications since 2017. The United States (98/188), Oregon Health & Science University (66/188), Investigative Ophthalmology & Visual Science (29/188) and author Michael F. Chiang (60/188) led contributions. A prominent 21-country network emerged as the largest in country-level coauthorship. Key technical terms included "artificial intelligence," "deep learning," "machine learning," and "telemedicine," with a recent shift from "feature selection" to "deep learning," "machine learning" and "fundus images" in trending topics. Conclusion Our bibliometric analysis highlights advancements in AI research on ROP, focusing on key publication characteristics, major contributors, and emerging trends. The findings indicate that AI in ROP is a rapidly growing field. Future studies should focus on addressing the clinical implementation and ethical concerns of AI in ROP.
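As a methodological aside, keyword co-occurrence counting is the core computation behind the co-word and trending-topic maps that tools such as Bibliometrix and VOSviewer produce. Below is a minimal Python sketch of that counting step; the records are toy stand-ins, not the study's actual Web of Science export.

```python
from collections import Counter
from itertools import combinations

# Toy records standing in for Web of Science exports; the real study
# analyzed 188 documents with Bibliometrix and VOSviewer.
records = [
    {"keywords": ["artificial intelligence", "deep learning", "retinopathy of prematurity"]},
    {"keywords": ["machine learning", "telemedicine", "retinopathy of prematurity"]},
    {"keywords": ["deep learning", "fundus images", "retinopathy of prematurity"]},
]

keyword_counts = Counter(k for r in records for k in r["keywords"])
cooccurrence = Counter()
for r in records:
    for a, b in combinations(sorted(set(r["keywords"])), 2):
        cooccurrence[(a, b)] += 1  # edge weight in the co-word network

print(keyword_counts.most_common(5))
print(cooccurrence.most_common(5))
```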
Affiliation(s)
- Jing Gao
- Department of Ophthalmology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Na Fang
- Department of Ophthalmology, Suzhou TCM Hospital Affiliated to Nanjing University of Chinese Medicine, Suzhou, China
- Yao Xu
- Department of Ophthalmology, The Fourth Affiliated Hospital of Soochow University, Suzhou, China
2. Pachade S, Porwal P, Kokare M, Deshmukh G, Sahasrabuddhe V, Luo Z, Han F, Sun Z, Qihan L, Kamata SI, Ho E, Wang E, Sivajohan A, Youn S, Lane K, Chun J, Wang X, Gu Y, Lu S, Oh YT, Park H, Lee CY, Yeh H, Cheng KW, Wang H, Ye J, He J, Gu L, Müller D, Soto-Rey I, Kramer F, Arai H, Ochi Y, Okada T, Giancardo L, Quellec G, Mériaudeau F. RFMiD: Retinal Image Analysis for multi-Disease Detection challenge. Med Image Anal 2025;99:103365. PMID: 39395210; DOI: 10.1016/j.media.2024.103365.
Abstract
In recent decades, many publicly available large fundus image datasets have been collected for diabetic retinopathy, glaucoma, age-related macular degeneration, and a few other frequent pathologies. These datasets have been used to develop computer-aided disease diagnosis systems by training deep learning models to detect these frequent pathologies. One challenge limiting the adoption of such systems by ophthalmologists is that they ignore sight-threatening rare pathologies, such as central retinal artery occlusion or anterior ischemic optic neuropathy, that ophthalmologists currently detect. To advance the state of the art in automatic ocular disease classification of frequent diseases along with rare pathologies, a grand challenge on "Retinal Image Analysis for multi-Disease Detection" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2021). This paper reports the challenge organization, dataset, top-performing participants' solutions, evaluation measures, and results based on a new "Retinal Fundus Multi-disease Image Dataset" (RFMiD). There were two principal sub-challenges: disease screening (i.e., presence versus absence of pathology, a binary classification problem) and disease/pathology classification (a 28-class multi-label classification problem). The challenge received a positive response from the scientific community, with 74 submissions by individuals and teams. The top-performing methodologies utilized a blend of data preprocessing, data augmentation, pre-trained models, and model ensembling. This multi-disease (frequent and rare pathologies) detection will enable the development of generalizable models for screening the retina, unlike previous efforts that focused on the detection of specific diseases.
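The 28-class sub-challenge is a multi-label problem, so each disease gets its own sigmoid output, and top entries averaged several such models. The PyTorch sketch below illustrates that setup under assumed choices (ResNet-18 backbone, 3-member ensemble); it is not any participant's actual solution.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_DISEASES = 28  # the RFMiD multi-label sub-challenge

def make_model():
    # Hypothetical backbone; participants used various pre-trained CNNs.
    m = models.resnet18(weights=None)
    m.fc = nn.Linear(m.fc.in_features, NUM_DISEASES)
    return m

ensemble = [make_model() for _ in range(3)]
criterion = nn.BCEWithLogitsLoss()  # one sigmoid per disease label

x = torch.randn(4, 3, 224, 224)                      # dummy fundus batch
y = torch.randint(0, 2, (4, NUM_DISEASES)).float()   # multi-hot labels

# Training step for one ensemble member (sketch)
loss = criterion(ensemble[0](x), y)
loss.backward()

# Inference: average sigmoid probabilities across the ensemble
with torch.no_grad():
    probs = torch.stack([torch.sigmoid(m(x)) for m in ensemble]).mean(0)
    screening_score = probs.max(dim=1).values  # any-pathology screening proxy
```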
Affiliation(s)
- Samiksha Pachade
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded 431606, India
- Prasanna Porwal
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded 431606, India
- Manesh Kokare
- Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded 431606, India
- Vivek Sahasrabuddhe
- Department of Ophthalmology, Shankarrao Chavan Government Medical College, Nanded 431606, India
- Zhengbo Luo
- Graduate School of Information Production and Systems, Waseda University, Japan
- Feng Han
- University of Shanghai for Science and Technology, Shanghai, China
- Zitang Sun
- Graduate School of Information Production and Systems, Waseda University, Japan
- Li Qihan
- Graduate School of Information Production and Systems, Waseda University, Japan
- Sei-Ichiro Kamata
- Graduate School of Information Production and Systems, Waseda University, Japan
- Edward Ho
- Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Edward Wang
- Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Asaanth Sivajohan
- Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Saerom Youn
- Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Kevin Lane
- Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Jin Chun
- Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Xinliang Wang
- Beihang University School of Computer Science, China
- Yunchao Gu
- Beihang University School of Computer Science, China
- Sixu Lu
- Beijing Normal University School of Artificial Intelligence, China
- Young-Tack Oh
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Hyunjin Park
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea; School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Chia-Yen Lee
- Department of Electrical Engineering, National United University, Miaoli 360001, Taiwan, ROC
- Hung Yeh
- Department of Electrical Engineering, National United University, Miaoli 360001, Taiwan, ROC; Institute of Biomedical Engineering, National Yang Ming Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, Taiwan, ROC
- Kai-Wen Cheng
- Department of Electrical Engineering, National United University, Miaoli 360001, Taiwan, ROC
- Haoyu Wang
- School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Jin Ye
- ShenZhen Key Lab of Computer Vision and Pattern Recognition, Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Junjun He
- School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China; ShenZhen Key Lab of Computer Vision and Pattern Recognition, Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lixu Gu
- School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Dominik Müller
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Germany; Medical Data Integration Center, University Hospital Augsburg, Germany
- Iñaki Soto-Rey
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Germany; Medical Data Integration Center, University Hospital Augsburg, Germany
- Frank Kramer
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Germany
- Yuma Ochi
- National Institute of Technology, Kisarazu College, Japan
- Takami Okada
- Institute of Industrial Ecological Sciences, University of Occupational and Environmental Health, Japan
- Luca Giancardo
- Center for Precision Health, School of Biomedical Informatics, University of Texas Health Science Center at Houston (UTHealth), Houston, TX 77030, USA
3. Husain A, Knake L, Sullivan B, Barry J, Beam K, Holmes E, Hooven T, McAdams R, Moreira A, Shalish W, Vesoulis Z. AI models in clinical neonatology: a review of modeling approaches and a consensus proposal for standardized reporting of model performance. Pediatr Res 2024. PMID: 39681669; DOI: 10.1038/s41390-024-03774-4.
Abstract
Artificial intelligence (AI) is a rapidly advancing area with growing clinical applications in healthcare. The neonatal intensive care unit (NICU) produces large amounts of multidimensional data, offering AI and machine learning (ML) new avenues to improve early diagnosis, enhance monitoring, and provide highly targeted treatment approaches. In this article, we review recent clinical applications of AI to important neonatal problems, including sepsis, retinopathy of prematurity, bronchopulmonary dysplasia, and others. For each clinical area, we highlight a variety of ML models published in the literature and examine the future role they may play at the bedside. While the development of these models is rapidly expanding, a fundamental understanding of model selection, development, and performance evaluation is crucial for researchers and healthcare providers alike. As AI plays an increasing role in daily practice, understanding the implications of AI design and performance will enable more effective implementation. We provide a comprehensive explanation of the AI development process and recommendations for a standardized performance metric framework. Additionally, we address critical challenges, including model generalizability, ethical considerations, and the need for rigorous performance monitoring to avoid model drift. Finally, we outline future directions, emphasizing the importance of collaborative efforts and equitable access to AI innovations.
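A standardized performance report of the kind the authors argue for can be assembled from a handful of scikit-learn calls. The metric bundle below is an illustrative guess at such a framework, not the paper's actual consensus list.

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             confusion_matrix, brier_score_loss)

def performance_report(y_true, y_prob, threshold=0.5):
    """Report a common bundle of metrics; the paper's consensus
    framework may differ in exact composition."""
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "AUROC": roc_auc_score(y_true, y_prob),
        "AUPRC": average_precision_score(y_true, y_prob),  # robust to imbalance
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "Brier": brier_score_loss(y_true, y_prob),  # calibration-style summary
    }

y_true = np.array([0, 0, 1, 1, 1, 0])
y_prob = np.array([0.1, 0.4, 0.8, 0.6, 0.9, 0.3])
print(performance_report(y_true, y_prob))
```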
Affiliation(s)
- Ameena Husain
- Division of Neonatology, Department of Pediatrics, University of Utah School of Medicine, Salt Lake City, UT, USA
- Lindsey Knake
- Division of Neonatology, Department of Pediatrics, University of Iowa, Iowa City, IA, USA
- Brynne Sullivan
- Division of Neonatology, Department of Pediatrics, University of Virginia School of Medicine, Charlottesville, VA, USA
- James Barry
- Division of Neonatology, Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
- Kristyn Beam
- Department of Neonatology, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Emma Holmes
- Division of Newborn Medicine, Department of Pediatrics, Mount Sinai Hospital, New York, NY, USA
- Thomas Hooven
- Division of Newborn Medicine, Department of Pediatrics, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Ryan McAdams
- Department of Pediatrics, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Alvaro Moreira
- Division of Neonatology, Department of Pediatrics, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Wissam Shalish
- Division of Neonatology, Department of Pediatrics, Research Institute of the McGill University Health Center, Montreal Children's Hospital, Montreal, Canada
- Zachary Vesoulis
- Division of Newborn Medicine, Department of Pediatrics, Washington University in St. Louis, St. Louis, MO, USA
4. Yang W, Zhou H, Zhang Y, Sun L, Huang L, Li S, Luo X, Jin Y, Sun W, Yan W, Li J, Deng J, Xie Z, He Y, Ding X. An Interpretable System for Screening the Severity Level of Retinopathy in Premature Infants Using Deep Learning. Bioengineering (Basel) 2024;11:792. PMID: 39199750; PMCID: PMC11351924; DOI: 10.3390/bioengineering11080792.
Abstract
Accurate evaluation of retinopathy of prematurity (ROP) severity is vital for screening and proper treatment. Current deep-learning-based automated AI systems for assessing ROP severity do not follow clinical guidelines and are opaque. The aim of this study was to develop an interpretable AI system that mimics the clinical screening process to determine the ROP severity level. A total of 6100 RetCam III wide-field digital retinal images were collected from Guangdong Women and Children Hospital at Panyu (PY) and Zhongshan Ophthalmic Center (ZOC). A total of 3330 images of 520 pediatric patients from PY were annotated to train an object detection model to detect lesion type and location, and 2770 images of 81 pediatric patients from ZOC were annotated for stage, zone, and the presence of plus disease. Because clinical guidelines derive ROP severity by integrating stage, zone, and the presence of plus disease, an interpretable AI system was developed that infers the stage from the lesion type, the zone from the lesion location, and the presence of plus disease from a plus disease classification model. The ROP severity was calculated accordingly and compared with the assessment of a human expert. Our method achieved an area under the curve (AUC) of 0.95 (95% confidence interval [CI] 0.90-0.98) in assessing the severity level of ROP. Compared with clinical doctors, our method achieved the highest F1 score (0.76) in assessing the severity level of ROP. In conclusion, we developed an interpretable AI system for assessing the severity level of ROP that shows significant potential for use in clinical practice for ROP severity level screening.
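The guideline-based integration step can be expressed as a small rule table over (stage, zone, plus). The sketch below uses the familiar ETROP Type 1/Type 2 criteria as an example mapping; the paper's exact rules may differ.

```python
def rop_severity(stage: int, zone: int, plus: bool) -> str:
    """Combine stage, zone, and plus disease into a severity label.

    Illustrative rules loosely following the ETROP Type 1/Type 2
    criteria; the paper's exact guideline-based mapping may differ.
    """
    if stage == 0 and not plus:
        return "no ROP"
    type1 = (zone == 1 and plus) or (zone == 1 and stage == 3) \
        or (zone == 2 and stage >= 2 and plus)
    if type1:
        return "treatment-requiring (Type 1)"
    type2 = (zone == 1 and stage in (1, 2) and not plus) \
        or (zone == 2 and stage == 3 and not plus)
    if type2:
        return "Type 2 (close observation)"
    return "mild ROP"

# Stage from the detected lesion type, zone from its location, plus
# disease from a separate classifier -- as in the described system.
print(rop_severity(stage=3, zone=2, plus=True))  # treatment-requiring
```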
Affiliation(s)
- Wenhan Yang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Hao Zhou
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Yun Zhang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Limei Sun
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Li Huang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Songshan Li
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Xiaoling Luo
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Yili Jin
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Wei Sun
- Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China
- Wenjia Yan
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Jing Li
- Department of Ophthalmology, Guangdong Women and Children Hospital, Guangzhou 511400, China
- Jianxiang Deng
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Zhi Xie
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Yao He
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Xiaoyan Ding
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
5. Ebrahimi B, Le D, Abtahi M, Dadzie AK, Rossi A, Rahimi M, Son T, Ostmo S, Campbell JP, Paul Chan RV, Yao X. Assessing spectral effectiveness in color fundus photography for deep learning classification of retinopathy of prematurity. J Biomed Opt 2024;29:076001. PMID: 38912212; PMCID: PMC11188587; DOI: 10.1117/1.jbo.29.7.076001.
Abstract
Significance Retinopathy of prematurity (ROP) poses a significant global threat to childhood vision, necessitating effective screening strategies. This study addresses the impact of color channels in fundus imaging on ROP diagnosis, emphasizing the efficacy and safety of utilizing longer wavelengths, such as red or green, for enhanced depth information and improved diagnostic capability. Aim This study aims to assess the spectral effectiveness in color fundus photography for the deep learning classification of ROP. Approach A convolutional neural network end-to-end classifier was utilized for deep learning classification of normal, stage 1, stage 2, and stage 3 ROP fundus images. The classification performances with individual-color-channel inputs, i.e., red, green, and blue, and multi-color-channel fusion architectures, including early-fusion, intermediate-fusion, and late-fusion, were quantitatively compared. Results For individual-color-channel inputs, similar performance was observed for the green channel (88.00% accuracy, 76.00% sensitivity, and 92.00% specificity) and the red channel (87.25% accuracy, 74.50% sensitivity, and 91.50% specificity), both substantially outperforming the blue channel (78.25% accuracy, 56.50% sensitivity, and 85.50% specificity). For multi-color-channel fusion options, the early-fusion and intermediate-fusion architectures showed almost the same performance as the green/red channel inputs, and both outperformed the late-fusion architecture. Conclusions This study reveals that the classification of ROP stages can be effectively achieved using either the green or red image alone. This finding enables the exclusion of blue images, acknowledged for their increased susceptibility to light toxicity.
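Selecting a single color channel or stacking channels at the input layer (early fusion) is a one-line tensor operation in front of an ordinary CNN. The PyTorch sketch below shows both input options with a toy classifier; the study's actual network and preprocessing are not reproduced.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal end-to-end classifier; the paper's CNN is not specified here."""
    def __init__(self, in_channels: int, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)  # normal, stage 1-3

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

rgb = torch.rand(2, 3, 224, 224)      # dummy color fundus batch
green_only = rgb[:, 1:2]              # single-channel input (index 1 = green)
early_fusion = rgb[:, 0:2]            # red+green stacked at the input layer

print(SmallCNN(1)(green_only).shape)  # torch.Size([2, 4])
print(SmallCNN(2)(early_fusion).shape)
```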
Affiliation(s)
- Behrouz Ebrahimi
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- David Le
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Mansour Abtahi
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Albert K. Dadzie
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Alfa Rossi
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Mojtaba Rahimi
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Taeyoon Son
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Susan Ostmo
- Oregon Health and Science University, Casey Eye Institute, Department of Ophthalmology, Portland, Oregon, United States
- J. Peter Campbell
- Oregon Health and Science University, Casey Eye Institute, Department of Ophthalmology, Portland, Oregon, United States
- R. V. Paul Chan
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- University of Illinois Chicago, Department of Ophthalmology and Visual Sciences, Chicago, Illinois, United States
- Xincheng Yao
- University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- University of Illinois Chicago, Department of Ophthalmology and Visual Sciences, Chicago, Illinois, United States
6. Chu Y, Hu S, Li Z, Yang X, Liu H, Yi X, Qi X. Image Analysis-Based Machine Learning for the Diagnosis of Retinopathy of Prematurity: A Meta-analysis and Systematic Review. Ophthalmol Retina 2024;8:678-687. PMID: 38237772; DOI: 10.1016/j.oret.2024.01.013.
Abstract
TOPIC To evaluate the performance of machine learning (ML) in the diagnosis of retinopathy of prematurity (ROP) and to assess whether it can be an effective automated diagnostic tool for clinical applications. CLINICAL RELEVANCE Early detection of ROP is crucial for preventing tractional retinal detachment and blindness in preterm infants, which has significant clinical relevance. METHODS Web of Science, PubMed, Embase, IEEE Xplore, and Cochrane Library were searched for published studies on image-based ML for diagnosis of ROP or classification of clinical subtypes from inception to October 1, 2022. The quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies was used to determine the risk of bias (RoB) of the included original studies. A bivariate mixed effects model was used for quantitative analysis of the data, and Deeks' test was used to assess publication bias. Quality of evidence was assessed using Grading of Recommendations Assessment, Development and Evaluation. RESULTS Twenty-two studies were included in the systematic review; 4 studies had high or unclear RoB. In the index test domain, only 2 studies had high or unclear RoB because they did not establish predefined thresholds. In the area of reference standards, 3 studies had high or unclear RoB. Regarding applicability, only 1 study was considered to have high or unclear applicability in terms of patient selection. The sensitivity and specificity of image-based ML for the diagnosis of ROP were 93% (95% confidence interval [CI]: 0.90-0.94) and 95% (95% CI: 0.94-0.97), respectively. The area under the receiver operating characteristic curve (AUC) was 0.98 (95% CI: 0.97-0.99). For the classification of clinical subtypes of ROP, the sensitivity and specificity were 93% (95% CI: 0.89-0.96) and 93% (95% CI: 0.89-0.95), respectively, and the AUC was 0.97 (95% CI: 0.96-0.98). The classification results were highly similar to those of clinical experts (Spearman's R = 0.879). CONCLUSIONS Machine learning algorithms are no less accurate than human experts and hold considerable potential as automated diagnostic tools for ROP. However, given the quality and high heterogeneity of the available evidence, these algorithms should be considered as supplementary tools to assist clinicians in diagnosing ROP. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
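For intuition, pooling sensitivities or specificities across studies is often done on the logit scale with inverse-variance weights. The sketch below implements that simplified fixed-effect version with made-up counts; the paper itself fits a bivariate mixed-effects model, which this does not replicate.

```python
import numpy as np

def pooled_logit(successes, totals):
    """Inverse-variance pooling on the logit scale -- a simplified stand-in
    for the bivariate mixed-effects model actually used in the meta-analysis."""
    p = (successes + 0.5) / (totals + 1.0)   # continuity correction
    logits = np.log(p / (1 - p))
    var = 1 / (totals * p * (1 - p))         # delta-method variance of a logit
    w = 1 / var
    pooled = (w * logits).sum() / w.sum()
    return 1 / (1 + np.exp(-pooled))

# Hypothetical per-study counts: true positives / diseased, true negatives / healthy
tp, diseased = np.array([90, 45, 120]), np.array([100, 50, 130])
tn, healthy = np.array([180, 95, 240]), np.array([190, 100, 250])

print("pooled sensitivity:", round(pooled_logit(tp, diseased), 3))
print("pooled specificity:", round(pooled_logit(tn, healthy), 3))
```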
Affiliation(s)
- Yihang Chu
- Central South University of Forestry and Technology, Changsha, Hunan, China; State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Diseases in Central Asia, Clinical Medical Research Institute, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
- Shipeng Hu
- Central South University of Forestry and Technology, Changsha, Hunan, China
- Zilan Li
- Department of Biochemistry, McGill University, Montreal, Quebec, Canada
- Xiao Yang
- State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Diseases in Central Asia, Clinical Medical Research Institute, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
- Hui Liu
- Central South University of Forestry and Technology, Changsha, Hunan, China
- Xianglong Yi
- Department of Ophthalmology, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
- Xinwei Qi
- State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Diseases in Central Asia, Clinical Medical Research Institute, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
7. Sorrentino FS, Gardini L, Fontana L, Musa M, Gabai A, Maniaci A, Lavalle S, D’Esposito F, Russo A, Longo A, Surico PL, Gagliano C, Zeppieri M. Novel Approaches for Early Detection of Retinal Diseases Using Artificial Intelligence. J Pers Med 2024;14:690. PMID: 39063944; PMCID: PMC11278069; DOI: 10.3390/jpm14070690.
Abstract
BACKGROUND An increasing number of people worldwide are affected by retinal diseases associated with conditions such as diabetes, vascular occlusions, maculopathy, alterations of systemic circulation, and metabolic syndrome. AIM This review discusses novel technologies in and potential approaches to the detection and diagnosis of retinal diseases with the support of cutting-edge machines and artificial intelligence (AI). METHODS The demand for retinal diagnostic imaging exams has increased, but the number of eye physicians and technicians is too small to meet it. Algorithms based on AI have therefore been used, representing valid support for early detection and helping doctors make diagnoses and differential diagnoses. AI helps patients living far from hub centers obtain tests and a quick initial diagnosis, sparing them travel and long waits for a medical reply. RESULTS Highly automated systems for screening, early diagnosis, grading, and tailored therapy will facilitate the care of people, even in remote lands or countries. CONCLUSION A potential massive and extensive use of AI might optimize the automated detection of tiny retinal alterations, allowing eye doctors to perform their best clinical assistance and to set the best options for the treatment of retinal diseases.
Affiliation(s)
- Lorenzo Gardini
- Unit of Ophthalmology, Department of Surgical Sciences, Ospedale Maggiore, 40100 Bologna, Italy (F.S.S.)
- Luigi Fontana
- Ophthalmology Unit, Department of Surgical Sciences, Alma Mater Studiorum University of Bologna, IRCCS Azienda Ospedaliero-Universitaria Bologna, 40100 Bologna, Italy
- Mutali Musa
- Department of Optometry, University of Benin, Benin City 300238, Edo State, Nigeria
- Andrea Gabai
- Department of Ophthalmology, Humanitas-San Pio X, 20159 Milan, Italy
- Antonino Maniaci
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Salvatore Lavalle
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Fabiana D’Esposito
- Imperial College Ophthalmic Research Group (ICORG) Unit, Imperial College, 153-173 Marylebone Rd, London NW1 5QH, UK
- Department of Neurosciences, Reproductive Sciences and Dentistry, University of Naples Federico II, Via Pansini 5, 80131 Napoli, Italy
- Andrea Russo
- Department of Ophthalmology, University of Catania, 95123 Catania, Italy
- Antonio Longo
- Department of Ophthalmology, University of Catania, 95123 Catania, Italy
- Pier Luigi Surico
- Schepens Eye Research Institute of Mass Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Caterina Gagliano
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Eye Clinic, Catania University, San Marco Hospital, Viale Carlo Azeglio Ciampi, 95121 Catania, Italy
- Marco Zeppieri
- Department of Ophthalmology, University Hospital of Udine, 33100 Udine, Italy
8. Wang Y, Zhen L, Tan TE, Fu H, Feng Y, Wang Z, Xu X, Goh RSM, Ng Y, Calhoun C, Tan GSW, Sun JK, Liu Y, Ting DSW. Geometric Correspondence-Based Multimodal Learning for Ophthalmic Image Analysis. IEEE Trans Med Imaging 2024;43:1945-1957. PMID: 38206778; DOI: 10.1109/tmi.2024.3352602.
Abstract
Color fundus photography (CFP) and optical coherence tomography (OCT) images are two of the most widely used modalities in the clinical diagnosis and management of retinal diseases. Despite the widespread use of multimodal imaging in clinical practice, few methods for automated diagnosis of eye diseases effectively utilize correlated and complementary information from multiple modalities. This paper explores how to leverage the information from CFP and OCT images to improve the automated diagnosis of retinal diseases. We propose a novel multimodal learning method, named geometric correspondence-based multimodal learning network (GeCoM-Net), to achieve the fusion of CFP and OCT images. Specifically, inspired by clinical observations, we consider the geometric correspondence between the OCT slice and the CFP region to learn the correlated features of the two modalities for robust fusion. Furthermore, we design a new feature selection strategy to extract discriminative OCT representations by automatically selecting the important feature maps from OCT slices. Unlike existing multimodal learning methods, GeCoM-Net is the first method that explicitly formulates the geometric relationships between the OCT slice and the corresponding region of the CFP image for CFP and OCT fusion. Experiments have been conducted on a large-scale private dataset and a publicly available dataset to evaluate the effectiveness of GeCoM-Net for diagnosing diabetic macular edema (DME), impaired visual acuity (VA) and glaucoma. The empirical results show that our method outperforms the current state-of-the-art multimodal learning methods by improving the AUROC score by 0.4%, 1.9% and 2.9% for DME, VA and glaucoma detection, respectively.
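The core idea, cropping the CFP region that a given OCT B-scan traverses and fusing the two feature vectors, can be sketched compactly. The toy module below illustrates only that correspondence-and-concatenate step; GeCoM-Net's learned OCT feature selection and full architecture are omitted.

```python
import torch
import torch.nn as nn

class GeoFusion(nn.Module):
    """Toy fusion of a CFP crop with its corresponding OCT slice.

    GeCoM-Net itself is more elaborate (learned feature selection over
    OCT slices); this only illustrates the geometric-correspondence idea.
    """
    def __init__(self):
        super().__init__()
        self.cfp_enc = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1))
        self.oct_enc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(16, 2)

    def forward(self, cfp, oct_slice, box):
        x0, y0, x1, y1 = box              # CFP region the OCT slice traverses
        crop = cfp[:, :, y0:y1, x0:x1]    # geometric correspondence
        f = torch.cat([self.cfp_enc(crop).flatten(1),
                       self.oct_enc(oct_slice).flatten(1)], dim=1)
        return self.head(f)

model = GeoFusion()
cfp = torch.rand(1, 3, 512, 512)
oct_slice = torch.rand(1, 1, 256, 512)
print(model(cfp, oct_slice, box=(100, 240, 400, 272)).shape)  # torch.Size([1, 2])
```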
9. Hoyek S, Cruz NFSD, Patel NA, Al-Khersan H, Fan KC, Berrocal AM. Identification of novel biomarkers for retinopathy of prematurity in preterm infants by use of innovative technologies and artificial intelligence. Prog Retin Eye Res 2023;97:101208. PMID: 37611892; DOI: 10.1016/j.preteyeres.2023.101208.
Abstract
Retinopathy of prematurity (ROP) is a leading cause of preventable vision loss in preterm infants. While appropriate screening is crucial for early identification and treatment of ROP, current screening guidelines remain limited by inter-examiner variability in screening modalities, absence of local protocol for ROP screening in some settings, a paucity of resources and an increased survival of younger and smaller infants. This review summarizes the advancements and challenges of current innovative technologies, artificial intelligence (AI), and predictive biomarkers for the diagnosis and management of ROP. We provide a contemporary overview of AI-based models for detection of ROP, its severity, progression, and response to treatment. To address the transition from experimental settings to real-world clinical practice, challenges to the clinical implementation of AI for ROP are reviewed and potential solutions are proposed. The use of optical coherence tomography (OCT) and OCT angiography (OCTA) technology is also explored, providing evaluation of subclinical ROP characteristics that are often imperceptible on fundus examination. Furthermore, we explore several potential biomarkers to reduce the need for invasive procedures, to enhance diagnostic accuracy and treatment efficacy. Finally, we emphasize the need of a symbiotic integration of biologic and imaging biomarkers and AI in ROP screening, where the robustness of biomarkers in early disease detection is complemented by the predictive precision of AI algorithms.
Affiliation(s)
- Sandra Hoyek
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Natasha F S da Cruz
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Nimesh A Patel
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Hasenin Al-Khersan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Kenneth C Fan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Audina M Berrocal
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
10. Rao DP, Savoy FM, Tan JZE, Fung BPE, Bopitiya CM, Sivaraman A, Vinekar A. Development and validation of an artificial intelligence based screening tool for detection of retinopathy of prematurity in a South Indian population. Front Pediatr 2023;11:1197237. PMID: 37794964; PMCID: PMC10545957; DOI: 10.3389/fped.2023.1197237.
Abstract
Purpose The primary objective of this study was to develop and validate an AI algorithm as a screening tool for the detection of retinopathy of prematurity (ROP). Participants Images were collected from infants enrolled in the KIDROP tele-ROP screening program. Methods We developed a deep learning (DL) algorithm with 227,326 wide-field images from multiple camera systems obtained from the KIDROP tele-ROP screening program in India over an 11-year period. 37,477 temporal retina images were utilized with the dataset split into train (n = 25,982, 69.33%), validation (n = 4,006, 10.69%), and an independent test set (n = 7,489, 19.98%). The algorithm consists of a binary classifier that distinguishes between the presence of ROP (Stages 1-3) and the absence of ROP. The image labels were retrieved from the daily registers of the tele-ROP program. They consist of per-eye diagnoses provided by trained ROP graders based on all images captured during the screening session. Infants requiring treatment and a proportion of those not requiring urgent referral had an additional confirmatory diagnosis from an ROP specialist. Results Of the 7,489 temporal images analyzed in the test set, 2,249 (30.0%) images showed the presence of ROP. The sensitivity and specificity to detect ROP was 91.46% (95% CI: 90.23%-92.59%) and 91.22% (95% CI: 90.42%-91.97%), respectively, while the positive predictive value (PPV) was 81.72% (95% CI: 80.37%-83.00%), negative predictive value (NPV) was 96.14% (95% CI: 95.60%-96.61%) and the AUROC was 0.970. Conclusion The novel ROP screening algorithm demonstrated high sensitivity and specificity in detecting the presence of ROP. A prospective clinical validation in a real-world tele-ROP platform is under consideration. It has the potential to lower the number of screening sessions required to be conducted by a specialist for a high-risk preterm infant thus significantly improving workflow efficiency.
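Sensitivity, specificity, PPV, and NPV with confidence intervals follow directly from the test-set confusion matrix. The sketch below uses the Wilson score interval and a hypothetical TP/FN split consistent with the reported 91.46% sensitivity; the paper does not state which CI method it used.

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a proportion such as sensitivity."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical split over the 2,249 ROP images in the test set that
# reproduces the reported ~91.46% sensitivity.
tp, fn = 2057, 192
sens = tp / (tp + fn)
lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {sens:.2%}, 95% CI ({lo:.2%}, {hi:.2%})")
```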
Affiliation(s)
- Divya Parthasarathy Rao
- Artificial Intelligence Research and Development, Remidio Innovative Solutions Inc., Glen Allen, VA, United States
- Florian M. Savoy
- Artificial Intelligence Research and Development, Medios Technologies Pvt. Ltd., Singapore, Singapore
- Joshua Zhi En Tan
- Artificial Intelligence Research and Development, Medios Technologies Pvt. Ltd., Singapore, Singapore
- Brian Pei-En Fung
- Artificial Intelligence Research and Development, Medios Technologies Pvt. Ltd., Singapore, Singapore
- Chiran Mandula Bopitiya
- Artificial Intelligence Research and Development, Medios Technologies Pvt. Ltd., Singapore, Singapore
- Anand Sivaraman
- Artificial Intelligence Research and Development, Remidio Innovative Solutions Pvt. Ltd., Bangalore, India
- Anand Vinekar
- Department of Pediatric Retina, Narayana Nethralaya Eye Institute, Bangalore, India
11. Nakayama LF, Mitchell WG, Ribeiro LZ, Dychiao RG, Phanphruk W, Celi LA, Kalua K, Santiago APD, Regatieri CVS, Moraes NSB. Fairness and generalisability in deep learning of retinopathy of prematurity screening algorithms: a literature review. BMJ Open Ophthalmol 2023;8:e001216. PMID: 37558406; PMCID: PMC10414056; DOI: 10.1136/bmjophth-2022-001216.
Abstract
BACKGROUND Retinopathy of prematurity (ROP) is a vasoproliferative disease responsible for more than 30 000 blind children worldwide. Its diagnosis and treatment are challenging due to the lack of specialists, divergent diagnostic concordance and variation in classification standards. While artificial intelligence (AI) can address the shortage of professionals and provide more cost-effective management, its development needs fairness, generalisability and bias controls prior to deployment to avoid producing harmful unpredictable results. This review aims to compare the characteristics, fairness and generalisability efforts of AI and ROP studies. METHODS Our review yielded 220 articles, of which 18 were included after full-text assessment. The articles were classified into ROP severity grading, plus disease detection, detection of treatment-requiring ROP, ROP prediction and detection of retinal zones. RESULTS All of the articles' authors and included patients are from middle-income and high-income countries, with no representation from low-income countries, South America, Australia or Africa. Code is available in two articles and in one on request, while data are not available in any article. 88.9% of the studies use the same retinal camera. In two articles, patients' sex was described, but none applied a bias control in their models. CONCLUSION The reviewed articles included 180 228 images and reported good metrics, but fairness, generalisability and bias control remained limited. Reproducibility is also a critical limitation, with few articles sharing code and none sharing data. Fair and generalisable AI studies in ROP are needed, with diverse datasets, data and code sharing, collaborative research, and bias control to avoid unpredictable and harmful deployments.
Affiliation(s)
- Luis Filipe Nakayama
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Department of Ophthalmology, Sao Paulo Federal University, Sao Paulo, Brazil
- William Greig Mitchell
- Department of Ophthalmology, The Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Lucas Zago Ribeiro
- Department of Ophthalmology, Sao Paulo Federal University, Sao Paulo, Brazil
- Robyn Gayle Dychiao
- University of the Philippines Manila College of Medicine, Manila, Philippines
- Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Department of Biostatistics, Harvard University T H Chan School of Public Health, Boston, Massachusetts, USA
- Khumbo Kalua
- Department of Ophthalmology, Blantyre Institute for Community Ophthalmology, BICO, Blantyre, Malawi
12. Ramanathan A, Athikarisamy SE, Lam GC. Artificial intelligence for the diagnosis of retinopathy of prematurity: A systematic review of current algorithms. Eye (Lond) 2023;37:2518-2526. PMID: 36577806; PMCID: PMC10397194; DOI: 10.1038/s41433-022-02366-y.
Abstract
BACKGROUND/OBJECTIVES With the increasing survival of premature infants, there is an increased demand to provide adequate retinopathy of prematurity (ROP) services. Wide-field digital retinal imaging (WFDRI) and artificial intelligence (AI) have shown promise in the field of ROP and have the potential to improve diagnostic performance and reduce the workload for screening ophthalmologists. The aim of this review is to systematically review and summarise the diagnostic characteristics of existing deep learning algorithms. SUBJECT/METHODS Two authors independently searched the literature, and studies using a deep learning system from retinal imaging were included. Data were extracted, assessed and reported using PRISMA guidelines. RESULTS Twenty-seven studies were included in this review. Nineteen studies used AI systems to diagnose ROP, classify the staging of ROP, diagnose the presence of pre-plus or plus disease, or assess the quality of retinal images. The included studies reported a sensitivity of 71%-100%, specificity of 74%-99% and area under the curve of 91%-99% for the primary outcome of the study. AI techniques were comparable to the assessment of ophthalmologists in terms of overall accuracy and sensitivity. Eight studies evaluated vascular severity scores and were able to accurately differentiate severity using an automated classification score. CONCLUSION Artificial intelligence for ROP diagnosis is a growing field, and many potential utilities have already been identified, including the detection of plus disease, staging of disease and a new automated severity score. AI has a role as an adjunct to clinical assessment; however, there is currently insufficient evidence to support its use as a sole diagnostic tool.
Affiliation(s)
- Ashwin Ramanathan
- Department of Paediatrics, Perth Children's Hospital, Perth, Australia
- Sam Ebenezer Athikarisamy
- Department of Neonatology, Perth Children's Hospital, Perth, Australia
- School of Medicine, University of Western Australia, Crawley, Australia
- Geoffrey C Lam
- Department of Ophthalmology, Perth Children's Hospital, Perth, Australia
- Centre for Ophthalmology and Visual Science, University of Western Australia, Crawley, Australia
13. Shen Y, Luo Z, Xu M, Liang Z, Fan X, Lu X. Automated detection for Retinopathy of Prematurity with knowledge distilling from multi-stream fusion network. Knowl Based Syst 2023;269:110461. DOI: 10.1016/j.knosys.2023.110461.
14. GabROP: Gabor Wavelets-Based CAD for Retinopathy of Prematurity Diagnosis via Convolutional Neural Networks. Diagnostics (Basel) 2023;13:171. PMID: 36672981; PMCID: PMC9857608; DOI: 10.3390/diagnostics13020171.
Abstract
One of the most serious and dangerous ocular problems in premature infants is retinopathy of prematurity (ROP), a proliferative vascular disease. Ophthalmologists can use automatic computer-assisted diagnostic (CAD) tools to help them make a safe, accurate, and low-cost diagnosis of ROP. All previous CAD tools for ROP diagnosis use the original fundus images. Unfortunately, learning a discriminative representation from ROP-related fundus images is difficult. Textural analysis techniques, such as Gabor wavelets (GW), can reveal significant texture information that can help artificial intelligence (AI) based models improve diagnostic accuracy. In this paper, an effective and automated CAD tool, namely GabROP, based on GW and multiple deep learning (DL) models is proposed. Initially, GabROP analyzes fundus images using GW and generates several sets of GW images. Next, these sets of images are used to train three convolutional neural network (CNN) models independently; the original fundus images are also used to train these networks. Using the discrete wavelet transform (DWT), texture features retrieved from every CNN trained with the various sets of GW images are combined to create a textural-spectral-temporal representation. Afterward, for each CNN, these features are concatenated with spatial deep features obtained from the original fundus images. Finally, the concatenated features of all three CNNs are fused using the discrete cosine transform (DCT) to reduce the feature dimensionality caused by the fusion process. The outcomes of GabROP show that it is accurate and efficient for ophthalmologists. Additionally, the effectiveness of GabROP is compared with recently developed ROP diagnostic techniques. Due to GabROP's superior performance compared to competing tools, ophthalmologists may be able to identify ROP more reliably and precisely, which could result in a reduction in diagnostic effort and examination time.
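Generating the Gabor wavelet image sets is straightforward with OpenCV's built-in kernel constructor. The sketch below builds a small filter bank and applies it to a stand-in image; GabROP's actual scales, orientations, and CNN stages are not reproduced.

```python
import numpy as np
import cv2  # opencv-python

def gabor_bank(ksize=31, sigmas=(4.0,), thetas=4, lambd=10.0, gamma=0.5):
    """Build a small bank of Gabor kernels; GabROP's exact parameters
    are not reproduced here."""
    kernels = []
    for sigma in sigmas:
        for i in range(thetas):
            theta = i * np.pi / thetas
            k = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0)
            kernels.append(k / (1e-8 + np.abs(k).sum()))  # normalize energy
    return kernels

img = np.random.randint(0, 255, (256, 256), dtype=np.uint8)  # stand-in fundus channel
responses = [cv2.filter2D(img.astype(np.float32), -1, k) for k in gabor_bank()]
# Each response map would feed one of the CNNs alongside the original image.
print(len(responses), responses[0].shape)
```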
15. Luo Z, Ding X, Hou N, Wan J. A Deep-Learning-Based Collaborative Edge-Cloud Telemedicine System for Retinopathy of Prematurity. Sensors (Basel) 2022;23:276. PMID: 36616874; PMCID: PMC9824555; DOI: 10.3390/s23010276.
Abstract
Retinopathy of prematurity is an ophthalmic disease with a very high blindness rate, and its incidence is increasing year by year, so timely diagnosis and treatment are of great significance. Premature infants in remote areas often lack timely and effective fundus screening, which can lead to aggravation of the disease and even blindness; in this paper, a deep-learning-based collaborative edge-cloud telemedicine system is proposed to mitigate this issue. In the proposed system, deep learning algorithms are mainly used for classification of processed images. Our algorithm is based on ResNet101 and uses undersampling and resampling to address the data imbalance problem common in medical image processing. Artificial intelligence algorithms are combined with a collaborative edge-cloud architecture to implement a comprehensive telemedicine system that enables timely screening and diagnosis of retinopathy of prematurity in remote areas with shortages, or a complete lack, of expert medical staff. Finally, the algorithm was successfully embedded in a mobile terminal device and deployed with the support of a core hospital of Guangdong Province. The results show that we achieved 75% accuracy (ACC) and an AUC of 60%. This research is of great significance for the development of telemedicine systems and aims to mitigate the lack of medical resources and their uneven distribution in rural areas.
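One standard way to implement the resampling the authors describe is PyTorch's WeightedRandomSampler, which draws minority-class images more often. The sketch below is a generic illustration, not the paper's exact undersampling/resampling scheme.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

# Dummy imbalanced labels (0 = normal, 1 = ROP); images are placeholders.
labels = torch.tensor([0] * 900 + [1] * 100)
images = torch.randn(len(labels), 3, 64, 64)
dataset = TensorDataset(images, labels)

# Resample so each class is drawn with roughly equal probability -- one
# common way to address imbalance.
class_counts = torch.bincount(labels).float()
sample_weights = (1.0 / class_counts)[labels]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                replacement=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

_, y = next(iter(loader))
print("ROP fraction in batch:", y.float().mean().item())  # ~0.5 on average
```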
Affiliation(s)
- Zeliang Luo
- College of Electro-Mechanical Engineering, Zhuhai City Polytechnic, Zhuhai 519090, China
- Xiaoxuan Ding
- Guangdong Provincial Key Laboratory of Technique and Equipment for Macromolecular Advanced Manufacturing, School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, China
- Ning Hou
- Guangdong Provincial Key Laboratory of Technique and Equipment for Macromolecular Advanced Manufacturing, School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, China
- Jiafu Wan
- Guangdong Provincial Key Laboratory of Technique and Equipment for Macromolecular Advanced Manufacturing, School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, China
16. Multi-instance learning based on spatial continuous category representation for case-level meningioma grading in MRI images. Appl Intell 2022. DOI: 10.1007/s10489-022-04114-x.
17. Peng Y, Chen Z, Zhu W, Shi F, Wang M, Zhou Y, Xiang D, Chen X, Chen F. ADS-Net: attention-awareness and deep supervision based network for automatic detection of retinopathy of prematurity. Biomed Opt Express 2022;13:4087-4101. PMID: 36032570; PMCID: PMC9408258; DOI: 10.1364/boe.461411.
Abstract
Retinopathy of prematurity (ROP) is a proliferative vascular disease and one of the most dangerous and severe ocular complications in premature infants. An automatic ROP detection system can assist ophthalmologists in the diagnosis of ROP and is safe, objective, and cost-effective. Unfortunately, due to the large local redundancy and the complex global dependencies in medical image processing, it is challenging to learn a discriminative representation from ROP-related fundus images. To bridge this gap, a novel attention-awareness and deep supervision based network (ADS-Net) is proposed to detect the existence of ROP (normal or ROP) and perform 3-level ROP grading (mild, moderate, or severe). First, to balance the problems of large local redundancy and complex global dependencies in images, we design a multi-semantic feature aggregation (MsFA) module based on the self-attention mechanism to take full advantage of convolution and self-attention, generating attention-aware expressive features. Then, to address the difficulty of training deep models and further improve ROP detection performance, we propose an optimization strategy with a deeply supervised loss. Finally, the proposed ADS-Net is evaluated on ROP screening and grading tasks with per-image and per-examination strategies, respectively. In terms of the per-image classification pattern, the proposed ADS-Net achieves Kappa indices of 0.9552 and 0.9037 in ROP screening and grading, respectively. Experimental results demonstrate that the proposed ADS-Net generally outperforms other state-of-the-art classification networks, showing the effectiveness of the proposed method.
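Deep supervision attaches auxiliary classifiers to intermediate feature maps and adds their down-weighted losses to the main loss. The sketch below shows that wiring on a toy network; ADS-Net's MsFA attention module is deliberately omitted.

```python
import torch
import torch.nn as nn

class DeeplySupervisedNet(nn.Module):
    """Sketch of deep supervision: an auxiliary classifier on intermediate
    features is trained jointly with the main head."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.aux_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(16, n_classes))
        self.main_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(32, n_classes))

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        return self.main_head(f2), self.aux_head(f1)

net, ce = DeeplySupervisedNet(), nn.CrossEntropyLoss()
x, y = torch.randn(4, 3, 128, 128), torch.tensor([0, 1, 2, 1])
main_logits, aux_logits = net(x)
loss = ce(main_logits, y) + 0.4 * ce(aux_logits, y)  # weighted auxiliary loss
loss.backward()
```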
Affiliation(s)
- Yuanyuan Peng
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Zhongyue Chen
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Weifang Zhu
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Fei Shi
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Meng Wang
- Institute of High Performance Computing, A*STAR, Singapore
- Yi Zhou
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Daoman Xiang
- Guangzhou Women and Children's Medical Center, Guangzhou, 510623, China
- Xinjian Chen
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province, 215006, China
- State Key Laboratory of Radiation Medicine and Protection, Soochow University, Suzhou, 215123, China
- Feng Chen
- Guangzhou Women and Children's Medical Center, Guangzhou, 510623, China
18. Wang Y, Zhang L, Shu X, Feng Y, Yi Z, Lv Q. Feature-Sensitive Deep Convolutional Neural Network for Multi-Instance Breast Cancer Detection. IEEE/ACM Trans Comput Biol Bioinform 2022;19:2241-2251. PMID: 33600319; DOI: 10.1109/tcbb.2021.3060183.
Abstract
To obtain a well-performing computer-aided detection model for detecting breast cancer, one usually needs an effective and efficient algorithm and a well-labeled dataset to train it. In this paper, a multi-instance mammography clinic dataset was first constructed. Each case in the dataset includes a different number of instances captured from different views; it is labeled according to the pathological report, and all the instances of one case share one label. Nevertheless, the instances captured from different views may contribute at different levels to the category of the target case. Motivated by this observation, a feature-sensitive deep convolutional neural network with an end-to-end training manner is proposed to detect breast cancer. The proposed method first uses a pre-trained model with some custom layers to extract image features. Then, it adopts a feature fusion module that learns to compute the weight of each feature vector, so that the different instances of each case have different influences on the classifier. Lastly, a classifier module is used to classify the fused features. The experimental results on both our constructed clinic dataset and two public datasets have demonstrated the effectiveness of the proposed method.
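Attention-based pooling over per-view feature vectors is one natural reading of the feature fusion module: each instance gets a learned weight before the case-level features are summed. The sketch below follows the generic attention-MIL pattern, not the paper's exact design.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-weighted fusion of per-view feature vectors into one
    case-level prediction -- one plausible reading of the paper's
    feature fusion module."""
    def __init__(self, feat_dim=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, 64), nn.Tanh(),
                                  nn.Linear(64, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):                        # feats: (n_instances, feat_dim)
        w = torch.softmax(self.attn(feats), dim=0)   # weight per view
        case_feat = (w * feats).sum(dim=0)           # weighted fusion
        return self.classifier(case_feat), w.squeeze(-1)

# Four mammographic views of one case, features from a pre-trained backbone.
instance_feats = torch.randn(4, 128)
logits, weights = AttentionMIL()(instance_feats)
print(logits.shape, weights)  # views with larger weights drive the decision
```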
Collapse
|
19
|
Li P, Liu J. Early Diagnosis and Quantitative Analysis of Stages in Retinopathy of Prematurity Based on Deep Convolutional Neural Networks. Transl Vis Sci Technol 2022; 11:17. [PMID: 35579887 PMCID: PMC9123509 DOI: 10.1167/tvst.11.5.17] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose Retinopathy of prematurity (ROP) is a leading cause of childhood blindness. An accurate and timely diagnosis of the early stages of ROP allows ophthalmologists to recommend appropriate treatment while blindness is still preventable. The purpose of this study was to develop an automatic deep convolutional neural network-based system that provided a diagnosis of stage I to III ROP with feature parameters. Methods We developed three data sets containing 18,827 retinal images of preterm infants. These retinal images were obtained from the ophthalmology department of Jiaxing Maternal and Child Health Hospital in China. After segmenting images, we calculated the region of interest (ROI). We trained our system based on segmented ROI images from the training data set, tested the performance of the classifier on the test data set, and evaluated the widths of the demarcation lines or ridges extracted by the system, as well as the ratios of vascular proliferation within the ROI, on a comparison data set. Results The trained network achieved a sensitivity of 90.21% with 97.67% specificity for the diagnosis of stage I ROP, 92.75% sensitivity with 98.74% specificity for stage II ROP, and 91.84% sensitivity with 99.29% specificity for stage III ROP. When the system diagnosed normal images, the sensitivity and specificity reached 95.93% and 96.41%, respectively. The widths (in pixels) of the demarcation lines or ridges for stage I, stage II, and stage III were 15.22 ± 1.06, 26.35 ± 1.36, and 30.75 ± 1.55, respectively. The ratios of vascular proliferation within the ROI were 1.40 ± 0.29, 1.54 ± 0.26, and 1.81 ± 0.33. All parameters were statistically different among the groups. When physicians integrated the quantitative parameters of the extracted features with their clinical diagnosis, the κ score was significantly improved. Conclusions Our system achieved high diagnostic accuracy for stage I to III ROP. It used quantitative analysis of the extracted features to assist physicians in making classification decisions. Translational Relevance The high performance of the system suggests potential applications in the ancillary diagnosis of the early stages of ROP.
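As one illustration of how a per-pixel width could be measured from a segmented demarcation line or ridge, a distance transform sampled along the skeleton of the mask gives a half-width at each centerline point. This is a plausible recipe assuming OpenCV and scikit-image, not the authors' code.

```python
import numpy as np
import cv2
from skimage.morphology import skeletonize

def ridge_width_pixels(mask):
    """Estimate the mean width (in pixels) of a binary ridge mask: the
    distance transform gives distance to background, and sampling it on
    the skeleton approximates the half-width along the centerline."""
    dist = cv2.distanceTransform(mask.astype(np.uint8), cv2.DIST_L2, 5)
    skel = skeletonize(mask > 0)
    if not skel.any():
        return 0.0
    # Width ~ twice the half-width; -1 corrects for the center pixel.
    return float(2.0 * dist[skel].mean() - 1.0)

# Toy example: a horizontal "ridge" 21 pixels thick.
mask = np.zeros((200, 200), np.uint8)
mask[90:111, 20:180] = 1
print(round(ridge_width_pixels(mask), 1))   # approximately 21 (end spurs
                                            # on the skeleton add noise)
```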
Collapse
Affiliation(s)
- Peng Li
- School of Electronic and Information Engineering, Tongji University, Shanghai, China; Department of Electronic and Information Engineering, Tongji Zhejiang College, Jiaxing, China
| | - Jia Liu
- Optometry Center, Jiaxing Maternity and Child Health Care Hospital, Jiaxing, China
| |
Collapse
|
20
|
Peng Y, Chen Z, Zhu W, Shi F, Wang M, Zhou Y, Xiang D, Chen X, Chen F. Automatic zoning for retinopathy of prematurity with semi-supervised feature calibration adversarial learning. BIOMEDICAL OPTICS EXPRESS 2022; 13:1968-1984. [PMID: 35519283 PMCID: PMC9045915 DOI: 10.1364/boe.447224] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Revised: 01/05/2022] [Accepted: 02/09/2022] [Indexed: 06/14/2023]
Abstract
Retinopathy of prematurity (ROP) is an eye disease that affects prematurely born infants with low birth weight and is one of the main causes of childhood blindness globally. In recent years, there have been many studies on automatic ROP diagnosis, mainly focusing on ROP screening such as "Yes/No ROP" or "Mild/Severe ROP" and on presence/absence detection of "plus disease". Owing to the lack of corresponding high-quality annotations, there are few studies on ROP zoning, which is one of the important indicators for evaluating the severity of ROP. Moreover, how to effectively utilize unlabeled data for model training is also worth studying. Therefore, we propose a novel semi-supervised feature calibration adversarial learning network (SSFC-ALN) for 3-level ROP zoning, which consists of two subnetworks: a generative network and a compound network. The generative network is a U-shaped network that produces reconstructed images, and its output is taken as one of the inputs of the compound network. The compound network is obtained by extending a common classification network with a discriminator, introducing an adversarial mechanism into the whole training process. The definition of ROP zoning tells us where and what to focus on in the fundus images, which is similar to the attention mechanism. Therefore, to further improve classification performance, a new attention-based feature calibration module (FCM) is designed and embedded in the compound network. The proposed method was evaluated on 1013 fundus images of 108 patients with a 3-fold cross-validation strategy. Compared with other state-of-the-art classification methods, the proposed method achieves high classification performance.
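The adversarial coupling of a reconstructing generator with a classifier-plus-discriminator compound network can be condensed into a single training step, sketched below. This is a heavily simplified PyTorch sketch; the network shapes, loss terms, and their weighting are assumptions, not the SSFC-ALN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# G reconstructs (unlabeled) fundus images; the compound network both
# classifies labeled images (3-level zoning) and discriminates real vs.
# reconstructed images, injecting an adversarial signal into training.
G = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 3, 3, padding=1))          # stand-in "U-Net"
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
cls_head, disc_head = nn.Linear(16, 3), nn.Linear(16, 1)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam([*backbone.parameters(), *cls_head.parameters(),
                          *disc_head.parameters()], lr=1e-4)

labeled, y = torch.randn(2, 3, 64, 64), torch.randint(0, 3, (2,))
unlabeled = torch.randn(2, 3, 64, 64)

# Compound-network step: supervised CE plus real-vs-reconstructed BCE.
fake = G(unlabeled).detach()
d_loss = (F.cross_entropy(cls_head(backbone(labeled)), y)
          + F.binary_cross_entropy_with_logits(
                disc_head(backbone(labeled)), torch.ones(2, 1))
          + F.binary_cross_entropy_with_logits(
                disc_head(backbone(fake)), torch.zeros(2, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: reconstruct faithfully while fooling the discriminator.
recon = G(unlabeled)
g_loss = (F.mse_loss(recon, unlabeled)
          + F.binary_cross_entropy_with_logits(
                disc_head(backbone(recon)), torch.ones(2, 1)))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```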
Collapse
Affiliation(s)
- Yuanyuan Peng
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
| | - Zhongyue Chen
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
| | - Weifang Zhu
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
| | - Fei Shi
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
| | - Meng Wang
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
| | - Yi Zhou
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
| | - Daoman Xiang
- Guangzhou Women and Children's Medical Center, Guangzhou 510623, China
| | - Xinjian Chen
- MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu 215006, China
- State Key Laboratory of Radiation Medicine and Protection, Soochow University, Suzhou 215123, China
| | - Feng Chen
- Guangzhou Women and Children's Medical Center, Guangzhou 510623, China
| |
Collapse
|
21
|
Chen S, Huang H, Yang X, Wang H, Wei M, Zhang H, Wang Z, Yi Z. TeachMe: a web-based teaching system for annotating abdominal lymph nodes. Sci Rep 2022; 12:5167. [PMID: 35338176 PMCID: PMC8956716 DOI: 10.1038/s41598-022-08958-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Accepted: 03/09/2022] [Indexed: 02/05/2023] Open
Abstract
The detection and characterization of lymph nodes by interpreting abdominal medical images are significant for diagnosing and treating colorectal cancer recurrence. However, interpreting abdominal medical images manually is labor-intensive and time-consuming, and the related radiology education has many limitations as well. In this context, we seek to build an extensive collection of abdominal medical images with ground-truth labels for lymph node recognition research and to help junior doctors train their interpretation skills. We therefore developed TeachMe, a web-based teaching system for annotating abdominal lymph nodes. The system has a three-level annotation-review workflow to construct an expert database of abdominal lymph nodes and a feedback mechanism that helps junior doctors learn the tricks of interpreting abdominal medical images. TeachMe's functionalities make it stand out against other platforms. To validate these functionalities, we invited a medical team from the Gastrointestinal Surgery Center, West China Hospital, to participate in the data collection workflow and experience the feedback mechanism. With the help of TeachMe, an expert dataset of abdominal lymph nodes has been created and an automated detection model for abdominal lymph nodes with strong performance has been proposed. Moreover, through three rounds of practicing via TeachMe, our junior doctors' interpretation skills have improved.
Collapse
Affiliation(s)
- Shuaihua Chen
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
| | - Hao Huang
- Gastrointestinal Surgery Center, West China Hospital, Sichuan University, Chengdu, 610041, People's Republic of China
| | - Xuyang Yang
- Gastrointestinal Surgery Center, West China Hospital, Sichuan University, Chengdu, 610041, People's Republic of China
| | - Han Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
| | - Mingtian Wei
- Gastrointestinal Surgery Center, West China Hospital, Sichuan University, Chengdu, 610041, People's Republic of China
| | - Haixian Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
| | - Ziqiang Wang
- Gastrointestinal Surgery Center, West China Hospital, Sichuan University, Chengdu, 610041, People's Republic of China.
| | - Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China.
| |
Collapse
|
22
|
Kelly CJ, Brown APY, Taylor JA. Artificial Intelligence in Pediatrics. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_316] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
23
|
Attallah O. DIAROP: Automated Deep Learning-Based Diagnostic Tool for Retinopathy of Prematurity. Diagnostics (Basel) 2021; 11:2034. [PMID: 34829380 PMCID: PMC8620568 DOI: 10.3390/diagnostics11112034] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Revised: 09/24/2021] [Accepted: 11/01/2021] [Indexed: 12/12/2022] Open
Abstract
Retinopathy of Prematurity (ROP) affects preterm neonates and can cause blindness. Deep Learning (DL) can assist ophthalmologists in the diagnosis of ROP. This paper proposes an automated and reliable diagnostic tool based on DL techniques, called DIAROP, to support the ophthalmologic diagnosis of ROP. It extracts significant features by first obtaining spatial features from four Convolutional Neural Networks (CNNs) using transfer learning and then applying the Fast Walsh Hadamard Transform (FWHT) to integrate these features. Moreover, DIAROP explores the best integrated features extracted from the CNNs that influence its diagnostic capability. DIAROP achieved an accuracy of 93.2% and an area under the receiver operating characteristic curve (AUC) of 0.98. Furthermore, DIAROP's performance is compared with recent ROP diagnostic tools. Its promising performance shows that DIAROP may assist the ophthalmologic diagnosis of ROP.
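The Fast Walsh Hadamard Transform used here for feature integration is a standard butterfly recursion over vectors whose length is a power of two. Below is a minimal sketch of applying it to concatenated CNN features; the feature sizes and zero-padding are illustrative assumptions, not DIAROP itself.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard Transform of a 1-D array whose length is a
    power of two, via the standard in-place butterfly recursion."""
    a = x.astype(float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# Concatenate (already pooled) feature vectors from four CNN backbones,
# zero-pad to the next power of two, then transform to mix the features.
feats = [np.random.randn(100) for _ in range(4)]
merged = np.concatenate(feats)                    # length 400
padded = np.pad(merged, (0, 512 - merged.size))   # next power of two
integrated = fwht(padded)
```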
Collapse
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
| |
Collapse
|
24
|
Wang H, Hu J, Song Y, Zhang L, Bai S, Yi Z. Multi-view fusion segmentation for brain glioma on CT images. APPL INTELL 2021. [DOI: 10.1007/s10489-021-02784-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
|
25
|
Chen JS, Coyner AS, Ostmo S, Sonmez K, Bajimaya S, Pradhan E, Valikodath N, Cole ED, Al-Khaled T, Chan RVP, Singh P, Kalpathy-Cramer J, Chiang MF, Campbell JP. Deep Learning for the Diagnosis of Stage in Retinopathy of Prematurity: Accuracy and Generalizability across Populations and Cameras. Ophthalmol Retina 2021; 5:1027-1035. [PMID: 33561545 PMCID: PMC8364291 DOI: 10.1016/j.oret.2020.12.013] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 12/02/2020] [Accepted: 12/16/2020] [Indexed: 12/23/2022]
Abstract
PURPOSE Stage is an important feature to identify in retinal images of infants at risk of retinopathy of prematurity (ROP). The purpose of this study was to implement a convolutional neural network (CNN) for binary detection of stages 1, 2, and 3 in ROP and to evaluate its generalizability across different populations and camera systems. DESIGN Diagnostic validation study of CNN for stage detection. PARTICIPANTS Retinal fundus images obtained from preterm infants during routine ROP screenings. METHODS Two datasets were used: 5943 fundus images obtained by RetCam camera (Natus Medical, Pleasanton, CA) from 9 North American institutions and 5049 images obtained by 3nethra camera (Forus Health Incorporated, Bengaluru, India) from 4 hospitals in Nepal. Images were labeled based on the presence of stage by 1 to 3 expert graders. Three CNN models were trained using 5-fold cross-validation on datasets from North America alone, Nepal alone, and a combined dataset and were evaluated on 2 held-out test sets consisting of 708 and 247 images from the Nepali and North American datasets, respectively. MAIN OUTCOME MEASURES Convolutional neural network performance was evaluated using area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), sensitivity, and specificity. RESULTS Both the North American- and Nepali-trained models demonstrated high performance on a test set from the same population: AUROC, 0.99; AUPRC, 0.98; sensitivity, 94%; and AUROC, 0.97; AUPRC, 0.91; sensitivity, 73%, respectively. However, the performance of each model decreased to an AUROC of 0.96 and AUPRC of 0.88 (sensitivity, 52%) and an AUROC of 0.62 and AUPRC of 0.36 (sensitivity, 44%) when evaluated on a test set from the other population. Compared with the models trained on individual datasets, the model trained on a combined dataset achieved improved performance on each respective test set: sensitivity improved from 94% to 98% on the North American test set and from 73% to 82% on the Nepali test set. CONCLUSIONS A CNN can accurately identify the presence of ROP stage in retinal images, but performance depends on the similarity between training and testing populations. We demonstrated that internal and external performance can be improved by increasing the heterogeneity of the training dataset, in this case by combining images from different populations and cameras.
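The outcome measures named above are all standard; for reference, a small scikit-learn sketch (synthetic labels and an arbitrary 0.5 operating point) shows how they are typically computed from a model's predicted probabilities.

```python
import numpy as np
from sklearn.metrics import (average_precision_score, confusion_matrix,
                             roc_auc_score)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)                   # stage present / absent
y_prob = np.clip(0.6 * y_true + 0.5 * rng.random(500), 0, 1)

auroc = roc_auc_score(y_true, y_prob)              # area under ROC curve
auprc = average_precision_score(y_true, y_prob)    # area under PR curve
tn, fp, fn, tp = confusion_matrix(y_true, y_prob >= 0.5).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUROC={auroc:.3f} AUPRC={auprc:.3f} "
      f"Se={sensitivity:.3f} Sp={specificity:.3f}")
```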
Collapse
Affiliation(s)
- Jimmy S Chen
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - Aaron S Coyner
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
| | - Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - Kemal Sonmez
- Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, Portland, Oregon
| | | | - Eli Pradhan
- Tilganga Institute of Ophthalmology, Kathmandu, Nepal
| | - Nita Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Emily D Cole
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Tala Al-Khaled
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Praveer Singh
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
| | - Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
| | - Michael F Chiang
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
| | - J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon.
| |
Collapse
|
26
|
Shao E, Liu C, Wang L, Song D, Guo L, Yao X, Xiong J, Wang B, Hu Y. Artificial intelligence-based detection of epimacular membrane from color fundus photographs. Sci Rep 2021; 11:19291. [PMID: 34588493 PMCID: PMC8481557 DOI: 10.1038/s41598-021-98510-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Accepted: 09/01/2021] [Indexed: 12/25/2022] Open
Abstract
Epiretinal membrane (ERM) is a common ophthalmological disorder of high prevalence. Its symptoms include metamorphopsia, blurred vision, and decreased visual acuity. Early diagnosis and timely treatment of ERM are crucial to preventing vision loss. Although optical coherence tomography (OCT) is regarded as a de facto standard for ERM diagnosis due to its intuitiveness and high sensitivity, ophthalmoscopic examination or fundus photographs still have the advantages of price and accessibility. Artificial intelligence (AI) has been widely applied in the health care industry for its robust and significant performance in detecting various diseases. In this study, we validated the use of a previously trained deep neural network-based AI model for ERM detection based on color fundus photographs. An independent test set of fundus photographs was labeled by a group of ophthalmologists according to the corresponding OCT images as the gold standard. The test set was then interpreted by other ophthalmologists and the AI model without knowledge of the OCT results. Compared with manual diagnosis based on fundus photographs alone, the AI model had comparable accuracy (AI model 77.08% vs. integrated manual diagnosis 75.69%, χ2 = 0.038, P = 0.845, McNemar’s test) and higher sensitivity (75.90% vs. 63.86%, χ2 = 4.500, P = 0.034, McNemar’s test), at the cost of lower but reasonable specificity (78.69% vs. 91.80%, χ2 = 6.125, P = 0.013, McNemar’s test). Thus, our AI model can serve as a possible alternative to manual diagnosis in ERM screening.
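McNemar's test, used above to compare the paired AI and manual readings, needs only the two discordant counts. A small sketch with the continuity-corrected statistic follows; the counts in the example are made up, not the study's data.

```python
from scipy.stats import chi2

def mcnemar_chi2(b, c):
    """McNemar's chi-square with continuity correction for two paired
    classifiers: b = cases only classifier A got right, c = cases only
    classifier B got right (the discordant cells of the 2x2 table)."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)   # one degree of freedom

# e.g. AI correct / reader wrong on 21 eyes, the reverse on 10 eyes:
stat, p = mcnemar_chi2(b=21, c=10)
print(f"chi2={stat:.3f}, p={p:.3f}")
```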
Collapse
Affiliation(s)
- Enhua Shao
- Department of Ophthalmology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
| | - Congxin Liu
- Beijing Eaglevision Technology Co., Ltd, Beijing, China
| | - Lei Wang
- Department of Ophthalmology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
| | - Dan Song
- Department of Ophthalmology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
| | - Libin Guo
- Department of Ophthalmology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
| | - Xuan Yao
- Beijing Eaglevision Technology Co., Ltd, Beijing, China
| | - Jianhao Xiong
- Beijing Eaglevision Technology Co., Ltd, Beijing, China
| | - Bin Wang
- Beijing Eaglevision Technology Co., Ltd, Beijing, China
| | - Yuntao Hu
- Department of Ophthalmology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China.
| |
Collapse
|
27
|
An automatic framework for perioperative risks classification from retinal images of complex congenital heart disease patients. INT J MACH LEARN CYB 2021. [DOI: 10.1007/s13042-021-01419-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
28
|
Wang Z, Zhang L, Shu X, Lv Q, Yi Z. An End-to-End Mammogram Diagnosis: A New Multi-Instance and Multiscale Method Based on Single-Image Feature. IEEE Trans Cogn Dev Syst 2021. [DOI: 10.1109/tcds.2019.2963682] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
29
|
Nikolaidou A, Tsaousis KT. Teleophthalmology and Artificial Intelligence As Game Changers in Ophthalmic Care After the COVID-19 Pandemic. Cureus 2021; 13:e16392. [PMID: 34408945 PMCID: PMC8363234 DOI: 10.7759/cureus.16392] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/13/2021] [Indexed: 12/17/2022] Open
Abstract
The current COVID-19 pandemic has created a sudden demand for telemedicine due to quarantine and travel restrictions. The exponential increase in the use of telemedicine is expected to affect ophthalmology drastically. The aim of this review is to discuss the utility, effectiveness, and challenges of new teleophthalmological tools for eye care delivery, as well as their implementation and possible facilitation with artificial intelligence. We searched the terms “teleophthalmology,” “telemedicine and COVID-19,” “retinal diseases and telemedicine,” “virtual ophthalmology,” “cost effectiveness of teleophthalmology,” “pediatric teleophthalmology,” “Artificial intelligence and ophthalmology,” “Glaucoma and teleophthalmology” and “teleophthalmology limitations” in the PubMed database and selected articles published during 2015-2020. The initial search returned 321 relevant articles. A meticulous screening followed, and eventually 103 published manuscripts were included and used as our references. Teleophthalmology is showing great potential for the future of ophthalmological care, benefiting both patients and ophthalmologists in times of pandemics. The spectrum of eye diseases that could benefit from teleophthalmology is wide, including mostly retinal diseases such as diabetic retinopathy, retinopathy of prematurity, and age-related macular degeneration, but also glaucoma and anterior segment conditions. At the same time, artificial intelligence provides ways of implementing teleophthalmology more easily and with better outcomes, contributing significantly to the changes in ophthalmology practice after the COVID-19 pandemic.
Collapse
Affiliation(s)
- Anna Nikolaidou
- Ophthalmology, Aristotle University of Thessaloniki, Thessaloniki, GRC
| | | |
Collapse
|
30
|
Accuracy of Deep Learning Algorithms for the Diagnosis of Retinopathy of Prematurity by Fundus Images: A Systematic Review and Meta-Analysis. J Ophthalmol 2021; 2021:8883946. [PMID: 34394982 PMCID: PMC8363465 DOI: 10.1155/2021/8883946] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 06/30/2021] [Accepted: 07/27/2021] [Indexed: 12/14/2022] Open
Abstract
Background Retinopathy of prematurity (ROP) occurs in preterm infants and may contribute to blindness. Deep learning (DL) models have been used for ophthalmologic diagnoses. We performed a systematic review and meta-analysis of published evidence to summarize and evaluate the diagnostic accuracy of DL algorithms for ROP based on fundus images. Methods We searched PubMed, EMBASE, Web of Science, and the Institute of Electrical and Electronics Engineers Xplore Digital Library on June 13, 2021, for studies using a DL algorithm to distinguish individuals with ROP of different grades, which provided accuracy measurements. The pooled sensitivity and specificity values and the area under the curve (AUC) of summary receiver operating characteristic (SROC) curves summarized overall test performance. The performances in validation and test datasets were assessed together and separately. Subgroup analyses were conducted between the definition and grades of ROP. Threshold and nonthreshold effects were tested to assess biases and evaluate the accuracy factors associated with DL models. Results Nine studies with fifteen classifiers were included in our meta-analysis. A total of 521,586 objects were applied to the DL models. For the combined validation and test datasets in each study, the pooled sensitivity and specificity were 0.953 (95% confidence interval (CI): 0.946-0.959) and 0.975 (0.973-0.977), respectively, and the AUC was 0.984 (0.978-0.989). For the validation dataset and test dataset, the AUC was 0.977 (0.968-0.986) and 0.987 (0.982-0.992), respectively. In the subgroup analysis of ROP vs. normal and differentiation of two ROP grades, the AUC was 0.990 (0.944-0.994) and 0.982 (0.964-0.999), respectively. Conclusions Our study shows that DL models can play an essential role in detecting and grading ROP with high sensitivity, specificity, and repeatability. The application of a DL-based automated system may improve ROP screening and diagnosis in the future.
Collapse
|
31
|
Yang L, Wang H, Zeng Q, Liu Y, Bian G. A hybrid deep segmentation network for fundus vessels via deep-learning framework. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.085] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
32
|
Chen Y, Yi Z. Adaptive sparse dropout: Learning the certainty and uncertainty in deep neural networks. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.04.047] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
33
|
Agrawal R, Kulkarni S, Walambe R, Kotecha K. Assistive Framework for Automatic Detection of All the Zones in Retinopathy of Prematurity Using Deep Learning. J Digit Imaging 2021; 34:932-947. [PMID: 34240273 DOI: 10.1007/s10278-021-00477-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Revised: 05/06/2021] [Accepted: 05/21/2021] [Indexed: 11/30/2022] Open
Abstract
Retinopathy of prematurity (ROP) is a potentially blinding disorder seen in low birth weight preterm infants. In India, the burden of ROP is high, with nearly 200,000 premature infants at risk. Early detection through screening and treatment can prevent this blindness. The automatic screening systems developed so far can detect "severe ROP" or "plus disease," but this information does not help schedule follow-up. Identifying vascularized retinal zones and detecting the ROP stage are essential for follow-up or discharge from screening. To the best of the authors' knowledge, there is no automatic system to assist these crucial decisions. The low contrast of images, incompletely developed vessels, macular structure, and lack of public data sets are a few challenges in creating such a system. In this paper, a novel method using an ensemble of "U-Network" and "Circle Hough Transform" is developed to detect zones I, II, and III from retinal images in which the macula is not developed. The model developed is generic and trained on mixed images of different sizes. It detects zones in images of variable sizes captured by two different imaging systems with an accuracy of 98%. All images of the test set (including the low-quality images) are considered. The time taken for training was only 14 min, and a single image was tested in 30 ms. The present study can help medical experts interpret retinal vascular status correctly and reduce subjective variation in diagnosis.
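The "Circle Hough Transform" half of the ensemble is available directly in OpenCV. The sketch below runs it on a synthetic disc mask standing in for a U-Net segmentation; the zone-radius multipliers are illustrative assumptions, not the paper's calibration.

```python
import numpy as np
import cv2

# Synthetic optic-disc mask standing in for a segmentation network output.
mask = np.zeros((480, 640), np.uint8)
cv2.circle(mask, (320, 240), 30, 255, -1)

# Recover the disc circle (center + radius) with the Hough transform.
circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                           param1=100, param2=15, minRadius=15, maxRadius=60)
if circles is not None:
    cx, cy, r = circles[0, 0]
    # Concentric zone boundaries around the disc center; the x2 / x4
    # factors here are placeholders for a clinically calibrated rule.
    zones = {"zone_I": 2 * r, "zone_II": 4 * r}
    overlay = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
    for radius in zones.values():
        cv2.circle(overlay, (int(cx), int(cy)), int(radius), (0, 255, 0), 2)
```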
Collapse
Affiliation(s)
- Ranjana Agrawal
- School of Computer Engineering and Technology, Dr. Vishwanath Karad MIT World Peace University, Pune, India; Symbiosis Institute of Technology, Symbiosis International (Deemed) University, Pune, India
| | | | - Rahee Walambe
- Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis International (Deemed) University, Pune, India.
| | - Ketan Kotecha
- Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis International (Deemed) University, Pune, India.
| |
Collapse
|
34
|
Hu T, Zhang L, Xie L, Yi Z. A multi-instance networks with multiple views for classification of mammograms. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.02.070] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
|
35
|
Peng Y, Zhu W, Chen Z, Wang M, Geng L, Yu K, Zhou Y, Wang T, Xiang D, Chen F, Chen X. Automatic Staging for Retinopathy of Prematurity With Deep Feature Fusion and Ordinal Classification Strategy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1750-1762. [PMID: 33710954 DOI: 10.1109/tmi.2021.3065753] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Retinopathy of prematurity (ROP) is a retinal disease that frequently occurs in premature babies with low birth weight and is considered one of the major preventable causes of childhood blindness. Although automatic and semi-automatic diagnosis of ROP based on fundus images has been researched, most previous studies focused on plus disease detection and ROP screening. There are few studies focusing on ROP staging, which is important for evaluating the severity of the disease. To be consistent with clinical 5-level ROP staging, a novel and effective deep neural network-based 5-level ROP staging network is proposed, which consists of a multi-stream parallel feature extractor, a concatenation-based deep feature fuser, and a clinical-practice-based ordinal classifier. First, a three-stream parallel framework including ResNet18, DenseNet121, and EfficientNetB2 is proposed as the feature extractor, which can extract rich and diverse high-level features. Second, the features from the three streams are deeply fused by concatenation and convolution to generate a more effective and comprehensive feature. Finally, in the classification stage, an ordinal classification strategy is adopted, which can effectively improve ROP staging performance. The proposed ROP staging network was evaluated with per-image and per-examination strategies. For per-image ROP staging, the proposed method was evaluated on 635 retinal fundus images from 196 examinations, including 303 Normal, 26 Stage 1, 127 Stage 2, 106 Stage 3, 61 Stage 4, and 12 Stage 5, achieving 0.9055 for weighted recall, 0.9092 for weighted precision, 0.9043 for weighted F1 score, 0.9827 for accuracy within 1 stage (ACC1), and 0.9786 for Kappa, respectively. For per-examination ROP staging, 1173 examinations with a 4-fold cross-validation strategy were used to evaluate the effectiveness of the proposed method, demonstrating its validity and advantages.
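A common way to realize an ordinal classification strategy of this kind is to decompose a 5-level stage label into four cumulative binary targets, so that the model's errors respect the stage ordering. The sketch below shows that standard decomposition; the authors' exact formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A stage label y in {0..4} becomes four cumulative binary targets
# [y>0, y>1, y>2, y>3]; the head predicts each threshold separately.
num_levels = 5
head = nn.Linear(128, num_levels - 1)

feats = torch.randn(8, 128)                        # fused backbone features
y = torch.randint(0, num_levels, (8,))
targets = (y.unsqueeze(1) > torch.arange(num_levels - 1)).float()

logits = head(feats)
loss = F.binary_cross_entropy_with_logits(logits, targets)

# At inference, the stage is recovered by counting confident thresholds.
pred_stage = (torch.sigmoid(logits) > 0.5).sum(dim=1)   # back to 0..4
```

Compared with plain softmax over five classes, this decomposition penalizes a Stage 4 image predicted as Stage 1 more than one predicted as Stage 3, which matches the clinical ordering of severity.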
Collapse
|
36
|
Wang J, Ji J, Zhang M, Lin JW, Zhang G, Gong W, Cen LP, Lu Y, Huang X, Huang D, Li T, Ng TK, Pang CP. Automated Explainable Multidimensional Deep Learning Platform of Retinal Images for Retinopathy of Prematurity Screening. JAMA Netw Open 2021; 4:e218758. [PMID: 33950206 PMCID: PMC8100867 DOI: 10.1001/jamanetworkopen.2021.8758] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Accepted: 02/17/2021] [Indexed: 02/05/2023] Open
Abstract
Importance A retinopathy of prematurity (ROP) diagnosis currently relies on indirect ophthalmoscopy assessed by experienced ophthalmologists. A deep learning algorithm based on retinal images may facilitate early detection and timely treatment of ROP to improve visual outcomes. Objective To develop a retinal image-based, multidimensional, automated, deep learning platform for ROP screening and validate its performance accuracy. Design, Setting, and Participants A total of 14 108 eyes of 8652 preterm infants who received ROP screening from 4 centers from November 4, 2010, to November 14, 2019, were included, and a total of 52 249 retinal images were randomly split into training, validation, and test sets. Four main dimensional independent classifiers were developed, including image quality, any stage of ROP, intraocular hemorrhage, and preplus/plus disease. Referral-warranted ROP was automatically generated by integrating the results of the 4 classifiers at the image, eye, and patient levels. DeepSHAP, a method based on DeepLIFT and Shapley values (solution concepts in cooperative game theory), was adopted as the heat map technology to explain the predictions. The performance of the platform was further validated by comparison with that of experienced ROP experts. Data were analyzed from February 12, 2020, to June 24, 2020. Exposure A deep learning algorithm. Main Outcomes and Measures The performance of each classifier included true negative, false positive, false negative, true positive, F1 score, sensitivity, specificity, receiver operating characteristic, area under the curve (AUC), and Cohen unweighted κ. Results A total of 14 108 eyes of 8652 preterm infants (mean [SD] gestational age, 32.9 [3.1] weeks; 4818 boys [60.4%] of 7973 with known sex) received ROP screening. The performance of all classifiers achieved an F1 score of 0.718 to 0.981, a sensitivity of 0.918 to 0.982, a specificity of 0.949 to 0.992, and an AUC of 0.983 to 0.998, whereas that of the referral system achieved an F1 score of 0.898 to 0.956, a sensitivity of 0.981 to 0.986, a specificity of 0.939 to 0.974, and an AUC of 0.9901 to 0.9956. Fine-grained and class-discriminative heat maps were generated by DeepSHAP in real time. The platform achieved a Cohen unweighted κ of 0.86 to 0.98 compared with a Cohen κ of 0.93 to 0.98 by the ROP experts. Conclusions and Relevance In this diagnostic study, an automated ROP screening platform was able to identify and classify multidimensional pathologic lesions in the retinal images. This platform may be able to assist routine ROP screening in general and children's hospitals.
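DeepSHAP-style heat maps of the kind described can be produced with the open-source shap package. Below is a minimal sketch with a stand-in classifier; the platform's actual models and preprocessing are not public here, and the exact return shape of shap_values varies with the shap version.

```python
import torch
import torch.nn as nn
import shap  # pip install shap

# Stand-in classifier; in practice this would be the trained ROP model.
model = nn.Sequential(nn.Conv2d(3, 8, 3, 2, 1), nn.ReLU(), nn.Flatten(),
                      nn.Linear(8 * 32 * 32, 2)).eval()

background = torch.randn(16, 3, 64, 64)   # reference (baseline) images
test_images = torch.randn(2, 3, 64, 64)   # images to explain

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(test_images)
# shap_values holds per-class, per-pixel attributions (a list of arrays
# or a stacked array, depending on the shap version): a signed heat map
# of how strongly each region pushed the prediction toward that class.
```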
Collapse
Affiliation(s)
- Ji Wang
- Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Jie Ji
- Network and Information Center, Shantou University, Shantou, Guangdong, China
- XuanShi Med Tech (Shanghai) Company Limited, Shanghai, China
| | - Mingzhi Zhang
- Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Jian-Wei Lin
- Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Guihua Zhang
- Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Weifen Gong
- Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Ling-Ping Cen
- Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Yamei Lu
- Department of Ophthalmology, The Sixth Affiliated Hospital of Guangzhou Medical University, Qingyuan People’s Hospital, Qingyuan, Guangdong, China
| | - Xuelin Huang
- Department of Ophthalmology, Guangdong Women and Children Hospital, Guangzhou, Guangdong, China
| | - Dingguo Huang
- Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Taiping Li
- Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Tsz Kin Ng
- Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
- Shantou University Medical College, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| | - Chi Pui Pang
- Joint Shantou International Eye Center of Shantou University, the Chinese University of Hong Kong, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| |
Collapse
|
37
|
Hu J, Song Y, Zhang L, Bai S, Yi Z. Multi-scale attention U-net for segmenting clinical target volume in graves’ ophthalmopathy. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.11.028] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
38
|
Li T, Bo W, Hu C, Kang H, Liu H, Wang K, Fu H. Applications of deep learning in fundus images: A review. Med Image Anal 2021; 69:101971. [PMID: 33524824 DOI: 10.1016/j.media.2021.101971] [Citation(s) in RCA: 99] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Accepted: 01/12/2021] [Indexed: 02/06/2023]
Abstract
The use of fundus images for the early screening of eye diseases is of great clinical importance. Due to its powerful performance, deep learning is becoming more and more popular in related applications, such as lesion segmentation, biomarker segmentation, disease diagnosis, and image synthesis. It is therefore necessary to summarize recent developments in deep learning for fundus images in a review paper. In this review, we introduce 143 application papers with a carefully designed hierarchy. Moreover, 33 publicly available datasets are presented. Summaries and analyses are provided for each task. Finally, limitations common to all tasks are revealed and possible solutions are given. We will also release and regularly update the state-of-the-art results and newly released datasets at https://github.com/nkicsl/Fundus_Review to adapt to the rapid development of this field.
Collapse
Affiliation(s)
- Tao Li
- College of Computer Science, Nankai University, Tianjin 300350, China
| | - Wang Bo
- College of Computer Science, Nankai University, Tianjin 300350, China
| | - Chunyu Hu
- College of Computer Science, Nankai University, Tianjin 300350, China
| | - Hong Kang
- College of Computer Science, Nankai University, Tianjin 300350, China
| | - Hanruo Liu
- Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
| | - Kai Wang
- College of Computer Science, Nankai University, Tianjin 300350, China.
| | - Huazhu Fu
- Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE
| |
Collapse
|
39
|
Artificial Intelligence in Pediatrics. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_316-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
40
|
Liao S, Jin L, Dai W, Huang G, Pan W, Hu C, Pan W. A machine learning‐based risk scoring system for infertility considering different age groups. INT J INTELL SYST 2020. [DOI: 10.1002/int.22344] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- ShuJie Liao
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College Huazhong University of Science and Technology Wuhan Hubei China
| | - Lei Jin
- Department of Obstetrics and Gynecology, Tongji Hospital, Tongji Medical College Huazhong University of Science and Technology Wuhan Hubei China
| | - Wan‐Qiang Dai
- School of Economic and Management Wuhan University Wuhan China
| | - Ge Huang
- School of Economic and Management Wuhan University Wuhan China
| | - Wulin Pan
- School of Economic and Management Wuhan University Wuhan China
| | - Cheng Hu
- School of Economic and Management Wuhan University Wuhan China
| | - Wei Pan
- School of Applied Economics Renmin University of China Beijing China
| |
Collapse
|
41
|
Wang H, Zhang H, Hu J, Song Y, Bai S, Yi Z. DeepEC: An error correction framework for dose prediction and organ segmentation using deep neural networks. INT J INTELL SYST 2020. [DOI: 10.1002/int.22280] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Affiliation(s)
- Han Wang
- Machine Intelligence Laboratory, College of Computer Science Sichuan University Chengdu China
| | - Haixian Zhang
- Machine Intelligence Laboratory, College of Computer Science Sichuan University Chengdu China
| | - Junjie Hu
- Machine Intelligence Laboratory, College of Computer Science Sichuan University Chengdu China
| | - Ying Song
- Department of Radiation Oncology, West China Hospital Sichuan University Chengdu China
| | - Sen Bai
- Department of Radiation Oncology, West China Hospital Sichuan University Chengdu China
| | - Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science Sichuan University Chengdu China
| |
Collapse
|
42
|
Neural networks model based on an automated multi-scale method for mammogram classification. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.106465] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
43
|
Incorporating historical sub-optimal deep neural networks for dose prediction in radiotherapy. Med Image Anal 2020; 67:101886. [PMID: 33166773 DOI: 10.1016/j.media.2020.101886] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 10/21/2020] [Accepted: 10/22/2020] [Indexed: 02/07/2023]
Abstract
As the main treatment for cancer patients, radiotherapy has achieved enormous advancement over recent decades. However, these achievements have come at the cost of increased treatment plan complexity, necessitating high levels of expertise, experience, and effort. The accurate prediction of dose distribution would alleviate the above issues. Deep convolutional neural networks are known to be effective models for such prediction tasks. Most studies on dose prediction have attempted to modify the network architecture to accommodate the requirements of different diseases. In this paper, we focus on the input and output of the dose prediction model rather than the network architecture. Regarding the input, the non-modulated dose distribution, which is the initial quantity in the inverse optimization of the treatment plan, is used to provide auxiliary information for the prediction task. Regarding the output, a historical sub-optimal ensemble (HSE) method is proposed, which leverages the sub-optimal models encountered during the training phase to improve the prediction results. The proposed HSE is a general method that requires no modification of the learning algorithm and incurs no additional computational cost during the training phase. Multiple experiments, including dose prediction, segmentation, and classification tasks, demonstrate the effectiveness of the strategies applied to the input and output parts.
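The historical sub-optimal ensemble idea, reusing model states that arise anyway during training, can be sketched as checkpoint ensembling at inference. In the sketch below, the snapshot frequency and the toy regression task are assumptions; the paper's HSE has its own selection rule.

```python
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
snapshots, k = [], 5                      # keep a snapshot every k epochs

x, y = torch.randn(256, 10), torch.randn(256, 1)
for epoch in range(20):
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    if (epoch + 1) % k == 0:
        # Historical (possibly sub-optimal) model state, saved for free.
        snapshots.append(copy.deepcopy(model).eval())

# Inference: average the predictions of all saved snapshots.
with torch.no_grad():
    ensemble_pred = torch.stack([m(x) for m in snapshots]).mean(dim=0)
```

Because the snapshots come from training runs that would have happened anyway, the ensemble adds no training cost, only extra forward passes at inference.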
Collapse
|
44
|
Surrogate dropout: Learning optimal drop rate through proxy. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.106340] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
45
|
Deep Learning Models for Automated Diagnosis of Retinopathy of Prematurity in Preterm Infants. ELECTRONICS 2020. [DOI: 10.3390/electronics9091444] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Retinopathy of prematurity (ROP) is a disease that can cause blindness in premature infants. It is characterized by immature vascular growth of the retinal blood vessels. Early detection and treatment of ROP can significantly improve the visual acuity of high-risk patients; thus, early diagnosis of ROP is crucial in preventing visual impairment. However, many patients refrain from treatment owing to the lack of medical expertise available for diagnosing the disease; this is especially problematic considering that the number of ROP cases is on the rise. To this end, we applied transfer learning to five deep neural network architectures for identifying ROP in preterm infants. Our results showed that the VGG19 model outperformed the other models in determining whether a preterm infant has ROP, with 96% accuracy, 96.6% sensitivity, and 95.2% specificity. We also classified the severity of the disease: the VGG19 model showed 98.82% accuracy in predicting the severity of the disease, with a sensitivity and specificity of 100% and 98.41%, respectively. We performed 5-fold cross-validation on the datasets to validate the reliability of the VGG19 model and found that it exhibited high accuracy in predicting ROP. These findings could help promote the development of computer-aided diagnosis.
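The VGG19 transfer-learning recipe described above is close to the standard torchvision pattern sketched below. The freezing depth, head size, and hyperparameters are assumptions, and the pretrained ImageNet weights are downloaded on first use.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained VGG19, freeze the convolutional
# features, and replace the classifier head for binary ROP screening.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False               # keep the pretrained filters fixed

vgg.classifier[6] = nn.Linear(4096, 2)    # ROP vs. no ROP
optimizer = torch.optim.Adam(vgg.classifier.parameters(), lr=1e-4)

imgs = torch.randn(4, 3, 224, 224)        # normalized fundus crops
labels = torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(vgg(imgs), labels)
loss.backward()
optimizer.step()
```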
Collapse
|
46
|
Gao Z, Chen Y, Yi Z. A novel method to compute the weights of neural networks. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.03.114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
47
|
Cho WK, Choi SH. Comparison of Convolutional Neural Network Models for Determination of Vocal Fold Normality in Laryngoscopic Images. J Voice 2020; 36:590-598. [PMID: 32873430 DOI: 10.1016/j.jvoice.2020.08.003] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2020] [Revised: 08/04/2020] [Accepted: 08/04/2020] [Indexed: 01/02/2023]
Abstract
OBJECTIVES Deep learning using convolutional neural networks (CNNs) is widely used in medical imaging research. This study was performed to investigate whether vocal fold normality in laryngoscopic images can be determined by CNN-based deep learning, to compare the accuracy of CNN models, and to explore the feasibility of applying deep learning to laryngoscopy. METHODS Laryngoscopy videos were screen-captured and each image was cropped to include the abducted vocal fold region. A total of 2216 images (899 normal, 1317 abnormal) were allocated to training, validation, and test sets. Augmentation of the training set was used to train a constructed six-layer CNN model (CNN6) as well as VGG16, Inception V3, and Xception models. Trained models were applied to the test set; for each model, receiver operating characteristic curves and cutoff values were obtained. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were calculated. The best model was employed on video streams, and localization of features was attempted using Grad-CAM. RESULTS All of the trained models showed a high area under the receiver operating characteristic curve, and the most discriminative cutoff levels of the probability of normality were determined to be 35.6%, 61.8%, 13.5%, and 39.7% for the CNN6, VGG16, Inception V3, and Xception models, respectively. The accuracy of the CNN models in classifying normal and abnormal vocal folds in the test set was 82.3%, 99.7%, 99.1%, and 83.8%, respectively. CONCLUSION All four models showed acceptable diagnostic accuracy. The performance of VGG16 and Inception V3 was better than that of the simple CNN6 model and the recently published Xception model. Real-time classification with a combination of the VGG16 model, OpenCV, and Grad-CAM on a video stream showed the potential clinical applications of the deep learning model in laryngoscopy.
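Grad-CAM itself is a short, generic recipe: global-average-pool the gradients flowing into the last convolutional layer and use them to weight its activations. A minimal hook-based PyTorch sketch on a randomly initialized VGG16 follows; the layer index and min-max normalization are conventional choices, not the paper's exact code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights=None).eval()   # random weights; a sketch only
acts, grads = {}, {}
layer = model.features[28]                  # last conv layer of VGG16
layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

frame = torch.randn(1, 3, 224, 224)         # one normalized video frame
score = model(frame)[0].max()               # top-class logit
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)       # GAP of gradients
cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=frame.shape[2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # 0-1 heat map
```

For a live stream, the same computation runs per frame, with the resulting heat map alpha-blended over the frame (e.g., via OpenCV) before display.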
Collapse
Affiliation(s)
- Won Ki Cho
- Departments of Otorhinolaryngology-Head and Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
| | - Seung-Ho Choi
- Departments of Otorhinolaryngology-Head and Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea.
| |
Collapse
|
48
|
Huang YP, Basanta H, Kang EYC, Chen KJ, Hwang YS, Lai CC, Campbell JP, Chiang MF, Chan RVP, Kusaka S, Fukushima Y, Wu WC. Automated detection of early-stage ROP using a deep convolutional neural network. Br J Ophthalmol 2020; 105:1099-1103. [PMID: 32830123 DOI: 10.1136/bjophthalmol-2020-316526] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Revised: 06/21/2020] [Accepted: 07/28/2020] [Indexed: 12/14/2022]
Abstract
BACKGROUND/AIM To automatically detect and classify the early stages of retinopathy of prematurity (ROP) using a deep convolutional neural network (CNN). METHODS This retrospective cross-sectional study was conducted in a referral medical centre in Taiwan. Only premature infants with no ROP, stage 1 ROP, or stage 2 ROP were enrolled. Overall, 11 372 retinal fundus images were compiled and split into 10 235 images (90%) for training, 1137 (10%) for validation and 244 for testing. A deep CNN was implemented to classify images according to the ROP stage. Data were collected from December 17, 2013 to May 24, 2019 and analysed from December 2018 to January 2020. The metrics of sensitivity, specificity and area under the receiver operating characteristic curve were adopted to evaluate the performance of the algorithm relative to the reference standard diagnosis. RESULTS The model was trained using fivefold cross-validation, yielding an average accuracy of 99.93%±0.03 during training and 92.23%±1.39 during testing. The sensitivity and specificity scores of the model were 96.14%±0.87 and 95.95%±0.48, 91.82%±2.03 and 94.50%±0.71, and 89.81%±1.82 and 98.99%±0.40 when predicting no ROP versus ROP, stage 1 ROP versus no ROP and stage 2 ROP, and stage 2 ROP versus no ROP and stage 1 ROP, respectively. CONCLUSIONS The proposed system can accurately differentiate among the early stages of ROP and has the potential to help ophthalmologists classify ROP at an early stage.
Collapse
Affiliation(s)
- Yo-Ping Huang
- Department of Electrical Engineering, National Taipei University of Technology, Taipei, Taiwan; Department of Information and Communication Engineering, Chaoyang University of Technology, Taichung, Taiwan
| | - Haobijam Basanta
- Department of Electrical Engineering, National Taipei University of Technology, Taipei, Taiwan
| | - Eugene Yu-Chuan Kang
- Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - Kuan-Jen Chen
- Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - Yih-Shiou Hwang
- Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - Chi-Chun Lai
- Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - John P Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon, USA
| | - Michael F Chiang
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon, USA
| | - Robison Vernon Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, Chicago, Illinois, USA
| | - Shunji Kusaka
- Department of Ophthalmology, Kindai University, Osaka, Japan
| | - Yoko Fukushima
- Department of Ophthalmology, Osaka University, Osaka, Japan
| | - Wei-Chi Wu
- Department of Ophthalmology, Chang Gung Memorial Hospital, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
| |
Collapse
|
49
|
Tong Y, Lu W, Deng QQ, Chen C, Shen Y. Automated identification of retinopathy of prematurity by image-based deep learning. EYE AND VISION (LONDON, ENGLAND) 2020; 7:40. [PMID: 32766357 PMCID: PMC7395360 DOI: 10.1186/s40662-020-00206-2] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Accepted: 07/02/2020] [Indexed: 12/13/2022]
Abstract
BACKGROUND Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide but can be a treatable retinal disease with appropriate and timely diagnosis. This study was performed to develop a robust intelligent system based on deep learning to automatically classify the severity of ROP from fundus images and detect the stage of ROP and presence of plus disease to enable automated diagnosis and further treatment. METHODS A total of 36,231 fundus images were labeled by 13 licensed retinal experts. A 101-layer convolutional neural network (ResNet) and a faster region-based convolutional neural network (Faster-RCNN) were trained for image classification and identification. We applied a 10-fold cross-validation method to train and optimize our algorithms. The accuracy, sensitivity, and specificity were assessed in a four-degree classification task to evaluate the performance of the intelligent system. The performance of the system was compared with results obtained by two retinal experts. Moreover, the system was designed to detect the stage of ROP and presence of plus disease as well as to highlight lesion regions based on an object detection network using Faster-RCNN. RESULTS The system achieved an accuracy of 0.903 for the ROP severity classification. Specifically, the accuracies in discriminating normal, mild, semi-urgent, and urgent were 0.883, 0.900, 0.957, and 0.870, respectively; the corresponding accuracies of the two experts were 0.902 and 0.898. Furthermore, our model achieved an accuracy of 0.957 for detecting the stage of ROP and 0.896 for detecting plus disease; the accuracies in discriminating stage I to stage V were 0.876, 0.942, 0.968, 0.998, and 0.999, respectively. CONCLUSIONS Our system was able to detect ROP and differentiate fundus images across a four-level classification with high accuracy and specificity. The performance of the system was comparable to or better than that of human experts, demonstrating that this system could be used to support clinical decisions.
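For the lesion-highlighting half of such a pipeline, torchvision ships a ready-made Faster R-CNN. The sketch below uses the ResNet-50-FPN variant purely for illustration; the paper pairs a 101-layer ResNet classifier with its own Faster-RCNN configuration, which is not reproduced here.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Untrained detector with 3 classes (background + two lesion types, as an
# illustrative label scheme); in practice it would be fine-tuned on
# annotated fundus images.
detector = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                                   num_classes=3).eval()

fundus = [torch.rand(3, 480, 640)]        # one image, values in [0, 1]
with torch.no_grad():
    detections = detector(fundus)[0]
# detections["boxes"], detections["scores"], and detections["labels"]
# give candidate lesion regions to draw over the image, e.g. ridges
# or vessel changes suggestive of plus disease.
```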
Collapse
Affiliation(s)
- Yan Tong
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
| | - Wei Lu
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
| | - Qin-qin Deng
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
| | - Changzheng Chen
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
| | - Yin Shen
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
- Medical Research Institute, Wuhan University, Wuhan, Hubei China
| |
Collapse
|
50
|
Automated detection of kidney abnormalities using multi-feature fusion convolutional neural networks. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.105873] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|