101
Choudhary RA, Hashmi S, Tayyab H. Smartphone-based fundus imaging for evaluation of Retinopathy of Prematurity in a low-income country: A pilot study. Pak J Med Sci 2023; 39:638-643. PMID: 37250571; PMCID: PMC10214799; DOI: 10.12669/pjms.39.3.7053.
Abstract
Objectives To evaluate the feasibility of a novel and simple smartphone-based Retinopathy of Prematurity (ROP) screening approach in a resource-constrained setting. Methods This cross-sectional validation study was conducted at the Department of Ophthalmology and Neonatal Intensive Care Unit (NICU) of The Aga Khan University Hospital, Pakistan, from January 2022 to April 2022. A total of 63 images of eyes with active ROP (stage 1, 2, 3, or 4 and/or plus or pre-plus disease) were included in this study. The stage of ROP was documented by the principal investigator using an indirect ophthalmoscope, and retinal images were obtained using the novel technique. These images were shared with two masked ROP experts, who rated the image quality and determined the stage of ROP and the presence of plus disease. Their reports were compared with the initial findings reported by the principal investigator using the indirect ophthalmoscope. Results We reviewed 63 images for image quality, stage of ROP, and presence of plus disease. There was significant agreement between the gold standard and Raters 1 and 2 for the presence of plus disease (Cohen's kappa 0.84 and 1.0) and the stage of the disease (Cohen's kappa 0.65 and 1.0). There was also significant agreement between the raters for the presence of plus disease and any stage of ROP (Cohen's κ 0.84 and 0.65, respectively). Raters 1 and 2 rated 96.83% and 98.41% of images, respectively, as excellent or acceptable. Conclusions High-quality retinal images can be captured with a smartphone and a 28 D lens without any additional adapter equipment. This approach to ROP screening can form the basis of telemedicine for ROP in resource-constrained areas.
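To make the agreement statistic above concrete, here is a minimal, hypothetical Python sketch (not the study's code) of how Cohen's kappa between the indirect-ophthalmoscopy reference and a rater's smartphone-image grading could be computed with scikit-learn; the labels are invented for illustration.

```python
# Minimal sketch (not the authors' code): Cohen's kappa between a reference
# grading and a rater's grading. The example labels are hypothetical.
from sklearn.metrics import cohen_kappa_score

# 1 = plus disease present, 0 = absent, for ten hypothetical eyes
reference = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
rater_1   = [1, 0, 0, 1, 1, 0, 1, 1, 0, 0]

kappa = cohen_kappa_score(reference, rater_1)
print(f"Cohen's kappa (plus disease, Rater-1 vs reference): {kappa:.2f}")
```

A kappa near 1.0 indicates near-perfect agreement, matching the values reported for Rater 2 above.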
Affiliation(s)
- Roha Ahmad Choudhary, MBBS: Department of Ophthalmology and Visual Sciences, The Aga Khan University Hospital, Stadium Road, Karachi, Pakistan
- Shiraz Hashmi, MBBS, MPH: Department of Ophthalmology and Visual Sciences, The Aga Khan University Hospital, Stadium Road, Karachi, Pakistan
- Haroon Tayyab, MBBS, FCPS (Ophth), FCPS (VRO), FRCS (Glasg), FACS: Department of Ophthalmology and Visual Sciences, The Aga Khan University Hospital, Stadium Road, Karachi, Pakistan
102
Dolar-Szczasny J, Barańska A, Rejdak R. Evaluating the Efficacy of Teleophthalmology in Delivering Ophthalmic Care to Underserved Populations: A Literature Review. J Clin Med 2023; 12:3161. PMID: 37176602; PMCID: PMC10179149; DOI: 10.3390/jcm12093161.
Abstract
Technological advancement has brought commendable changes in medicine, advancing diagnosis, treatment, and interventions. Telemedicine has been adopted by various subspecialties, including ophthalmology. Over the years, teleophthalmology has been implemented in various countries, and continuous progress is being made in this area. In underserved populations, due to socioeconomic factors, there is little or no access to healthcare facilities, and people are at higher risk of eye diseases and vision impairment. Transportation is the major hurdle these people face in obtaining access to eye care in the main hospitals. There is a dire need for accessible eye care for such populations, and teleophthalmology is a ray of hope for providing eye care facilities to underserved people. Numerous studies have reported the advantages of teleophthalmology for rural populations, such as being cost-effective, time-saving, reliable, efficient, and satisfactory for patients. Although it is also practiced in urban populations, its benefits are amplified for rural populations. However, there are certain obstacles as well, such as the cost of equipment, lack of steady electricity and internet supply in rural areas, and the attitude of people in certain regions toward acceptance of teleophthalmology. In this review, we discuss in detail eye health in rural populations, teleophthalmology, and its effectiveness in the rural populations of different countries.
Affiliation(s)
- Joanna Dolar-Szczasny: Chair and Department of General and Pediatric Ophthalmology, Medical University of Lublin, 20-079 Lublin, Poland
- Agnieszka Barańska: Department of Medical Informatics and Statistics with E-Learning Laboratory, Medical University of Lublin, 20-090 Lublin, Poland
- Robert Rejdak: Chair and Department of General and Pediatric Ophthalmology, Medical University of Lublin, 20-079 Lublin, Poland
103
Wu CT, Lin TY, Lin CJ, Hwang DK. The future application of artificial intelligence and telemedicine in the retina: A perspective. Taiwan J Ophthalmol 2023; 13:133-141. PMID: 37484624; PMCID: PMC10361422; DOI: 10.4103/tjo.tjo-d-23-00028.
Abstract
The development of artificial intelligence (AI) and deep learning has enabled precise image recognition and classification in the medical field. Ophthalmology is exceptionally well placed to translate AI applications, since noninvasive imaging is routinely used for diagnosis and monitoring. In recent years, AI-based interpretation of optical coherence tomography and fundus photographs has been extended to retinal diseases including diabetic retinopathy, age-related macular degeneration, and retinopathy of prematurity. The rapid development of portable ocular monitoring devices, coupled with AI-informed interpretation, makes home or remote monitoring of retinal diseases possible and allows patients to gain autonomy and responsibility for their conditions. This review discusses the current research and application of AI, telemedicine, and home monitoring devices in retinal disease. Furthermore, we propose a future model of how AI and digital technology could be implemented in retinal diseases.
Affiliation(s)
- Chu-Ting Wu: Department of Medical Education, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Medicine, School of Medicine, Tzu Chi University, Hualien, Taiwan
- Ting-Yi Lin: Doctoral Degree Program of Translational Medicine, National Yang Ming Chiao Tung University and Academia Sinica, Taipei, Taiwan
- Cheng-Jun Lin: Department of Biological Science and Technology, Institute of Biological Science and Technology, National Yang Ming Chiao Tung University, Hsinchu, Taiwan; Institute of Population Health Sciences, National Health Research Institutes, Zhunan, Miaoli County, Taiwan
- De-Kuang Hwang: Department of Ophthalmology, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Medicine, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
104
Ji Y, Ji Y, Liu Y, Zhao Y, Zhang L. Research progress on diagnosing retinal vascular diseases based on artificial intelligence and fundus images. Front Cell Dev Biol 2023; 11:1168327. PMID: 37056999; PMCID: PMC10086262; DOI: 10.3389/fcell.2023.1168327.
Abstract
Retinal vessels are the only blood vessels in the body that can be observed directly. Pathological changes in retinal vessels are related to the metabolic state of the whole body and many organ systems, and they seriously affect the vision and quality of life of patients. Timely diagnosis and treatment are key to improving visual prognosis. In recent years, with the rapid development of artificial intelligence, the application of artificial intelligence in ophthalmology has become increasingly extensive and in-depth, especially in the field of retinal vascular diseases. Research results based on artificial intelligence and fundus images are remarkable and provide a great possibility for early diagnosis and treatment. This paper reviews recent research progress on artificial intelligence in retinal vascular diseases (including diabetic retinopathy, hypertensive retinopathy, retinal vein occlusion, retinopathy of prematurity, and age-related macular degeneration). The limitations and challenges of the research process are also discussed.
Affiliation(s)
- Yuke Ji: The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Yun Ji: Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
- Yunfang Liu: Department of Ophthalmology, The First People’s Hospital of Huzhou, Huzhou, Zhejiang, China
- Ying Zhao: Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China (corresponding author)
- Liya Zhang: Department of Ophthalmology, The First People’s Hospital of Huzhou, Huzhou, Zhejiang, China (corresponding author)
105
Artificial intelligence using deep learning to predict the anatomical outcome of rhegmatogenous retinal detachment surgery: a pilot study. Graefes Arch Clin Exp Ophthalmol 2023; 261:715-721. PMID: 36303063; DOI: 10.1007/s00417-022-05884-3.
Abstract
PURPOSE To develop and evaluate an automated deep learning model to predict the anatomical outcome of rhegmatogenous retinal detachment (RRD) surgery. METHODS Six thousand six hundred and sixty-one digital images of RRD treated by vitrectomy and internal tamponade were collected from the British and Eire Association of Vitreoretinal Surgeons database. Each image was classified as a primary surgical success or a primary surgical failure. The synthetic minority over-sampling technique (SMOTE) was used to address class imbalance. We adopted the state-of-the-art deep convolutional neural network architecture Inception v3 to train, validate, and test deep learning models to predict the anatomical outcome of RRD surgery. The area under the curve (AUC), sensitivity, and specificity for predicting the outcome of RRD surgery were calculated for the best predictive deep learning model. RESULTS The deep learning model was able to predict the anatomical outcome of RRD surgery with an AUC of 0.94, with a corresponding sensitivity of 73.3% and a specificity of 96%. CONCLUSION A deep learning model is capable of accurately predicting the anatomical outcome of RRD surgery. This fully automated model has potential application in the surgical care of patients with RRD.
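As a rough illustration of the kind of pipeline described (class rebalancing with SMOTE plus an Inception v3 backbone), the following hedged Python sketch extracts Inception v3 features, oversamples the minority class, and reports an AUC. The data, split, and classifier head are placeholders and not the authors' implementation.

```python
# Minimal sketch (not the authors' pipeline): Inception v3 features + SMOTE
# for class imbalance + a simple classifier, evaluated by AUC.
import numpy as np
import tensorflow as tf
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Pretrained Inception v3 as a fixed feature extractor (299x299 RGB input).
backbone = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 299, 299, 3) with values in [0, 255]."""
    x = tf.keras.applications.inception_v3.preprocess_input(images)
    return backbone.predict(x, verbose=0)

# Hypothetical data: replace with real fundus images and success/failure labels.
images = np.random.rand(64, 299, 299, 3) * 255.0
labels = np.random.randint(0, 2, size=64)          # 1 = primary success

features = extract_features(images)
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, stratify=labels, random_state=0)

# SMOTE balances the minority class in feature space before training.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)
clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```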
106
Joseph N, Benetz BA, Chirra P, Menegay H, Oellerich S, Baydoun L, Melles GRJ, Lass JH, Wilson DL. Machine Learning Analysis of Postkeratoplasty Endothelial Cell Images for the Prediction of Future Graft Rejection. Transl Vis Sci Technol 2023; 12:22. PMID: 36790821; PMCID: PMC9940770; DOI: 10.1167/tvst.12.2.22.
Abstract
Purpose This study developed machine learning (ML) classifiers of postoperative corneal endothelial cell images to identify postkeratoplasty patients at risk for allograft rejection within 1 to 24 months of treatment. Methods Central corneal endothelium specular microscopic images were obtained from 44 patients after Descemet membrane endothelial keratoplasty (DMEK), half of whom had experienced graft rejection. After deep learning segmentation of images from all patients' last and second-to-last imaging time points prior to rejection (175 and 168 images, respectively), 432 quantitative features were extracted assessing cellular spatial arrangements and cell intensity values. Random forest (RF) and logistic regression (LR) models were trained on novel-to-this-application features from single time points, delta-radiomics, and traditional morphometrics (endothelial cell density, coefficient of variation, hexagonality) via 10 iterations of threefold cross-validation. Final assessments were evaluated on a held-out test set. Results ML classifiers trained on novel-to-this-application features outperformed those trained on traditional morphometrics for predicting future graft rejection. RF and LR models predicted post-DMEK patients' allograft rejection in the held-out test set with >0.80 accuracy. RF models trained on novel features from second-to-last time points and delta-radiomics predicted post-DMEK patients' rejection with >0.70 accuracy. Cell-graph spatial arrangement, intensity, and shape features were most indicative of graft rejection. Conclusions ML classifiers successfully predicted future graft rejections 1 to 24 months prior to clinically apparent rejection. This technology could aid clinicians in identifying patients at risk for graft rejection and guide treatment plans accordingly. Translational Relevance Our software applies ML techniques to clinical images and enhances patient care by detecting preclinical keratoplasty rejection.
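The following is a minimal Python sketch, under stated assumptions, of the evaluation scheme named above: random forest and logistic regression trained on a tabular feature matrix with 10 repeats of threefold cross-validation. The synthetic matrix only mirrors the reported dimensions (44 patients, 432 features) and is not the study data.

```python
# Minimal sketch (assumptions, not the authors' code): RF and LR classifiers
# on tabular image features with repeated threefold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(44, 432))           # 44 patients x 432 quantitative features
y = rng.integers(0, 2, size=44)          # 1 = later graft rejection (synthetic)

cv = RepeatedStratifiedKFold(n_splits=3, n_repeats=10, random_state=0)
models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {acc.mean():.2f}")
```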
Affiliation(s)
- Naomi Joseph: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Beth Ann Benetz: Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA; Cornea Image Analysis Reading Center, Cleveland, OH, USA
- Prathyush Chirra: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Harry Menegay: Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA; Cornea Image Analysis Reading Center, Cleveland, OH, USA
- Silke Oellerich: Netherlands Institute for Innovative Ocular Surgery (NIIOS), Rotterdam, The Netherlands
- Lamis Baydoun: Netherlands Institute for Innovative Ocular Surgery (NIIOS), Rotterdam, The Netherlands; University Eye Hospital Münster, Münster, Germany; ELZA Institute Dietikon/Zurich, Zurich, Switzerland
- Gerrit R. J. Melles: Netherlands Institute for Innovative Ocular Surgery (NIIOS), Rotterdam, The Netherlands; NIIOS-USA, San Diego, CA, USA
- Jonathan H. Lass: Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA; Cornea Image Analysis Reading Center, Cleveland, OH, USA
- David L. Wilson: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
107
Xie H, Liu Y, Lei H, Song T, Yue G, Du Y, Wang T, Zhang G, Lei B. Adversarial learning-based multi-level dense-transmission knowledge distillation for AP-ROP detection. Med Image Anal 2023; 84:102725. PMID: 36527770; DOI: 10.1016/j.media.2022.102725.
Abstract
Aggressive posterior retinopathy of prematurity (AP-ROP) is the major cause of blindness in premature infants. Automatic diagnosis has become an important tool for detecting AP-ROP. However, most existing automatic diagnosis methods are computationally heavy, which hinders the development of detection devices. Hence, a small network (student network) with a high imitation ability is needed, one that can mimic a large network (teacher network) with promising diagnostic performance. However, if the student network is too small, the increasing gap between the teacher and student networks causes diagnostic performance to drop. To tackle these issues, we propose a novel adversarial learning-based multi-level dense knowledge distillation method for detecting AP-ROP. Specifically, the pre-trained teacher network is utilized to train multiple intermediate-size networks (i.e., teacher-assistant networks) and one student network in a dense transmission mode, where the knowledge from all upper-level networks is transmitted to the current lower-level network. To ensure that two adjacent networks can distill abundant knowledge, an adversarial learning module is leveraged to enforce the lower-level network to generate features that are similar to those of the upper-level network. Extensive experiments demonstrate that our proposed method can realize effective knowledge distillation from the teacher to the student networks. We achieve promising knowledge distillation performance on our private dataset and a public dataset, which can provide new insight for devising lightweight detection systems for fundus diseases in practical use.
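For readers unfamiliar with knowledge distillation, the sketch below shows the basic soft-label teacher-student setup in PyTorch. It is an illustrative simplification that omits the paper's teacher-assistant chain, dense transmission, and adversarial module; the networks, hyperparameters, and batch are stand-ins.

```python
# Minimal sketch (an assumption-laden illustration, not the paper's method):
# soft-label knowledge distillation from a larger teacher CNN to a smaller
# student CNN.
import torch
import torch.nn.functional as F
from torchvision import models

teacher = models.resnet50(weights=None, num_classes=2).eval()   # stand-in teacher
student = models.resnet18(weights=None, num_classes=2)          # smaller student
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
T, alpha = 4.0, 0.7                                             # temperature, KD weight

def kd_step(images, labels):
    """One distillation step on a batch of fundus images."""
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    # Soft-target loss: KL divergence between temperature-softened distributions.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean") * T * T
    ce_loss = F.cross_entropy(student_logits, labels)
    loss = alpha * kd_loss + (1 - alpha) * ce_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical batch of 4 RGB fundus images, 224x224.
print(kd_step(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))))
```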
Affiliation(s)
- Hai Xie: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Yaling Liu: Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Haijun Lei: Guangdong Province Key Laboratory of Popular High-performance Computers, School of Computer and Software Engineering, Shenzhen University, Shenzhen, China
- Tiancheng Song: Shenzhen Silan Zhichuang Technology Co., Ltd., Shenzhen, China
- Guanghui Yue: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Yueshanyi Du: Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Tianfu Wang: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Guoming Zhang: Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Baiying Lei: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
108
Zhang J, Zou H. Artificial intelligence technology for myopia challenges: A review. Front Cell Dev Biol 2023; 11:1124005. PMID: 36733459; PMCID: PMC9887165; DOI: 10.3389/fcell.2023.1124005.
Abstract
Myopia is a significant global health concern and affects human visual function, resulting in blurred vision at a distance. There are still many unsolved challenges in this field that require the help of new technologies. Currently, artificial intelligence (AI) technology is dominating medical image and data analysis and has been introduced to address challenges in the clinical practice of many ocular diseases. AI research in myopia is still in its early stages. Understanding the strengths and limitations of each AI method in specific tasks of myopia could be of great value and might help us to choose appropriate approaches for different tasks. This article reviews and elaborates on the technical details of AI methods applied for myopia risk prediction, screening and diagnosis, pathogenesis, and treatment.
Affiliation(s)
- Juzhao Zhang: Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Haidong Zou: Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Shanghai Eye Diseases Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; National Clinical Research Center for Eye Diseases, Shanghai, China; Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China (corresponding author)
109
GabROP: Gabor Wavelets-Based CAD for Retinopathy of Prematurity Diagnosis via Convolutional Neural Networks. Diagnostics (Basel) 2023; 13:171. PMID: 36672981; PMCID: PMC9857608; DOI: 10.3390/diagnostics13020171.
Abstract
One of the most serious and dangerous ocular problems in premature infants is retinopathy of prematurity (ROP), a proliferative vascular disease. Ophthalmologists can use automatic computer-aided diagnostic (CAD) tools to help them make a safe, accurate, and low-cost diagnosis of ROP. All previous CAD tools for ROP diagnosis use the original fundus images. Unfortunately, learning a discriminative representation from ROP-related fundus images is difficult. Textural analysis techniques, such as Gabor wavelets (GW), can reveal significant texture information that can help artificial intelligence (AI)-based models improve diagnostic accuracy. In this paper, an effective and automated CAD tool, namely GabROP, based on GW and multiple deep learning (DL) models is proposed. Initially, GabROP analyzes fundus images using GW and generates several sets of GW images. Next, these sets of images are used to train three convolutional neural network (CNN) models independently. Additionally, the original fundus images are used to train these networks. Using the discrete wavelet transform (DWT), texture features retrieved from every CNN trained with the various sets of GW images are combined to create a textural-spectral-temporal representation. Afterward, for each CNN, these features are concatenated with spatial deep features obtained from the original fundus images. Finally, the concatenated features of all three CNNs are fused using the discrete cosine transform (DCT) to reduce the feature dimensionality caused by the fusion process. The results of GabROP show that it is accurate and efficient for ophthalmologists. Additionally, the effectiveness of GabROP is compared with recently developed ROP diagnostic techniques. Due to GabROP's superior performance compared to competing tools, ophthalmologists may be able to identify ROP more reliably and precisely, which could result in a reduction in diagnostic effort and examination time.
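A hedged sketch of the Gabor-wavelet preprocessing idea (not the GabROP implementation): build a small bank of Gabor kernels with OpenCV, filter a fundus image, and stack the responses as extra channels for a CNN. The image path is a hypothetical placeholder.

```python
# Minimal sketch (illustrative only): a Gabor filter bank applied to a fundus
# image as a texture-preprocessing step, similar in spirit to GabROP.
import cv2
import numpy as np

def gabor_bank(ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, n_orientations=4):
    """Return Gabor kernels at evenly spaced orientations."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    return [cv2.getGaborKernel((ksize, ksize), sigma, t, lambd, gamma)
            for t in thetas]

# Load a fundus image in grayscale (replace the path with a real image).
image = cv2.imread("fundus_example.png", cv2.IMREAD_GRAYSCALE)
if image is None:                  # fall back to synthetic data so the sketch runs
    image = (np.random.rand(224, 224) * 255).astype(np.uint8)

responses = [cv2.filter2D(image, cv2.CV_32F, k) for k in gabor_bank()]
stacked = np.stack(responses, axis=-1)   # one channel per orientation, CNN-ready
print("Gabor response stack shape:", stacked.shape)
```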
110
Eilts SK, Pfeil JM, Poschkamp B, Krohne TU, Eter N, Barth T, Guthoff R, Lagrèze W, Grundel M, Bründer MC, Busch M, Kalpathy-Cramer J, Chiang MF, Chan RVP, Coyner AS, Ostmo S, Campbell JP, Stahl A. Assessment of Retinopathy of Prematurity Regression and Reactivation Using an Artificial Intelligence-Based Vascular Severity Score. JAMA Netw Open 2023; 6:e2251512. PMID: 36656578; PMCID: PMC9857423; DOI: 10.1001/jamanetworkopen.2022.51512.
Abstract
IMPORTANCE One of the biggest challenges when using anti-vascular endothelial growth factor (VEGF) agents to treat retinopathy of prematurity (ROP) is the need to perform long-term follow-up examinations to identify eyes at risk of ROP reactivation requiring retreatment. OBJECTIVE To evaluate whether an artificial intelligence (AI)-based vascular severity score (VSS) can be used to analyze ROP regression and reactivation after anti-VEGF treatment and potentially identify eyes at risk of ROP reactivation requiring retreatment. DESIGN, SETTING, AND PARTICIPANTS This prognostic study was a secondary analysis of posterior pole fundus images collected during the multicenter, double-blind, investigator-initiated Comparing Alternative Ranibizumab Dosages for Safety and Efficacy in Retinopathy of Prematurity (CARE-ROP) randomized clinical trial, which compared 2 different doses of ranibizumab (0.12 mg vs 0.20 mg) for the treatment of ROP. The CARE-ROP trial screened and enrolled infants between September 5, 2014, and July 14, 2016. A total of 1046 wide-angle fundus images obtained from 19 infants at predefined study time points were analyzed. The analyses of VSS were performed between January 20, 2021, and November 18, 2022. INTERVENTIONS An AI-based algorithm assigned a VSS between 1 (normal) and 9 (most severe) to fundus images. MAIN OUTCOMES AND MEASURES Analysis of VSS in infants with ROP over time and VSS comparisons between the 2 treatment groups (0.12 mg vs 0.20 mg of ranibizumab) and between infants who did and did not receive retreatment for ROP reactivation. RESULTS Among 19 infants with ROP in the CARE-ROP randomized clinical trial, the median (range) postmenstrual age at first treatment was 36.4 (34.7-39.7) weeks; 10 infants (52.6%) were male, and 18 (94.7%) were White. The mean (SD) VSS was 6.7 (1.9) at baseline and significantly decreased to 2.7 (1.9) at week 1 (P < .001) and 2.9 (1.3) at week 4 (P < .001). The mean (SD) VSS of infants with ROP reactivation requiring retreatment was 6.5 (1.9) at the time of retreatment, which was significantly higher than the VSS at week 4 (P < .001). No significant difference was found in VSS between the 2 treatment groups, but the change in VSS between baseline and week 1 was higher for infants who later required retreatment (mean [SD], 7.8 [1.3] at baseline vs 1.7 [0.7] at week 1) vs infants who did not (mean [SD], 6.4 [1.9] at baseline vs 3.0 [2.0] at week 1). In eyes requiring retreatment, higher baseline VSS was correlated with earlier time of retreatment (Pearson r = -0.9997; P < .001). CONCLUSIONS AND RELEVANCE In this study, VSS decreased after ranibizumab treatment, consistent with clinical disease regression. In cases of ROP reactivation requiring retreatment, VSS increased again to values comparable with baseline values. In addition, a greater change in VSS during the first week after initial treatment was found to be associated with a higher risk of later ROP reactivation, and high baseline VSS was correlated with earlier retreatment. These findings may have implications for monitoring ROP regression and reactivation after anti-VEGF treatment.
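The correlation reported between baseline VSS and time to retreatment can be reproduced in form (not in data) with a short Python sketch using SciPy; the numbers below are invented for illustration only.

```python
# Minimal sketch (hypothetical numbers, not trial data): correlating a baseline
# vascular severity score (VSS) with time to retreatment, analogous to the
# Pearson analysis described above.
import numpy as np
from scipy.stats import pearsonr

baseline_vss = np.array([8.9, 8.1, 7.4, 6.3, 5.8])          # hypothetical eyes
weeks_to_retreatment = np.array([6.0, 8.5, 10.0, 13.0, 15.5])

r, p = pearsonr(baseline_vss, weeks_to_retreatment)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")   # a strongly negative r would mean
                                             # higher baseline VSS -> earlier retreatment
```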
Affiliation(s)
- Sonja K. Eilts: Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
- Johanna M. Pfeil: Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
- Broder Poschkamp: Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
- Tim U. Krohne: Department of Ophthalmology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Nicole Eter: Department of Ophthalmology, University of Muenster Medical Center, Muenster, Germany
- Teresa Barth: Department of Ophthalmology, University of Regensburg, Regensburg, Germany
- Rainer Guthoff: Department of Ophthalmology, Faculty of Medicine, University of Düsseldorf, Düsseldorf, Germany
- Wolf Lagrèze: Eye Center, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Milena Grundel: Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
- Martin Busch: Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
- Jayashree Kalpathy-Cramer: Center for Clinical Data Science, Massachusetts General Hospital, Brigham and Women’s Hospital, Boston
- Michael F. Chiang: National Eye Institute, National Institutes of Health, Bethesda, Maryland; National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- R. V. Paul Chan: Department of Ophthalmology, University of Illinois Chicago, Chicago
- Aaron S. Coyner: Casey Eye Institute, Oregon Health & Science University, Portland
- Susan Ostmo: Casey Eye Institute, Oregon Health & Science University, Portland
- Andreas Stahl: Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
111
Liu G, Jiang A, Cao L, Ling S, Wang X, Bu S, Lu F. Optic disc and retinal vascular features in first 6 years of Chinese children. Front Pediatr 2023; 11:1101768. PMID: 37033190; PMCID: PMC10077150; DOI: 10.3389/fped.2023.1101768.
Abstract
Purpose Retinal microvasculature plays an important role in children's fundus lesions and even in their later life. However, little is known about the features of the normal retina in early life. The purpose of this study was to explore normal retinal features in the first 6 years of life and provide information for future research. Methods Children aged from birth to 6 years and diagnosed with various unilateral ocular diseases were included. Venous-phase fundus fluorescein angiography images with the optic disc at the center were collected. Based on the ResUNet convolutional neural network, optic disc and retinal vascular features in the posterior retina were computed automatically. Results A total of 146 normal eyes of 146 children were included. Among different age groups, no changes were shown in the optic disc diameter (y = -0.00002x + 1.362, R2 = 0.025, p = 0.058). Retinal vessel density and fractal dimension were linearly and strongly correlated (r = 0.979, p < 0.001). Older children had smaller fractal dimension values (y = -0.000026x + 1.549, R2 = 0.075, p = 0.001) and narrower vascular caliber if they were less than 3 years old (y = -0.008x + 84.861, R2 = 0.205, p < 0.001). No differences were found in the density (y = -0.000007x + 0.134, R2 = 0.023, p = 0.067) or the curvature of retinal vessels (lnC = -0.00001x - 4.657, R2 = 0.001, p = 0.667). Conclusions Age and gender did not significantly affect optic disc diameter, vessel density, or vessel curvature in this group of children. Trends of decreasing vessel caliber in the first 3 years of life and decreasing vessel complexity with age were observed. These structural characteristics provide information for future research to better understand the developmental origins of the healthy and diseased retina.
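The age trends quoted above are simple linear fits of the form y = ax + b; the short Python sketch below, with synthetic values, shows how such a fit and its R2 and p value could be obtained.

```python
# Minimal sketch (synthetic values, not the study data): fitting a linear trend
# of a retinal feature against age in days, mirroring the y = ax + b fits above.
import numpy as np
from scipy.stats import linregress

age_days = np.array([30, 200, 400, 800, 1200, 1600, 2000])      # hypothetical ages
fractal_dimension = np.array([1.552, 1.549, 1.545, 1.538, 1.530, 1.524, 1.519])

fit = linregress(age_days, fractal_dimension)
print(f"slope = {fit.slope:.6f}, intercept = {fit.intercept:.3f}, "
      f"R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.3g}")
```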
Affiliation(s)
- Guina Liu: Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Anna Jiang: Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Le Cao: Department of Neurology, West China Hospital, Sichuan University, Chengdu, China
- Saiguang Ling: EVision Technology (Beijing) Co. LTD, Beijing, China
- Xi Wang: EVision Technology (Beijing) Co. LTD, Beijing, China
- Shaochong Bu: Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China (corresponding author)
- Fang Lu: Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China (corresponding author)
112
Beam KS, Zupancic JAF. Machine learning: remember the fundamentals. Pediatr Res 2023; 93:291-292. PMID: 36550355; DOI: 10.1038/s41390-022-02420-1.
Affiliation(s)
- Kristyn S Beam: Department of Neonatology, Beth Israel Deaconess Medical Center, Boston, MA, USA; Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- John A F Zupancic: Department of Neonatology, Beth Israel Deaconess Medical Center, Boston, MA, USA; Department of Pediatrics, Harvard Medical School, Boston, MA, USA
113
Bujoreanu Bezman L, Tiutiuca C, Totolici G, Carneciu N, Bujoreanu FC, Ciortea DA, Niculet E, Fulga A, Alexandru AM, Stan DJ, Nechita A. Latest Trends in Retinopathy of Prematurity: Research on Risk Factors, Diagnostic Methods and Therapies. Int J Gen Med 2023; 16:937-949. PMID: 36942030; PMCID: PMC10024537; DOI: 10.2147/ijgm.s401122.
Abstract
Retinopathy of prematurity (ROP) is a vasoproliferative disorder with an imminent risk of blindness in cases where early diagnosis and treatment are not performed. Doctors' constant motivation to give these fragile beings a chance at life with optimal visual acuity has never stopped since Terry first described this condition. Thus, throughout time, several specific advancements have been made in the management of ROP. Apart from the best-known risk factors, this narrative review brings to light the latest research on new potential risk factors, such as proteinuria, insulin-like growth factor 1 (IGF-1), and blood transfusions. Digital imaging has revolutionized the management of retinal pathologies, and it is increasingly used in identifying and staging ROP, particularly in disadvantaged regions by means of telescreening. Moreover, optical coherence tomography (OCT) and automated diagnostic tools based on deep learning offer new perspectives on ROP diagnosis. The new therapeutic trend based on the use of anti-VEGF agents is increasingly applied in the treatment of ROP patients, and recent research supports the theory that these agents do not interfere with the neurodevelopment of premature babies.
Affiliation(s)
- Laura Bujoreanu Bezman: Department of Ophthalmology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania; Department of Morphological and Functional Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Carmen Tiutiuca: Department of Ophthalmology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania; Clinical Surgical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania (corresponding author)
- Geanina Totolici: Department of Ophthalmology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania; Clinical Surgical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Nicoleta Carneciu: Department of Ophthalmology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania; Department of Morphological and Functional Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Florin Ciprian Bujoreanu: Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania (corresponding author)
- Diana Andreea Ciortea: Department of Pediatrics, “Sfantul Ioan” Emergency Clinical Hospital for Children, Galati, Romania; Clinical Medical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Elena Niculet: Department of Morphological and Functional Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania; Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Ana Fulga: Clinical Surgical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania; Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Anamaria Madalina Alexandru: Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania; Department of Neonatology, “Sfantul Apostol Andrei” Emergency Clinical Hospital, Galati, Romania
- Daniela Jicman Stan: Doctoral School of Biomedical Sciences, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
- Aurel Nechita: Department of Pediatrics, “Sfantul Ioan” Emergency Clinical Hospital for Children, Galati, Romania; Clinical Medical Department, Faculty of Medicine and Pharmacy, “Dunărea de Jos” University, Galati, Romania
114
Selvachandran G, Quek SG, Paramesran R, Ding W, Son LH. Developments in the detection of diabetic retinopathy: a state-of-the-art review of computer-aided diagnosis and machine learning methods. Artif Intell Rev 2023; 56:915-964. PMID: 35498558; PMCID: PMC9038999; DOI: 10.1007/s10462-022-10185-6.
Abstract
The exponential increase in the number of diabetics around the world has led to an equally large increase in the number of diabetic retinopathy (DR) cases, one of the major complications caused by diabetes. Left unattended, DR worsens vision and can lead to partial or complete blindness. As the number of diabetics continues to increase exponentially in the coming years, the number of qualified ophthalmologists needs to increase in tandem in order to meet the demand for screening of the growing number of diabetic patients. This makes it pertinent to develop ways to automate the detection of DR. A computer-aided diagnosis system has the potential to significantly reduce the burden currently placed on ophthalmologists. Hence, this review paper summarizes, classifies, and analyzes recent developments in automated DR detection using fundus images from 2015 up to the present. Such work offers an unprecedentedly thorough review of recent studies on DR, which will potentially increase the understanding of automated DR detection, particularly of approaches that deploy machine learning algorithms. First, a comprehensive state-of-the-art review of the methods that have been introduced for the detection of DR is presented, with a focus on machine learning models such as convolutional neural networks (CNNs), artificial neural networks (ANNs), and various hybrid models. Each model is then classified according to its type (e.g., CNN, ANN, SVM) and its specific task(s) in performing DR detection. In particular, the models that deploy CNNs are further analyzed and classified according to some important properties of their respective CNN architectures. A total of 150 research articles related to the aforementioned areas, published in the last 5 years, have been utilized in this review to provide a comprehensive overview of the latest developments in the detection of DR. Supplementary Information The online version contains supplementary material available at 10.1007/s10462-022-10185-6.
Affiliation(s)
- Ganeshsree Selvachandran: Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
- Shio Gai Quek: Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
- Raveendran Paramesran: Institute of Computer Science and Digital Innovation, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
- Weiping Ding: School of Information Science and Technology, Nantong University, Nantong, 226019, People’s Republic of China
- Le Hoang Son: VNU Information Technology Institute, Vietnam National University, Hanoi, Vietnam
115
Shiihara H, Sonoda S, Terasaki H, Fujiwara K, Funatsu R, Shiba Y, Kumagai Y, Honda N, Sakamoto T. Wayfinding artificial intelligence to detect clinically meaningful spots of retinal diseases: Artificial intelligence to help retina specialists in real world practice. PLoS One 2023; 18:e0283214. PMID: 36972243; PMCID: PMC10042340; DOI: 10.1371/journal.pone.0283214.
Abstract
AIM/BACKGROUND The aim of this study is to develop an artificial intelligence (AI) that aids in the thought process by providing retinal clinicians with clinically meaningful or abnormal findings rather than just a final diagnosis, i.e., a "wayfinding AI." METHODS Spectral-domain optical coherence tomography B-scan images were classified into 189 normal and 111 diseased eyes. These were automatically segmented using a deep learning-based boundary-layer detection model. During segmentation, the AI model calculates the probability of the boundary surface of each layer for each A-scan. If this probability distribution is not biased toward a single point, layer detection is defined as ambiguous. This ambiguity was quantified using entropy, and a value referred to as the ambiguity index was calculated for each OCT image. The ability of the ambiguity index to classify normal and diseased images, and to detect the presence or absence of abnormalities in each layer of the retina, was evaluated based on the area under the curve (AUC). A heatmap of each layer, i.e., an ambiguity map, whose color changes according to the ambiguity index value, was also created. RESULTS The ambiguity index of the overall retina of the normal and disease-affected images (mean ± SD) was 1.76 ± 0.10 and 2.06 ± 0.22, respectively, with a significant difference (p < 0.05). The AUC for distinguishing normal and disease-affected images using the ambiguity index was 0.93, and it was 0.588 for the internal limiting membrane boundary, 0.902 for the nerve fiber layer/ganglion cell layer boundary, 0.920 for the inner plexiform layer/inner nuclear layer boundary, 0.882 for the outer plexiform layer/outer nuclear layer boundary, 0.926 for the ellipsoid zone line, and 0.866 for the retinal pigment epithelium/Bruch's membrane boundary. Three representative cases reveal the usefulness of the ambiguity map. CONCLUSIONS The present AI algorithm can pinpoint abnormal retinal lesions in OCT images, and their localization is apparent at a glance with the ambiguity map. As a wayfinding tool, this will help support clinicians' diagnostic processes.
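A minimal Python sketch, assuming a per-pixel boundary-probability vector for each A-scan, of how an entropy-based ambiguity value could be computed; the distributions below are synthetic, and aggregation to an image-level index is only indicated in a comment.

```python
# Minimal sketch (illustrative, with made-up probabilities): computing an
# entropy-based "ambiguity" value for the boundary-position distribution of a
# retinal layer in one A-scan, in the spirit of the method described above.
import numpy as np

def boundary_entropy(prob):
    """Shannon entropy of a per-pixel boundary-probability distribution."""
    p = np.asarray(prob, dtype=float)
    p = p / p.sum()                       # normalize to a probability distribution
    p = p[p > 0]                          # ignore zero bins (0 * log 0 = 0)
    return float(-(p * np.log(p)).sum())

# A sharply peaked distribution (confident boundary) vs. a flat one (ambiguous).
confident = np.exp(-0.5 * ((np.arange(64) - 32) / 1.5) ** 2)
ambiguous = np.ones(64)

print("confident A-scan entropy:", round(boundary_entropy(confident), 3))
print("ambiguous A-scan entropy:", round(boundary_entropy(ambiguous), 3))
# An image-level ambiguity index could average these values over all A-scans.
```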
Affiliation(s)
- Hideki Shiihara: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
- Shozo Sonoda: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan; Sonoda Eye Clinic, Kagoshima, Japan
- Hiroto Terasaki: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
- Kazuki Fujiwara: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
- Ryoh Funatsu: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
- Taiji Sakamoto: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
116
Ali MH, Jaber MM, Abd SK, Alkhayyat A, Jasim AD. Artificial Neural Network-Based Medical Diagnostics and Therapeutics. Int J Pattern Recogn 2022; 36. DOI: 10.1142/s0218001422400079.
Abstract
The advancement of healthcare technology is impossible without machine learning (ML). There have been numerous advances in ML for analyzing, predicting, and diagnosing medical data. Integrating a centralized scheme and therapy for classifying and diagnosing illnesses and disorders is a major obstacle in modern healthcare. To standardize all medical data into a single repository, researchers have proposed an ML-based centralized artificial neural network model (ML-CANNM). Random tree, support vector machine, and gradient boosting are just a few of the proposed ML classifiers. Artificial neural networks (ANNs) have been trained using a variety of medical datasets to predict and analyze outcomes. ML-CANNM collects patient data from various studies and uses ML and ANNs to determine the results. An ANN is made up of three layers. ML is used to classify the given patients' data in the input layer. In the hidden layer, classification data are compared to a training dataset. The output layer's job is to identify, classify, and diagnose diseases. As a result, disease diagnosis and detection are integrated into a single healthcare database. The proposed framework shows that ML-CANNM works with higher accuracy and lower execution time. The numerical results suggest that ML-CANNM achieves an accuracy ratio of 99.2% and a prediction ratio of 97.5%. The findings further show that execution time is reduced to less than 2 h using the ML-based decision table, resulting in an efficiency ratio of 97.5%.
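As a generic illustration of the three-layer input/hidden/output structure described (not the ML-CANNM system itself), the sketch below trains a small multilayer perceptron on synthetic tabular patient data with scikit-learn.

```python
# Minimal sketch (generic, not the ML-CANNM implementation): a neural network
# classifier with one hidden layer on synthetic tabular patient data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))            # 500 hypothetical patients, 20 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic diagnosis label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
ann.fit(X_tr, y_tr)                        # input -> one hidden layer -> output
print("test accuracy:", accuracy_score(y_te, ann.predict(X_te)))
```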
Affiliation(s)
- Mohammed Hasan Ali: Computer Techniques Engineering Department, Faculty of Information Technology, Imam Ja’afar Al-Sadiq University, Najaf 10023, Iraq
- Mustafa Musa Jaber: Department of Computer Science, Al-Turath University College, Baghdad, Iraq; Department of Medical Instruments Engineering Techniques, Al-Farahidi University, Baghdad, Iraq
- Sura Khalil Abd: Department of Computer Science, Dijlah University College, Baghdad 10021, Iraq
- Ahmed Alkhayyat: Department of Computer Engineering Techniques, College of Technical Engineering, The Islamic University, Najaf, Iraq
- Abdali Dakhil Jasim: English Language Department, Al-Mustaqbal University College, Hillah 51001, Iraq
117
Anton N, Doroftei B, Curteanu S, Catãlin L, Ilie OD, Târcoveanu F, Bogdănici CM. Comprehensive Review on the Use of Artificial Intelligence in Ophthalmology and Future Research Directions. Diagnostics (Basel) 2022; 13:100. PMID: 36611392; PMCID: PMC9818832; DOI: 10.3390/diagnostics13010100.
Abstract
BACKGROUND Having several applications in medicine, and in ophthalmology in particular, artificial intelligence (AI) tools have been used to detect visual function deficits, thus playing a key role in diagnosing eye diseases and in predicting the evolution of these common and disabling diseases. AI tools, i.e., artificial neural networks (ANNs), are progressively involved in the detection and customized control of ophthalmic diseases. Studies that refer to the efficiency of AI in medicine and especially in ophthalmology were analyzed in this review. MATERIALS AND METHODS We conducted a comprehensive review in order to collect all accounts published between 2015 and 2022 that refer to these applications of AI in medicine and especially in ophthalmology. Neural networks have a major role in establishing the need to initiate preliminary anti-glaucoma therapy to stop the advance of the disease. RESULTS Different surveys in the literature show the remarkable benefit of these AI tools in ophthalmology for evaluating the visual field, optic nerve, and retinal nerve fiber layer, thus ensuring higher precision in detecting progression in glaucoma and retinal changes in diabetes. We identified 1762 publications on artificial intelligence in ophthalmology, comprising review and research articles (301 PubMed, 144 Scopus, 445 Web of Science, 872 ScienceDirect). Of these, we analyzed 70 articles and review papers (diabetic retinopathy (N = 24), glaucoma (N = 24), DMLV (N = 15), other pathologies (N = 7)) after applying the inclusion and exclusion criteria. CONCLUSION In medicine, AI tools are used in surgery, radiology, gynecology, oncology, etc., for making a diagnosis, predicting the evolution of a disease, and assessing the prognosis of patients with oncological pathologies. In ophthalmology, AI potentially increases patients' access to screening/clinical diagnosis and decreases healthcare costs, mainly when there is a high risk of disease or communities face financial shortages. AI/DL (deep learning) algorithms using both OCT and FO images will change image analysis techniques and methodologies. Optimizing these (combined) technologies will accelerate progress in this area.
Affiliation(s)
- Nicoleta Anton: Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
- Bogdan Doroftei: Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
- Silvia Curteanu: Department of Chemical Engineering, Cristofor Simionescu Faculty of Chemical Engineering and Environmental Protection, Gheorghe Asachi Technical University, Prof.dr.doc Dimitrie Mangeron Avenue, No 67, 700050 Iasi, Romania
- Lisa Catãlin: Department of Chemical Engineering, Cristofor Simionescu Faculty of Chemical Engineering and Environmental Protection, Gheorghe Asachi Technical University, Prof.dr.doc Dimitrie Mangeron Avenue, No 67, 700050 Iasi, Romania
- Ovidiu-Dumitru Ilie: Department of Biology, Faculty of Biology, “Alexandru Ioan Cuza” University, Carol I Avenue, No 20A, 700505 Iasi, Romania
- Filip Târcoveanu: Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
- Camelia Margareta Bogdănici: Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
118
Luo Z, Ding X, Hou N, Wan J. A Deep-Learning-Based Collaborative Edge-Cloud Telemedicine System for Retinopathy of Prematurity. Sensors (Basel) 2022; 23:276. PMID: 36616874; PMCID: PMC9824555; DOI: 10.3390/s23010276.
Abstract
Retinopathy of prematurity is an ophthalmic disease with a very high blindness rate. With its incidence increasing year by year, timely diagnosis and treatment are of great significance. Because premature infants in remote areas often lack timely and effective fundus screening, which leads to aggravation of the disease and even blindness, this paper proposes a deep learning-based collaborative edge-cloud telemedicine system to mitigate this issue. In the proposed system, deep learning algorithms are mainly used for classification of the processed images. Our algorithm is based on ResNet101 and uses undersampling and resampling to address the data imbalance problem common in medical image processing. Artificial intelligence algorithms are combined with a collaborative edge-cloud architecture to implement a comprehensive telemedicine system that enables timely screening and diagnosis of retinopathy of prematurity in remote areas with a shortage or complete lack of expert medical staff. Finally, the algorithm was successfully embedded in a mobile terminal device and deployed with the support of a core hospital of Guangdong Province. The results show that we achieved 75% ACC and 60% AUC. This research is of great significance for the development of telemedicine systems and aims to mitigate the lack of medical resources and their uneven distribution in rural areas.
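One common way to implement the resampling step mentioned above is a weighted sampler over an imbalanced dataset; the PyTorch sketch below pairs such a sampler with a ResNet101 classifier. The dataset, weights, and single training pass are illustrative assumptions, not the deployed system.

```python
# Minimal sketch (assumptions, not the deployed system): ResNet101 with a
# weighted sampler that rebalances ROP vs. normal classes during training.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler
from torchvision import models

# Hypothetical imbalanced dataset: 90 normal (0) vs. 10 ROP (1) fundus images.
images = torch.randn(100, 3, 224, 224)
labels = torch.cat([torch.zeros(90, dtype=torch.long),
                    torch.ones(10, dtype=torch.long)])

# Inverse-frequency weights so both classes are drawn roughly equally often.
class_counts = torch.bincount(labels).float()
sample_weights = (1.0 / class_counts)[labels]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                replacement=True)
loader = DataLoader(TensorDataset(images, labels), batch_size=8, sampler=sampler)

model = models.resnet101(weights=None, num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

for batch_images, batch_labels in loader:          # one pass for illustration
    optimizer.zero_grad()
    loss = criterion(model(batch_images), batch_labels)
    loss.backward()
    optimizer.step()
print("last batch loss:", loss.item())
```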
Affiliation(s)
- Zeliang Luo: College of Electro-Mechanical Engineering, Zhuhai City Polytechnic, Zhuhai 519090, China
- Xiaoxuan Ding: Guangdong Provincial Key Laboratory of Technique and Equipment for Macromolecular Advanced Manufacturing, School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, China
- Ning Hou: Guangdong Provincial Key Laboratory of Technique and Equipment for Macromolecular Advanced Manufacturing, School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, China
- Jiafu Wan: Guangdong Provincial Key Laboratory of Technique and Equipment for Macromolecular Advanced Manufacturing, School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, China
119
Theodoridis K, Gika H, Kotali A. Acylcarnitines in Ophthalmology: Promising Emerging Biomarkers. Int J Mol Sci 2022; 23:16183. PMID: 36555822; PMCID: PMC9784861; DOI: 10.3390/ijms232416183.
Abstract
Several common ocular diseases are leading causes of irreversible visual impairment. Over the last decade, various mainly untargeted metabolic studies have been performed to show that metabolic dysfunction plays an important role in the pathogenesis of ocular diseases. A number of metabolites in plasma/serum, aqueous or vitreous humor, or in tears have been found to differ between patients and controls; among them are L-carnitine and acylcarnitines, which are essential for mitochondrial fatty acid oxidation. The metabolic profile of carnitines regarding a variety of diseases has attracted researchers' interest. In this review, we present and discuss recent advances that have been made in the identification of carnitines as potential metabolic biomarkers in common ocular diseases, such as age-related macular degeneration, diabetic retinopathy, retinopathy of prematurity, central retinal vein occlusion, primary open-angle glaucoma, rhegmatogenous retinal detachment, and dry eye syndrome.
Affiliation(s)
- Konstantinos Theodoridis: Laboratory of Organic Chemistry, School of Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece; Laboratory of Forensic Medicine and Toxicology, Medical School, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece (corresponding author)
- Helen Gika: Laboratory of Forensic Medicine and Toxicology, Medical School, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece; Biomic AUTh, Center for Interdisciplinary Research and Innovation (CIRI-AUTH), Balkan Center B1.4, 57001 Thessaloniki, Greece
- Antigoni Kotali: Laboratory of Organic Chemistry, School of Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
Collapse
|
120
|
Cole E, Valikodath NG, Al-Khaled T, Bajimaya S, KC S, Chuluunbat T, Munkhuu B, Jonas KE, Chuluunkhuu C, MacKeen LD, Yap V, Hallak J, Ostmo S, Wu WC, Coyner AS, Singh P, Kalpathy-Cramer J, Chiang MF, Campbell JP, Chan RVP. Evaluation of an Artificial Intelligence System for Retinopathy of Prematurity Screening in Nepal and Mongolia. OPHTHALMOLOGY SCIENCE 2022; 2:100165. [PMID: 36531583 PMCID: PMC9754980 DOI: 10.1016/j.xops.2022.100165] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 04/19/2022] [Accepted: 04/19/2022] [Indexed: 05/09/2023]
Abstract
PURPOSE To evaluate the performance of a deep learning (DL) algorithm for retinopathy of prematurity (ROP) screening in Nepal and Mongolia. DESIGN Retrospective analysis of prospectively collected clinical data. PARTICIPANTS Clinical information and fundus images were obtained from infants in 2 ROP screening programs in Nepal and Mongolia. METHODS Fundus images were obtained using the Forus 3nethra neo (Forus Health) in Nepal and the RetCam Portable (Natus Medical, Inc.) in Mongolia. The overall severity of ROP was determined from the medical record using the International Classification of ROP (ICROP). The presence of plus disease was determined independently in each image using a reference standard diagnosis. The Imaging and Informatics for ROP (i-ROP) DL algorithm was trained on images from the RetCam to classify plus disease and to assign a vascular severity score (VSS) from 1 through 9. MAIN OUTCOME MEASURES Area under the receiver operating characteristic curve and area under the precision-recall curve for the presence of plus disease or type 1 ROP and association between VSS and ICROP disease category. RESULTS The prevalence of type 1 ROP was found to be higher in Mongolia (14.0%) than in Nepal (2.2%; P < 0.001) in these data sets. In Mongolia (RetCam images), the area under the receiver operating characteristic curve for examination-level plus disease detection was 0.968, and the area under the precision-recall curve was 0.823. In Nepal (Forus images), these values were 0.999 and 0.993, respectively. The ROP VSS was associated with ICROP classification in both datasets (P < 0.001). At the population level, the median VSS was found to be higher in Mongolia (2.7; interquartile range [IQR], 1.3-5.4]) as compared with Nepal (1.9; IQR, 1.2-3.4; P < 0.001). CONCLUSIONS These data provide preliminary evidence of the effectiveness of the i-ROP DL algorithm for ROP screening in neonatal populations in Nepal and Mongolia using multiple camera systems and are useful for consideration in future clinical implementation of artificial intelligence-based ROP screening in low- and middle-income countries.
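For readers who want to see how the two headline metrics above are obtained, the short snippet below computes an examination-level AUROC and area under the precision-recall curve from predicted plus-disease probabilities. The label and score arrays are hypothetical; this is not the i-ROP pipeline itself.

```python
# Illustrative only (hypothetical arrays, not the i-ROP pipeline): computing the
# two reported metrics, AUROC and area under the precision-recall curve, from
# examination-level plus-disease probabilities.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])               # 1 = plus disease present
y_score = np.array([0.1, 0.3, 0.8, 0.9, 0.2, 0.7])  # model-predicted probability

print("AUROC:", roc_auc_score(y_true, y_score))
print("AUPRC:", average_precision_score(y_true, y_score))
```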
Collapse
Key Words
- Artificial intelligence
- BW, birth weight
- DL, deep learning
- Deep learning
- GA, gestational age
- ICROP, International Classification of Retinopathy of Prematurity
- IQR, interquartile range
- LMIC, low- and middle-income country
- Mongolia
- Nepal
- ROP, retinopathy of prematurity
- RSD, reference standard diagnosis
- Retinopathy of prematurity
- TR, treatment-requiring
- VSS, vascular severity score
- i-ROP, Imaging and Informatics for Retinopathy of Prematurity
Collapse
Affiliation(s)
- Emily Cole
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
| | - Nita G. Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
| | - Tala Al-Khaled
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
| | | | - Sagun KC
- Helen Keller International, Kathmandu, Nepal
| | | | - Bayalag Munkhuu
- National Center for Maternal and Child Health, Ulaanbaatar, Mongolia
| | - Karyn E. Jonas
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
| | | | - Leslie D. MacKeen
- The Hospital for Sick Children, Toronto, Canada
- Phoenix Technology Group, Pleasanton, California
| | - Vivien Yap
- Department of Pediatrics, Weill Cornell Medical College, New York, New York
| | - Joelle Hallak
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
| | - Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - Wei-Chi Wu
- Chang Gung Memorial Hospital, Taoyuan, Taiwan, and Chang Gung University, College of Medicine, Taoyuan, Taiwan
| | - Aaron S. Coyner
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | | | | | - Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
| | - J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - R. V. Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Correspondence: R. V. Paul Chan, MD, MSc, MBA, Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, 1905 West Taylor Street, Chicago, IL 60612.
| |
Collapse
|
121
|
Patel NA, Acaba-Berrocal LA, Hoyek S, Fan KC, Martinez-Castellanos MA, Baumal CR, Harper CA, Berrocal AM. Practice Patterns and Outcomes of Intravitreal Anti-VEGF Injection for Retinopathy of Prematurity: An International Multicenter Study. Ophthalmology 2022; 129:1380-1388. [PMID: 35863512 DOI: 10.1016/j.ophtha.2022.07.009] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Revised: 06/22/2022] [Accepted: 07/13/2022] [Indexed: 01/06/2023] Open
Abstract
PURPOSE To report practice patterns of intravitreal injections of anti-VEGF for retinopathy of prematurity (ROP) and outcomes data with a focus on retreatments and complications. DESIGN Multicenter, international, retrospective, consecutive series. SUBJECTS Patients with ROP treated with anti-VEGF injections from 2007 to 2021. METHODS Twenty-three sites (16 United States [US] and 7 non-US) participated. Data collected included demographics, birth characteristics, examination findings, and methods of injections. Comparisons between US and non-US sites were made. MAIN OUTCOME MEASURES Primary outcomes included number and types of retreatments as well as complications. Secondary outcomes included specifics of the injection protocols, including types of medication, doses, distance from limbus, use of antibiotics, and quadrants where injections were delivered. RESULTS A total of 1677 eyes of 918 patients (43% female, 57% male) were included. Mean gestational age was 25.7 weeks (range, 21.2-41.5 weeks), and mean birth weight was 787 g (range, 300-2700 g). Overall, a 30-gauge needle was most commonly used (51%), and the quadrant injected was most frequently the inferior-temporal (51.3%). The distance from the limbus ranged from 0.75 to 2 mm, with 1 mm being the most common (65%). Bevacizumab was the most common anti-VEGF (71.4%), with a dose of 0.625 mg in 64% of cases. Overall, 604 (36%) eyes required retreatment. Of those, 79.8% were retreated with laser alone, 10.6% with anti-VEGF injection alone, and 9.6% with combined laser and injection. Complications after anti-VEGF injections occurred in 15 (0.9%) eyes, and no cases of endophthalmitis were reported. Patients in the United States had lower birth weights and gestational ages (665.6 g and 24.5 weeks, respectively) compared with non-US patients (912.7 g and 26.9 weeks, respectively) (P < 0.0001). Retreatment with reinjection and laser was significantly more common in the US compared with the non-US group (8.5% vs. 4.7% [P = 0.0016] and 55% vs. 7.2% [P < 0.001], respectively). There was no difference in the incidence of complications between the 2 geographic subgroups. CONCLUSIONS Anti-VEGF injections for ROP were safe and well tolerated despite a variance in practice patterns. Infants with ROP receiving injections in the US tended to be younger and smaller, and they were treated earlier with more retreatments than non-US neonates with ROP.
Collapse
Affiliation(s)
- Nimesh A Patel
- Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, Massachusetts; Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, Florida; Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts
| | - Luis A Acaba-Berrocal
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Sandra Hoyek
- Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, Massachusetts
| | - Kenneth C Fan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, Florida
| | | | - Caroline R Baumal
- Department of Ophthalmology, Tufts Medical Center, Tufts University School of Medicine, Boston, Massachusetts
| | - C Armitage Harper
- Department of Ophthalmology, Austin Retina Associates, Austin, Texas
| | - Audina M Berrocal
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, Florida.
| | | |
Collapse
|
122
|
McAdams RM, Kaur R, Sun Y, Bindra H, Cho SJ, Singh H. Predicting clinical outcomes using artificial intelligence and machine learning in neonatal intensive care units: a systematic review. J Perinatol 2022; 42:1561-1575. [PMID: 35562414 DOI: 10.1038/s41372-022-01392-8] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Revised: 03/30/2022] [Accepted: 04/01/2022] [Indexed: 01/19/2023]
Abstract
BACKGROUND Advances in technology, data availability, and analytics have helped improve quality of care in the neonatal intensive care unit. OBJECTIVE To provide an in-depth review of artificial intelligence (AI) and machine learning techniques being utilized to predict neonatal outcomes. METHODS The PRISMA protocol was followed that considered articles from established digital repositories. Included articles were categorized based on predictions of: (a) major neonatal morbidities such as sepsis, bronchopulmonary dysplasia, intraventricular hemorrhage, necrotizing enterocolitis, and retinopathy of prematurity; (b) mortality; and (c) length of stay. RESULTS A total of 366 studies were considered; 68 studies were eligible for inclusion in the review. The current set of predictor models are primarily built on supervised learning and mostly used regression models built on retrospective data. CONCLUSION With the availability of EMR data and data-sharing of NICU outcomes across neonatal research networks, machine learning algorithms have shown breakthrough performance in predicting neonatal disease.
Collapse
Affiliation(s)
- Ryan M McAdams
- Department of Pediatrics, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
| | - Ravneet Kaur
- Child Health Imprints (CHIL) USA Inc, Madison, WI, USA
| | - Yao Sun
- Division of Neonatology, University of California San Francisco, San Francisco, CA, USA
| | | | - Su Jin Cho
- College of Medicine, Ewha Womans University Seoul, Seoul, Korea
| | | |
Collapse
|
123
|
Cole ED, Park SH, Kim SJ, Kang KB, Valikodath NG, Al-Khaled T, Patel SN, Jonas KE, Ostmo S, Coyner A, Berrocal A, Drenser KA, Nagiel A, Horowitz JD, Lee TC, Kalpathy-Cramer J, Chiang MF, Campbell JP, Chan RVP. Variability in Plus Disease Diagnosis using Single and Serial Images. Ophthalmol Retina 2022; 6:1122-1129. [PMID: 35659941 DOI: 10.1016/j.oret.2022.05.024] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Revised: 05/21/2022] [Accepted: 05/23/2022] [Indexed: 01/06/2023]
Abstract
PURPOSE To assess changes in retinopathy of prematurity (ROP) diagnosis in single and serial retinal images. DESIGN Cohort study. PARTICIPANTS Cases of ROP recruited from the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) consortium evaluated by 7 graders. METHODS Seven ophthalmologists reviewed both single and 3 consecutive serial retinal images from 15 cases with ROP, and severity was assigned as plus, preplus, or none. Imaging data were acquired during routine ROP screening from 2011 to 2015, and a reference standard diagnosis was established for each image. A secondary analysis was performed using the i-ROP deep learning system to assign a vascular severity score (VSS) to each image, ranging from 1 to 9, with 9 being the most severe disease. This score has been previously demonstrated to correlate with the International Classification of ROP. Mean plus disease severity was calculated by averaging 14 labels per image in serial and single images to decrease noise. MAIN OUTCOME MEASURES Grading severity of ROP as defined by plus, preplus, or no ROP. RESULTS Assessment of serial retinal images changed the grading severity for > 50% of the graders, although there was wide variability. Cohen's kappa ranged from 0.29 to 1.0, which showed a wide range of agreement from slight to perfect by each grader. Changes in the grading of serial retinal images were noted more commonly in cases of preplus disease. The mean severity in cases with a diagnosis of plus disease and no disease did not change between single and serial images. The ROP VSS demonstrated good correlation with the range of expert classifications of plus disease and overall agreement with the mode class (P = 0.001). The VSS correlated with mean plus disease severity by expert diagnosis (correlation coefficient, 0.89). The more aggressive graders tended to be influenced by serial images to increase the severity of their grading. The VSS also demonstrated agreement with disease progression across serial images, which progressed to preplus and plus disease. CONCLUSIONS Clinicians demonstrated variability in ROP diagnosis when presented with both single and serial images. The use of deep learning as a quantitative assessment of plus disease has the potential to standardize ROP diagnosis and treatment.
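The agreement and correlation statistics described above can be computed with standard tools; the short sketch below (with made-up grades and scores, not study data) shows Cohen's kappa between two graders and a Pearson correlation between a vascular severity score and mean expert severity.

```python
# Minimal sketch with made-up grades (not study data): Cohen's kappa between two
# graders and the correlation of the vascular severity score with mean severity.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# 0 = no plus, 1 = pre-plus, 2 = plus
grader_a = [0, 1, 2, 2, 1, 0, 1]
grader_b = [0, 1, 2, 1, 1, 0, 2]
kappa = cohen_kappa_score(grader_a, grader_b)

vss = np.array([1.2, 3.4, 7.8, 6.1, 2.9, 1.5, 4.0])            # deep-learning score, 1-9
mean_severity = np.array([0.1, 0.9, 1.9, 1.6, 0.8, 0.2, 1.1])  # mean of expert labels
r = np.corrcoef(vss, mean_severity)[0, 1]
print(f"kappa={kappa:.2f}, correlation={r:.2f}")
```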
Collapse
Affiliation(s)
- Emily D Cole
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Shin Hae Park
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois; Department of Ophthalmology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Sang Jin Kim
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
| | - Kai B Kang
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Nita G Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Tala Al-Khaled
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | | | - Karyn E Jonas
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - Aaron Coyner
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - Audina Berrocal
- Bascom Palmer Eye Institute, University of Miami, Miami, Florida
| | - Kimberly A Drenser
- Department of Ophthalmology, Beaumont Eye Institute, Royal Oak, Michigan
| | - Aaron Nagiel
- Stein Eye Institute, University of California Los Angeles, Los Angeles, California
| | - Jason D Horowitz
- Department of Ophthalmology, Columbia University, New York, New York
| | - Thomas C Lee
- Roski Eye Institute, Department of Ophthalmology, Keck School of Medicine of the University of Southern California, Los Angeles, California
| | | | - Michael F Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
| | - J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
| | - R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois.
| | | |
Collapse
|
124
|
Almadhi NH, Dow ER, Paul Chan RV, Alsulaiman SM. Multimodal Imaging, Tele-Education, and Telemedicine in Retinopathy of Prematurity. Middle East Afr J Ophthalmol 2022; 29:38-50. [PMID: 36685346 PMCID: PMC9846956 DOI: 10.4103/meajo.meajo_56_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 05/25/2022] [Accepted: 09/25/2022] [Indexed: 01/24/2023] Open
Abstract
Retinopathy of prematurity (ROP) is a disease that affects the retinal vasculature in premature infants and remains one of the leading causes of childhood blindness worldwide. ROP screening can encounter difficulties such as the lack of specialists and services in rural areas. Technological advances have helped address these issues and led to the emergence of state-of-the-art multimodal digital imaging devices, such as fundus cameras with their variable properties, optical coherence tomography (OCT), OCT angiography, and fluorescein angiography, which have helped immensely in improving ROP care and understanding the disease pathophysiology. Computer-based image analysis and deep learning have recently demonstrated promising outcomes for ROP diagnosis. Telemedicine is considered an acceptable alternative to clinical examination when optimal circumstances for ROP screening in certain areas are lacking, and the expansion of these programs has been reported. Tele-education programs in ROP have the potential to improve the quality of physician training and thereby optimize ROP care.
Collapse
Affiliation(s)
- Nada H. Almadhi
- Vitreoretinal division, King Khaled Eye Specialist Hospital, Riyadh, Saudi Arabia
| | - Eliot R. Dow
- Department of Ophthalmology, Jules Stein Eye Institute, University of California, Los Angeles, USA
| | - R. V. Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois, Chicago, Illinois, USA
| | - Sulaiman M. Alsulaiman
- Vitreoretinal division, King Khaled Eye Specialist Hospital, Riyadh, Saudi Arabia,Address for correspondence: Dr. Sulaiman M. Alsulaiman, Vitreoretinal Division, King Khaled Eye Specialist Hospital, P.O. Box: 7191, Riyadh 11462, Saudi Arabia. E-mail:
| |
Collapse
|
125
|
Carrera-Escalé L, Benali A, Rathert AC, Martín-Pinardel R, Bernal-Morales C, Alé-Chilet A, Barraso M, Marín-Martinez S, Feu-Basilio S, Rosinés-Fonoll J, Hernandez T, Vilá I, Castro-Dominguez R, Oliva C, Vinagre I, Ortega E, Gimenez M, Vellido A, Romero E, Zarranz-Ventura J. Radiomics-Based Assessment of OCT Angiography Images for Diabetic Retinopathy Diagnosis. OPHTHALMOLOGY SCIENCE 2022; 3:100259. [PMID: 36578904 PMCID: PMC9791596 DOI: 10.1016/j.xops.2022.100259] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Revised: 10/25/2022] [Accepted: 11/14/2022] [Indexed: 11/23/2022]
Abstract
Purpose To evaluate the diagnostic accuracy of machine learning (ML) techniques applied to radiomic features extracted from OCT and OCT angiography (OCTA) images for diabetes mellitus (DM), diabetic retinopathy (DR), and referable DR (R-DR) diagnosis. Design Cross-sectional analysis of a retinal image dataset from a previous prospective OCTA study (ClinicalTrials.govNCT03422965). Participants Patients with type 1 DM and controls included in the progenitor study. Methods Radiomic features were extracted from fundus retinographies, OCT, and OCTA images in each study eye. Logistic regression, linear discriminant analysis, support vector classifier (SVC)-linear, SVC-radial basis function, and random forest models were created to evaluate their diagnostic accuracy for DM, DR, and R-DR diagnosis in all image types. Main Outcome Measures Area under the receiver operating characteristic curve (AUC) mean and standard deviation for each ML model and each individual and combined image types. Results A dataset of 726 eyes (439 individuals) were included. For DM diagnosis, the greatest AUC was observed for OCT (0.82, 0.03). For DR detection, the greatest AUC was observed for OCTA (0.77, 0.03), especially in the 3 × 3 mm superficial capillary plexus OCTA scan (0.76, 0.04). For R-DR diagnosis, the greatest AUC was observed for OCTA (0.87, 0.12) and the deep capillary plexus OCTA scan (0.86, 0.08). The addition of clinical variables (age, sex, etc.) improved most models AUC for DM, DR and R-DR diagnosis. The performance of the models was similar in unilateral and bilateral eyes image datasets. Conclusions Radiomics extracted from OCT and OCTA images allow identification of patients with DM, DR, and R-DR using standard ML classifiers. OCT was the best test for DM diagnosis, OCTA for DR and R-DR diagnosis and the addition of clinical variables improved most models. This pioneer study demonstrates that radiomics-based ML techniques applied to OCT and OCTA images may be an option for DR screening in patients with type 1 DM. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
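As a schematic of the model-comparison step described above, the sketch below cross-validates the same five classifier families on a radiomic-style feature matrix and reports mean AUC. The feature matrix and labels are random placeholders, not the study dataset.

```python
# Schematic comparison (random placeholder features, not the study dataset) of
# the five classifier families named above, scored by cross-validated AUC.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))        # placeholder radiomic feature matrix
y = rng.integers(0, 2, size=120)      # placeholder DR / no-DR labels

models = {
    "LR": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "SVC-linear": SVC(kernel="linear"),
    "SVC-rbf": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    auc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y,
                          cv=5, scoring="roc_auc")
    print(f"{name}: AUC {auc.mean():.2f} +/- {auc.std():.2f}")
```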
Collapse
Key Words
- AI, artificial intelligence
- AUC, area under the curve
- Artificial intelligence
- DCP, deep capillary plexus
- DM, diabetes mellitus
- DR, diabetic retinopathy
- Diabetic retinopathy
- FR, fundus retinographies
- LDA, linear discriminant analysis
- LR, logistic regression
- ML, machine learning
- Machine learning
- OCT angiography
- OCTA, OCT angiography
- R-DR, referable DR
- RF, random forest
- Radiomics
- SCP, superficial capillary plexus
- SVC, support vector classifier
- rbf, radial basis function
Collapse
Affiliation(s)
- Laura Carrera-Escalé
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center,Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
| | - Anass Benali
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center,Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
| | - Ann-Christin Rathert
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center,Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
| | - Ruben Martín-Pinardel
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center,Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain,August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain
| | | | - Anibal Alé-Chilet
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
| | - Marina Barraso
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
| | - Sara Marín-Martinez
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
| | - Silvia Feu-Basilio
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
| | - Josep Rosinés-Fonoll
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
| | - Teresa Hernandez
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain,Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
| | - Irene Vilá
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain,Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
| | | | - Cristian Oliva
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain,Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
| | - Irene Vinagre
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain,Diabetes Unit, Hospital Clínic de Barcelona, Spain,Institut Clínic de Malalties Digestives i Metaboliques (ICMDM), Hospital Clínic de Barcelona, Spain
| | - Emilio Ortega
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain,Diabetes Unit, Hospital Clínic de Barcelona, Spain,Institut Clínic de Malalties Digestives i Metaboliques (ICMDM), Hospital Clínic de Barcelona, Spain
| | - Marga Gimenez
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain,Diabetes Unit, Hospital Clínic de Barcelona, Spain,Institut Clínic de Malalties Digestives i Metaboliques (ICMDM), Hospital Clínic de Barcelona, Spain
| | - Alfredo Vellido
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center,Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
| | - Enrique Romero
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center,Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
| | - Javier Zarranz-Ventura
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain,Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain,Diabetes Unit, Hospital Clínic de Barcelona, Spain,School of Medicine, Universitat de Barcelona, Spain,Correspondence: Javier Zarranz-Ventura, MD, PhD, C/ Sabino Arana 1, Barcelona 08028, Spain.
| |
Collapse
|
126
|
Lemay A, Hoebel K, Bridge CP, Befano B, De Sanjosé S, Egemen D, Rodriguez AC, Schiffman M, Campbell JP, Kalpathy-Cramer J. Improving the repeatability of deep learning models with Monte Carlo dropout. NPJ Digit Med 2022; 5:174. [PMID: 36400939 PMCID: PMC9674698 DOI: 10.1038/s41746-022-00709-3] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 10/10/2022] [Indexed: 11/19/2022] Open
Abstract
The integration of artificial intelligence into clinical workflows requires reliable and robust models. Repeatability is a key attribute of model robustness. Ideally, repeatable models output predictions without variation during independent tests carried out under similar conditions. However, slight variations, though not ideal, may be unavoidable and acceptable in practice. During model development and evaluation, much attention is given to classification performance while model repeatability is rarely assessed, leading to the development of models that are unusable in clinical practice. In this work, we evaluate the repeatability of four model types (binary classification, multi-class classification, ordinal classification, and regression) on images that were acquired from the same patient during the same visit. We study each model's performance on four medical image classification tasks from public and private datasets: knee osteoarthritis, cervical cancer screening, breast density estimation, and retinopathy of prematurity. Repeatability is measured and compared on ResNet and DenseNet architectures. Moreover, we assess the impact of sampling Monte Carlo dropout predictions at test time on classification performance and repeatability. Leveraging Monte Carlo predictions significantly increases repeatability, in particular at the class boundaries, for all tasks on the binary, multi-class, and ordinal models, leading to an average reduction of the 95% limits of agreement by 16 percentage points and of the class disagreement rate by 7 percentage points. The classification accuracy improves in most settings along with the repeatability. Our results suggest that beyond about 20 Monte Carlo iterations, there is no further gain in repeatability. In addition to the higher test-retest agreement, Monte Carlo predictions are better calibrated, which leads to output probabilities reflecting more accurately the true likelihood of being correctly classified.
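The test-time mechanism described above can be sketched in a few lines: keep dropout layers active at inference and average the softmax outputs of repeated stochastic forward passes (about 20, per the abstract). The PyTorch sketch below is illustrative rather than the authors' released code; `model` and `x` stand for any dropout-containing classifier and an input batch.

```python
# Illustrative PyTorch sketch (not the paper's released code) of Monte Carlo
# dropout at test time: keep dropout layers active and average the softmax
# outputs of repeated stochastic forward passes (~20 per the abstract).
import torch
import torch.nn as nn


def enable_mc_dropout(model: nn.Module) -> None:
    """Switch only the dropout layers back into training (stochastic) mode."""
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()


@torch.no_grad()
def mc_predict(model: nn.Module, x: torch.Tensor, n_iter: int = 20) -> torch.Tensor:
    model.eval()                  # freeze batch norm statistics, etc.
    enable_mc_dropout(model)      # ...but keep dropout stochastic
    probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_iter)])
    return probs.mean(dim=0)      # averaged class probabilities per sample
```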
Collapse
Affiliation(s)
- Andreanne Lemay
- Martinos Center for Biomedical Imaging, Boston, MA, USA
- NeuroPoly, Polytechnique Montreal, Montreal, QC, Canada
| | - Katharina Hoebel
- Martinos Center for Biomedical Imaging, Boston, MA, USA
- Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Christopher P Bridge
- Martinos Center for Biomedical Imaging, Boston, MA, USA
- MGH & BWH Center for Clinical Data Science, Boston, MA, USA
| | - Brian Befano
- Department of Epidemiology, University of Washington School of Public Health, Seattle, WA, USA
| | - Silvia De Sanjosé
- Division of Cancer Epidemiology & Genetics, National Cancer Institute, Rockville, MD, USA
| | - Didem Egemen
- Division of Cancer Epidemiology & Genetics, National Cancer Institute, Rockville, MD, USA
| | - Ana Cecilia Rodriguez
- Division of Cancer Epidemiology & Genetics, National Cancer Institute, Rockville, MD, USA
| | - Mark Schiffman
- Division of Cancer Epidemiology & Genetics, National Cancer Institute, Rockville, MD, USA
| | | | | |
Collapse
|
127
|
Nguyen TX, Ran AR, Hu X, Yang D, Jiang M, Dou Q, Cheung CY. Federated Learning in Ocular Imaging: Current Progress and Future Direction. Diagnostics (Basel) 2022; 12:2835. [PMID: 36428895 PMCID: PMC9689273 DOI: 10.3390/diagnostics12112835] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 11/11/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022] Open
Abstract
Advances in artificial intelligence deep learning (DL) have made tremendous impacts on the field of ocular imaging over the last few years. Specifically, DL has been utilised to detect and classify various ocular diseases on retinal photographs, optical coherence tomography (OCT) images, and OCT-angiography images. In order to achieve good robustness and generalisability of model performance, DL training strategies traditionally require extensive and diverse training datasets from various sites to be transferred and pooled into a "centralised location". However, such a data transferring process could raise practical concerns related to data security and patient privacy. Federated learning (FL) is a distributed collaborative learning paradigm which enables the coordination of multiple collaborators without the need for sharing confidential data. This distributed training approach has great potential to ensure data privacy among different institutions and reduce the potential risk of data leakage from data pooling or centralisation. This review article aims to introduce the concept of FL, provide current evidence of FL in ocular imaging, and discuss potential challenges as well as future applications.
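To make the aggregation idea behind FL concrete, the sketch below implements one round of federated averaging (FedAvg), a canonical FL scheme, over hypothetical client weight dictionaries; only model parameters, never raw images, are exchanged. This is a generic illustration, not code from any system discussed in the review.

```python
# Generic illustration (not code from any reviewed system): one round of
# federated averaging (FedAvg), the canonical FL aggregation step. Clients send
# model weights and local dataset sizes; raw images never leave a site.
from typing import Dict, List

import torch


def fedavg(client_states: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Size-weighted average of client state dicts (float parameters assumed)."""
    total = float(sum(client_sizes))
    averaged = {}
    for key in client_states[0]:
        averaged[key] = sum(state[key] * (n / total)
                            for state, n in zip(client_states, client_sizes))
    return averaged   # broadcast back to clients for the next local round
```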
Collapse
Affiliation(s)
- Truong X. Nguyen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Xiaoyan Hu
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Meirui Jiang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| |
Collapse
|
128
|
DeepPDT-Net: predicting the outcome of photodynamic therapy for chronic central serous chorioretinopathy using two-stage multimodal transfer learning. Sci Rep 2022; 12:18689. [PMID: 36333442 PMCID: PMC9636239 DOI: 10.1038/s41598-022-22984-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2022] [Accepted: 10/21/2022] [Indexed: 11/06/2022] Open
Abstract
Central serous chorioretinopathy (CSC), characterized by serous detachment of the macular retina, can cause permanent vision loss in the chronic course. Chronic CSC is generally treated with photodynamic therapy (PDT), which is costly and quite invasive, and the results are unpredictable. In a retrospective case-control study design, we developed a two-stage deep learning model to predict the 1-year outcome of PDT using initial multimodal clinical data. The training dataset included 166 eyes with chronic CSC and an additional learning dataset containing 745 healthy control eyes. A pre-trained ResNet50-based convolutional neural network was first trained with normal fundus photographs (FPs) to detect CSC and then adapted to predict CSC treatability through transfer learning. The domain-specific ResNet50 successfully predicted treatable and refractory CSC (accuracy, 83.9%). Other multimodal clinical data were then integrated with the FP deep features using XGBoost. The final combined model (DeepPDT-Net) outperformed the domain-specific ResNet50 (accuracy, 88.0%). The FP deep features had the greatest impact on DeepPDT-Net performance, followed by central foveal thickness and age. In conclusion, DeepPDT-Net could solve the PDT outcome prediction task, which is challenging even for retinal specialists. This two-stage strategy, adopting transfer learning and concatenating multimodal data, can overcome the clinical prediction obstacles arising from insufficient datasets.
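One plausible way to realize the two-stage design described above is sketched below: deep features from a fine-tuned ResNet50 backbone are concatenated with tabular clinical covariates and fed to an XGBoost classifier. This is an interpretation for illustration, not DeepPDT-Net itself; the image tensors, clinical columns, and labels are random placeholders.

```python
# One plausible reading of the two-stage design (illustration only, not
# DeepPDT-Net): pooled deep features from a fine-tuned ResNet50 backbone are
# concatenated with tabular clinical covariates and passed to XGBoost. Images,
# clinical columns, and labels below are random placeholders.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from xgboost import XGBClassifier

backbone = models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()      # expose the 2048-d pooled feature vector
backbone.eval()


@torch.no_grad()
def fundus_features(images: torch.Tensor) -> np.ndarray:
    return backbone(images).cpu().numpy()


images = torch.randn(8, 3, 224, 224)          # placeholder fundus photographs
clinical = np.random.rand(8, 2)               # e.g. central foveal thickness, age
labels = np.random.randint(0, 2, size=8)      # treatable (0) vs refractory (1)

X = np.concatenate([fundus_features(images), clinical], axis=1)
clf = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
clf.fit(X, labels)
```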
Collapse
|
129
|
Yang J, Wu S, Dai R, Yu W, Chen Y. Publication trends of artificial intelligence in retina in 10 years: Where do we stand? Front Med (Lausanne) 2022; 9:1001673. [PMID: 36405613 PMCID: PMC9666394 DOI: 10.3389/fmed.2022.1001673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2022] [Accepted: 09/20/2022] [Indexed: 11/25/2022] Open
Abstract
PURPOSE Artificial intelligence (AI) has been applied in the field of retina. The purpose of this study was to analyze study trends within AI in retina by reporting on publication trends and to identify the journals, countries, authors, international collaborations, and keywords involved in AI in retina. MATERIALS AND METHODS A cross-sectional study. Bibliometric methods were used to evaluate global production and development trends in AI in retina since 2012 using the Web of Science Core Collection. RESULTS A total of 599 publications were ultimately retrieved. We found that AI in retina is a very attractive topic in the scientific and medical community. No journal was found to specialize in AI in retina. The USA, China, and India were the three most productive countries. Authors from Austria, Singapore, and England also had worldwide academic influence. China has shown the most rapid increase in publication numbers. International collaboration could increase influence in this field. Keyword analysis revealed that diabetic retinopathy, optical coherence tomography across multiple diseases, and algorithms were three popular topics in the field. Most top journals and top publications on AI in retina focused mainly on engineering and computing rather than medicine. CONCLUSION These results help clarify the current status and future trends of research on AI in retina. This study may be useful for clinicians and scientists seeking a general overview of the field and a better understanding of its main actors (including authors, journals, and countries). Future research should focus on more retinal diseases, multimodal imaging, and the performance of AI models in real-world clinical applications. Collaboration among countries and institutions is common in current research on AI in retina.
Collapse
Affiliation(s)
- Jingyuan Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China,Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Shan Wu
- Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
| | - Rongping Dai
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China,Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China,Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China,Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China,*Correspondence: Youxin Chen,
| |
Collapse
|
130
|
Lee J, Liu C, Kim J, Chen Z, Sun Y, Rogers JR, Chung WK, Weng C. Deep learning for rare disease: A scoping review. J Biomed Inform 2022; 135:104227. [PMID: 36257483 DOI: 10.1016/j.jbi.2022.104227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 08/22/2022] [Accepted: 10/07/2022] [Indexed: 10/31/2022]
Abstract
Although individually rare, collectively more than 7,000 rare diseases affect about 10% of patients. Each of the rare diseases impacts the quality of life for patients and their families, and incurs significant societal costs. The low prevalence of each rare disease causes formidable challenges in accurately diagnosing and caring for these patients and engaging participants in research to advance treatments. Deep learning has advanced many scientific fields and has been applied to many healthcare tasks. This study reviewed the current uses of deep learning to advance rare disease research. Among the 332 reviewed articles, we found that deep learning has been actively used for rare neoplastic diseases (250/332), followed by rare genetic diseases (170/332) and rare neurological diseases (127/332). Convolutional neural networks (307/332) were the most frequently used deep learning architecture, presumably because image data were the most commonly available data type in rare disease research. Diagnosis is the main focus of rare disease research using deep learning (263/332). We summarized the challenges and future research directions for leveraging deep learning to advance rare disease research.
Collapse
Affiliation(s)
- Junghwan Lee
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA
| | - Cong Liu
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA
| | - Junyoung Kim
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA
| | - Zhehuan Chen
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA
| | - Yingcheng Sun
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA
| | - James R Rogers
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA
| | - Wendy K Chung
- Departments of Medicine and Pediatrics, Columbia University, New York, NY 10032, USA
| | - Chunhua Weng
- Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA.
| |
Collapse
|
131
|
Sheng B, Chen X, Li T, Ma T, Yang Y, Bi L, Zhang X. An overview of artificial intelligence in diabetic retinopathy and other ocular diseases. Front Public Health 2022; 10:971943. [PMID: 36388304 PMCID: PMC9650481 DOI: 10.3389/fpubh.2022.971943] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 10/04/2022] [Indexed: 01/25/2023] Open
Abstract
Artificial intelligence (AI), also known as machine intelligence, is a branch of science that empowers machines using human intelligence. AI refers to the technology of rendering human intelligence through computer programs. From healthcare to the precise prevention, diagnosis, and management of diseases, AI is progressing rapidly in various interdisciplinary fields, including ophthalmology. Ophthalmology is at the forefront of AI in medicine because the diagnosis of ocular diseases relies heavily on imaging. Recently, deep learning-based AI screening and prediction models have been applied to the most common visual impairment and blindness diseases, including glaucoma, cataract, age-related macular degeneration (ARMD), and diabetic retinopathy (DR). The success of AI in medicine is primarily attributed to the development of deep learning algorithms, which are computational models composed of multiple layers of simulated neurons. These models can learn the representations of data at multiple levels of abstraction. The Inception-v3 algorithm and transfer learning concept have been applied in DR and ARMD to reuse fundus image features learned from natural images (non-medical images) to train an AI system with a fraction of the commonly used training data (<1%). The trained AI system achieved performance comparable to that of human experts in classifying ARMD and diabetic macular edema on optical coherence tomography images. In this study, we highlight the fundamental concepts of AI and its application in these four major ocular diseases and further discuss the current challenges, as well as the prospects in ophthalmology.
Collapse
Affiliation(s)
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
| | - Xiaosi Chen
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Tingyao Li
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
| | - Tianxing Ma
- Chongqing University-University of Cincinnati Joint Co-op Institute, Chongqing University, Chongqing, China
| | - Yang Yang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Lei Bi
- School of Computer Science, University of Sydney, Sydney, NSW, Australia
| | - Xinyuan Zhang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| |
Collapse
|
132
|
Hossain MS, Syeed MMM, Fatema K, Uddin MF. The Perception of Health Professionals in Bangladesh toward the Digitalization of the Health Sector. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:13695. [PMID: 36294274 PMCID: PMC9602521 DOI: 10.3390/ijerph192013695] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Revised: 10/11/2022] [Accepted: 10/17/2022] [Indexed: 06/16/2023]
Abstract
Bangladesh is undertaking a major transformation towards digitalization in every sector, and healthcare is no exception. Digitalization of the health sector is expected to improve healthcare services while reducing human effort and ensuring the satisfaction of patients and health professionals. However, for practical and successful digitalization, it is necessary to understand the perceptions of health professionals. Therefore, we conducted a cross-sectional survey in Bangladesh to investigate health professionals' perceptions in relation to various socio-demographic variables such as age, gender, location, profession, and institution. We also evaluated their competencies, as digital health-related competencies are required for digitalization. Additionally, we identified major digitalization challenges. Quantitative survey data were analyzed with Python Pandas, and qualitative data were classified using the Valence-Aware Dictionary and Sentiment Reasoner (VADER). This study found significant relationships between technical competency and age (χ²(12, N = 701) = 82.02, p < 0.001), location (χ²(4, N = 701) = 18.78, p < 0.001), and profession (χ²(16, N = 701) = 71.02, p < 0.001). These variables had similar influences on psychological competency. According to VADER, 88.1% (583/701) of respondents have a positive outlook toward digitalization. The internal consistency of the survey was confirmed by Cronbach's alpha score (0.746). This study helps develop a better understanding of how professionals perceive digitalization, categorizes professionals based on competency, and prioritizes the major digitalization challenges.
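The two analysis steps named above, chi-square tests of independence and VADER sentiment scoring, can be reproduced with standard Python libraries as sketched below. The contingency table and the sample comment are invented for illustration and do not reflect the survey data.

```python
# Illustrative sketch (invented numbers and text, not the survey data): a
# chi-square test of independence between a demographic variable and competency
# level, plus VADER polarity scoring of a free-text response.
from scipy.stats import chi2_contingency
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# rows: three age bands; columns: low / medium / high technical competency
table = [[40, 80, 30],
         [25, 90, 55],
         [15, 60, 70]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")

analyzer = SentimentIntensityAnalyzer()
score = analyzer.polarity_scores("Digitalization will make our work much easier.")
print("positive" if score["compound"] >= 0.05 else "not positive")
```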
Collapse
Affiliation(s)
- Md Shakhawat Hossain
- Department of CS, American International University-Bangladesh (AIUB), Dhaka 1229, Bangladesh
- RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh
| | - M. M. Mahbubul Syeed
- RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh
- Department of CSE, Independent University, Bangladesh (IUB), Dhaka 1229, Bangladesh
| | - Kaniz Fatema
- RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh
- Department of CSE, Independent University, Bangladesh (IUB), Dhaka 1229, Bangladesh
| | - Mohammad Faisal Uddin
- RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh
- Department of CSE, Independent University, Bangladesh (IUB), Dhaka 1229, Bangladesh
| |
Collapse
|
133
|
Sharma M, Gunwant H, Saggar P, Gupta L, Gupta D. EfficientNet-B0 Model for Face Mask Detection Based on Social Information Retrieval. INTERNATIONAL JOURNAL OF INFORMATION SYSTEM MODELING AND DESIGN 2022. [DOI: 10.4018/ijismd.313444] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
The world was introduced to the term coronavirus at the end of 2019, following which everyone was thrown into stress and anxiety. The pandemic has been a complete disaster, wreaking devastation and resulting in a significant loss of human life throughout the world. The governments of various countries have issued guidelines and protocols to be followed for stopping the surge in cases (i.e., wearing masks). Amidst all this chaos, the only weapon is technology. So, the detection of face masks is important. The authors utilized a dataset that included images of individuals in society wearing and not wearing masks. They gathered the information required to train a model by using deep networks like EfficientNetB0, MobileNetV2, ResNet50, and InceptionV3. With EfficientNet-B0, they have been able to achieve an accuracy of 99.70% on a two-class classification issue. These methods make face mask detection easier and help in knowledge discovery. These technological breakthroughs may aid in information retrieval as well as help society and guarantee that such a healthcare disaster does not occur again.
Collapse
|
134
|
Abbasian MH, Ardekani AM, Sobhani N, Roudi R. The Role of Genomics and Proteomics in Lung Cancer Early Detection and Treatment. Cancers (Basel) 2022; 14:5144. [PMID: 36291929 PMCID: PMC9600051 DOI: 10.3390/cancers14205144] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 10/10/2022] [Accepted: 10/18/2022] [Indexed: 08/17/2023] Open
Abstract
Lung cancer is the leading cause of cancer-related death worldwide, with non-small-cell lung cancer (NSCLC) being the primary type. Unfortunately, it is often diagnosed at advanced stages, when therapy leaves patients with a dismal prognosis. Despite the advances in genomics and proteomics in the past decade, leading to progress in developing tools for early diagnosis, targeted therapies have shown promising results; however, the 5-year survival of NSCLC patients is only about 15%. Low-dose computed tomography or chest X-ray are the main types of screening tools. Lung cancer patients without specific, actionable mutations are currently treated with conventional therapies, such as platinum-based chemotherapy; however, resistances and relapses often occur in these patients. More noninvasive, inexpensive, and safer diagnostic methods based on novel biomarkers for NSCLC are of paramount importance. In the current review, we summarize genomic and proteomic biomarkers utilized for the early detection and treatment of NSCLC. We further discuss future opportunities to improve biomarkers for early detection and the effective treatment of NSCLC.
Collapse
Affiliation(s)
- Mohammad Hadi Abbasian
- Department of Medical Genetics, National Institute of Genetic Engineering and Biotechnology (NIGEB), Tehran 1497716316, Iran
| | - Ali M. Ardekani
- Department of Medical Biotechnology, National Institute of Genetic Engineering and Biotechnology (NIGEB), Tehran 1497716316, Iran
| | - Navid Sobhani
- Department of Medicine, Section of Epidemiology and Population Sciences, Baylor College of Medicine, Houston, TX 77030, USA
| | - Raheleh Roudi
- Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, Stanford, CA 94305, USA
| |
Collapse
|
135
|
Wakabayashi T, Patel SN, Campbell JP, Chang EY, Nudleman ED, Yonekawa Y. Advances in retinopathy of prematurity imaging. Saudi J Ophthalmol 2022; 36:243-250. [PMID: 36276248 PMCID: PMC9583355 DOI: 10.4103/sjopt.sjopt_20_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 02/06/2022] [Accepted: 02/07/2022] [Indexed: 11/18/2022] Open
Abstract
Retinopathy of prematurity (ROP) remains the leading cause of childhood blindness worldwide. Recent advances in ROP imaging have significantly improved our understanding of the pathogenesis and pathophysiological course of ROP including the acute phase, regression, reactivation, and late complications, known as adult ROP. Recent progress includes various contact and noncontact wide-field imaging devices for fundus imaging, smartphone-based fundus photography, wide-field fluorescein angiography, handheld optical coherence tomography (OCT) devices for wide-field en face OCT images, and OCT angiography. Images taken by those devices were incorporated in the recently updated guidelines of ROP, the International Classification of Retinopathy of Prematurity, Third Edition (ICROP3). ROP imaging has also allowed the real-world adoption of telemedicine- and artificial intelligence (AI)-based screening. Recent study demonstrated proof of concept that AI has a high diagnostic performance for the detection of ROP in a real-world screening. Here, we summarize the recent advances in ROP imaging and their application for screening, diagnosis, and management of ROP.
Collapse
Affiliation(s)
- Taku Wakabayashi
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
| | - Samir N. Patel
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
| | - J. P. Campbell
- Department of Ophthalmology, Oregon Health and Science University, Portland, Oregon, USA
| | | | - Eric D. Nudleman
- Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, California, USA
| | - Yoshihiro Yonekawa
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania, USA,Address for correspondence: Dr. Yoshihiro Yonekawa, Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania, USA. E-mail:
| |
Collapse
|
136
|
Tan Z, Isaacs M, Zhu Z, Simkin S, He M, Dai S. Retinopathy of prematurity screening: A narrative review of current programs, teleophthalmology, and diagnostic support systems. Saudi J Ophthalmol 2022; 36:283-295. [PMID: 36276257 PMCID: PMC9583350 DOI: 10.4103/sjopt.sjopt_220_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 10/04/2021] [Accepted: 11/12/2021] [Indexed: 01/24/2023] Open
Abstract
PURPOSE Neonatal care in middle-income countries has improved over the last decade, leading to a "third epidemic" of retinopathy of prematurity (ROP). Without concomitant improvements in ROP screening infrastructure, reduction of ROP-associated visual loss remains a challenge worldwide. The emergence of teleophthalmology screening programs and artificial intelligence (AI) technologies represents promising methods to address this growing unmet demand in ROP screening. An improved understanding of current ROP screening programs may inform the adoption of these novel technologies in ROP care. METHODS A critical narrative review of the literature was carried out. Publications that were representative of established or emerging ROP screening programs in high-, middle-, and low-income countries were selected for review. Screening programs were reviewed for inclusion criteria, screening frequency and duration, modality, and published sensitivity and specificity. RESULTS Screening inclusion criteria, including age and birth weight cutoffs, showed significant heterogeneity globally. Countries of similar income tend to have similar criteria. Three primary screening modalities including binocular indirect ophthalmoscopy (BIO), wide-field digital retinal imaging (WFDRI), and teleophthalmology were identified and reviewed. BIO has documented limitations in reduced interoperator agreement, scalability, and geographical access barriers, which are mitigated in part by WFDRI. Teleophthalmology screening may address limitations in ROP screening workforce distribution and training. Opportunities for AI technologies were identified in the context of these limitations, including interoperator reliability and possibilities for point-of-care diagnosis. CONCLUSION Limitations in the current ROP screening include scalability, geographical access, and high screening burden with low treatment yield. These may be addressable through increased adoption of teleophthalmology and AI technologies. As the global incidence of ROP continues to increase, implementation of these novel modalities requires greater consideration.
Affiliation(s)
- Zachary Tan
- Centre for Eye Research Australia, University of Melbourne, Melbourne, Brisbane, Australia,Department of Clinical Medicine, Faculty of Medicine, University of Queensland, Brisbane, Australia
| | - Michael Isaacs
- Department of Clinical Medicine, Faculty of Medicine, University of Queensland, Brisbane, Australia,Department of Ophthalmology, Queensland Children's Hospital, Brisbane, Australia
| | - Zhuoting Zhu
- Centre for Eye Research Australia, University of Melbourne, Melbourne, Brisbane, Australia
| | - Samantha Simkin
- Department of Ophthalmology, The University of Auckland, Auckland, New Zealand
| | - Mingguang He
- Centre for Eye Research Australia, University of Melbourne, Melbourne, Brisbane, Australia
| | - Shuan Dai
- Department of Clinical Medicine, Faculty of Medicine, University of Queensland, Brisbane, Australia,Department of Ophthalmology, Queensland Children's Hospital, Brisbane, Australia,Address for correspondence: Dr. Shuan Dai, Assoc. Prof. Shuan Dai, Faculty of Medicine, The University of Queensland, Brisbane, Australia. E-mail:
| |
137
|
Font O, Torrents-Barrena J, Royo D, García SB, Zarranz-Ventura J, Bures A, Salinas C, Zapata MÁ. Validation of an autonomous artificial intelligence-based diagnostic system for holistic maculopathy screening in a routine occupational health checkup context. Graefes Arch Clin Exp Ophthalmol 2022; 260:3255-3265. [PMID: 35567610 PMCID: PMC9477940 DOI: 10.1007/s00417-022-05653-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 03/15/2022] [Accepted: 03/31/2022] [Indexed: 02/08/2023] Open
Abstract
PURPOSE This study aims to evaluate the ability of an autonomous artificial intelligence (AI) system to detect the most common central retinal pathologies in fundus photography. METHODS Retrospective diagnostic test evaluation on a raw dataset of 5918 images (2839 individuals) acquired with non-mydriatic cameras during routine occupational health checkups. Three camera models were employed: Optomed Aurora (field of view [FOV] 50°, 88% of the dataset), ZEISS VISUSCOUT 100 (FOV 40°, 9%), and Optomed SmartScope M5 (FOV 40°, 3%). Image acquisition took 2 min per patient. Ground truth for each image of the dataset was determined by 2 masked retina specialists, and disagreements were resolved by a 3rd retina specialist. The specific pathologies considered for evaluation were diabetic retinopathy (DR), age-related macular degeneration (AMD), glaucomatous optic neuropathy (GON), and nevus. Images with maculopathy signs that did not match the described taxonomy were classified as "Other." RESULTS The combination of algorithms to detect any abnormality had an area under the curve (AUC) of 0.963, with a sensitivity of 92.9% and a specificity of 86.8%. Individually, the algorithms obtained the following results: AMD, AUC 0.980 (sensitivity 93.8%; specificity 95.7%); DR, AUC 0.950 (sensitivity 81.1%; specificity 94.8%); GON, AUC 0.889 (sensitivity 53.6%; specificity 95.7%); nevus, AUC 0.931 (sensitivity 86.7%; specificity 90.7%). CONCLUSION Our holistic AI approach reaches high diagnostic accuracy in the simultaneous detection of DR, AMD, and nevus. The integration of pathology-specific algorithms permits higher sensitivities with minimal impact on specificity. It also reduces the risk of missing incidental findings. Deep learning may facilitate wider screening for eye diseases.
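The per-pathology figures above (AUC, sensitivity, specificity) are standard detection metrics; a minimal sketch of how such metrics can be computed from a detector's probability outputs is shown below, assuming scikit-learn and illustrative labels rather than the study's data or code.

```python
# Illustrative only: computing AUC, sensitivity, and specificity for one
# pathology detector from hypothetical ground-truth labels and model scores.
import numpy as np
from sklearn.metrics import roc_auc_score

def summarize_detector(y_true, y_score, threshold=0.5):
    """y_true: 1 = pathology present per the retina specialists' ground truth;
    y_score: the algorithm's probability for that pathology."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {"AUC": roc_auc_score(y_true, y_score),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

# Toy example with made-up values:
print(summarize_detector([1, 0, 1, 1, 0, 0, 1, 0],
                         [0.9, 0.2, 0.7, 0.4, 0.1, 0.3, 0.8, 0.6]))
```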
Affiliation(s)
- Octavi Font
- Optretina Image Reading Team, Barcelona, Spain
| | - Jordina Torrents-Barrena
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
| | - Dídac Royo
- Optretina Image Reading Team, Barcelona, Spain
| | - Sandra Banderas García
- Facultat de Cirurgia i Ciències Morfològiques, Universitat Autònoma de Barcelona (UAB), Barcelona, Spain.
- Ophthalmology Department Hospital Vall d'Hebron, Barcelona, Spain.
| | - Javier Zarranz-Ventura
- Institut Clinic of Ophthalmology (ICOF), Hospital Clinic, Barcelona, Spain
- Institut d'Investigacions Biomediques August Pi I Sunyer (IDIBAPS), Barcelona, Spain
| | - Anniken Bures
- Optretina Image Reading Team, Barcelona, Spain
- Instituto de Microcirugía Ocular (IMO), Barcelona, Spain
| | - Cecilia Salinas
- Optretina Image Reading Team, Barcelona, Spain
- Instituto de Microcirugía Ocular (IMO), Barcelona, Spain
| | - Miguel Ángel Zapata
- Optretina Image Reading Team, Barcelona, Spain
- Ophthalmology Department Hospital Vall d'Hebron, Barcelona, Spain
| |
138
|
Gil EM, Keppler M, Boretsky A, Yakovlev VV, Bixler JN. Segmentation of laser induced retinal lesions using deep learning (December 2021). Lasers Surg Med 2022; 54:1130-1142. [PMID: 35781887 PMCID: PMC9464686 DOI: 10.1002/lsm.23578] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 05/18/2022] [Accepted: 06/13/2022] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Detection of retinal laser lesions is necessary both in evaluating the extent of damage from high-power laser sources and in validating treatments that involve the placement of laser lesions. However, such lesions are difficult to detect using color fundus cameras alone. Deep learning-based segmentation can remedy this by highlighting potential lesions in the image. METHODS A unique database of images collected at the Air Force Research Laboratory over the past 30 years was used to train deep learning models for classifying images with lesions and for subsequent segmentation. We investigate whether transferring weights from models trained for classification improves the performance of the segmentation models. We use Pearson's correlation coefficient between the initial and final training phases to reveal how the networks transfer features. RESULTS The segmentation models effectively segment a broad range of lesions across varied imaging conditions. CONCLUSION Deep learning-based segmentation can effectively highlight laser lesions, making it a useful tool for aiding clinicians.
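As a rough illustration of the weight-transfer idea described above (not the authors' code, which is not reproduced here), the sketch below initializes a hypothetical segmentation encoder from a lesion classifier with matching layer names and then uses Pearson's correlation coefficient to see how far each layer's weights drift during fine-tuning; the ResNet-18 backbone is an assumption made for brevity.

```python
# Assumed setup: classifier and segmentation encoder share ResNet-18 layer names.
import copy
import numpy as np
import torch
import torchvision

classifier = torchvision.models.resnet18(num_classes=2)   # "lesion present?" model
seg_encoder = torchvision.models.resnet18(num_classes=2)  # encoder of a U-Net-style model

# Transfer classification weights into the segmentation encoder.
seg_encoder.load_state_dict(classifier.state_dict(), strict=False)
initial = copy.deepcopy(seg_encoder.state_dict())

# ... fine-tune seg_encoder on the segmentation task here ...

def weight_correlation(before: torch.Tensor, after: torch.Tensor) -> float:
    """Pearson r between a layer's weights before and after fine-tuning."""
    a = before.detach().cpu().numpy().ravel()
    b = after.detach().cpu().numpy().ravel()
    return float(np.corrcoef(a, b)[0, 1])

final = seg_encoder.state_dict()
for name in initial:
    if name.endswith(".weight") and initial[name].dim() > 1:  # conv / linear layers
        print(name, weight_correlation(initial[name], final[name]))
```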
Affiliation(s)
- Eddie M Gil
- Department of Biomedical Engineering, Texas A&M University, College Station, Texas, USA
- SAIC, JBSA Fort Sam, Houston, Texas, USA
| | - Mark Keppler
- Department of Biomedical Engineering, Texas A&M University, College Station, Texas, USA
- SAIC, JBSA Fort Sam, Houston, Texas, USA
| | | | - Vladislav V Yakovlev
- Department of Biomedical Engineering, Texas A&M University, College Station, Texas, USA
| | - Joel N Bixler
- Air Force Research Laboratory, JBSA Fort Sam, Houston, Texas, USA
| |
139
|
Patil AD, Biousse V, Newman NJ. Artificial intelligence in ophthalmology: an insight into neurodegenerative disease. Curr Opin Ophthalmol 2022; 33:432-439. [PMID: 35819902 DOI: 10.1097/icu.0000000000000877] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE OF REVIEW The aging world population accounts for the increasing prevalence of neurodegenerative diseases such as Alzheimer's and Parkinson's disease, which carry a significant health and economic burden. There is therefore a need for sensitive and specific noninvasive biomarkers for early diagnosis and monitoring. Advances in retinal and optic nerve multimodal imaging, together with the development of artificial intelligence deep learning systems (AI-DLS), have heralded a number of promising developments, with ophthalmologists at the forefront. RECENT FINDINGS The association among retinal vascular, nerve fiber layer, and macular findings in neurodegenerative disease is well established. In order to optimize the use of these ophthalmic parameters as biomarkers, validated AI-DLS are required to ensure clinical efficacy and reliability. Varied image acquisition methods and protocols, as well as variability in neurodegenerative disease diagnosis, compromise the robustness of the ground truths that are paramount to developing high-quality training datasets. SUMMARY In order to produce effective AI-DLS for the diagnosis and monitoring of neurodegenerative disease, multicenter international collaboration is required to prospectively produce large, inclusive datasets acquired through standardized methods and protocols. With a uniform approach, the efficacy of the resultant clinical applications will be maximized.
Affiliation(s)
| | | | - Nancy J Newman
- Department of Ophthalmology
- Department of Neurology
- Department of Neurological Surgery, Emory University School of Medicine, Atlanta, Georgia, USA
| |
140
|
Lam TYT, Cheung MFK, Munro YL, Lim KM, Shung D, Sung JJY. Randomized Controlled Trials of Artificial Intelligence in Clinical Practice: Systematic Review. J Med Internet Res 2022; 24:e37188. [PMID: 35904087 PMCID: PMC9459941 DOI: 10.2196/37188] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Revised: 06/13/2022] [Accepted: 07/29/2022] [Indexed: 11/25/2022] Open
Abstract
BACKGROUND The number of artificial intelligence (AI) studies in medicine has exponentially increased recently. However, there is no clear quantification of the clinical benefits of implementing AI-assisted tools in patient care. OBJECTIVE This study aims to systematically review all published randomized controlled trials (RCTs) of AI-assisted tools to characterize their performance in clinical practice. METHODS CINAHL, Cochrane Central, Embase, MEDLINE, and PubMed were searched to identify relevant RCTs published up to July 2021 and comparing the performance of AI-assisted tools with conventional clinical management without AI assistance. We evaluated the primary end points of each study to determine their clinical relevance. This systematic review was conducted following the updated PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines. RESULTS Among the 11,839 articles retrieved, only 39 (0.33%) RCTs were included. These RCTs were conducted in an approximately equal distribution from North America, Europe, and Asia. AI-assisted tools were implemented in 13 different clinical specialties. Most RCTs were published in the field of gastroenterology, with 15 studies on AI-assisted endoscopy. Most RCTs studied biosignal-based AI-assisted tools, and a minority of RCTs studied AI-assisted tools drawn from clinical data. In 77% (30/39) of the RCTs, AI-assisted interventions outperformed usual clinical care, and clinically relevant outcomes improved with AI-assisted intervention in 70% (21/30) of the studies. Small sample size and single-center design limited the generalizability of these studies. CONCLUSIONS There is growing evidence supporting the implementation of AI-assisted tools in daily clinical practice; however, the number of available RCTs is limited and heterogeneous. More RCTs of AI-assisted tools integrated into clinical practice are needed to advance the role of AI in medicine. TRIAL REGISTRATION PROSPERO CRD42021286539; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=286539.
Affiliation(s)
- Thomas Y T Lam
- The Jockey Club School of Public Health and Primary Care, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Stanley Ho Big Data Decision Analytics Research Centre, The Chinese University of Hong Kong., Hong Kong, Hong Kong
| | - Max F K Cheung
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
| | - Yasmin L Munro
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
| | - Kong Meng Lim
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
| | - Dennis Shung
- Department of Medicine (Digestive Diseases), Yale School of Medicine, New Haven, CT, United States
| | - Joseph J Y Sung
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
| |
141
|
Chen JS, Baxter SL. Applications of natural language processing in ophthalmology: present and future. Front Med (Lausanne) 2022; 9:906554. [PMID: 36004369 PMCID: PMC9393550 DOI: 10.3389/fmed.2022.906554] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 05/31/2022] [Indexed: 11/13/2022] Open
Abstract
Advances in technology, including novel ophthalmic imaging devices and adoption of the electronic health record (EHR), have resulted in significantly increased data available for both clinical use and research in ophthalmology. While artificial intelligence (AI) algorithms have the potential to utilize these data to transform clinical care, current applications of AI in ophthalmology have focused mostly on image-based deep learning. Unstructured free-text in the EHR represents a tremendous amount of underutilized data in big data analyses and predictive AI. Natural language processing (NLP) is a type of AI involved in processing human language that can be used to develop automated algorithms using these vast quantities of available text data. The purpose of this review was to introduce ophthalmologists to NLP by (1) reviewing current applications of NLP in ophthalmology and (2) exploring potential applications of NLP. We reviewed current literature published in Pubmed and Google Scholar for articles related to NLP and ophthalmology, and used ancestor search to expand our references. Overall, we found 19 published studies of NLP in ophthalmology. The majority of these publications (16) focused on extracting specific text such as visual acuity from free-text notes for the purposes of quantitative analysis. Other applications included: domain embedding, predictive modeling, and topic modeling. Future ophthalmic applications of NLP may also focus on developing search engines for data within free-text notes, cleaning notes, automated question-answering, and translating ophthalmology notes for other specialties or for patients, especially with a growing interest in open notes. As medicine becomes more data-oriented, NLP offers increasing opportunities to augment our ability to harness free-text data and drive innovations in healthcare delivery and treatment of ophthalmic conditions.
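As a concrete (and deliberately simplified) illustration of the most common use case noted above, extracting visual acuity from free-text notes, the snippet below uses a rule-based regular expression; the cited studies' NLP pipelines are more sophisticated, and the pattern and example note here are hypothetical.

```python
# Hypothetical rule-based extraction of Snellen visual acuity from note text.
import re

VA_PATTERN = re.compile(r"\b(20\s*/\s*\d{2,3})(?:\s*[+-]\d)?")

def extract_visual_acuity(note: str) -> list[str]:
    """Return all Snellen acuities (e.g., '20/40') found in a clinical note."""
    return [m.group(1).replace(" ", "") for m in VA_PATTERN.finditer(note)]

note = "VA OD 20/25-1, OS 20/400, improving to 20/70 with pinhole."
print(extract_visual_acuity(note))  # ['20/25', '20/400', '20/70']
```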
Affiliation(s)
- Jimmy S. Chen
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, United States
- Health Department of Biomedical Informatics, University of California San Diego, La Jolla, CA, United States
| | - Sally L. Baxter
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, United States
- Health Department of Biomedical Informatics, University of California San Diego, La Jolla, CA, United States
| |
142
|
Lu C, Hanif A, Singh P, Chang K, Coyner AS, Brown JM, Ostmo S, Chan RVP, Rubin D, Chiang MF, Campbell JP, Kalpathy-Cramer J. Federated Learning for Multicenter Collaboration in Ophthalmology: Improving Classification Performance in Retinopathy of Prematurity. Ophthalmol Retina 2022; 6:657-663. [PMID: 35296449 DOI: 10.1016/j.oret.2022.02.015] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 02/10/2022] [Accepted: 02/28/2022] [Indexed: 11/29/2022]
Abstract
OBJECTIVE To compare the performance of deep learning classifiers for the diagnosis of plus disease in retinopathy of prematurity (ROP) trained using 2 methods for developing models on multi-institutional data sets: centralizing data versus federated learning (FL), in which no data leave each institution. DESIGN Evaluation of a diagnostic test or technology. SUBJECTS Deep learning models were trained, validated, and tested on 5255 wide-angle retinal images in the neonatal intensive care units of 7 institutions as part of the Imaging and Informatics in ROP study. All images were labeled for the presence of plus, preplus, or no plus disease with a clinical label and a reference standard diagnosis (RSD) determined by 3 image-based ROP graders and the clinical diagnosis. METHODS We compared the area under the receiver operating characteristic curve (AUROC) for models developed on multi-institutional data, using a central approach initially, followed by FL, and compared locally trained models with both approaches. We compared the model performance (κ) with the label agreement (between clinical and RSD), data set size, and number of plus disease cases in each training cohort using the Spearman correlation coefficient (CC). MAIN OUTCOME MEASURES Model performance using AUROC and linearly weighted κ. RESULTS Four experimental comparisons of FL against central training were made: both trained on the RSD, both trained on clinical labels, FL trained on the RSD versus central trained on clinical labels, and FL trained on clinical labels versus central trained on the RSD (P = 0.046, P = 0.126, P = 0.224, and P = 0.0173, respectively). Four of the 7 (57%) models trained on local institutional data performed inferiorly to the FL models. The model performance for local models was positively correlated with the label agreement (between clinical and RSD labels, CC = 0.389, P = 0.387), total number of plus cases (CC = 0.759, P = 0.047), and overall training set size (CC = 0.924, P = 0.002). CONCLUSIONS We found that a trained FL model performs comparably to a centralized model, confirming that FL may provide an effective, more feasible solution for interinstitutional learning. Smaller institutions benefit more from collaboration than larger institutions, showing the potential of FL for addressing disparities in resource access.
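For readers unfamiliar with federated learning, the schematic sketch below shows the FedAvg-style idea being compared above: each institution fine-tunes a copy of the shared model on its own images, and only weights (never images) are aggregated. It is an illustration under simplifying assumptions, not the study's implementation.

```python
# Simplified FedAvg-style round in PyTorch (illustrative, not the study's code).
import copy
import torch

def local_update(global_model, loader, epochs=1, lr=1e-3):
    """One institution trains a copy of the global model on its own data."""
    local = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()  # plus / preplus / no plus disease
    local.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss_fn(local(images), labels).backward()
            optimizer.step()
    return local.state_dict(), len(loader.dataset)

def federated_round(global_model, site_loaders):
    """Aggregate locally trained weights, weighting each site by its sample count."""
    updates = [local_update(global_model, loader) for loader in site_loaders]
    total = sum(n for _, n in updates)
    new_state = copy.deepcopy(global_model.state_dict())
    for key in new_state:
        new_state[key] = sum(sd[key].float() * (n / total) for sd, n in updates)
    global_model.load_state_dict(new_state)
    return global_model
```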
Affiliation(s)
- Charles Lu
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
| | - Adam Hanif
- Department of Ophthalmology, Oregon Health and Science University, Portland, Oregon
| | - Praveer Singh
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
| | - Ken Chang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
| | - Aaron S Coyner
- Department of Ophthalmology, Oregon Health and Science University, Portland, Oregon
| | - James M Brown
- School of Computer Science, University of Lincoln, Lincoln, United Kingdom
| | - Susan Ostmo
- Department of Ophthalmology, Oregon Health and Science University, Portland, Oregon
| | - Robison V Paul Chan
- Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, Illinois
| | - Daniel Rubin
- Center for Biomedical Informatics Research, Stanford University School of Medicine, Stanford, California
| | - Michael F Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
| | - John Peter Campbell
- Department of Ophthalmology, Oregon Health and Science University, Portland, Oregon
| | - Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts.
| |
143
|
Coyner AS, Oh MA, Shah PK, Singh P, Ostmo S, Valikodath NG, Cole E, Al-Khaled T, Bajimaya S, K.C. S, Chuluunbat T, Munkhuu B, Subramanian P, Venkatapathy N, Jonas KE, Hallak JA, Chan RP, Chiang MF, Kalpathy-Cramer J, Campbell JP. External Validation of a Retinopathy of Prematurity Screening Model Using Artificial Intelligence in 3 Low- and Middle-Income Populations. JAMA Ophthalmol 2022; 140:791-798. [PMID: 35797036 PMCID: PMC9264225 DOI: 10.1001/jamaophthalmol.2022.2135] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 04/30/2022] [Indexed: 02/05/2023]
Abstract
Importance Retinopathy of prematurity (ROP) is a leading cause of preventable blindness that disproportionately affects children born in low- and middle-income countries (LMICs). In-person and telemedical screening examinations can reduce this risk but are challenging to implement in LMICs owing to the multitude of at-risk infants and lack of trained ophthalmologists. Objective To implement an ROP risk model using retinal images from a single baseline examination to identify infants who will develop treatment-requiring (TR)-ROP in LMIC telemedicine programs. Design, Setting, and Participants In this diagnostic study conducted from February 1, 2019, to June 30, 2021, retinal fundus images were collected from infants as part of an Indian ROP telemedicine screening program. An artificial intelligence (AI)-derived vascular severity score (VSS) was obtained from images from the first examination after 30 weeks' postmenstrual age. Using 5-fold cross-validation, logistic regression models were trained on 2 variables (gestational age and VSS) for prediction of TR-ROP. The model was externally validated on test data sets from India, Nepal, and Mongolia. Data were analyzed from October 20, 2021, to April 20, 2022. Main Outcomes and Measures Primary outcome measures included sensitivity, specificity, positive predictive value, and negative predictive value for predictions of future occurrences of TR-ROP; the number of weeks before clinical diagnosis when a prediction was made; and the potential reduction in number of examinations required. Results A total of 3760 infants (median [IQR] postmenstrual age, 37 [5] weeks; 1950 male infants [51.9%]) were included in the study. The diagnostic model had a sensitivity and specificity, respectively, for each of the data sets as follows: India, 100.0% (95% CI, 87.2%-100.0%) and 63.3% (95% CI, 59.7%-66.8%); Nepal, 100.0% (95% CI, 54.1%-100.0%) and 77.8% (95% CI, 72.9%-82.2%); and Mongolia, 100.0% (95% CI, 93.3%-100.0%) and 45.8% (95% CI, 39.7%-52.1%). With the AI model, infants with TR-ROP were identified a median (IQR) of 2.0 (0-11) weeks before TR-ROP diagnosis in India, 0.5 (0-2.0) weeks before TR-ROP diagnosis in Nepal, and 0 (0-5.0) weeks before TR-ROP diagnosis in Mongolia. If low-risk infants were never screened again, the population could be effectively screened with 45.0% (India, 664/1476), 38.4% (Nepal, 151/393), and 51.3% (Mongolia, 266/519) fewer examinations required. Conclusions and Relevance Results of this diagnostic study suggest that there were 2 advantages to implementation of this risk model: (1) the number of examinations for low-risk infants could be reduced without missing cases of TR-ROP, and (2) high-risk infants could be identified and closely monitored before development of TR-ROP.
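A hedged sketch of the two-variable risk model described above is given below: logistic regression on gestational age plus the AI-derived vascular severity score (VSS), with 5-fold cross-validation and a threshold chosen to keep sensitivity at 100%. The data here are synthetic placeholders; only the modeling structure mirrors the abstract.

```python
# Illustrative two-variable TR-ROP risk model (synthetic data, not the study's).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 200
ga_weeks = rng.normal(28, 2, n)                       # gestational age (weeks)
vss = rng.uniform(1, 9, n)                            # AI-derived vascular severity score
tr_rop = (vss + rng.normal(0, 1, n) > 7).astype(int)  # toy TR-ROP labels

X = np.column_stack([ga_weeks, vss])
probs = cross_val_predict(LogisticRegression(), X, tr_rop, cv=5,
                          method="predict_proba")[:, 1]

# Choose a threshold that keeps sensitivity at 100% (no missed TR-ROP),
# then report the corresponding specificity, mirroring the paper's framing.
threshold = probs[tr_rop == 1].min()
pred = (probs >= threshold).astype(int)
tn, fp, fn, tp = confusion_matrix(tr_rop, pred).ravel()
print(f"sensitivity={tp / (tp + fn):.1%}, specificity={tn / (tn + fp):.1%}")
```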
Affiliation(s)
- Aaron S. Coyner
- Casey Eye Institute, Oregon Health & Science University, Portland
| | - Minn A. Oh
- Casey Eye Institute, Oregon Health & Science University, Portland
| | - Parag K. Shah
- Pediatric Retina and Ocular Oncology Division, Aravind Eye Hospital, Coimbatore, India
| | - Praveer Singh
- Massachusetts General Hospital and Brigham and Women’s Hospital Center for Clinical Data Science, Boston, Massachusetts
- Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, Massachusetts
| | - Susan Ostmo
- Casey Eye Institute, Oregon Health & Science University, Portland
| | - Nita G. Valikodath
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
| | - Emily Cole
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
| | - Tala Al-Khaled
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
| | | | - Sagun K.C.
- Helen Keller International, Kathmandu, Nepal
| | | | - Bayalag Munkhuu
- National Center for Maternal and Child Health, Ulaanbaatar, Mongolia
| | - Prema Subramanian
- Pediatric Retina and Ocular Oncology Division, Aravind Eye Hospital, Coimbatore, India
| | | | - Karyn E. Jonas
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
| | - Joelle A. Hallak
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
| | - R.V. Paul Chan
- Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago
| | - Michael F. Chiang
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
| | - Jayashree Kalpathy-Cramer
- Massachusetts General Hospital and Brigham and Women’s Hospital Center for Clinical Data Science, Boston, Massachusetts
- Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, Massachusetts
| | | |
144
|
Khan NC, Perera C, Dow ER, Chen KM, Mahajan VB, Mruthyunjaya P, Do DV, Leng T, Myung D. Predicting Systemic Health Features from Retinal Fundus Images Using Transfer-Learning-Based Artificial Intelligence Models. Diagnostics (Basel) 2022; 12:diagnostics12071714. [PMID: 35885619 PMCID: PMC9322827 DOI: 10.3390/diagnostics12071714] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2022] [Revised: 06/23/2022] [Accepted: 06/24/2022] [Indexed: 12/02/2022] Open
Abstract
While color fundus photos are used in routine clinical practice to diagnose ophthalmic conditions, evidence suggests that ocular imaging contains valuable information regarding the systemic health features of patients. These features can be identified through computer vision techniques including deep learning (DL) artificial intelligence (AI) models. We aim to construct a DL model that can predict systemic features from fundus images and to determine the optimal method of model construction for this task. Data were collected from a cohort of patients undergoing diabetic retinopathy screening between March 2020 and March 2021. Two models were created for each of 12 systemic health features based on the DenseNet201 architecture: one utilizing transfer learning with images from ImageNet and another from 35,126 fundus images. Here, 1277 fundus images were used to train the AI models. Area under the receiver operating characteristics curve (AUROC) scores were used to compare the model performance. Models utilizing the ImageNet transfer learning data were superior to those using retinal images for transfer learning (mean AUROC 0.78 vs. 0.65, p-value < 0.001). Models using ImageNet pretraining were able to predict systemic features including ethnicity (AUROC 0.93), age > 70 (AUROC 0.90), gender (AUROC 0.85), ACE inhibitor (AUROC 0.82), and ARB medication use (AUROC 0.78). We conclude that fundus images contain valuable information about the systemic characteristics of a patient. To optimize DL model performance, we recommend that even domain specific models consider using transfer learning from more generalized image sets to improve accuracy.
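The sketch below illustrates the kind of setup compared above, a DenseNet201 with its ImageNet-pretrained weights versus an alternative initialization, using PyTorch/torchvision (assumed here; the paper does not specify its framework, and the fundus-pretrained checkpoint path is a hypothetical placeholder).

```python
# Illustrative DenseNet201 transfer-learning setup (torchvision >= 0.13 API assumed).
import torch
import torch.nn as nn
import torchvision

def build_densenet201(imagenet_pretrained: bool) -> nn.Module:
    weights = torchvision.models.DenseNet201_Weights.DEFAULT if imagenet_pretrained else None
    model = torchvision.models.densenet201(weights=weights)
    # Replace the 1000-class ImageNet head with one logit for a binary
    # systemic feature (e.g., age > 70).
    model.classifier = nn.Linear(model.classifier.in_features, 1)
    return model

imagenet_model = build_densenet201(imagenet_pretrained=True)
fundus_model = build_densenet201(imagenet_pretrained=False)
# In the study, the alternative model started from weights pretrained on 35,126
# fundus images; loading such a checkpoint would look like this (path is hypothetical):
# fundus_model.load_state_dict(torch.load("densenet201_fundus_pretrained.pt"))

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(imagenet_model.parameters(), lr=1e-4)
# Both models would then be fine-tuned on the ~1277 study images and compared by AUROC.
```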
Affiliation(s)
- Nergis C. Khan
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA; (N.C.K.); (C.P.); (E.R.D.); (K.M.C.); (V.B.M.); (P.M.); (D.V.D.); (T.L.)
| | - Chandrashan Perera
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA; (N.C.K.); (C.P.); (E.R.D.); (K.M.C.); (V.B.M.); (P.M.); (D.V.D.); (T.L.)
- Department of Ophthalmology, Fremantle Hospital, Perth, WA 6004, Australia
| | - Eliot R. Dow
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA; (N.C.K.); (C.P.); (E.R.D.); (K.M.C.); (V.B.M.); (P.M.); (D.V.D.); (T.L.)
| | - Karen M. Chen
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA; (N.C.K.); (C.P.); (E.R.D.); (K.M.C.); (V.B.M.); (P.M.); (D.V.D.); (T.L.)
| | - Vinit B. Mahajan
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA; (N.C.K.); (C.P.); (E.R.D.); (K.M.C.); (V.B.M.); (P.M.); (D.V.D.); (T.L.)
| | - Prithvi Mruthyunjaya
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA; (N.C.K.); (C.P.); (E.R.D.); (K.M.C.); (V.B.M.); (P.M.); (D.V.D.); (T.L.)
| | - Diana V. Do
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA; (N.C.K.); (C.P.); (E.R.D.); (K.M.C.); (V.B.M.); (P.M.); (D.V.D.); (T.L.)
| | - Theodore Leng
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA; (N.C.K.); (C.P.); (E.R.D.); (K.M.C.); (V.B.M.); (P.M.); (D.V.D.); (T.L.)
| | - David Myung
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA 94305, USA; (N.C.K.); (C.P.); (E.R.D.); (K.M.C.); (V.B.M.); (P.M.); (D.V.D.); (T.L.)
- VA Palo Alto Health Care System, Palo Alto, CA 94304, USA
- Correspondence: ; Tel.: +1-650-724-3948
| |
145
|
Young LH, Kim J, Yakin M, Lin H, Dao DT, Kodati S, Sharma S, Lee AY, Lee CS, Sen HN. Automated Detection of Vascular Leakage in Fluorescein Angiography - A Proof of Concept. Transl Vis Sci Technol 2022; 11:19. [PMID: 35877095 PMCID: PMC9339697 DOI: 10.1167/tvst.11.7.19] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose The purpose of this paper was to develop a deep learning algorithm to detect retinal vascular leakage (leakage) in fluorescein angiography (FA) of patients with uveitis and use the trained algorithm to determine clinically notable leakage changes. Methods An algorithm was trained and tested to detect leakage on a set of 200 FA images (61 patients) and evaluated on a separate 50-image test set (21 patients). The ground truth was leakage segmentation by two clinicians. The Dice Similarity Coefficient (DSC) was used to measure concordance. Results During training, the algorithm achieved a best average DSC of 0.572 (95% confidence interval [CI] = 0.548–0.596). The trained algorithm achieved a DSC of 0.563 (95% CI = 0.543–0.582) when tested on an additional set of 50 images. The trained algorithm was then used to detect leakage on pairs of FA images from longitudinal patient visits. Longitudinal leakage follow-up showed that a >2.21% change in the visible retinal area covered by leakage (as detected by the algorithm) had a sensitivity and specificity of 90% (area under the curve [AUC] = 0.95) for detecting a clinically notable change compared with the gold standard, an expert clinician's assessment. Conclusions This deep learning algorithm showed modest concordance in identifying vascular leakage compared to ground truth but was able to aid in identifying vascular FA leakage changes over time. Translational Relevance This is a proof-of-concept study showing that vascular leakage can be detected in a more standardized way and that tools can be developed to help clinicians compare vascular leakage between FAs more objectively.
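For reference, the two quantities used above, the Dice Similarity Coefficient between an algorithm's leakage mask and a clinician's segmentation, and the percentage of the visible retina covered by leakage, can be computed from binary masks as in the minimal sketch below (function and variable names are illustrative, not from the paper).

```python
# Minimal sketch: Dice overlap and leakage coverage from binary masks.
import numpy as np

def dice_similarity(pred_mask: np.ndarray, truth_mask: np.ndarray) -> float:
    """DSC between an algorithm's mask and a clinician's manual segmentation."""
    pred = pred_mask.astype(bool)
    truth = truth_mask.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

def leakage_fraction(leak_mask: np.ndarray, retina_mask: np.ndarray) -> float:
    """Percent of the visible retina covered by leakage; a change >2.21%
    between visits was the clinically notable threshold reported above."""
    return 100.0 * np.logical_and(leak_mask, retina_mask).sum() / retina_mask.sum()
```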
Affiliation(s)
- LeAnne H Young
- National Eye Institute, Bethesda, MD, USA.,Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA
| | - Jongwoo Kim
- National Library of Medicine, Bethesda, MD, USA
| | | | - Henry Lin
- National Eye Institute, Bethesda, MD, USA
| | | | | | - Sumit Sharma
- Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
| | | | | | - H Nida Sen
- National Eye Institute, Bethesda, MD, USA
| |
146
|
Lin KY, Urban G, Yang MC, Lee LC, Lu DW, Alward WLM, Baldi P. Accurate Identification of the Trabecular Meshwork under Gonioscopic View in Real Time Using Deep Learning. Ophthalmol Glaucoma 2022; 5:402-412. [PMID: 34798322 DOI: 10.1016/j.ogla.2021.11.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 10/27/2021] [Accepted: 11/10/2021] [Indexed: 06/13/2023]
Abstract
PURPOSE Accurate identification of iridocorneal structures on gonioscopy is difficult to master, and errors can lead to grave surgical complications. This study aimed to develop and train convolutional neural networks (CNNs) to accurately identify the trabecular meshwork (TM) in gonioscopic videos in real time for eventual clinical integration. DESIGN Cross-sectional study. PARTICIPANTS Adult patients with open angles were identified in academic glaucoma clinics in both Taipei, Taiwan, and Irvine, California. METHODS Neural encoder-decoder CNNs (U-Nets) were trained to predict a curve marking the TM using an expert-annotated data set of 378 gonioscopy images. The model was trained and evaluated with stratified cross-validation grouped by patient to ensure uncorrelated training and testing sets, as well as on a separate test set and 3 intraoperative gonioscopic videos of ab interno trabeculotomy with Trabectome (totaling 90 seconds at 30 frames per second). We also evaluated our model's performance by comparing its accuracy against that of ophthalmologists. MAIN OUTCOME MEASURES Successful development of real-time-capable CNNs that are accurate in predicting and marking the TM's position in video frames of gonioscopic views. Models were evaluated in comparison with human expert annotations of static images and video data. RESULTS The best CNN model produced test set predictions with a median deviation of 0.8% of the video frame's height (15.25 μm) from the human experts' annotations. This error is less than the average vertical height of the TM. The worst test frame prediction of this model had an average deviation of 4% of the frame height (76.28 μm), which is still considered a successful prediction. When challenged with unseen images, the CNN model scored greater than 2 standard deviations above the mean performance of the surveyed general ophthalmologists. CONCLUSIONS Our CNN model can identify the TM in gonioscopy videos in real time with remarkable accuracy, allowing it to be used in connection with a video camera intraoperatively. This model can have applications in surgical training, automated screening, and intraoperative guidance. The dataset developed in this study is one of the first publicly available gonioscopy image banks (https://lin.hs.uci.edu/research), which may encourage future investigations into this topic.
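As a small worked illustration of the error figures quoted above (assuming, as the numbers imply, that one video frame's height corresponds to roughly 1.9 mm of tissue), a deviation reported as a fraction of frame height converts to microns as follows; the frame-height constant below is inferred from the abstract, not stated in it.

```python
# Inferred from the abstract: 0.8% of frame height ≈ 15.25 μm, so one frame
# height spans ≈ 1906 μm. Values are approximate and for illustration only.
FRAME_HEIGHT_UM = 15.25 / 0.008  # ≈ 1906 μm (inferred, not reported)

def deviation_um(fraction_of_frame_height: float) -> float:
    return fraction_of_frame_height * FRAME_HEIGHT_UM

print(round(deviation_um(0.008), 2))  # ≈ 15.25 μm (median test-set error)
print(round(deviation_um(0.04), 2))   # ≈ 76.25 μm (worst frame; ~76.28 μm in the paper)
```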
Affiliation(s)
- Ken Y Lin
- Gavin Herbert Eye Institute, Department of Ophthalmology, University of California, Irvine, California; Department of Biomedical Engineering, University of California, Irvine, California.
| | - Gregor Urban
- Department of Computer Science, University of California, Irvine, California
| | - Michael C Yang
- Gavin Herbert Eye Institute, Department of Ophthalmology, University of California, Irvine, California
| | - Lung-Chi Lee
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Da-Wen Lu
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Wallace L M Alward
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, Iowa
| | - Pierre Baldi
- Department of Biomedical Engineering, University of California, Irvine, California; Department of Computer Science, University of California, Irvine, California.
| |
147
|
Campbell JP, Chiang MF, Chen JS, Moshfeghi DM, Nudleman E, Ruambivoonsuk P, Cherwek H, Cheung CY, Singh P, Kalpathy-Cramer J, Ostmo S, Eydelman M, Chan RP, Capone A. Artificial Intelligence for Retinopathy of Prematurity: Validation of a Vascular Severity Scale against International Expert Diagnosis. Ophthalmology 2022; 129:e69-e76. [PMID: 35157950 PMCID: PMC9232863 DOI: 10.1016/j.ophtha.2022.02.008] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Revised: 01/31/2022] [Accepted: 02/03/2022] [Indexed: 01/07/2023] Open
Abstract
PURPOSE To validate a vascular severity score as an appropriate output for artificial intelligence (AI) Software as a Medical Device (SaMD) for retinopathy of prematurity (ROP) through comparison with ordinal disease severity labels for stage and plus disease assigned by the International Classification of Retinopathy of Prematurity, Third Edition (ICROP3), committee. DESIGN Validation study of an AI-based ROP vascular severity score. PARTICIPANTS A total of 34 ROP experts from the ICROP3 committee. METHODS Two separate datasets of 30 fundus photographs each for stage (0-5) and plus disease (plus, preplus, neither) were labeled by members of the ICROP3 committee using an open-source platform. Averaging these results produced a continuous label for plus (1-9) and stage (1-3) for each image. Experts were also asked to compare the images with one another in terms of relative plus disease severity. Each image was also labeled with a vascular severity score from the Imaging and Informatics in ROP deep learning system, which was compared with each grader's diagnostic labels for correlation, as well as the ophthalmoscopic diagnosis of stage. MAIN OUTCOME MEASURES Weighted kappa and Pearson correlation coefficients (CCs) were calculated between each pair of grader classification labels for stage and plus disease. The Elo algorithm was also used to convert pairwise comparisons for each expert into an ordered set of images from least to most severe. RESULTS The mean weighted kappa and CC for all interobserver pairs for plus disease image comparison were 0.67 and 0.88, respectively. The vascular severity score was found to be highly correlated with both the average plus disease classification (CC = 0.90, P < 0.001) and the ophthalmoscopic diagnosis of stage (P < 0.001 by analysis of variance) among all experts. CONCLUSIONS The ROP vascular severity score correlates well with the International Classification of Retinopathy of Prematurity committee members' labels for plus disease and stage, which had significant intergrader variability. Generation of a consensus on a validated scoring system for ROP SaMD can facilitate global innovation and regulatory authorization of these technologies.
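The Elo step mentioned above converts pairwise "which image is more severe?" judgments into a ranking; a minimal sketch of that conversion is shown below (the K-factor and starting rating are conventional chess defaults, not parameters reported by the paper).

```python
# Illustrative Elo ranking of images from pairwise severity judgments.
def elo_rank(image_ids, pairwise_results, k=32, start=1500.0):
    """pairwise_results: iterable of (winner_id, loser_id) comparisons, where
    the 'winner' is the image judged to show more severe plus disease."""
    rating = {img: start for img in image_ids}
    for winner, loser in pairwise_results:
        expected_win = 1.0 / (1.0 + 10 ** ((rating[loser] - rating[winner]) / 400.0))
        rating[winner] += k * (1.0 - expected_win)
        rating[loser] -= k * (1.0 - expected_win)
    # Ordered from least to most severe, as described in the abstract.
    return sorted(rating, key=rating.get)

order = elo_rank(["img1", "img2", "img3"],
                 [("img2", "img1"), ("img3", "img1"), ("img3", "img2")])
print(order)  # ['img1', 'img2', 'img3']
```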
Affiliation(s)
- J. Peter Campbell
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR
| | | | - Jimmy S. Chen
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR
| | - Darius M. Moshfeghi
- Byers Eye Institute, Horngren Family Vitreoretinal Center,Department of Ophthalmology, Stanford University, Palo Alto, CA
| | - Eric Nudleman
- Department of Ophthalmology, University of California, San Diego
| | | | | | - Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, Faculty of Medicine, The Chinese University of Hong
| | - Praveer Singh
- Department of Radiology, MGH/Harvard Medical School, Charlestown, MA;,Massachusetts General Hospital & Brigham and Women’s Hospital Center for Clinical Data Science, Boston, MA
| | - Jayashree Kalpathy-Cramer
- Department of Radiology, MGH/Harvard Medical School, Charlestown, MA;,Massachusetts General Hospital & Brigham and Women’s Hospital Center for Clinical Data Science, Boston, MA
| | - Susan Ostmo
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, OR
| | - Malvina Eydelman
- Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, Maryland
| | - R.V. Paul Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL
| | - Antonio Capone
- Associated Retinal Consultants, Oakland University William Beaumont School of Medicine, Royal Oak, Michigan, USA
| | | | | |
148
|
Wang J, Liu C, Wu H, Ng TK, Zhang M. Diagnostic Accuracy of Wide-Field Digital Retinal Images in Retinopathy of Prematurity Detection: Systematic Review and Meta-Analysis. Curr Eye Res 2022; 47:1024-1033. [PMID: 35435102 DOI: 10.1080/02713683.2022.2050262] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
PURPOSE To evaluate the diagnostic accuracy of wide-field digital retinal imaging (WFDRI) for the detection of retinopathy of prematurity (ROP) in premature infants as compared with binocular indirect ophthalmoscopy (BIO). METHODS This systematic review and meta-analysis included publications identified through the PubMed (MEDLINE), EMBASE, Scopus, Web of Science, and Cochrane Library databases, and Clinical Trials. The Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 assessment, hierarchical summary receiver operating characteristic modeling, meta-regression, publication bias analyses, and the GRADE methodology for the certainty of the overall evidence were conducted. The pooled effect sizes for sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR) were calculated. RESULTS Sixteen eligible studies from 10 articles were included, comprising 2,537 image interpretations from 697 premature infants. QUADAS-2 showed less than 50% risk of bias and low concern in each domain across all articles. The pooled effect sizes showed a sensitivity of 0.77 (95% confidence interval (C.I.): 0.69-0.84), specificity of 0.96 (95% C.I.: 0.92-0.98), PLR of 20.9 (95% C.I.: 10.2-42.5), NLR of 0.23 (95% C.I.: 0.17-0.33), and DOR of 89 (95% C.I.: 43-185) as compared with BIO. Income level, setting, and mean/median birth weight and gestational age contributed to the significant differences in sensitivity (p < 0.001). No publication bias was found among these 16 studies. The GRADE quality of evidence was moderate for the pooled sensitivity and high for the pooled specificity. CONCLUSIONS The diagnostic accuracy of WFDRI is substantial and comparable to that of BIO, supporting its application in ROP screening programs.
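As a quick worked check of the relationships behind the pooled estimates above, the likelihood ratios and diagnostic odds ratio follow directly from sensitivity and specificity; note that the abstract's pooled values come from a hierarchical bivariate model, so they differ slightly from this naive point calculation.

```python
# Point calculation from pooled sensitivity and specificity (illustrative only).
def diagnostic_ratios(sensitivity: float, specificity: float):
    plr = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    nlr = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    dor = plr / nlr                           # diagnostic odds ratio
    return plr, nlr, dor

plr, nlr, dor = diagnostic_ratios(0.77, 0.96)
print(f"PLR ≈ {plr:.1f}, NLR ≈ {nlr:.2f}, DOR ≈ {dor:.0f}")
# ≈ 19.3, 0.24, 80 versus the pooled 20.9, 0.23, 89 reported above.
```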
Affiliation(s)
- Ji Wang
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou, Guangdong, China
- Shantou University Medical College, Shantou, Guangdong, China
| | - Cui Liu
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou, Guangdong, China
- Shantou University Medical College, Shantou, Guangdong, China
| | - Huan Wu
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou, Guangdong, China
- Shantou University Medical College, Shantou, Guangdong, China
| | - Tsz Kin Ng
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou, Guangdong, China
- Shantou University Medical College, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Mingzhi Zhang
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou, Guangdong, China
| |
149
|
Yaghy A, Lee AY, Keane PA, Keenan TDL, Mendonca LSM, Lee CS, Cairns AM, Carroll J, Chen H, Clark J, Cukras CA, de Sisternes L, Domalpally A, Durbin MK, Goetz KE, Grassmann F, Haines JL, Honda N, Hu ZJ, Mody C, Orozco LD, Owsley C, Poor S, Reisman C, Ribeiro R, Sadda SR, Sivaprasad S, Staurenghi G, Ting DS, Tumminia SJ, Zalunardo L, Waheed NK. Artificial intelligence-based strategies to identify patient populations and advance analysis in age-related macular degeneration clinical trials. Exp Eye Res 2022; 220:109092. [PMID: 35525297 PMCID: PMC9405680 DOI: 10.1016/j.exer.2022.109092] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 03/18/2022] [Accepted: 04/20/2022] [Indexed: 11/04/2022]
Affiliation(s)
- Antonio Yaghy
- New England Eye Center, Tufts University Medical Center, Boston, MA, USA
| | - Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA; Karalis Johnson Retina Center, Seattle, WA, USA
| | - Pearse A Keane
- Moorfields Eye Hospital & UCL Institute of Ophthalmology, London, UK
| | - Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | | | - Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA; Karalis Johnson Retina Center, Seattle, WA, USA
| | | | - Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, 925 N 87th Street, Milwaukee, WI, 53226, USA
| | - Hao Chen
- Genentech, South San Francisco, CA, USA
| | | | - Catherine A Cukras
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | | | - Amitha Domalpally
- Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
| | | | - Kerry E Goetz
- Office of the Director, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | | | - Jonathan L Haines
- Department of Population and Quantitative Health Sciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Cleveland Institute of Computational Biology, Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | | | - Zhihong Jewel Hu
- Doheny Eye Institute, University of California, Los Angeles, CA, USA
| | | | - Luz D Orozco
- Department of Bioinformatics, Genentech, South San Francisco, CA, 94080, USA
| | - Cynthia Owsley
- Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA
| | - Stephen Poor
- Department of Ophthalmology, Novartis Institutes for Biomedical Research, Cambridge, MA, USA
| | | | | | - Srinivas R Sadda
- Doheny Eye Institute, David Geffen School of Medicine, University of California-Los Angeles, Los Angeles, CA, USA
| | - Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
| | - Giovanni Staurenghi
- Department of Biomedical and Clinical Sciences Luigi Sacco, Luigi Sacco Hospital, University of Milan, Italy
| | - Daniel Sw Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore
| | - Santa J Tumminia
- Office of the Director, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | | | - Nadia K Waheed
- New England Eye Center, Tufts University Medical Center, Boston, MA, USA.
| |
150
|
Bai A, Carty C, Dai S. Performance of deep-learning artificial intelligence algorithms in detecting retinopathy of prematurity: A systematic review. Saudi J Ophthalmol 2022; 36:296-307. [PMID: 36276252 DOI: 10.4103/sjopt.sjopt_219_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 11/09/2021] [Accepted: 11/12/2021] [Indexed: 11/04/2022]
Abstract
PURPOSE Artificial intelligence (AI) offers considerable promise for retinopathy of prematurity (ROP) screening and diagnosis. The development of deep-learning algorithms to detect the presence of disease may contribute to sufficient screening, early detection, and timely treatment for this preventable blinding disease. This review aimed to systematically examine the literature on AI algorithms for detecting ROP. Specifically, we focused on the performance of deep-learning algorithms in terms of sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) for both the detection and grading of ROP. METHODS We searched Medline OVID, PubMed, Web of Science, and Embase for studies published from January 1, 2012, to September 20, 2021. Studies evaluating the diagnostic performance of deep-learning models based on retinal fundus images, with expert ophthalmologists' judgment as the reference standard, were included. Studies that did not investigate the presence or absence of disease were excluded. Risk of bias was assessed using the QUADAS-2 tool. RESULTS Twelve of the 175 studies identified were included. Five studies measured the performance of detecting the presence of ROP and seven studies determined the presence of plus disease. The average AUROC across 11 studies was 0.98. The average sensitivity and specificity for detecting ROP were 95.72% and 98.15%, respectively, and for detecting plus disease were 91.13% and 95.92%, respectively. CONCLUSION The diagnostic performance of deep-learning algorithms in published studies was high. Few studies presented externally validated results or compared performance to that of expert human graders. Large-scale prospective validation alongside robust study design could improve future studies.
Affiliation(s)
- Amelia Bai
- Department of Ophthalmology, Queensland Children's Hospital, Brisbane, Australia.,Centre for Children's Health Research, Brisbane, Australia.,School of Medical Science, Griffith University, Gold Coast, Australia
| | - Christopher Carty
- Griffith Centre of Biomedical and Rehabilitation Engineering (GCORE), Menzies Health Institute Queensland, Griffith University Gold Coast, Australia.,Department of Orthopaedics, Children's Health Queensland Hospital and Health Service, Queensland Children's Hospital, Brisbane, Australia
| | - Shuan Dai
- Department of Ophthalmology, Queensland Children's Hospital, Brisbane, Australia.,School of Medical Science, Griffith University, Gold Coast, Australia.,University of Queensland, Australia
| |