1. Yew SME, Lei X, Chen Y, Goh JHL, Pushpanathan K, Xue CC, Wang YX, Jonas JB, Sabanayagam C, Koh VTC, Xu X, Liu Y, Cheng CY, Tham YC. Deep Imbalanced Regression Model for Predicting Refractive Error from Retinal Photos. Ophthalmology Science 2025;5:100659. [PMID: 39931359; PMCID: PMC11808727; DOI: 10.1016/j.xops.2024.100659]
Abstract
Purpose: Recent studies have used ocular images and deep learning (DL) to predict refractive error with notable results, but most did not address bias from imbalanced datasets or conduct external validation. To address these gaps, this study integrated the deep imbalanced regression (DIR) technique into ResNet and Vision Transformer models to predict refractive error from retinal photographs.
Design: Retrospective study.
Subjects: The DL models were developed using up to 103 865 images from the Singapore Epidemiology of Eye Diseases Study and the United Kingdom Biobank, with internal testing on up to 8067 images. External testing was conducted on 7043 images from the Singapore Prospective Study and 5539 images from the Beijing Eye Study. Retinal images and corresponding refractive error data were extracted.
Methods: Regression-based models were developed, including ResNet34 with DIR and SwinV2 (Swin Transformer) with DIR, incorporating Label Distribution Smoothing and Feature Distribution Smoothing. These models were compared against their baseline versions, ResNet34 and SwinV2, in predicting spherical and spherical equivalent (SE) power.
Main Outcome Measures: Mean absolute error (MAE) and coefficient of determination were used to evaluate model performance. The Wilcoxon signed-rank test was performed to assess statistical significance between the DIR-integrated models and their baselines.
Results: For prediction of spherical power, ResNet34 with DIR (MAE: 0.84 D) and SwinV2 with DIR (MAE: 0.77 D) significantly outperformed their baselines, ResNet34 (MAE: 0.88 D; P < 0.001) and SwinV2 (MAE: 0.87 D; P < 0.001), in the internal test. For prediction of SE power, ResNet34 with DIR (MAE: 0.78 D) and SwinV2 with DIR (MAE: 0.75 D) likewise significantly outperformed ResNet34 (MAE: 0.81 D; P < 0.001) and SwinV2 (MAE: 0.78 D; P < 0.05) in the internal test. Similar trends were observed in the external test sets for both spherical and SE power prediction.
Conclusions: DIR-integrated DL models showed potential in addressing data imbalance and improving the prediction of refractive error. These findings highlight the potential utility of combining DL models with retinal imaging for opportunistic screening of refractive errors, particularly in settings where retinal cameras are already in use.
Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
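The Label Distribution Smoothing (LDS) component named in the Methods reweights the regression loss by a kernel-smoothed label density, so that rare refractions count for more. A minimal sketch of that idea follows; the bin width, kernel sigma, inverse-density weighting rule, and toy data are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of Label Distribution Smoothing (LDS) for deep imbalanced
# regression (DIR): smooth the empirical label histogram with a Gaussian
# kernel and weight each sample's loss by inverse smoothed density.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def lds_weights(labels, bin_edges, sigma=2.0):
    """Per-sample loss weights from a kernel-smoothed label density."""
    counts, _ = np.histogram(labels, bins=bin_edges)
    density = gaussian_filter1d(counts.astype(float), sigma=sigma)
    density = np.clip(density, 1e-6, None)                 # guard empty bins
    idx = np.clip(np.digitize(labels, bin_edges) - 1, 0, len(counts) - 1)
    w = 1.0 / density[idx]                                 # rare labels weigh more
    return w * len(w) / w.sum()                            # normalize to mean 1

# Weighted MAE over spherical-equivalent (SE) labels, in dioptres:
se_true = np.random.normal(-1.0, 2.5, 1000)                # toy SE distribution
se_pred = se_true + np.random.normal(0.0, 0.8, 1000)       # stand-in predictions
w = lds_weights(se_true, np.arange(-12.0, 8.5, 0.5))
weighted_mae = np.mean(w * np.abs(se_pred - se_true))
```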
Affiliation(s)
- Samantha Min Er Yew
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Xiaofeng Lei
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore, Singapore
- Yibing Chen
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Jocelyn Hui Lin Goh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Krithi Pushpanathan
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Can Can Xue
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ya Xing Wang
- Beijing Ophthalmology and Visual Science Key Lab, Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jost B. Jonas
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Charumathi Sabanayagam
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Victor Teck Chang Koh
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Xinxing Xu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore, Singapore
- Yong Liu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore, Singapore
- Ching-Yu Cheng
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Yih-Chung Tham
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
2. Wong YL, Yu M, Chong C, Yang D, Xu D, Lee ML, Hsu W, Wong TY, Cheng C, Cheung CY. Association between deep learning measured retinal vessel calibre and incident myocardial infarction in a retrospective cohort from the UK Biobank. BMJ Open 2024;14:e079311. [PMID: 38514140; PMCID: PMC10961540; DOI: 10.1136/bmjopen-2023-079311]
Abstract
BACKGROUND: Cardiovascular disease is a leading cause of death worldwide. Prospective population-based studies have found that changes in the retinal microvasculature are associated with the development of coronary artery disease. Recently, artificial intelligence deep learning (DL) algorithms have been developed for fully automated assessment of retinal vessel calibres.
METHODS: We validated the association between retinal vessel calibres measured by a DL system (Singapore I Vessel Assessment) and incident myocardial infarction (MI), and assessed the incremental performance of these measurements in discriminating patients with and without MI when added to risk prediction models, using a large UK Biobank cohort.
RESULTS: Retinal arteriolar narrowing was significantly associated with incident MI in both the age-, gender- and fellow-calibre-adjusted model (HR=1.67 (95% CI: 1.19 to 2.36)) and the multivariable model (HR=1.64 (95% CI: 1.16 to 2.32)) adjusted for age, gender and other cardiovascular risk factors such as blood pressure, diabetes mellitus (DM) and cholesterol status. The area under the receiver operating characteristic curve increased from 0.738 to 0.745 (p=0.018) in the age-gender-adjusted model and from 0.782 to 0.787 (p=0.010) in the multivariable model. The continuous net reclassification improvements (NRIs) were significant in both the age- and gender-adjusted model (NRI=21.56 (95% CI: 3.33 to 33.42)) and the multivariable model (NRI=18.35 (95% CI: 6.27 to 32.61)). In subgroup analyses, similar associations between retinal arteriolar narrowing and incident MI were observed in the multivariable models, particularly for men (HR=1.62 (95% CI: 1.07 to 2.46)), non-smokers (HR=1.65 (95% CI: 1.13 to 2.42)), patients without DM (HR=1.73 (95% CI: 1.19 to 2.51)) and hypertensive patients (HR=1.95 (95% CI: 1.30 to 2.93)).
CONCLUSION: Our results support DL-based retinal vessel measurements as markers of incident MI in a predominantly Caucasian population.
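The continuous NRI reported above has a standard category-free form: the net proportion of events whose predicted risk rises under the extended model, plus the net proportion of non-events whose risk falls. A sketch under that assumption; this is the textbook formulation, not necessarily the authors' exact pipeline.

```python
# Continuous (category-free) net reclassification improvement (NRI).
# risk_old / risk_new are predicted risks from the base and extended models;
# events marks incident MI (1 = event).
import numpy as np

def continuous_nri(risk_old, risk_new, events):
    events = np.asarray(events, dtype=bool)
    up = np.asarray(risk_new) > np.asarray(risk_old)
    down = np.asarray(risk_new) < np.asarray(risk_old)
    nri_events = up[events].mean() - down[events].mean()
    nri_nonevents = down[~events].mean() - up[~events].mean()
    return 100.0 * (nri_events + nri_nonevents)    # in %, as reported above

# e.g. continuous_nri(risk_age_gender, risk_age_gender_plus_calibre, mi)
```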
Affiliation(s)
- Yiu Lun Wong
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Marco Yu
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Crystal Chong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Dejiang Xu
- School of Computing, National University of Singapore, Singapore
- Mong Li Lee
- School of Computing, National University of Singapore, Singapore
- Wynne Hsu
- School of Computing, National University of Singapore, Singapore
- Tien Y Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing, China
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong
3. Vilela MAP, Arrigo A, Parodi MB, da Silva Mengue C. Smartphone Eye Examination: Artificial Intelligence and Telemedicine. Telemed J E Health 2024;30:341-353. [PMID: 37585566; DOI: 10.1089/tmj.2023.0041]
Abstract
Background: The current medical scenario is closely linked to recent progress in telecommunications, photodocumentation, and artificial intelligence (AI). Smartphone eye examination is a promising tool within this technological spectrum, of particular interest to primary health care services. Fundus imaging with smartphones has improved and democratized the teaching of fundoscopy and, above all, contributes greatly to screening for diseases with high rates of blindness. Eye examination using smartphones is essentially a cheap and safe method, and thus contributes to public policies on population screening. This review provides an update on the use of this resource and its future prospects, especially as a screening and ophthalmic diagnostic tool.
Methods: We surveyed major published advances in retinal and anterior segment analysis using AI. We performed an electronic search of the Medical Literature Analysis and Retrieval System Online (MEDLINE), EMBASE, and the Cochrane Library for published literature without a date limit. We included studies that compared the diagnostic accuracy of smartphone ophthalmoscopy for detecting prevalent diseases against an accurate or commonly employed reference standard.
Results: Few databases provide complete metadata, including demographic data, and few contain sufficient images involving current or new therapies. These databases hold images captured using different systems and formats, and information is often excluded without essential detail on the reasons for exclusion, further distancing them from real-life conditions. The safety, portability, low cost, and reproducibility of smartphone eye images are discussed in several studies, with encouraging results.
Conclusions: The high level of agreement between conventional and smartphone methods indicates a powerful arsenal for screening and early diagnosis of the main causes of blindness, such as cataract, glaucoma, diabetic retinopathy, and age-related macular degeneration. In addition to streamlining the medical workflow and benefiting public health policies, smartphone eye examination can make safe, quality assessment available to the population.
Affiliation(s)
- Alessandro Arrigo
- Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
- University Vita-Salute, Milan, Italy
- Maurizio Battaglia Parodi
- Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
- University Vita-Salute, Milan, Italy
- Carolina da Silva Mengue
- Post-Graduation Ophthalmological School, Ivo Corrêa-Meyer/Cardiology Institute, Porto Alegre, Brazil
4. Guo T, Liu K, Zou H, Xu X, Yang J, Yu Q. Refined image quality assessment for color fundus photography based on deep learning. Digit Health 2024;10:20552076231207582. [PMID: 38425654; PMCID: PMC10903193; DOI: 10.1177/20552076231207582]
Abstract
Purpose: Color fundus photography is widely used in clinical and screening settings for eye diseases, and poor image quality greatly affects the reliability of further evaluation and diagnosis. In this study, we developed an automated image quality assessment module for color fundus photography using deep learning.
Methods: A total of 55,931 color fundus photography images from multiple centers in Shanghai and a public database were collected and annotated as training, validation, and testing datasets. We designed a pre-diagnosis image quality assessment module based on a multi-task deep neural network. A detailed criterion for color fundus photography image quality, comprising three subcategories each graded at three levels, was applied to improve precision and objectivity. Auxiliary tasks, such as localization of the optic nerve head and macula and classification of laterality and field of view, were included to assist the quality assessment. Finally, we validated the module internally and externally by evaluating the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, accuracy, and quadratic weighted kappa.
Results: The "Location" subcategory achieved AUROCs of 0.991, 0.920, and 0.946 for the three grades, respectively; the "Clarity" subcategory, 0.980, 0.917, and 0.954; and the "Artifact" subcategory, 0.976, 0.952, and 0.986. The accuracy and kappa for overall quality reached 88.15% and 89.70%, respectively, on the internal set; on the external set these were 86.63% and 88.55%, very close to the internal results.
Conclusions: Our deep learning module was able to evaluate color fundus photography image quality using a detailed criterion of three subcategories with three grades each. The promising results on both internal and external validation indicate the strength and generalizability of the module.
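Quadratic weighted kappa, the agreement statistic used here, penalizes a two-grade disagreement four times as heavily as a one-grade one. A sketch with toy three-level grades follows; the data are invented, not the study's.

```python
# Quadratic weighted kappa and accuracy for a three-level quality grade.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
truth = rng.integers(0, 3, size=500)                         # grades 0/1/2
pred = np.clip(truth + rng.integers(-1, 2, size=500), 0, 2)  # noisy grader

qwk = cohen_kappa_score(truth, pred, weights="quadratic")
acc = accuracy_score(truth, pred)
print(f"accuracy={acc:.2%}, quadratic weighted kappa={qwk:.3f}")
```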
Affiliation(s)
- Tianjiao Guo
- Institute of Medical Robotics, Shanghai Jiao Tong University, China
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Kun Liu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Clinical Research Center for Eye Diseases, China
- Haidong Zou
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Clinical Research Center for Eye Diseases, China
- Xun Xu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Clinical Research Center for Eye Diseases, China
- Jie Yang
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China
- Qi Yu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Clinical Research Center for Eye Diseases, China
5. Domalpally A, Slater R, Barrett N, Voland R, Balaji R, Heathcote J, Channa R, Blodi B. Implementation of a Large-Scale Image Curation Workflow Using Deep Learning Framework. Ophthalmology Science 2022;2:100198. [PMID: 36531570; PMCID: PMC9754974; DOI: 10.1016/j.xops.2022.100198]
Abstract
PURPOSE: The curation of images by human graders is time-intensive but an essential step in developing artificial intelligence (AI) algorithms. Our goal was to develop and implement an AI algorithm for image curation in a high-volume setting. We also explored AI tools to support a tiered approach, in which the AI model labels images and flags potential mislabels for human review.
DESIGN: Implementation of an AI algorithm.
PARTICIPANTS: Seven-field stereoscopic images from multiple clinical trials.
METHODS: The 7-field stereoscopic image protocol includes 7 pairs of images from various parts of the central retina along with images of the anterior part of the eye. All images were labeled for field number by reading center graders. The model classified the retinal images into 8 field numbers and generated probability scores (0-1) to identify misclassified images, with 1 indicating a high probability of a correct label.
MAIN OUTCOME MEASURES: Agreement of the AI prediction with grader classification of field number, and the use of probability scores to identify mislabeled images.
RESULTS: The AI model was trained and validated on 17 529 images and tested on 3004 images. The pooled agreement in field number between grader classification and the AI model was 88.3% (kappa, 0.87). The pooled mean probability score was 0.97 (standard deviation [SD], 0.08) for images where graders agreed with the AI-generated labels and 0.77 (SD, 0.19) where they disagreed (P < 0.0001). Using receiver operating characteristic curves, a probability score of 0.99 was identified as the cutoff for distinguishing mislabeled images. A tiered workflow using a probability score of <0.99 as the cutoff would send 27.6% of the 3004 images for human review and reduce the error rate from 11.7% to 1.5%.
CONCLUSIONS: The implementation of AI algorithms requires measures beyond model validation. Tools that flag potential errors in AI-generated labels will reduce inaccuracies, increase trust in the system, and provide data for continuous model development.
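The tiered workflow reduces to a one-line triage rule: accept AI labels whose probability score clears the cutoff and route the rest to graders. A sketch of how the review fraction and residual error rate fall out of that rule; the inputs are assumed arrays, not the study's data.

```python
# Tiered triage: keep AI labels with probability >= cutoff, review the rest.
import numpy as np

def tiered_triage(prob, ai_correct, cutoff=0.99):
    prob = np.asarray(prob, dtype=float)             # per-image probability score
    ai_correct = np.asarray(ai_correct, dtype=bool)  # AI label matches grader
    auto = prob >= cutoff                            # accepted without review
    review_fraction = 1.0 - auto.mean()              # share sent to graders
    residual_error = (auto & ~ai_correct).mean()     # mislabels that slip through
    return review_fraction, residual_error

# The abstract's figures: ~27.6% reviewed, error rate falling 11.7% -> 1.5%.
```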
Affiliation(s)
- Amitha Domalpally
- A-EYE Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Robert Slater
- A-EYE Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Nancy Barrett
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Rick Voland
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Rohit Balaji
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Jennifer Heathcote
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Roomasa Channa
- A-EYE Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Barbara Blodi
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
6. Nderitu P, Nunez do Rio JM, Webster ML, Mann SS, Hopkins D, Cardoso MJ, Modat M, Bergeles C, Jackson TL. Automated image curation in diabetic retinopathy screening using deep learning. Sci Rep 2022;12:11196. [PMID: 35778615; PMCID: PMC9249740; DOI: 10.1038/s41598-022-15491-1]
Abstract
Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect-field and ungradable samples that require curation, a laborious task to perform manually. We developed and validated single- and multi-output deep learning (DL) models classifying laterality, retinal presence, retinal field and gradability for automated curation. The internal dataset comprised 7743 images from DR screening in the UK, with 1479 external test images from Portugal and Paraguay. Internal vs external multi-output AUROC were: laterality right (0.994 vs 0.905), left (0.994 vs 0.911) and unidentifiable (0.996 vs 0.680); retinal presence (1.000 vs 1.000); retinal field macula (0.994 vs 0.955), nasal (0.995 vs 0.962) and other retinal field (0.997 vs 0.944); and gradability (0.985 vs 0.918). DL effectively detects the laterality, retinal presence, retinal field and gradability of DR screening images, with generalisation between centres and populations. DL models could be used for automated image curation within DR screening.
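Each per-class AUROC quoted above can be read as a one-vs-rest comparison of the model's score for that class against a binary indicator. A sketch with toy softmax scores for the laterality head; the one-vs-rest reading and the data are assumptions, not the study's exact protocol.

```python
# Per-class AUROC for one multi-output head (laterality), one-vs-rest.
import numpy as np
from sklearn.metrics import roc_auc_score

def per_class_auroc(y_true, y_score, class_names):
    """y_true: (n,) integer labels; y_score: (n, k) class probabilities."""
    return {name: roc_auc_score((y_true == i).astype(int), y_score[:, i])
            for i, name in enumerate(class_names)}

rng = np.random.default_rng(1)
y = rng.integers(0, 3, 300)                     # right / left / unidentifiable
scores = rng.dirichlet(np.ones(3), size=300)    # stand-in softmax outputs
print(per_class_auroc(y, scores, ["right", "left", "unidentifiable"]))
```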
Affiliation(s)
- Paul Nderitu
- Section of Ophthalmology, King's College London, London, UK
- King's Ophthalmology Research Unit, King's College Hospital, London, UK
- Laura Webster
- South East London Diabetic Eye Screening Programme, Guy's and St Thomas' Foundation Trust, London, UK
- Samantha S Mann
- South East London Diabetic Eye Screening Programme, Guy's and St Thomas' Foundation Trust, London, UK
- Department of Ophthalmology, Guy's and St Thomas' Foundation Trust, London, UK
- David Hopkins
- Department of Diabetes, School of Life Course Sciences, King's College London, London, UK
- Institute of Diabetes, Endocrinology and Obesity, King's Health Partners, London, UK
- M Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Christos Bergeles
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Timothy L Jackson
- Section of Ophthalmology, King's College London, London, UK
- King's Ophthalmology Research Unit, King's College Hospital, London, UK
7. Rahman L, Hafejee A, Anantharanjit R, Wei W, Cordeiro MF. Accelerating precision ophthalmology: recent advances. Expert Review of Precision Medicine and Drug Development 2022. [DOI: 10.1080/23808993.2022.2154146]
Affiliation(s)
- Loay Rahman
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Ammaarah Hafejee
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Rajeevan Anantharanjit
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Wei Wei
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
8. Yuen V, Ran A, Shi J, Sham K, Yang D, Chan VTT, Chan R, Yam JC, Tham CC, McKay GJ, Williams MA, Schmetterer L, Cheng CY, Mok V, Chen CL, Wong TY, Cheung CY. Deep-Learning-Based Pre-Diagnosis Assessment Module for Retinal Photographs: A Multicenter Study. Transl Vis Sci Technol 2021;10:16. [PMID: 34524409; PMCID: PMC8444486; DOI: 10.1167/tvst.10.11.16]
Abstract
Purpose: Artificial intelligence (AI) deep learning (DL) has shown significant potential for eye disease detection and screening on retinal photographs in different clinical settings, particularly in primary care. However, automated pre-diagnosis image assessment is essential to streamline the application of developed AI-DL algorithms. In this study, we developed and validated a DL-based pre-diagnosis assessment module for retinal photographs, targeting image quality (gradable vs. ungradable), field of view (macula-centered vs. optic-disc-centered), and laterality of the eye (right vs. left).
Methods: A total of 21,348 retinal photographs from 1914 subjects in various clinical settings in Hong Kong, Singapore, and the United Kingdom were used for training, internal validation, and external testing of the DL module, which was built on two DL architectures (EfficientNet-B0 and MobileNet-V2).
Results: For image-quality assessment, the pre-diagnosis module achieved area under the receiver operating characteristic curve (AUROC) values of 0.975, 0.999, and 0.987 in the internal validation dataset and the two external testing datasets, respectively. For field-of-view assessment, the module had an AUROC value of 1.000 in all datasets. For laterality-of-the-eye assessment, the module had AUROC values of 1.000, 0.999, and 0.985 in the internal validation dataset and the two external testing datasets, respectively.
Conclusions: This three-in-one DL module for assessing the image quality, field of view, and laterality of retinal photographs achieved excellent performance and generalizability across different centers and ethnicities.
Translational Relevance: The proposed DL-based pre-diagnosis module provides accurate, automated assessment of the image quality, field of view, and laterality of retinal photographs, and could be integrated into AI-based models to improve operational flow for disease screening and diagnosis.
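Both backbones named in the Methods are available in torchvision, and each of the module's three binary tasks amounts to replacing the classifier head. A sketch of that surgery; the pretrained weights, head layout, and training setup are assumptions, not the authors' recipe.

```python
# Fitting one binary pre-diagnosis task (e.g. gradable vs. ungradable) on
# either backbone the abstract names, via torchvision.
import torch.nn as nn
from torchvision import models

def build_backbone(name: str, num_classes: int = 2) -> nn.Module:
    ctors = {"efficientnet_b0": models.efficientnet_b0,
             "mobilenet_v2": models.mobilenet_v2}
    m = ctors[name](weights="IMAGENET1K_V1")       # ImageNet-pretrained
    # Both architectures end in a Linear layer inside .classifier:
    m.classifier[-1] = nn.Linear(m.classifier[-1].in_features, num_classes)
    return m

quality_net = build_backbone("efficientnet_b0")    # train with cross-entropy
laterality_net = build_backbone("mobilenet_v2")
```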
Affiliation(s)
- Vincent Yuen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Anran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jian Shi
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Kaiser Sham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Victor T. T. Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Raymond Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jason C. Yam
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
- Clement C. Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
- Gareth J. McKay
- Center for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Michael A. Williams
- Center for Medical Education, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Vincent Mok
- Gerald Choa Neuroscience Center, Therese Pei Fong Chow Research Center for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong
- Christopher L. Chen
- Memory, Aging and Cognition Center, Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tien Y. Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
9. Betzler BK, Yang HHS, Thakur S, Yu M, Quek TC, Soh ZD, Lee G, Tham YC, Wong TY, Rim TH, Cheng CY. Gender Prediction for a Multiethnic Population via Deep Learning Across Different Retinal Fundus Photograph Fields: Retrospective Cross-sectional Study. JMIR Med Inform 2021;9:e25165. [PMID: 34402800; PMCID: PMC8408758; DOI: 10.2196/25165]
Abstract
Background: Deep learning algorithms have been built to detect systemic and eye diseases from fundus photographs. The retina possesses features that can be affected by gender differences, and the extent to which photography captures these features depends on the retinal image field.
Objective: We aimed to compare the performance of deep learning algorithms in predicting gender from different fields of fundus photographs (optic disc-centered, macula-centered, and peripheral fields).
Methods: This retrospective cross-sectional study included 172,170 fundus photographs of 9956 adults aged ≥40 years from the Singapore Epidemiology of Eye Diseases Study. Optic disc-centered, macula-centered, and peripheral field fundus images served as input data for a deep learning model for gender prediction. Performance was estimated at the individual level and the image level, and receiver operating characteristic curves for binary classification were calculated.
Results: The deep learning algorithms predicted gender with an area under the receiver operating characteristic curve (AUC) of 0.94 at the individual level and 0.87 at the image level. Across the three image field types, the best performance was seen with optic disc-centered field images (younger subgroups: AUC=0.91; older subgroups: AUC=0.86), and algorithms using peripheral field images had the lowest performance (younger subgroups: AUC=0.85; older subgroups: AUC=0.76). Across the three ethnic subgroups, performance on optic disc-centered images was lowest in the Indian subgroup (AUC=0.88) compared with the Malay (AUC=0.91) and Chinese (AUC=0.91) subgroups. Image-level performance in gender prediction was better in younger subgroups (aged <65 years; AUC=0.89) than in older subgroups (aged ≥65 years; AUC=0.82).
Conclusions: We confirmed that gender in an Asian population can be predicted from fundus photographs using deep learning, and that performance differs according to the field of the fundus photograph, age subgroup, and ethnic group. Our work furthers the understanding of deep learning models for predicting gender-related diseases; further validation of our findings is still needed.
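Estimating performance "at the individual level" implies pooling per-image outputs for each subject before scoring. A sketch using mean-probability pooling, which is an assumption: the abstract does not state its aggregation rule.

```python
# Image-level vs. individual-level AUC from per-image gender probabilities.
import pandas as pd
from sklearn.metrics import roc_auc_score

def two_level_auc(df: pd.DataFrame):
    """df columns: subject_id, y (gender label 0/1), p (per-image probability)."""
    image_auc = roc_auc_score(df["y"], df["p"])
    # Pool each subject's images by mean probability (assumed rule):
    by_subject = df.groupby("subject_id").agg(y=("y", "first"), p=("p", "mean"))
    individual_auc = roc_auc_score(by_subject["y"], by_subject["p"])
    return image_auc, individual_auc
```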
Affiliation(s)
- Bjorn Kaijun Betzler
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Henrik Hee Seung Yang
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Sahil Thakur
- Singapore Eye Research Institute, Singapore, Singapore
- Marco Yu
- Singapore Eye Research Institute, Singapore, Singapore
- Zhi Da Soh
- Singapore Eye Research Institute, Singapore, Singapore
- Yih-Chung Tham
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Tien Yin Wong
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Tyler Hyungtaek Rim
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Ching-Yu Cheng
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
10. Ishii K, Asaoka R, Omoto T, Mitaki S, Fujino Y, Murata H, Onoda K, Nagai A, Yamaguchi S, Obana A, Tanito M. Predicting intraocular pressure using systemic variables or fundus photography with deep learning in a health examination cohort. Sci Rep 2021;11:3687. [PMID: 33574359; PMCID: PMC7878799; DOI: 10.1038/s41598-020-80839-4]
Abstract
The purpose of the current study was to predict intraocular pressure (IOP) either from color fundus photography with a deep learning (DL) model, or from systemic variables with a multivariate linear regression model (MLM), least absolute shrinkage and selection operator regression (LASSO), support vector machine (SVM), and random forest (RF). The training dataset included 3883 examinations from 3883 eyes of 1945 subjects, and the testing dataset 289 examinations from 289 eyes of 146 subjects. With the training dataset, the MLM was constructed to predict IOP from 35 systemic variables and 25 blood measurements, and a DL model was developed to predict IOP from color fundus photographs. The prediction accuracy of each model was evaluated on the testing dataset through the absolute error and the marginal R-squared (mR2). The mean absolute error with the MLM was 2.29 mmHg, significantly smaller than that with DL (2.70 mmHg). The mR2 with the MLM was 0.15, whereas that with DL was 0.0066. The mean absolute error (between 2.24 and 2.30 mmHg) and mR2 (between 0.11 and 0.15) with LASSO, SVM and RF were similar to or poorer than those with the MLM. A DL model predicting IOP from color fundus photography proved far less accurate than an MLM using systemic variables.
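The tabular arm of the comparison, MLM versus LASSO, SVM and RF scored by mean absolute error, maps directly onto scikit-learn estimators. A sketch on synthetic data; the feature set, hyperparameters, and data are stand-ins for the cohort's 60 systemic and blood variables.

```python
# Predict IOP from systemic variables with four regressors, scored by MAE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 60))                    # ~60 systemic/blood variables
iop = 15 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 2, 2000)   # toy IOP, mmHg
Xtr, Xte, ytr, yte = train_test_split(X, iop, random_state=0)

for name, model in {"MLM": LinearRegression(), "LASSO": Lasso(alpha=0.1),
                    "SVM": SVR(), "RF": RandomForestRegressor(random_state=0)}.items():
    mae = mean_absolute_error(yte, model.fit(Xtr, ytr).predict(Xte))
    print(f"{name}: MAE = {mae:.2f} mmHg")
```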
Affiliation(s)
- Kaori Ishii
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
- Ryo Asaoka
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
- Seirei Christopher University, Hamamatsu, Shizuoka, Japan
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Takashi Omoto
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Shingo Mitaki
- Department of Neurology, Shimane University Faculty of Medicine, Izumo, Japan
- Yuri Fujino
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
- Department of Ophthalmology, Shimane University Faculty of Medicine, Izumo, Japan
- Hiroshi Murata
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Keiichi Onoda
- Department of Neurology, Shimane University Faculty of Medicine, Izumo, Japan
- Faculty of Psychology, Outemon Gakuin University, Osaka, Japan
- Atsushi Nagai
- Department of Neurology, Shimane University Faculty of Medicine, Izumo, Japan
- Shuhei Yamaguchi
- Department of Neurology, Shimane University Faculty of Medicine, Izumo, Japan
- Akira Obana
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
- Hamamatsu BioPhotonics Innovation Chair, Institute for Medical Photonics Research, Preeminent Medical Photonics Education & Research Center, Hamamatsu University School of Medicine, Hamamatsu, Shizuoka, Japan
- Masaki Tanito
- Department of Ophthalmology, Shimane University Faculty of Medicine, Izumo, Japan
11. Prediction of systemic biomarkers from retinal photographs: development and validation of deep-learning algorithms. Lancet Digital Health 2020;2:e526-e536. [DOI: 10.1016/S2589-7500(20)30216-8]
12. Accelerating ophthalmic artificial intelligence research: the role of an open access data repository. Curr Opin Ophthalmol 2020;31:337-350. [PMID: 32740059; DOI: 10.1097/ICU.0000000000000678]
Abstract
PURPOSE OF REVIEW: Artificial intelligence has already provided multiple clinically relevant applications in ophthalmology. Yet the explosion of nonstandardized reporting leaves even high-performing algorithms useless without robust and streamlined implementation guidelines. The development of protocols and checklists will accelerate the translation of research publications into impact on patient care.
RECENT FINDINGS: Beyond technological scepticism, we lack uniformity in analysing algorithmic performance and generalizability, and in benchmarking impact across clinical settings. No regulatory guardrails have been set to minimize bias or optimize interpretability, and no consensus clinical acceptability thresholds or systematized postdeployment monitoring have been established. Moreover, stakeholders with misaligned incentives deepen the complexity of the landscape, especially when it comes to the requisite data integration and harmonization needed to advance the field. Therefore, despite increasing algorithmic accuracy and commoditization, the infamous 'implementation gap' persists. Open clinical data repositories have been shown to rapidly accelerate research, minimize redundancies and disseminate the expertise and knowledge required to overcome existing barriers. Drawing upon the longstanding success of existing governance frameworks and robust data use and sharing agreements, the ophthalmic community has a tremendous opportunity to usher artificial intelligence into medicine. By collaboratively building a powerful resource of open, anonymized multimodal ophthalmic data, the next generation of clinicians can advance data-driven eye care in unprecedented ways.
SUMMARY: This piece demonstrates that with readily accessible data, immense progress can be achieved clinically and methodologically to realize artificial intelligence's impact on clinical care. Exponentially progressive network effects can be seen by consolidating, curating and distributing data amongst both clinicians and data scientists.