101
Huang Y, Cheung CY, Li D, Tham YC, Sheng B, Cheng CY, Wang YX, Wong TY. AI-integrated ocular imaging for predicting cardiovascular disease: advancements and future outlook. Eye (Lond) 2024; 38:464-472. PMID: 37709926; PMCID: PMC10858189; DOI: 10.1038/s41433-023-02724-4.
Abstract
Cardiovascular disease (CVD) remains the leading cause of death worldwide. Assessing CVD risk plays an essential role in identifying individuals at higher risk and enables the implementation of targeted intervention strategies, leading to reduced CVD prevalence and improved patient survival. The ocular vasculature, particularly the retinal vasculature, has emerged as a potential means for CVD risk stratification because of the anatomical similarities and physiological characteristics it shares with other vital organs, such as the brain and heart. The integration of artificial intelligence (AI) into ocular imaging has the potential to overcome limitations associated with traditional semi-automated image analysis, including inefficiency and manual measurement errors. Furthermore, AI techniques may uncover novel and subtle features that contribute to the identification of ocular biomarkers associated with CVD. This review provides a comprehensive overview of advancements in AI-based ocular image analysis for predicting CVD, including the prediction of CVD risk factors, the replacement of traditional CVD biomarkers (e.g., the CT-measured coronary artery calcium score), and the prediction of symptomatic CVD events. The review covers a range of ocular imaging modalities, including colour fundus photography, optical coherence tomography, and optical coherence tomography angiography, as well as other image types such as external eye images. Additionally, the review addresses the current limitations of AI research in this field and discusses the challenges of translating AI algorithms into clinical practice.
Affiliation(s)
- Yu Huang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Dawei Li
- College of Future Technology, Peking University, Beijing, China
- Yih Chung Tham
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ching Yu Cheng
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore.
- Tsinghua Medicine, Tsinghua University, Beijing, China.
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China.
102
Dai L, Sheng B, Chen T, Wu Q, Liu R, Cai C, Wu L, Yang D, Hamzah H, Liu Y, Wang X, Guan Z, Yu S, Li T, Tang Z, Ran A, Che H, Chen H, Zheng Y, Shu J, Huang S, Wu C, Lin S, Liu D, Li J, Wang Z, Meng Z, Shen J, Hou X, Deng C, Ruan L, Lu F, Chee M, Quek TC, Srinivasan R, Raman R, Sun X, Wang YX, Wu J, Jin H, Dai R, Shen D, Yang X, Guo M, Zhang C, Cheung CY, Tan GSW, Tham YC, Cheng CY, Li H, Wong TY, Jia W. A deep learning system for predicting time to progression of diabetic retinopathy. Nat Med 2024; 30:584-594. PMID: 38177850; PMCID: PMC10878973; DOI: 10.1038/s41591-023-02702-z.
Abstract
Diabetic retinopathy (DR) is the leading cause of preventable blindness worldwide. The risk of DR progression is highly variable among individuals, making it difficult to predict risk and personalize screening intervals. We developed and validated a deep learning system (DeepDR Plus) to predict time to DR progression within 5 years solely from fundus images. First, we used 717,308 fundus images from 179,327 participants with diabetes to pretrain the system. Subsequently, we trained and validated the system with a multiethnic dataset comprising 118,868 images from 29,868 participants with diabetes. For predicting time to DR progression, the system achieved concordance indexes of 0.754-0.846 and integrated Brier scores of 0.153-0.241 for all times up to 5 years. Furthermore, we validated the system in real-world cohorts of participants with diabetes. Integration with the clinical workflow could potentially extend the mean screening interval from 12 months to 31.97 months; the percentages of participants recommended to be screened at 1, 2, 3, 4 and 5 years were 30.62%, 20.00%, 19.63%, 11.85% and 17.89%, respectively, while the rate of delayed detection of progression to vision-threatening DR was only 0.18%. Altogether, the DeepDR Plus system could predict individualized risk and time to DR progression over 5 years, potentially allowing personalized screening intervals.
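The concordance index and integrated Brier score reported above are standard metrics for time-to-event models such as DeepDR Plus. As a rough illustration of the former (a plain-Python sketch with simplified tie handling and made-up toy values, not the paper's implementation), Harrell's C-index counts how often the model assigns the higher risk score to the subject who progresses sooner:

```python
from itertools import combinations

def harrell_c_index(event_times, predicted_risks, observed):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter
    follow-up actually experienced the event; the pair is concordant
    when that subject also received the higher predicted risk.
    """
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(event_times)), 2):
        # Order the pair so that subject i has the earlier time.
        if event_times[j] < event_times[i]:
            i, j = j, i
        if event_times[i] == event_times[j] or not observed[i]:
            continue  # tied times and censored-first pairs are skipped here
        comparable += 1
        if predicted_risks[i] > predicted_risks[j]:
            concordant += 1.0
        elif predicted_risks[i] == predicted_risks[j]:
            concordant += 0.5  # tied risks count as half-concordant
    return concordant / comparable

# Toy example: higher risk scores should pair with earlier progression.
times = [2.0, 4.5, 1.0, 5.0]        # years to progression or censoring
events = [True, True, True, False]  # False = censored before progression
risks = [0.7, 0.3, 0.9, 0.1]        # model-predicted risk scores
print(harrell_c_index(times, risks, events))  # all pairs concordant -> 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported 0.754-0.846 indicates substantially better-than-chance ordering of progression times.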
Grants
- the National Key Research and Development Program of China (2022YFA1004804), the Shanghai Municipal Key Clinical Specialty, Shanghai Research Center for Endocrine and Metabolic Diseases (2022ZZ01002), and the Chinese Academy of Engineering (2022-XY-08)
- the General Program of NSFC (62272298), the National Key Research and Development Program of China (2022YFC2407000), the Interdisciplinary Program of Shanghai Jiao Tong University (YG2023LC11 and YG2022ZD007), National Natural Science Foundation of China (62272298 and 62077037), the College-level Project Fund of Shanghai Jiao Tong University Affiliated Sixth People’s Hospital (ynlc201909), and the Medical-industrial Cross-fund of Shanghai Jiao Tong University (YG2022QN089)
- the Clinical Special Program of Shanghai Municipal Health Commission (20224044) and Three-year action plan to strengthen the construction of public health system in Shanghai (GWVI-11.1-28)
- the National Natural Science Foundation of China (82100879)
- the National Key Research and Development Program of China (2022YFA1004804), Excellent Young Scientists Fund of NSFC (82022012), General Fund of NSFC (81870598), Innovative research team of high-level local universities in Shanghai (SHSMU-ZDCX20212700)
- the National Key R & D Program of China (2022YFC2502800) and National Natural Science Fund of China (8238810007)
Affiliation(s)
- Ling Dai
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Bin Sheng
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China.
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Tingli Chen
- Department of Ophthalmology, Huadong Sanatorium, Wuxi, China
- Qiang Wu
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ruhan Liu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Chun Cai
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Liang Wu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Haslina Hamzah
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Yuexing Liu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Xiangning Wang
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhouyu Guan
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Shujie Yu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Tingyao Li
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ziqi Tang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Anran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Haoxuan Che
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Jia Shu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Shan Huang
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Chan Wu
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Shiqun Lin
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Dan Liu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Jiajia Li
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zheyuan Wang
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ziyao Meng
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jie Shen
- Medical Records and Statistics Office, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xuhong Hou
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Chenxin Deng
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Lei Ruan
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Feng Lu
- National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Miaoli Chee
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ten Cheer Quek
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ramyaa Srinivasan
- Shri Bhagwan Mahavir Vitreoretinal Services, Medical Research Foundation, Sankara Nethralaya, Chennai, India
- Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Medical Research Foundation, Sankara Nethralaya, Chennai, India
- Xiaodong Sun
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Science Key Laboratory, Beijing, China
- Jiarui Wu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Center for Excellence in Molecular Science, Chinese Academy of Sciences, Shanghai, China
- Hai Jin
- National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
- Rongping Dai
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Dinggang Shen
- School of Biomedical Engineering, Shanghai Tech University, Shanghai, China
- Shanghai United Imaging Intelligence, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
- Xiaokang Yang
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Minyi Guo
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Cuntai Zhang
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Gavin Siew Wei Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Centre for Innovation and Precision Eye Health; and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Centre for Innovation and Precision Eye Health; and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Huating Li
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China.
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore.
- Tsinghua Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua University, Beijing, China.
- Weiping Jia
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China.
103
Skevas C, de Olaguer NP, Lleó A, Thiwa D, Schroeter U, Lopes IV, Mautone L, Linke SJ, Spitzer MS, Yap D, Xiao D. Implementing and evaluating a fully functional AI-enabled model for chronic eye disease screening in a real clinical environment. BMC Ophthalmol 2024; 24:51. PMID: 38302908; PMCID: PMC10832120; DOI: 10.1186/s12886-024-03306-y.
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to increase the affordability and accessibility of eye disease screening, especially with the recent approval of AI-based diabetic retinopathy (DR) screening programs in several countries. METHODS This study investigated the performance, feasibility, and user experience of a seamless hardware and software solution for screening chronic eye diseases in a real-world clinical environment in Germany. The solution integrated AI grading for DR, age-related macular degeneration (AMD), and glaucoma, along with specialist auditing and patient referral decisions. The study comprised several components: (1) evaluating the entire system solution, from recruitment to eye image capture and AI grading for DR, AMD, and glaucoma; (2) comparing specialists' grading results with AI grading results; (3) gathering user feedback on the solution. RESULTS A total of 231 patients were recruited, and informed consent was obtained from each. The sensitivity, specificity, and area under the curve for DR grading were 100.00%, 80.10%, and 90.00%, respectively. For AMD grading, the values were 90.91%, 78.79%, and 85.00%, and for glaucoma grading, 93.26%, 76.76%, and 85.00%. Analysis of all false positive cases across the three diseases, compared against the final referral decisions, revealed that only 17 of the 231 patients were falsely referred. The efficacy analysis demonstrated the effectiveness of the AI grading process in the study's testing environment. Clinical staff provided positive feedback on the disease screening process, particularly praising the seamless workflow from patient registration to image transmission and obtaining the final result. A questionnaire completed by 12 participants indicated that most found the system easy, quick, and highly satisfactory. The study also revealed room for improvement in the AMD model, suggesting the need to enhance its training data, and the glaucoma model's grading could be improved by incorporating additional measures such as intraocular pressure. CONCLUSIONS The implementation of the AI-based approach for screening three chronic eye diseases proved effective in real-world settings, earning positive feedback on the usability of the integrated platform from both the screening staff and auditors. The auditing function proved valuable for obtaining efficient second opinions from experts, pointing to its potential for enhancing remote screening capabilities. TRIAL REGISTRATION Institutional Review Board of the Hamburg Medical Chamber (Ethik-Kommission der Ärztekammer Hamburg): 2021-10574-BO-ff.
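The sensitivity and specificity figures reported in studies like this follow directly from a 2x2 confusion matrix. The sketch below is illustrative only, not the study's code: the counts are hypothetical values chosen so that the DR numbers come out near the reported 100.00% and 80.10% in a 231-patient cohort.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical DR screening counts for a 231-patient cohort:
# 30 true positives, 0 false negatives, 161 true negatives, 40 false positives.
sens, spec = sensitivity_specificity(tp=30, fn=0, tn=161, fp=40)
print(f"sensitivity={sens:.2%}, specificity={spec:.2%}")  # 100.00%, 80.10%
```

Note that sensitivity depends only on the diseased patients and specificity only on the healthy ones, which is why a screening tool can reach 100% sensitivity while still producing the false referrals discussed above.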
Affiliation(s)
- Christos Skevas
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Albert Lleó
- TeleMedC GmbH, Raboisen 32, 20095, Hamburg, Germany
- David Thiwa
- Department of Otorhinolaryngology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Ulrike Schroeter
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Inês Valente Lopes
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany.
- Luca Mautone
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Stephan J Linke
- Zentrum Sehestaerke, Martinistraße 64, 20251, Hamburg, Germany
- Martin Stephan Spitzer
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Daniel Yap
- TeleMedC Pty Ltd, 61 Ubi Avenue 1, #06-11 UBPoint, Singapore, 40894, Singapore
- Di Xiao
- TeleMedC Pty Ltd, Brisbane Technology Park, Level 2, 1 Westlink Court, Darra, QLD 4076, Australia
104
Chia MA, Hersch F, Sayres R, Bavishi P, Tiwari R, Keane PA, Turner AW. Validation of a deep learning system for the detection of diabetic retinopathy in Indigenous Australians. Br J Ophthalmol 2024; 108:268-273. PMID: 36746615; PMCID: PMC10850716; DOI: 10.1136/bjo-2022-322237.
Abstract
BACKGROUND/AIMS Deep learning systems (DLSs) for diabetic retinopathy (DR) detection show promising results but can underperform in racial and ethnic minority groups; external validation within these populations is therefore critical for health equity. This study evaluates the performance of a DLS for DR detection among Indigenous Australians, an understudied ethnic group who suffer disproportionately from DR-related blindness. METHODS We performed a retrospective external validation study comparing the performance of a DLS against a retinal specialist for the detection of more-than-mild DR (mtmDR), vision-threatening DR (vtDR) and all-cause referable DR. The validation set consisted of 1682 consecutive, single-field, macula-centred retinal photographs from 864 patients with diabetes (mean age 54.9 years, 52.4% women) at an Indigenous primary care service in Perth, Australia. Three-person adjudication by a panel of specialists served as the reference standard. RESULTS For mtmDR detection, the sensitivity of the DLS was superior to that of the retinal specialist (98.0% (95% CI, 96.5 to 99.4) vs 87.1% (95% CI, 83.6 to 90.6), McNemar's test p<0.001) with a small reduction in specificity (95.1% (95% CI, 93.6 to 96.4) vs 97.0% (95% CI, 95.9 to 98.0), p=0.006). For vtDR, the DLS's sensitivity was again superior to the human grader's (96.2% (95% CI, 93.4 to 98.6) vs 84.4% (95% CI, 79.7 to 89.2), p<0.001) with a slight drop in specificity (95.8% (95% CI, 94.6 to 96.9) vs 97.8% (95% CI, 96.9 to 98.6), p=0.002). For all-cause referable DR, there was a substantial increase in sensitivity (93.7% (95% CI, 91.8 to 95.5) vs 74.4% (95% CI, 71.1 to 77.5), p<0.001) and a smaller reduction in specificity (91.7% (95% CI, 90.0 to 93.3) vs 96.3% (95% CI, 95.2 to 97.4), p<0.001). CONCLUSION The DLS showed improved sensitivity and similar specificity compared with a retinal specialist for DR detection, demonstrating its potential to support DR screening among Indigenous Australians, an underserved population with a high burden of diabetic eye disease.
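McNemar's test, used in studies like this to compare two graders on the same set of eyes, depends only on the discordant pairs. A minimal sketch of the uncorrected chi-squared statistic follows; the discordant counts are invented for illustration and are not taken from the study.

```python
def mcnemar_statistic(b, c):
    """Uncorrected McNemar chi-squared statistic for paired binary ratings.

    b = cases grader A classified correctly and grader B missed,
    c = the reverse; concordant pairs drop out of the statistic.
    """
    return (b - c) ** 2 / (b + c)

# Hypothetical discordant counts between a DLS and a human grader.
stat = mcnemar_statistic(b=40, c=5)
print(round(stat, 2))  # (40 - 5)**2 / 45 -> 27.22
```

A value this large far exceeds the 3.84 critical value of the chi-squared distribution with one degree of freedom, which is the kind of comparison behind the p<0.001 results quoted above (the published analysis may use an exact or continuity-corrected variant).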
Affiliation(s)
- Mark A Chia
- Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia
- Pearse A Keane
- Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Angus W Turner
- Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Nedlands, Western Australia, Australia
| |
Collapse
|
105
Zhu Y, Salowe R, Chow C, Li S, Bastani O, O'Brien JM. Advancing Glaucoma Care: Integrating Artificial Intelligence in Diagnosis, Management, and Progression Detection. Bioengineering (Basel) 2024; 11:122. [PMID: 38391608] [PMCID: PMC10886285] [DOI: 10.3390/bioengineering11020122]
Abstract
Glaucoma, the leading cause of irreversible blindness worldwide, comprises a group of progressive optic neuropathies requiring early detection and lifelong treatment to preserve vision. Artificial intelligence (AI) technologies are now demonstrating transformative potential across the spectrum of clinical glaucoma care. This review summarizes current capabilities, future outlooks, and practical translation considerations. For enhanced screening, algorithms analyzing retinal photographs and machine learning models synthesizing risk factors can identify high-risk patients needing diagnostic workup and close follow-up. To augment definitive diagnosis, deep learning techniques detect characteristic glaucomatous patterns by interpreting results from optical coherence tomography, visual field testing, fundus photography, and other ocular imaging. AI-powered platforms also enable continuous monitoring, with algorithms that analyze longitudinal data alerting physicians about rapid disease progression. By integrating predictive analytics with patient-specific parameters, AI can also guide precision medicine for individualized glaucoma treatment selections. Advances in robotic surgery and computer-based guidance demonstrate AI's potential to improve surgical outcomes and surgical training. Beyond the clinic, AI chatbots and reminder systems could provide patient education and counseling to promote medication adherence. However, thoughtful approaches to clinical integration, usability, diversity, and ethical implications remain critical to successfully implementing these emerging technologies. This review highlights AI's vast capabilities to transform glaucoma care while summarizing key achievements, future prospects, and practical considerations to progress from bench to bedside.
Affiliation(s)
- Yan Zhu
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Rebecca Salowe
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Caven Chow
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Shuo Li
- Department of Computer & Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Osbert Bastani
- Department of Computer & Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Joan M O'Brien
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
106
Verma AA, Trbovich P, Mamdani M, Shojania KG. Grand rounds in methodology: key considerations for implementing machine learning solutions in quality improvement initiatives. BMJ Qual Saf 2024; 33:121-131. [PMID: 38050138] [DOI: 10.1136/bmjqs-2022-015713]
Abstract
Machine learning (ML) solutions are increasingly entering healthcare. They are complex, sociotechnical systems that include data inputs, ML models, technical infrastructure and human interactions. They have promise for improving care across a wide range of clinical applications but if poorly implemented, they may disrupt clinical workflows, exacerbate inequities in care and harm patients. Many aspects of ML solutions are similar to other digital technologies, which have well-established approaches to implementation. However, ML applications present distinct implementation challenges, given that their predictions are often complex and difficult to understand, they can be influenced by biases in the data sets used to develop them, and their impacts on human behaviour are poorly understood. This manuscript summarises the current state of knowledge about implementing ML solutions in clinical care and offers practical guidance for implementation. We propose three overarching questions for potential users to consider when deploying ML solutions in clinical care: (1) Is a clinical or operational problem likely to be addressed by an ML solution? (2) How can an ML solution be evaluated to determine its readiness for deployment? (3) How can an ML solution be deployed and maintained optimally? The Quality Improvement community has an essential role to play in ensuring that ML solutions are translated into clinical practice safely, effectively, and ethically.
Affiliation(s)
- Amol A Verma
- Unity Health Toronto, Toronto, Ontario, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON, Canada
- Medicine, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada
- Patricia Trbovich
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- Centre for Quality Improvement and Patient Safety, Department of Medicine, University of Toronto, Toronto, ON, Canada
- North York General Hospital, Toronto, ON, Canada
- Muhammad Mamdani
- Unity Health Toronto, Toronto, Ontario, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- Medicine, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada
- Kaveh G Shojania
- Medicine, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada
- Sunnybrook Health Sciences Centre, Toronto, ON, Canada
107
Adigwe OP, Onavbavba G, Sanyaolu SE. Exploring the matrix: knowledge, perceptions and prospects of artificial intelligence and machine learning in Nigerian healthcare. Front Artif Intell 2024; 6:1293297. [PMID: 38314120] [PMCID: PMC10834749] [DOI: 10.3389/frai.2023.1293297]
Abstract
Background Artificial intelligence technology can be applied across several aspects of healthcare delivery, and its integration into the Nigerian healthcare value chain is expected to create new opportunities. This study aimed to assess the knowledge and perception of healthcare professionals in Nigeria regarding the application of artificial intelligence and machine learning in the health sector. Methods A cross-sectional questionnaire-based study was undertaken amongst healthcare professionals in Nigeria. Data were collected across the six geopolitical zones of the country using a stratified multistage sampling method, and descriptive and inferential statistical analyses were undertaken. Results Female participants (55.7%) slightly outnumbered male respondents (44.3%). Pharmacists accounted for 27.7% of the participants, followed closely by medical doctors (24.5%) and nurses (19.3%). The majority of the respondents (57.2%) reported good knowledge of artificial intelligence and machine learning, about a third (32.2%) reported average knowledge, and 10.6% had poor knowledge. More than half of the respondents (57.8%) disagreed with the notion that the adoption of artificial intelligence in the Nigerian healthcare sector could result in job losses. Two-thirds of the participants (66.7%) were of the view that the integration of artificial intelligence in healthcare will augment human intelligence, and three-quarters (77%) agreed that the use of machine learning in Nigerian healthcare could facilitate efficient service delivery. Conclusion This study provides novel insights into healthcare professionals' knowledge and perception of the application of artificial intelligence and machine learning in healthcare. These findings can guide government and policymakers in decision-making regarding the deployment of artificial intelligence and machine learning for healthcare delivery.
Affiliation(s)
- Obi Peter Adigwe
- National Institute for Pharmaceutical Research and Development, Abuja, Nigeria
- Godspower Onavbavba
- National Institute for Pharmaceutical Research and Development, Abuja, Nigeria
108
Li B, Chen H, Yu W, Zhang M, Lu F, Ma J, Hao Y, Li X, Hu B, Shen L, Mao J, He X, Wang H, Ding D, Li X, Chen Y. The performance of a deep learning system in assisting junior ophthalmologists in diagnosing 13 major fundus diseases: a prospective multi-center clinical trial. NPJ Digit Med 2024; 7:8. [PMID: 38212607] [PMCID: PMC10784504] [DOI: 10.1038/s41746-023-00991-9]
Abstract
Artificial intelligence (AI)-based diagnostic systems have been reported to improve fundus disease screening in previous studies. This multicenter prospective self-controlled clinical trial evaluated the diagnostic performance of a deep learning system (DLS) in assisting junior ophthalmologists in detecting 13 major fundus diseases. A total of 1493 fundus images from 748 patients were prospectively collected from five tertiary hospitals in China. Nine junior ophthalmologists were trained and then annotated the images with or without the suggestions proposed by the DLS. Diagnostic performance was evaluated among three groups: the DLS-assisted junior ophthalmologist group (test group), the junior ophthalmologist group (control group) and the DLS group. Diagnostic consistency was 84.9% (95% CI, 83.0% to 86.9%), 72.9% (95% CI, 70.3% to 75.6%) and 85.5% (95% CI, 83.5% to 87.4%) in the test, control and DLS groups, respectively. With the help of the proposed DLS, the diagnostic consistency of junior ophthalmologists improved by approximately 12% (95% CI, 9.1% to 14.9%; P < 0.001). For the detection of the 13 diseases, the test group achieved significantly higher sensitivities (72.2% to 100.0%) and comparable specificities (90.8% to 98.7%) compared with the control group (sensitivities, 50.0% to 100.0%; specificities, 96.7% to 99.8%). The DLS group performed similarly to the test group in detecting any fundus abnormality (sensitivity, 95.7%; specificity, 87.2%) and each of the 13 diseases (sensitivity, 83.3% to 100.0%; specificity, 89.0% to 98.0%). The proposed DLS provides a novel approach for the automatic detection of 13 major fundus diseases with high diagnostic consistency, and it helped improve the performance of junior ophthalmologists, especially by reducing the risk of missed diagnoses. ClinicalTrials.gov NCT04723160.
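Proportions such as the diagnostic consistency above are usually reported with a score-based confidence interval; a minimal Wilson-interval sketch (the counts are illustrative, chosen to approximate the 84.9% figure, not the trial's raw data):

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# Illustrative: 1268 of 1493 images diagnosed consistently (~84.9%)
lo, hi = wilson_ci(1268, 1493)
```

At n = 1493 the interval is roughly ±1.8 percentage points, matching the width of the CIs quoted in the abstract.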
Affiliation(s)
- Bing Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Huan Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Ming Zhang
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Fang Lu
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Jingxue Ma
- Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
- Yuhua Hao
- Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
- Xiaorong Li
- Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
- Bojie Hu
- Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
- Lijun Shen
- Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
- Jianbo Mao
- Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
- Xixi He
- School of Information Science and Technology, North China University of Technology, Beijing, China
- Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data, Beijing, China
- Hao Wang
- Visionary Intelligence Ltd., Beijing, China
- Xirong Li
- MoE Key Lab of DEKE, Renmin University of China, Beijing, China
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
109
Li W, Bian L, Ma B, Sun T, Liu Y, Sun Z, Zhao L, Feng K, Yang F, Wang X, Chan S, Dou H, Qi H. Interpretable Detection of Diabetic Retinopathy, Retinal Vein Occlusion, Age-Related Macular Degeneration, and Other Fundus Conditions. Diagnostics (Basel) 2024; 14:121. [PMID: 38247998] [DOI: 10.3390/diagnostics14020121]
Abstract
Diabetic retinopathy (DR), retinal vein occlusion (RVO), and age-related macular degeneration (AMD) pose significant global health challenges, often resulting in vision impairment and blindness. Automatic detection of these conditions is crucial, particularly in underserved rural areas with limited access to ophthalmic services. Despite remarkable advancements in artificial intelligence, especially convolutional neural networks (CNNs), their complexity can make interpretation difficult. In this study, we curated a dataset of 15,089 color fundus photographs (CFPs) obtained from 8110 patients who underwent fundus fluorescein angiography (FFA) examination. The primary objective was to construct integrated models that merge CNNs with an attention mechanism for a hierarchical multilabel classification task: detecting DR, RVO, AMD, and other fundus conditions, and further classifying DR, RVO, and AMD into their respective subclasses. Diagnostic information obtained from the FFA results was transferred to the corresponding CFPs as labels, and we evaluated the models' ability to achieve precise diagnoses based on CFPs alone. Our models showed improvements across diverse fundus conditions, with the ConvNeXt-base + attention model performing best. For DR detection, it achieved an area under the receiver operating characteristic curve (AUC) of 0.943, a referable F1 score of 0.870, and a Cohen's kappa of 0.778. For RVO, it attained an AUC of 0.960, a referable F1 score of 0.854, and a Cohen's kappa of 0.819; for AMD, an AUC of 0.959, an F1 score of 0.727, and a Cohen's kappa of 0.686. The model also subclassified RVO and AMD with good sensitivity and specificity. Moreover, visualizing attention weights on fundus images enhanced interpretability by aiding the identification of disease findings. These outcomes underscore the potential of our models to advance the detection of DR, RVO, and AMD, with the prospect of improved patient outcomes.
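The F1 and Cohen's kappa figures quoted above are both derived from the same 2x2 confusion counts; a minimal sketch with toy labels (not the study's data):

```python
def f1_and_kappa(y_true: list[int], y_pred: list[int]) -> tuple[float, float]:
    """Binary F1 score and Cohen's kappa from 0/1 label lists."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    n = tp + fp + fn + tn
    f1 = 2 * tp / (2 * tp + fp + fn)
    po = (tp + tn) / n  # observed agreement
    # chance agreement from the marginal label frequencies
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return f1, kappa

# Toy example: perfect agreement gives F1 = 1 and kappa = 1
f1, k = f1_and_kappa([1, 0, 1, 0], [1, 0, 1, 0])
```

Unlike F1, kappa discounts chance agreement, which is why it is the lower of the two numbers for every condition reported above.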
Affiliation(s)
- Wenlong Li
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Linbo Bian
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Baikai Ma
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Tong Sun
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Yiyun Liu
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Zhengze Sun
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Lin Zhao
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Kang Feng
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Fan Yang
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Xiaona Wang
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Szyyann Chan
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Hongliang Dou
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
- Hong Qi
- Department of Ophthalmology, Peking University Third Hospital, Beijing 100191, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing 100191, China
110
Nam Y, Kim J, Kim K, Park KA, Kang M, Cho BH, Oh SY, Kee C, Han J, Lee GI, Kang MC, Lee D, Choi Y, Yun HJ, Park H, Kim J, Cho SJ, Chang DK. Deep learning-based optic disc classification is affected by optic-disc tilt. Sci Rep 2024; 14:498. [PMID: 38177229] [PMCID: PMC10767025] [DOI: 10.1038/s41598-023-50256-4]
Abstract
We aimed to determine the effect of optic disc tilt on deep learning-based optic disc classification. A total of 2507 fundus photographs were acquired from 2236 eyes of 1809 subjects (mean age, 46 years; 53% men). Among all photographs, 1010 (40.3%) had tilted optic discs. Image annotation was performed to label pathologic changes of the optic disc (normal, glaucomatous optic disc changes, disc swelling, and disc pallor). Deep learning-based classification models were developed using the photographs of all subjects and, separately, those of subjects with and without tilted optic discs. Regardless of the deep learning algorithm, the classification models showed better overall performance when developed on data from subjects with non-tilted discs (AUC, 0.988 ± 0.002, 0.991 ± 0.003, and 0.986 ± 0.003 for VGG16, VGG19, and DenseNet121, respectively) than when developed on data with tilted discs (AUC, 0.924 ± 0.046, 0.928 ± 0.017, and 0.935 ± 0.008). In the classification of each pathologic change, the non-tilted disc models had better sensitivity and specificity than the tilted disc models. The classification models developed on all-subject data demonstrated lower accuracy for eyes with tilted discs than for those with non-tilted discs. Our findings suggest the need to identify and adjust for the effect of optic disc tilt in the future development of optic disc classification algorithms.
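The AUC values compared above can be computed without choosing a threshold, as the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney form). A minimal sketch with toy scores, not the study's data:

```python
def auc(labels: list[int], scores: list[float]) -> float:
    """AUC as P(score_pos > score_neg), with ties counted as 0.5 —
    equivalent to the normalized Mann-Whitney U statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: perfectly separated scores give AUC = 1.0
a = auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
```

This O(n^2) pairwise form is fine for a sketch; production code would use a rank-based O(n log n) implementation such as scikit-learn's `roc_auc_score`.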
Affiliation(s)
- Youngwoo Nam
- Medical AI Research Center, Institute of Smart Healthcare, Samsung Medical Center, Seoul, Republic of Korea
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Joonhyoung Kim
- Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Kyunga Kim
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Biomedical Statistics Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul, Republic of Korea
- Department of Data Convergence & Future Medicine, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Kyung-Ah Park
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Mira Kang
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Health Promotion Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Digital Innovation Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Baek Hwan Cho
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Department of Biomedical Informatics, CHA University School of Medicine, CHA University, Seongnam, Republic of Korea
- Sei Yeul Oh
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Changwon Kee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Jongchul Han
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Ga-In Lee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Min Chae Kang
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Dongyoung Lee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Yeeun Choi
- Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Hee Jee Yun
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Hansol Park
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Jiho Kim
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Soo Jin Cho
- Health Promotion Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Dong Kyung Chang
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Division of Gastroenterology, Department of Internal Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
111
Zhu Y, Lyu X, Tao X, Wu L, Yin A, Liao F, Hu S, Wang Y, Zhang M, Huang L, Wang J, Zhang C, Gong D, Jiang X, Zhao L, Yu H. A newly developed deep learning-based system for automatic detection and classification of small bowel lesions during double-balloon enteroscopy examination. BMC Gastroenterol 2024; 24:10. [PMID: 38166722] [PMCID: PMC10759410] [DOI: 10.1186/s12876-023-03067-w]
Abstract
BACKGROUND Double-balloon enteroscopy (DBE) is a standard method for diagnosing and treating small bowel disease. However, DBE may yield false-negative results due to oversight or inexperience. We aimed to develop a computer-aided diagnostic (CAD) system for the automatic detection and classification of small bowel abnormalities in DBE. DESIGN AND METHODS A total of 5201 images were collected from Renmin Hospital of Wuhan University to construct a detection model for localizing lesions during DBE, and 3021 images were collected to construct a classification model for classifying lesions into four classes: protruding lesion, diverticulum, erosion & ulcer, and angioectasia. The performance of the two models was evaluated using 1318 normal images, 915 abnormal images, and 65 videos from independent patients, and then compared with that of 8 endoscopists; expert consensus served as the reference standard. RESULTS On the image test set, the detection model achieved a sensitivity of 92% (843/915) and an area under the curve (AUC) of 0.947, and the classification model achieved an accuracy of 86%. On the video test set, the accuracy of the system was significantly better than that of the endoscopists (85% vs. 77 ± 6%, p < 0.01); the proposed system was superior to novices and comparable to experts. CONCLUSIONS We established a real-time CAD system for detecting and classifying small bowel lesions in DBE with favourable performance. ENDOANGEL-DBE has the potential to help endoscopists, especially novices, in clinical practice and may reduce the miss rate of small bowel lesions.
Collapse
Affiliation(s)
- Yijie Zhu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Xiaoguang Lyu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Xiao Tao
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Lianlian Wu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Anning Yin
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Fei Liao
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Shan Hu
- School of Computer Science, Wuhan University, Wuhan, China
| | - Yang Wang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Mengjiao Zhang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Li Huang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Junxiao Wang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Chenxia Zhang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Dexin Gong
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Xiaoda Jiang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Liang Zhao
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China.
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China.
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China.
| | - Honggang Yu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China.
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China.
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China.
| |
|
112
|
Zhang J, Zou H. Insights into artificial intelligence in myopia management: from a data perspective. Graefes Arch Clin Exp Ophthalmol 2024; 262:3-17. [PMID: 37231280 PMCID: PMC10212230 DOI: 10.1007/s00417-023-06101-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 03/23/2023] [Accepted: 05/06/2023] [Indexed: 05/27/2023] Open
Abstract
Given the high incidence and prevalence of myopia, the current healthcare system is struggling to handle the task of myopia management, a burden worsened by home quarantine during the COVID-19 pandemic. The utilization of artificial intelligence (AI) in ophthalmology is thriving, yet its application in myopia remains limited. AI can serve as a solution for the myopia pandemic, with application potential in early identification, risk stratification, progression prediction, and timely intervention. The datasets used for developing AI models are the foundation and determine the upper limit of performance. Data generated from clinical practice in managing myopia can be categorized into clinical data and imaging data, and different AI methods can be used for analysis. In this review, we comprehensively review the current application status of AI in myopia with an emphasis on the data modalities used for developing AI models. We propose that establishing large, high-quality public datasets, enhancing models' capability to handle multimodal input, and exploring novel data modalities could be of great significance for the further application of AI in myopia.
Affiliation(s)
- Juzhao Zhang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Haidong Zou
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.
- Shanghai Eye Diseases Prevention & Treatment Center, Shanghai Eye Hospital, Shanghai, China.
- National Clinical Research Center for Eye Diseases, Shanghai, China.
- Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China.
| |
|
113
|
Lin LY, Zhou P, Shi M, Lu JE, Jeon S, Kim D, Liu JM, Wang M, Do S, Lee NG. A Deep Learning Model for Screening Computed Tomography Imaging for Thyroid Eye Disease and Compressive Optic Neuropathy. OPHTHALMOLOGY SCIENCE 2024; 4:100412. [PMID: 38046559 PMCID: PMC10692956 DOI: 10.1016/j.xops.2023.100412] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/23/2023] [Revised: 10/07/2023] [Accepted: 10/09/2023] [Indexed: 12/05/2023]
Abstract
Purpose Thyroid eye disease (TED) is an autoimmune condition with an array of clinical manifestations, which can be complicated by compressive optic neuropathy. It is important to identify patients with TED early to ensure close monitoring and treatment to prevent potential permanent disability or vision loss. Deep learning artificial intelligence (AI) algorithms have been utilized in ophthalmology and in other fields of medicine to detect disease. This study aims to introduce a deep learning model to evaluate orbital computed tomography (CT) images for the presence of TED and potential compressive optic neuropathy. Design Retrospective review and deep learning algorithm modeling. Subjects Patients with TED who had dedicated orbital CT scans and an examination by an oculoplastic surgeon over a 10-year period at a single academic institution. Patients with no TED and normal CTs were used as normal controls. Those with other diagnoses, such as tumors or other inflammatory processes, were excluded. Methods Orbital CTs were preprocessed and adapted for the Visual Geometry Group-16 (VGG-16) network to distinguish patients with no TED, mild TED, and severe TED with compressive optic neuropathy. The primary model included training and testing on all 3 conditions. Binary model performance was also evaluated. An oculoplastic surgeon was also similarly tested with single and serial images for comparison. Main Outcome Measures Accuracy of deep learning model discernment of the region of interest on CT scans to distinguish TED versus normal controls, as well as TED with clinical signs of optic neuropathy. Results A total of 1187 images from 141 patients were used to develop the AI model. The primary model trained on patients with no TED, mild TED, and severe TED had 89.5% accuracy (area under the curve range, 0.96-0.99) in distinguishing these clinical categories. In comparison, testing of an oculoplastic surgeon in these 3 categories showed decreased accuracy (70.0% accuracy in serial image testing). Conclusions The deep learning model developed in the study can accurately detect TED, and further detect TED with clinical signs of optic neuropathy, based on orbital CT. The model proved superior to human expert grading. With further optimization and validation, this TED deep learning model could help guide frontline health care providers in the detection of TED and help stratify the urgency of referral to an oculoplastic surgeon and endocrinologist. Financial Disclosures The authors have no proprietary or commercial interest in any materials discussed in this article.
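The headline comparison in the Results can be made concrete with a small sketch of multi-class accuracy computed from a confusion matrix; the 3×3 counts below are invented for illustration and are not the study's data.

```python
def overall_accuracy(confusion):
    # confusion[i][j] = number of scans of true class i predicted as class j,
    # with classes ordered (no TED, mild TED, severe TED with optic neuropathy).
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Illustrative counts only; the study reports 89.5% model accuracy versus
# 70.0% for the oculoplastic surgeon on serial images.
acc = overall_accuracy([[90, 7, 3],
                        [6, 88, 6],
                        [2, 8, 90]])  # 268/300 ≈ 0.893
```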
Affiliation(s)
- Lisa Y. Lin
- Department of Ophthalmology, Ophthalmic Plastic Surgery Service, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
| | - Paul Zhou
- Department of Ophthalmology, Gavin Herbert Eye Institute, University of California Irvine, Irvine, California
| | - Min Shi
- Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
| | - Jonathan E. Lu
- Department of Ophthalmology, Ophthalmic Plastic Surgery Service, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
| | - Soomin Jeon
- Department of Information Sciences and Mathematics, Dong-A University, Busan, Republic of Korea
| | - Doyun Kim
- Data Science, Athenahealth, Watertown, Massachusetts
| | - Josephine M. Liu
- Department of Radiology, Lab of Medical Imaging and Computation, Massachusetts General Brigham and Harvard Medical School, Boston, Massachusetts
| | - Mengyu Wang
- Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
| | - Synho Do
- Department of Radiology, Lab of Medical Imaging and Computation, Massachusetts General Brigham and Harvard Medical School, Boston, Massachusetts
| | - Nahyoung Grace Lee
- Department of Ophthalmology, Ophthalmic Plastic Surgery Service, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
| |
|
114
|
Song A, Borkar DS. Advances in Teleophthalmology Screening for Diabetic Retinopathy. Int Ophthalmol Clin 2024; 64:97-113. [PMID: 38146884 DOI: 10.1097/iio.0000000000000505] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2023]
|
115
|
Acharyya M, Moharana B, Jain S, Tandon M. A double-blinded study for quantifiable assessment of the diagnostic accuracy of AI tool "ADVEN-i" in identifying diseased fundus images including diabetic retinopathy on a retrospective data. Indian J Ophthalmol 2024; 72:S46-S52. [PMID: 38131542 PMCID: PMC10833153 DOI: 10.4103/ijo.ijo_3342_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2022] [Revised: 04/15/2023] [Accepted: 07/28/2023] [Indexed: 12/23/2023] Open
Abstract
PURPOSE To quantifiably assess the diagnostic accuracy of Adven-i, a proprietary artificial intelligence (AI)-driven diagnostic system that automatically detects diseases from fundus images. The purpose is to quantify the performance of Adven-i in differentiating a nonreferable (within normal limits) image from a referable (diseased fundus) image and further segregating diabetic retinopathy (DR) from the rest of the abnormalities (non-DR) encompassing the wide spectrum of abnormal pathologies. The assessment is carried out against manual reading as the reference gold standard. Adven-i is the only AI system classifying retinal abnormalities into DR and non-DR classes separately, apart from predicting nonreferable fundus images, whereas most existing systems classify fundus images only into referable and nonreferable DR. METHODS The double-blinded study was conducted on retrospective data collected over the course of a year in the ophthalmology outpatient department (OPD) at a top Tier II eyecare hospital in Chandigarh, India. Three vitreoretina specialists who were blinded to one another read the images. The ground truth was generated on the basis of majority agreement among the readers. An arbitrator's decision was regarded as final if all three readers disagreed. RESULTS A total of 2261 fundus images were analyzed by Adven-i. The sensitivity and specificity of Adven-i in diagnosing images with abnormalities were 95.12% and 85.77%, respectively, and for segregating DR from the rest of the retinal abnormalities were 91.87% and 85.12%, respectively. CONCLUSIONS AND RELEVANCE Adven-i shows definite promise in automated screening for early diagnosis of referable fundus images including DR. Adven-i can be adopted at scale for mass screening in resource-limited settings.
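The reading protocol and the headline metrics lend themselves to a short sketch; the count values below are hypothetical, chosen only to land near the reported percentages, and the label strings are illustrative.

```python
from collections import Counter

def ground_truth(reader_labels, arbitrator):
    # Majority agreement among the three blinded readers; the arbitrator's
    # decision is final only when all three readers disagree.
    label, count = Counter(reader_labels).most_common(1)[0]
    return label if count >= 2 else arbitrator

def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    return tp / (tp + fn), tn / (tn + fp)

truth = ground_truth(["DR", "DR", "non-DR"], arbitrator="non-DR")  # -> "DR"
# Hypothetical counts; the study reports 95.12% sensitivity and 85.77%
# specificity for referable vs. nonreferable images.
sens, spec = sensitivity_specificity(tp=951, fn=49, tn=858, fp=142)
```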
Affiliation(s)
| | - Bruttendu Moharana
- Department of Ophthalmology, Drishti Eye Hospital, Panchkula, Haryana, India
| | - Sahil Jain
- Department of Vitreo-retina Services, Mirchia Laser Eye Clinic, Chandigarh, India
| | - Manjari Tandon
- Department of Retina and Uvea Services, Mirchia Laser Eye Clinic, Chandigarh, India
| |
|
116
|
Hu W, Joseph S, Li R, Woods E, Sun J, Shen M, Jan CL, Zhu Z, He M, Zhang L. Population impact and cost-effectiveness of artificial intelligence-based diabetic retinopathy screening in people living with diabetes in Australia: a cost effectiveness analysis. EClinicalMedicine 2024; 67:102387. [PMID: 38314061 PMCID: PMC10837545 DOI: 10.1016/j.eclinm.2023.102387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/22/2023] [Revised: 11/29/2023] [Accepted: 12/05/2023] [Indexed: 02/06/2024] Open
Abstract
Background We aimed to evaluate the cost-effectiveness of an artificial intelligence (AI)-based diabetic retinopathy (DR) screening system in the primary care setting for both non-Indigenous and Indigenous people living with diabetes in Australia. Methods We performed a cost-effectiveness analysis between January 01, 2022 and August 01, 2023. A decision-analytic Markov model was constructed to simulate DR progression in a population of 1,197,818 non-Indigenous and 65,160 Indigenous Australians living with diabetes aged ≥20 years over 40 years. From a healthcare provider's perspective, we compared current practice to three primary care AI-based screening scenarios: (A) substitution of current manual grading, (B) scaling up to patient acceptance level, and (C) achieving universal screening. Study results were presented as the incremental cost-effectiveness ratio (ICER), benefit-cost ratio (BCR), and net monetary benefit (NMB). A willingness-to-pay (WTP) threshold of AU$50,000 per quality-adjusted life year (QALY) and a discount rate of 3.5% were adopted in this study. Findings With the status quo, the non-Indigenous diabetic population was projected to develop 96,269 blindness cases, resulting in AU$13,039.6 m spending on DR screening and treatment during 2020-2060. In comparison, all three intervention scenarios were effective and cost-saving. In particular, if a universal screening program were implemented (Scenario C), it would prevent 38,347 blindness cases, gain 172,090 QALYs and save AU$595.8 m, leading to a BCR of 3.96 and an NMB of AU$9,200 m. Similar findings were also reported in the Indigenous population. With the status quo, 3,396 Indigenous individuals would develop blindness, which would cost the health system AU$796.0 m during 2020-2060. All three intervention scenarios were cost-saving for the Indigenous population. Notably, universal AI-based DR screening (Scenario C) would prevent 1,211 blindness cases and gain 9,800 QALYs in the Indigenous population, leading to a saving of AU$19.2 m with a BCR of 1.62 and an NMB of AU$509 m. Interpretation Our findings suggest that implementing AI-based DR screening in primary care is highly effective and cost-saving in both Indigenous and non-Indigenous populations. Funding This project received grant funding from the Australian Government: the National Critical Research Infrastructure Initiative, Medical Research Future Fund (MRFAI00035) and the NHMRC Investigator Grant (APP1175405). The contents of the published material are solely the responsibility of the Administering Institution, a participating institution or individual authors and do not reflect the views of the NHMRC. This work was supported by the Global STEM Professorship Scheme (P0046113), the Fundamental Research Funds of the State Key Laboratory of Ophthalmology, and the Project of Investigation on Health Status of Employees in Financial Industry in Guangzhou, China (Z012014075). The Centre for Eye Research Australia receives Operational Infrastructure Support from the Victorian State Government. W.H. is supported by the Melbourne Research Scholarship established by the University of Melbourne. The funding source had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.
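The summary measures follow from their standard definitions, so the abstract's Scenario C figures for the non-Indigenous population can be checked directly; only the WTP threshold and the quoted scenario values are taken from the text.

```python
WTP = 50_000  # AU$ per QALY, the willingness-to-pay threshold used in the study

def icer(delta_cost, delta_qaly):
    # Incremental cost-effectiveness ratio: extra cost per QALY gained;
    # negative when the intervention both saves money and gains QALYs.
    return delta_cost / delta_qaly

def net_monetary_benefit(delta_cost, delta_qaly, wtp=WTP):
    # NMB = WTP * QALYs gained - incremental cost; > 0 means cost-effective.
    return wtp * delta_qaly - delta_cost

# Scenario C, non-Indigenous population: 172,090 QALYs gained and
# AU$595.8 m saved (i.e. a negative incremental cost).
nmb = net_monetary_benefit(delta_cost=-595.8e6, delta_qaly=172_090)
# -> 9.2003e9, matching the reported NMB of AU$9,200 m
```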
Affiliation(s)
- Wenyi Hu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
| | - Sanil Joseph
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
| | - Rui Li
- Central Clinical School, Faculty of Medicine, Monash University, Melbourne, VIC, Australia
- Artificial Intelligence and Modelling in Epidemiology Program, Melbourne Sexual Health Centre, Alfred Health, Melbourne, VIC, Australia
- China-Australia Joint Research Center for Infectious Diseases, School of Public Health, Xi’an Jiaotong University Health Science Center, Xi’an, Shaanxi, 710061, PR China
| | - Ekaterina Woods
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
| | - Jason Sun
- Eyetelligence Pty Ltd., Melbourne, Australia
| | - Mingwang Shen
- China-Australia Joint Research Center for Infectious Diseases, School of Public Health, Xi’an Jiaotong University Health Science Center, Xi’an, Shaanxi, 710061, PR China
| | - Catherine Lingxue Jan
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
| | - Zhuoting Zhu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
| | - Mingguang He
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
| | - Lei Zhang
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Clinical Medical Research Center, Children's Hospital of Nanjing Medical University, Nanjing, Jiangsu Province 210008, China
- Central Clinical School, Faculty of Medicine, Monash University, Melbourne, VIC, Australia
- Artificial Intelligence and Modelling in Epidemiology Program, Melbourne Sexual Health Centre, Alfred Health, Melbourne, VIC, Australia
| |
|
117
|
Heger KA, Waldstein SM. Artificial intelligence in retinal imaging: current status and future prospects. Expert Rev Med Devices 2024; 21:73-89. [PMID: 38088362 DOI: 10.1080/17434440.2023.2294364] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2023] [Accepted: 12/09/2023] [Indexed: 12/19/2023]
Abstract
INTRODUCTION The steadily growing and aging world population, in conjunction with the continuously increasing prevalence of vision-threatening retinal diseases, is placing an increasing burden on the global healthcare system. The main challenges within retinology involve identifying the comparatively few patients requiring therapy within the large screened population, ensuring comprehensive screening for retinal disease, and individualized therapy planning. In order to sustain high-quality ophthalmic care in the future, the incorporation of artificial intelligence (AI) technologies into clinical practice represents a potential solution. AREAS COVERED This review sheds light on already realized and promising future applications of AI techniques in retinal imaging. The main attention is directed at applications in diabetic retinopathy and age-related macular degeneration. The principles of use in disease screening, grading, therapeutic planning and prediction of future developments are explained based on the currently available literature. EXPERT OPINION The recent accomplishments of AI in retinal imaging indicate that its implementation into daily practice is likely to fundamentally change the ophthalmic healthcare system and bring us one step closer to the goal of individualized treatment. However, it must be emphasized that the aim is to optimally support clinicians by gradually incorporating AI approaches, rather than replacing ophthalmologists.
Affiliation(s)
- Katharina A Heger
- Department of Ophthalmology, Landesklinikum Mistelbach-Gaenserndorf, Mistelbach, Austria
| | - Sebastian M Waldstein
- Department of Ophthalmology, Landesklinikum Mistelbach-Gaenserndorf, Mistelbach, Austria
| |
|
118
|
Talcott KE, Valentim CCS, Perkins SW, Ren H, Manivannan N, Zhang Q, Bagherinia H, Lee G, Yu S, D'Souza N, Jarugula H, Patel K, Singh RP. Automated Detection of Abnormal Optical Coherence Tomography B-scans Using a Deep Learning Artificial Intelligence Neural Network Platform. Int Ophthalmol Clin 2024; 64:115-127. [PMID: 38146885 DOI: 10.1097/iio.0000000000000519] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2023]
|
119
|
He HL, Liu YX, Song H, Xu TZ, Wong TY, Jin ZB. Initiation of China Alliance of Research in High Myopia (CHARM): protocol for an AI-based multimodal high myopia research biobank. BMJ Open 2023; 13:e076418. [PMID: 38151272 PMCID: PMC10753734 DOI: 10.1136/bmjopen-2023-076418] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Accepted: 11/27/2023] [Indexed: 12/29/2023] Open
Abstract
INTRODUCTION High myopia is a pressing public health concern due to its increasing prevalence, trend toward younger onset and high risk of blindness, particularly in East Asian countries, including China. The China Alliance of Research in High Myopia (CHARM) is a newly established consortium that includes more than 100 hospitals and institutions participating across the nation, aiming to promote collaboration and data sharing in the field of high myopia screening, classification, diagnosis and therapeutic development. METHODS AND ANALYSIS The CHARM project is an ongoing study, and its initiation is distinguished by its unprecedented scale, encompassing plans to involve over 100 000 Chinese patients. This initiative stands out not only for its extensive scope but also for its innovative application of artificial intelligence (AI) to assist in diagnosis and treatment decisions. The CHARM project has been carried out using a 'three-step' strategy. The first step involves the collection of basic information, refraction, axial length and fundus photographs from participants with high myopia. In the second step, we will collect multimodal imaging data to expand the scope of clinical information, for example, optical coherence tomography and ultra-widefield fundus images. In the final step, genetic testing will be conducted by incorporating patient family histories and blood samples. The majority of data collected by CHARM is in the form of images that will be used to detect and predict the progression of high myopia through the identification and quantification of biomarkers such as fundus tessellation, optic nerve head and vascular parameters. ETHICS AND DISSEMINATION The study has received approval from the Ethics Committee of Beijing Tongren Hospital (TREC2022-KY045). The establishment of CHARM represents an opportunity to create a collaborative platform for myopia experts and facilitate the dissemination of research findings to the global community through peer-reviewed publications and conference presentations. These insights can inform clinical decision-making and contribute to the development of new treatment modalities that may benefit patients worldwide. TRIAL REGISTRATION NUMBER ChiCTR2300071219.
Affiliation(s)
- Hai-Long He
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, People's Republic of China
| | - Yi-Xin Liu
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, People's Republic of China
| | - Hao Song
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, People's Republic of China
| | - Tian-Ze Xu
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, People's Republic of China
| | - Tien-Yin Wong
- Tsinghua Medicine, Tsinghua University, Beijing, People's Republic of China
- Duke-National University of Singapore Medical School, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Zi-Bing Jin
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, People's Republic of China
| |
|
120
|
Huang X, Islam MR, Akter S, Ahmed F, Kazami E, Serhan HA, Abd-Alrazaq A, Yousefi S. Artificial intelligence in glaucoma: opportunities, challenges, and future directions. Biomed Eng Online 2023; 22:126. [PMID: 38102597 PMCID: PMC10725017 DOI: 10.1186/s12938-023-01187-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 12/01/2023] [Indexed: 12/17/2023] Open
Abstract
Artificial intelligence (AI) has shown excellent diagnostic performance in detecting various complex problems related to many areas of healthcare, including ophthalmology. AI diagnostic systems developed from fundus images have become state-of-the-art tools in diagnosing retinal conditions and glaucoma as well as other ocular diseases. However, designing and implementing AI models using large imaging datasets is challenging. In this study, we review different machine learning (ML) and deep learning (DL) techniques applied to multiple modalities of retinal data, such as fundus images and visual fields, for glaucoma detection, progression assessment, and staging. We summarize findings and provide several taxonomies to help the reader understand the evolution of conventional and emerging AI models in glaucoma. We discuss opportunities and challenges facing AI application in glaucoma and highlight key themes from the existing literature that may help guide future studies. Our goal in this systematic review is to help readers and researchers understand critical aspects of AI related to glaucoma, as well as the necessary steps and requirements for the successful development of AI models in glaucoma.
Affiliation(s)
- Xiaoqin Huang
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
| | - Md Rafiqul Islam
- Business Information Systems, Australian Institute of Higher Education, Sydney, Australia
| | - Shanjita Akter
- School of Computer Science, Taylors University, Subang Jaya, Malaysia
| | - Fuad Ahmed
- Department of Computer Science & Engineering, Islamic University of Technology (IUT), Gazipur, Bangladesh
| | - Ehsan Kazami
- Ophthalmology, General Hospital of Mahabad, Urmia University of Medical Sciences, Urmia, Iran
| | - Hashem Abu Serhan
- Department of Ophthalmology, Hamad Medical Corporations, Doha, Qatar
| | - Alaa Abd-Alrazaq
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
| | - Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA.
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA.
| |
|
121
|
Soleimani M, Esmaili K, Rahdar A, Aminizadeh M, Cheraqpour K, Tabatabaei SA, Mirshahi R, Bibak Z, Mohammadi SF, Koganti R, Yousefi S, Djalilian AR. From the diagnosis of infectious keratitis to discriminating fungal subtypes; a deep learning-based study. Sci Rep 2023; 13:22200. [PMID: 38097753 PMCID: PMC10721811 DOI: 10.1038/s41598-023-49635-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2023] [Accepted: 12/10/2023] [Indexed: 12/17/2023] Open
Abstract
Infectious keratitis (IK) is a major cause of corneal opacity. IK can be caused by a variety of microorganisms, and fungal ulcers typically carry the worst prognosis. Fungal cases can be subdivided into filamentous and yeast types, which show fundamental differences. Delays in diagnosis or initiation of treatment increase the risk of ocular complications. Currently, the diagnosis of IK is mainly based on slit-lamp examination and corneal scrapings. Notably, these diagnostic methods have their drawbacks, including experience-dependency, tissue damage, and time consumption. Artificial intelligence (AI) is designed to mimic and enhance human decision-making. An increasing number of studies have utilized AI in the diagnosis of IK. In this paper, we propose to use AI to diagnose IK (model 1), differentiate between bacterial keratitis and fungal keratitis (model 2), and discriminate the filamentous type from the yeast type of fungal cases (model 3). Overall, 9329 slit-lamp photographs gathered from 977 patients were enrolled in the study. The models exhibited remarkable accuracy, with model 1 achieving 99.3%, model 2 at 84%, and model 3 reaching 77.5%. In conclusion, our study offers valuable support in the early identification of potential fungal and bacterial keratitis cases and helps enable timely management.
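The three models form a natural triage cascade; a schematic of that decision flow, with placeholder callables standing in for the trained classifiers, might look like this:

```python
def classify(image, model1, model2, model3):
    # Model 1: infectious keratitis vs. not; model 2: bacterial vs. fungal;
    # model 3: filamentous vs. yeast subtype of fungal cases.
    if not model1(image):
        return "no infectious keratitis"
    if model2(image) == "bacterial":
        return "bacterial keratitis"
    return f"fungal keratitis ({model3(image)})"

# Placeholder predictors standing in for the trained networks.
label = classify("slit_lamp.jpg",
                 model1=lambda img: True,
                 model2=lambda img: "fungal",
                 model3=lambda img: "yeast")  # -> "fungal keratitis (yeast)"
```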
Affiliation(s)
- Mohammad Soleimani
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
| | - Kosar Esmaili
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Amir Rahdar
- Department of Telecommunication, Faculty of Electrical Engineering, Shahid Beheshti University, Tehran, Iran
| | - Mehdi Aminizadeh
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Kasra Cheraqpour
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Seyed Ali Tabatabaei
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Reza Mirshahi
- Eye Research Center, The Five Senses Health Institute, Rasoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
| | - Zahra Bibak
- Translational Ophthalmology Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Seyed Farzad Mohammadi
- Translational Ophthalmology Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Raghuram Koganti
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
| | - Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA
| | - Ali R Djalilian
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA.
- Cornea Service, Stem Cell Therapy and Corneal Tissue Engineering Laboratory, Illinois Eye and Ear Infirmary, 1855 W. Taylor Street, M/C 648, Chicago, IL, 60612, USA.
| |
|
122
|
Shimizu E, Tanji M, Nakayama S, Ishikawa T, Agata N, Yokoiwa R, Nishimura H, Khemlani RJ, Sato S, Hanyuda A, Sato Y. AI-based diagnosis of nuclear cataract from slit-lamp videos. Sci Rep 2023; 13:22046. [PMID: 38086904 PMCID: PMC10716159 DOI: 10.1038/s41598-023-49563-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2023] [Accepted: 12/09/2023] [Indexed: 12/18/2023] Open
Abstract
In ophthalmology, the availability of many fundus photographs and optical coherence tomography images has spurred consideration of using artificial intelligence (AI) for diagnosing retinal and optic nerve disorders. However, AI application for diagnosing anterior segment eye conditions remains unfeasible due to limited standardized images and analysis models. We addressed this limitation by augmenting the quantity of standardized optical images using a video-recordable slit-lamp device. We then investigated whether our proposed machine learning (ML) AI algorithm could accurately diagnose cataracts from videos recorded with this device. We collected 206,574 cataract frames from 1812 cataract eye videos. Ophthalmologists graded the nuclear cataracts (NUCs) using the cataract grading scale of the World Health Organization. These gradings were used to train and validate an ML algorithm. A validation dataset was used to compare the NUC diagnosis and grading of the AI with those of ophthalmologists. The results of the individual cataract gradings were: NUC 0: area under the curve (AUC) = 0.967; NUC 1: AUC = 0.928; NUC 2: AUC = 0.923; and NUC 3: AUC = 0.949. Our ML-based cataract diagnostic model achieved performance comparable to that of a conventional device, making it a promising and accurate automated diagnostic AI tool.
Affiliation(s)
- Eisuke Shimizu
  - OUI Inc., Tokyo, Japan
  - Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
  - Yokohama Keiai Eye Clinic, Yokohama, Japan
- Makoto Tanji
  - OUI Inc., Tokyo, Japan
  - Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Shintato Nakayama
  - OUI Inc., Tokyo, Japan
  - Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Toshiki Ishikawa
  - OUI Inc., Tokyo, Japan
  - Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Hiroki Nishimura
  - OUI Inc., Tokyo, Japan
  - Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
  - Yokohama Keiai Eye Clinic, Yokohama, Japan
- Shinri Sato
  - Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
  - Yokohama Keiai Eye Clinic, Yokohama, Japan
- Akiko Hanyuda
  - Department of Ophthalmology, Keio University School of Medicine, Tokyo, Japan
- Yasunori Sato
  - Department of Preventive Medicine and Public Health, School of Medicine, Keio University, Tokyo, Japan
123
Than J, Sim PY, Muttuvelu D, Ferraz D, Koh V, Kang S, Huemer J. Teleophthalmology and retina: a review of current tools, pathways and services. Int J Retina Vitreous 2023; 9:76. [PMID: 38053188 DOI: 10.1186/s40942-023-00502-8]
Abstract
Telemedicine, the use of telecommunication and information technology to deliver healthcare remotely, has evolved beyond recognition since its inception in the 1970s. Advances in telecommunication infrastructure, the advent of the Internet, exponential growth in computing power and associated computer-aided diagnosis, and medical imaging developments have created an environment where telemedicine is more accessible and capable than ever before, particularly in the field of ophthalmology. Ever-increasing global demand for ophthalmic services, due to population growth and ageing, together with an insufficient supply of ophthalmologists, requires new models of healthcare provision that integrate telemedicine to meet present-day challenges, with the recent COVID-19 pandemic providing the catalyst for the widespread adoption and acceptance of teleophthalmology. In this review we discuss the history, present and future application of telemedicine within the field of ophthalmology, and specifically retinal disease. We consider the strengths and limitations of teleophthalmology; its role in screening and in community and hospital management of retinal disease; patient and clinician attitudes; and barriers to its adoption.
Affiliation(s)
- Jonathan Than
  - Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Peng Y Sim
  - Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Danson Muttuvelu
  - Department of Ophthalmology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
  - MitØje ApS/Danske Speciallaeger Aps, Aarhus, Denmark
- Daniel Ferraz
  - D'Or Institute for Research and Education (IDOR), São Paulo, Brazil
  - Institute of Ophthalmology, University College London, London, UK
- Victor Koh
  - Department of Ophthalmology, National University Hospital, Singapore, Singapore
- Swan Kang
  - Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
- Josef Huemer
  - Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK
  - Department of Ophthalmology and Optometry, Kepler University Hospital, Johannes Kepler University, Linz, Austria
124
Wong TY, Tan TE. The Diabetic Retinopathy "Pandemic" and Evolving Global Strategies: The 2023 Friedenwald Lecture. Invest Ophthalmol Vis Sci 2023; 64:47. [PMID: 38153754 PMCID: PMC10756246 DOI: 10.1167/iovs.64.15.47]
Affiliation(s)
- Tien Yin Wong
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Duke-National University of Singapore, Singapore
  - Tsinghua Medicine, Tsinghua University, Beijing, China
- Tien-En Tan
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Duke-National University of Singapore, Singapore
125
Dow ER, Khan NC, Chen KM, Mishra K, Perera C, Narala R, Basina M, Dang J, Kim M, Levine M, Phadke A, Tan M, Weng K, Do DV, Moshfeghi DM, Mahajan VB, Mruthyunjaya P, Leng T, Myung D. AI-Human Hybrid Workflow Enhances Teleophthalmology for the Detection of Diabetic Retinopathy. Ophthalmol Sci 2023; 3:100330. [PMID: 37449051 PMCID: PMC10336195 DOI: 10.1016/j.xops.2023.100330]
Abstract
Objective Detection of diabetic retinopathy (DR) outside of specialized eye care settings is an important means of access to vision-preserving health maintenance. Remote interpretation of fundus photographs acquired in a primary care or other nonophthalmic setting in a store-and-forward manner is a predominant paradigm of teleophthalmology screening programs. Artificial intelligence (AI)-based image interpretation offers an alternative means of DR detection. IDx-DR (Digital Diagnostics Inc) is a Food and Drug Administration-authorized autonomous testing device for DR. We evaluated the diagnostic performance of IDx-DR compared with human-based teleophthalmology over 2 and a half years. Additionally, we evaluated an AI-human hybrid workflow that combines AI-system evaluation with human expert-based assessment for referable cases. Design Prospective cohort study and retrospective analysis. Participants Diabetic patients ≥ 18 years old without a prior DR diagnosis or DR examination in the past year presenting for routine DR screening in a primary care clinic. Methods Macula-centered and optic nerve-centered fundus photographs were evaluated by an AI algorithm followed by consensus-based overreading by retina specialists at the Stanford Ophthalmic Reading Center. Detection of more-than-mild diabetic retinopathy (MTMDR) was compared with in-person examination by a retina specialist. Main Outcome Measures Sensitivity, specificity, accuracy, positive predictive value, and gradability achieved by the AI algorithm and retina specialists. Results The AI algorithm had higher sensitivity (95.5% sensitivity; 95% confidence interval [CI], 86.7%-100%) but lower specificity (60.3% specificity; 95% CI, 47.7%-72.9%) for detection of MTMDR compared with remote image interpretation by retina specialists (69.5% sensitivity; 95% CI, 50.7%-88.3%; 96.9% specificity; 95% CI, 93.5%-100%). 
Gradability of encounters was also lower for the AI algorithm (62.5%) compared with retina specialists (93.1%). A 2-step AI-human hybrid workflow in which the AI algorithm initially rendered an assessment followed by overread by a retina specialist of MTMDR-positive encounters resulted in a sensitivity of 95.5% (95% CI, 86.7%-100%) and a specificity of 98.2% (95% CI, 94.6%-100%). Similarly, a 2-step overread by retina specialists of AI-ungradable encounters improved gradability from 63.5% to 95.6% of encounters. Conclusions Implementation of an AI-human hybrid teleophthalmology workflow may both decrease reliance on human specialist effort and improve diagnostic accuracy. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
Affiliation(s)
- Eliot R. Dow
  - Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
  - Veterans Affairs Palo Alto Health Care System, Palo Alto, California
- Nergis C. Khan
  - Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Karen M. Chen
  - Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Kapil Mishra
  - Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Chandrashan Perera
  - Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Ramsudha Narala
  - Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Marina Basina
  - Stanford Healthcare, Stanford University, Palo Alto, California
- Jimmy Dang
  - Stanford Healthcare, Stanford University, Palo Alto, California
- Michael Kim
  - Stanford Healthcare, Stanford University, Palo Alto, California
- Marcie Levine
  - Stanford Healthcare, Stanford University, Palo Alto, California
- Anuradha Phadke
  - Stanford Healthcare, Stanford University, Palo Alto, California
- Marilyn Tan
  - Stanford Healthcare, Stanford University, Palo Alto, California
- Kirsti Weng
  - Stanford Healthcare, Stanford University, Palo Alto, California
- Diana V. Do
  - Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Darius M. Moshfeghi
  - Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Vinit B. Mahajan
  - Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
  - Veterans Affairs Palo Alto Health Care System, Palo Alto, California
- Prithvi Mruthyunjaya
  - Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Theodore Leng
  - Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- David Myung
  - Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
126
Betzler BK, Chen H, Cheng CY, Lee CS, Ning G, Song SJ, Lee AY, Kawasaki R, van Wijngaarden P, Grzybowski A, He M, Li D, Ran Ran A, Ting DSW, Teo K, Ruamviboonsuk P, Sivaprasad S, Chaudhary V, Tadayoni R, Wang X, Cheung CY, Zheng Y, Wang YX, Tham YC, Wong TY. Large language models and their impact in ophthalmology. Lancet Digit Health 2023; 5:e917-e924. [PMID: 38000875 PMCID: PMC11003328 DOI: 10.1016/s2589-7500(23)00201-7]
Abstract
The advent of generative artificial intelligence and large language models has ushered in transformative applications within medicine. Specifically in ophthalmology, large language models offer unique opportunities to revolutionise digital eye care, address clinical workflow inefficiencies, and enhance patient experiences across diverse global eye care landscapes. Yet alongside these prospects lie tangible and ethical challenges, encompassing data privacy, security, and the intricacies of embedding large language models into clinical routines. This Viewpoint highlights the promising applications of large language models in ophthalmology, while weighing up the practical and ethical barriers towards their real-world implementation. This Viewpoint seeks to stimulate broader discourse on the potential of large language models in ophthalmology and to galvanise both clinicians and researchers into tackling the prevailing challenges and optimising the benefits of large language models while curtailing the associated risks.
Affiliation(s)
- Haichao Chen
  - Tsinghua Medicine, Tsinghua University, Beijing, China
- Ching-Yu Cheng
  - Centre for Innovation and Precision Eye Health, National University of Singapore, Singapore; Department of Ophthalmology, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Cecilia S Lee
  - University of Washington School of Medicine, Department of Ophthalmology, Seattle, WA, USA
- Guochen Ning
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Su Jeong Song
  - Department of Ophthalmology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, South Korea
- Aaron Y Lee
  - University of Washington School of Medicine, Department of Ophthalmology, Seattle, WA, USA
- Ryo Kawasaki
  - Division of Public Health, Department of Social Medicine, Graduate School of Medicine, Osaka University, Osaka, Japan; Artificial Intelligence Center for Medical Research and Application, Osaka University Hospital, Osaka, Japan
- Peter van Wijngaarden
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Melbourne, VIC, Australia; Ophthalmology, University of Melbourne Department of Surgery, East Melbourne, Melbourne, VIC, Australia
- Andrzej Grzybowski
  - Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Mingguang He
  - Hong Kong Polytechnic University, Hong Kong Special Administrative Region, China
- Dawei Li
  - College of Future Technology, Peking University, Beijing, China
- An Ran Ran
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Daniel Shu Wei Ting
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Kelvin Teo
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Sobha Sivaprasad
  - NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital, London, UK
- Varun Chaudhary
  - Department of Surgery, McMaster University, Hamilton, ON, Canada
- Ramin Tadayoni
  - Université Paris Cité, AP-HP, Lariboisière, Saint Louis, and Rothschild Foundation Hospitals, Paris, France
- Xiaofei Wang
  - Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Carol Y Cheung
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Yingfeng Zheng
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Guangzhou Regenerative Medicine and Health Guangdong Laboratory, Guangzhou, China
- Ya Xing Wang
  - Beijing Institute of Ophthalmology, Beijing Ophthalmology and Visual Science Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yih Chung Tham
  - Centre for Innovation and Precision Eye Health, National University of Singapore, Singapore; Department of Ophthalmology, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Tien Yin Wong
  - Tsinghua Medicine, Tsinghua University, Beijing, China; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
127
Bora A, Tiwari R, Bavishi P, Virmani S, Huang R, Traynis I, Corrado GS, Peng L, Webster DR, Varadarajan AV, Pattanapongpaiboon W, Chopra R, Ruamviboonsuk P. Risk Stratification for Diabetic Retinopathy Screening Order Using Deep Learning: A Multicenter Prospective Study. Transl Vis Sci Technol 2023; 12:11. [PMID: 38079169 PMCID: PMC10715315 DOI: 10.1167/tvst.12.12.11]
Abstract
Purpose Real-world evaluation of a deep learning model that prioritizes patients based on risk of progression to moderate or worse (MOD+) diabetic retinopathy (DR). Methods This nonrandomized, single-arm, prospective, interventional study included patients attending DR screening at four centers across Thailand from September 2019 to January 2020, with mild or no DR. Fundus photographs were input into the model, and patients were scheduled for their subsequent screening from September 2020 to January 2021 in order of predicted risk. Evaluation focused on model sensitivity, defined as correctly ranking patients who developed MOD+ within the first 50% of subsequent screens. Results We analyzed 1,757 patients, of whom 52 (3.0%) developed MOD+. Using the model-proposed order, the model's sensitivity was 90.4%. Both the model-proposed order and mild/no DR plus HbA1c had significantly higher sensitivity than a random order (P < 0.001). Excluding one major (rural) site that had practical implementation challenges, the remaining sites included 567 patients, of whom 15 (2.6%) developed MOD+. Here, the model-proposed order achieved 86.7% sensitivity versus 73.3% for the ranking that used DR grade and hemoglobin A1c. Conclusions The model can help prioritize follow-up visits for the largest subgroups of DR patients (those with no or mild DR). Further research is needed to evaluate the impact on clinical management and outcomes. Translational Relevance Deep learning demonstrated potential for risk stratification in DR screening. However, real-world practicalities must be resolved to fully realize the benefit.
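The ranking-based sensitivity used in this abstract (the share of MOD+ progressors captured within the first half of the model-proposed screening order) can be sketched as follows. This is an illustrative reconstruction, not the study's code; the function name and toy data are invented for the example.

```python
# Sketch of "ranking sensitivity": fraction of patients who progressed (MOD+)
# that fall within the first `top_fraction` of a risk-ranked screening order.
def ranking_sensitivity(risk_scores, progressed, top_fraction=0.5):
    """risk_scores: model risk per patient; progressed: bools (developed MOD+)."""
    # Rank patients from highest to lowest predicted risk.
    order = sorted(range(len(risk_scores)),
                   key=lambda i: risk_scores[i], reverse=True)
    cutoff = int(len(order) * top_fraction)
    top_half = set(order[:cutoff])
    total = sum(progressed)
    captured = sum(1 for i in top_half if progressed[i])
    return captured / total if total else float("nan")

# Toy cohort: 4 of the 5 progressors are ranked in the top half -> 0.8.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
progressed = [True, True, True, False, True, False, False, False, False, True]
print(ranking_sensitivity(scores, progressed))  # 0.8
```

A perfect ranking places every progressor in the top half (sensitivity 1.0); a random order is expected to capture about half of them.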
Affiliation(s)
- Ilana Traynis
  - Work done at Google via Advanced Clinical, Deerfield, IL, USA
- Paisan Ruamviboonsuk
  - Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
128
Moynihan A, Hardy N, Dalli J, Aigner F, Arezzo A, Hompes R, Knol J, Tuynman J, Cucek J, Rojc J, Rodríguez-Luna MR, Cahill R. CLASSICA: Validating artificial intelligence in classifying cancer in real time during surgery. Colorectal Dis 2023; 25:2392-2402. [PMID: 37932915 DOI: 10.1111/codi.16769]
Abstract
AIM Treatment pathways for significant rectal polyps differ depending on the underlying pathology, but pre-excision profiling is imperfect. It has been demonstrated that differences in fluorescence perfusion signals following injection of indocyanine green (ICG) can be analysed mathematically and, with the assistance of artificial intelligence (AI), used to classify tumours endoscopically as benign or malignant. This study aims to validate this method of characterization across multiple clinical sites regarding its generalizability, usability and accuracy, while developing clinical-grade software to enable it to become a useful method. METHODS The CLASSICA study is a prospective, unblinded, multicentre European observational study designed to validate the use of AI analysis of ICG fluorescence for intra-operative tissue characterization. Six hundred patients undergoing transanal endoscopic evaluation of significant rectal polyps and tumours will be enrolled in at least five clinical sites across the European Union over a 4-year period. Video recordings will be analysed centrally for dynamic fluorescence patterns while software is developed to enable automatic classification locally. AI-based classification and subsequently guided intervention will be compared with the current standard of care, including biopsies, final specimen pathology and patient outcomes. DISCUSSION CLASSICA will validate the use of AI in the analysis of ICG fluorescence for the purposes of classifying significant rectal polyps and tumours endoscopically. Follow-on studies will compare AI-guided targeted biopsy, or indeed AI characterization alone, with traditional biopsy, and AI-guided local excision versus traditional excision with regard to marginal clearance and recurrence.
Affiliation(s)
- A Moynihan
  - University College Dublin, Dublin, Ireland
- N Hardy
  - University College Dublin, Dublin, Ireland
- J Dalli
  - University College Dublin, Dublin, Ireland
- F Aigner
  - Krankenhaus der Barmherzigen Brüder Graz, Graz, Austria
- A Arezzo
  - Department of Surgical Sciences, University of Torino, Torino, Italy
  - European Association for Endoscopic Surgery, Eindhoven, The Netherlands
- R Hompes
  - Ziekenhuis Oost-Limburg Autonome Verzorgingsinstelling, Genk, Belgium
- J Knol
  - Ziekenhuis Oost-Limburg Autonome Verzorgingsinstelling, Genk, Belgium
- J Tuynman
  - Stichting VUMC, Amsterdam, The Netherlands
- J Cucek
  - Arctur, Nova Gorica, Slovenia
- J Rojc
  - Arctur, Nova Gorica, Slovenia
- R Cahill
  - University College Dublin, Dublin, Ireland
  - Mater Misericordiae University Hospital, Dublin, Ireland
129
Yamashita T, Asaoka R, Terasaki H, Yoshihara N, Kakiuchi N, Sakamoto T. Three-year changes in sex judgment using color fundus parameters in elementary school students. PLoS One 2023; 18:e0295123. [PMID: 38033010 PMCID: PMC10688721 DOI: 10.1371/journal.pone.0295123]
Abstract
PURPOSE In a previous cross-sectional study, we reported that the sexes can be distinguished using known factors obtained from color fundus photography (CFP). However, it is not clear how sex differences in fundus parameters appear across the human lifespan. Therefore, we conducted a cohort study to investigate sex determination based on fundus parameters in elementary school students. METHODS This prospective observational longitudinal study investigated 109 right eyes of elementary school students over 4 years (age, 8.5 to 11.5 years). From each CFP, the tessellation fundus index was calculated as red/(red + green + blue) (R/[R+G+B]) using the mean values of red-green-blue intensity at eight locations around the optic disc and macular region. Optic disc area, ovality ratio, papillomacular angle, and retinal vessel angles and distances were quantified according to the data in our previous report. Using 54 fundus parameters, sex was predicted by L2-regularized binomial logistic regression for each grade. RESULTS The right eyes of 53 boys and 56 girls were analyzed. The discrimination accuracy rate significantly increased with age: 56.3% at 8.5 years, 46.1% at 9.5 years, 65.5% at 10.5 years and 73.1% at 11.5 years. CONCLUSIONS The accuracy of sex discrimination by fundus photography improved during a 3-year cohort study of elementary school students.
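The tessellation fundus index described in this abstract, R/(R+G+B) computed from mean red-green-blue intensities of a sampled fundus region, can be sketched as below. This is a minimal illustration, not the authors' implementation; the pixel-list representation and uniform toy patch are assumptions.

```python
# Sketch of the tessellation fundus index: the red channel's share of the
# summed mean RGB intensities for one sampled fundus location.
def tessellation_fundus_index(region):
    """region: list of rows, each row a list of (R, G, B) pixel tuples."""
    n = sum(len(row) for row in region)
    r = sum(px[0] for row in region for px in row) / n
    g = sum(px[1] for row in region for px in row) / n
    b = sum(px[2] for row in region for px in row) / n
    return r / (r + g + b)

# A uniform patch with RGB = (150, 75, 75): 150 / (150 + 75 + 75) = 0.5.
patch = [[(150, 75, 75)] * 4 for _ in range(4)]
print(tessellation_fundus_index(patch))  # 0.5
```

A redder (more tessellated) region pushes the index above 1/3, the value a neutral gray patch would give.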
Affiliation(s)
- Takehiro Yamashita
  - Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
- Ryo Asaoka
  - Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
  - School of Nursing, Seirei Christopher University, Hamamatsu, Shizuoka, Japan
  - Nanovision Research Division, Research Institute of Electronics, Shizuoka University, Hamamatsu, Shizuoka, Japan
  - The Graduate School for the Creation of New Photonics Industries, Hamamatsu, Shizuoka, Japan
- Hiroto Terasaki
  - Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
- Naoya Yoshihara
  - Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
- Naoko Kakiuchi
  - Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
- Taiji Sakamoto
  - Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
130
Sarayar R, Lestari YD, Setio AAA, Sitompul R. Accuracy of artificial intelligence model for infectious keratitis classification: a systematic review and meta-analysis. Front Public Health 2023; 11:1239231. [PMID: 38074720 PMCID: PMC10704127 DOI: 10.3389/fpubh.2023.1239231]
Abstract
Background Infectious keratitis (IK) is a sight-threatening condition requiring immediate definitive treatment. The need for prompt treatment heavily depends on timely diagnosis. The diagnosis of IK, however, is challenged by the drawbacks of the current "gold standard": poorly differentiated clinical features, the possibility of low microbial culture yield, and the duration of culture are the culprits of delayed IK treatment. Deep learning (DL) is a recent artificial intelligence (AI) advancement that has been demonstrated to be highly promising for making automated diagnoses in IK with high accuracy. However, its exact accuracy is not yet elucidated. This article is the first systematic review and meta-analysis that aims to assess the accuracy of available DL models in correctly classifying IK based on etiology compared with the current gold standards. Methods A systematic search was carried out in PubMed, Google Scholar, ProQuest, ScienceDirect, Cochrane and Scopus. The keywords used were: "Keratitis," "Corneal ulcer," "Corneal diseases," "Corneal lesions," "Artificial intelligence," "Deep learning," and "Machine learning." Studies including slit-lamp photography of the cornea and validity studies of DL performance were considered. The primary outcomes reviewed were the accuracy and classification capability of the AI machine learning/DL algorithms. We analyzed the extracted data with MetaXL 5.2 software. Results A total of eleven articles from 2002 to 2022 were included, with a total dataset of 34,070 images. All studies used convolutional neural networks (CNNs), with ResNet and DenseNet models being the most used across studies. Most AI models outperformed their human counterparts, with a pooled area under the curve (AUC) of 0.851 and accuracy of 96.6% in differentiating IK vs. non-IK, and a pooled AUC of 0.895 and accuracy of 64.38% in classifying bacterial keratitis (BK) vs. fungal keratitis (FK).
Conclusion This study demonstrated that DL algorithms have high potential in diagnosing and classifying IK with accuracy that is comparable, if not superior, to that of trained corneal experts. However, various factors, such as the unique architecture of each DL model, the problem of overfitting, the image quality of the datasets, and the complex nature of IK itself, still hamper the universal applicability of DL in daily clinical practice.
Affiliation(s)
- Randy Sarayar
  - Residency Program in Ophthalmology, Faculty of Medicine, Universitas Indonesia, Jakarta, Indonesia
- Yeni Dwi Lestari
  - Department of Ophthalmology, Faculty of Medicine Universitas Indonesia – Cipto Mangunkusumo General Hospital, Jakarta, Indonesia
- Arnaud A. A. Setio
  - Digital Technology and Innovation, Siemens Healthineers, Erlangen, Germany
- Ratna Sitompul
  - Department of Ophthalmology, Faculty of Medicine Universitas Indonesia – Cipto Mangunkusumo General Hospital, Jakarta, Indonesia
131
Lee PK, Ra H, Baek J. Automated segmentation of ultra-widefield fluorescein angiography of diabetic retinopathy using deep learning. Br J Ophthalmol 2023; 107:1859-1863. [PMID: 36241374 DOI: 10.1136/bjo-2022-321063]
Abstract
BACKGROUND/AIMS Retinal capillary non-perfusion (NP) and neovascularisation (NV) are two of the most important angiographic changes in diabetic retinopathy (DR). This study investigated the feasibility of using deep learning (DL) models to automatically segment NP and NV on ultra-widefield fluorescein angiography (UWFA) images from patients with DR. METHODS Retrospective cross-sectional chart review study. In total, 951 UWFA images were collected from patients with severe non-proliferative DR (NPDR) or proliferative DR (PDR). Each image was segmented and labelled for NP, NV, disc, background and outside areas. Using the labelled images, DL models based on convolutional neural networks (CNNs) were trained and validated (80%) for automated segmentation and tested (20%) on test sets. The accuracy of each model and each label was assessed. RESULTS The best accuracy from the CNN models for each label was 0.8208, 0.8338, 0.9801, 0.9253 and 0.9766 for NP, NV, disc, background and outside areas, respectively. The best Intersection over Union for each label was 0.6806, 0.5675, 0.7107, 0.8551 and 0.924, and the mean boundary F1 score (BF score) was 0.6702, 0.8742, 0.9092, 0.8103 and 0.9006, respectively. CONCLUSIONS DL models can detect NV and NP, as well as the disc and outer margins, on UWFA with good performance. This automated segmentation of important UWFA features will aid physicians in DR clinics and help overcome grader subjectivity.
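The per-label Intersection over Union reported in this abstract can be sketched for a binary segmentation label as follows. This is an illustrative reconstruction, not the study's code; the set-of-pixels representation is an assumption made for brevity.

```python
# Sketch of Intersection over Union (Jaccard index) for one binary
# segmentation label (e.g. non-perfusion pixels on a UWFA image).
def iou(pred, truth):
    """pred, truth: sets of (row, col) pixel coordinates labelled positive."""
    inter = len(pred & truth)   # pixels both masks mark positive
    union = len(pred | truth)   # pixels either mask marks positive
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 0), (1, 0), (1, 1)}
print(iou(pred, truth))  # 2 shared pixels over 4 in the union -> 0.5
```

IoU penalises both missed pixels and spurious ones, which is why it is typically lower than plain per-pixel accuracy for the same prediction.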
Affiliation(s)
- Phil-Kyu Lee
  - Department of Ophthalmology, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Bucheon, Republic of Korea
- Ho Ra
  - Department of Ophthalmology, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Bucheon, Republic of Korea
- Jiwon Baek
  - Department of Ophthalmology, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Bucheon, Republic of Korea
  - Department of Ophthalmology, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| |
Collapse
132
Chikumba S, Hu Y, Luo J. Deep learning-based fundus image analysis for cardiovascular disease: a review. Ther Adv Chronic Dis 2023; 14:20406223231209895. [PMID: 38028950] [PMCID: PMC10657535] [DOI: 10.1177/20406223231209895]
Abstract
It is well established that the retina provides insights beyond the eye. Through observation of retinal microvascular changes, studies have shown that the retina contains information related to cardiovascular disease. Despite tremendous efforts to reduce their burden, cardiovascular diseases remain a global challenge and a significant public health concern. Conventionally, predicting the risk of cardiovascular disease involves the assessment of preclinical features, risk factors, or biomarkers. However, these approaches carry cost implications, and the tests used to acquire predictive parameters are invasive. Artificial intelligence systems, particularly deep learning (DL) methods applied to fundus images, have been generating significant interest as an adjunct assessment tool with the potential to enhance efforts to prevent cardiovascular disease mortality. Risk factors such as age, gender, smoking status, hypertension, and diabetes can be predicted from fundus images using DL applications with performance comparable to that of humans. A clinical shift toward DL-based analysis of fundus images as an equally good test compared with more expensive and invasive procedures may require prospective clinical trials to address the possible ethical challenges and medicolegal implications. This review presents current evidence regarding the use of DL applications on fundus images to predict cardiovascular disease.
Affiliation(s)
- Symon Chikumba
- Department of Ophthalmology, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- Department of Optometry, Faculty of Health Sciences, Mzuzu University, Luwinga, Mzuzu, Malawi
- Yuqian Hu
- Department of Ophthalmology, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- Jing Luo
- Department of Ophthalmology, The Second Xiangya Hospital of Central South University, 139 Middle Renmin RD, Changsha, Hunan, China
133
Zsidai B, Hilkert AS, Kaarre J, Narup E, Senorski EH, Grassi A, Ley C, Longo UG, Herbst E, Hirschmann MT, Kopf S, Seil R, Tischer T, Samuelsson K, Feldt R. A practical guide to the implementation of AI in orthopaedic research - part 1: opportunities in clinical application and overcoming existing challenges. J Exp Orthop 2023; 10:117. [PMID: 37968370] [PMCID: PMC10651597] [DOI: 10.1186/s40634-023-00683-z]
Abstract
Artificial intelligence (AI) has the potential to transform medical research by improving disease diagnosis, clinical decision-making, and outcome prediction. Despite the rapid adoption of AI and machine learning (ML) in other domains and industries, deployment in medical research and clinical practice poses several challenges due to the inherent characteristics and barriers of the healthcare sector. Therefore, researchers aiming to perform AI-intensive studies require a fundamental understanding of the key concepts, biases, and clinical safety concerns associated with the use of AI. Through the analysis of large, multimodal datasets, AI has the potential to revolutionize orthopaedic research, yielding new insights into the optimal diagnosis and management of patients affected by musculoskeletal injury and disease. This article is the first in a series introducing fundamental concepts and best practices to guide healthcare professionals and researchers interested in performing AI-intensive orthopaedic research studies. The vast potential of AI in orthopaedics is illustrated through examples involving disease- or injury-specific outcome prediction, medical image analysis, clinical decision support systems, and digital twin technology. Furthermore, it is essential to address the role of human involvement in training unbiased, generalizable AI models, their explainability in high-risk clinical settings, and the implementation of expert oversight and clinical safety measures in case of failure. In conclusion, the opportunities and challenges of AI in medicine are presented to ensure the safe and ethical deployment of AI models for orthopaedic research and clinical application. Level of evidence: IV.
Affiliation(s)
- Bálint Zsidai
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Ann-Sophie Hilkert
- Department of Computer Science and Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Medfield Diagnostics AB, Gothenburg, Sweden
- Janina Kaarre
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Orthopaedic Surgery, UPMC Freddie Fu Sports Medicine Center, University of Pittsburgh, Pittsburgh, USA
- Eric Narup
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Eric Hamrin Senorski
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Department of Health and Rehabilitation, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Sportrehab Sports Medicine Clinic, Gothenburg, Sweden
- Alberto Grassi
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- IIa Clinica Ortopedica E Traumatologica, IRCCS Istituto Ortopedico Rizzoli, Bologna, Italy
- Christophe Ley
- Department of Mathematics, University of Luxembourg, Esch-Sur-Alzette, Luxembourg
- Umile Giuseppe Longo
- Department of Orthopaedic and Trauma Surgery, Campus Bio-Medico University, Rome, Italy
- Elmar Herbst
- Department of Trauma, Hand and Reconstructive Surgery, University Hospital Münster, Münster, Germany
- Michael T Hirschmann
- Department of Orthopedic Surgery and Traumatology, Head Knee Surgery and DKF Head of Research, Kantonsspital Baselland, 4101, Bruderholz, Switzerland
- Sebastian Kopf
- Center of Orthopaedics and Traumatology, University Hospital Brandenburg a.d.H., Brandenburg Medical School Theodor Fontane, 14770, Brandenburg a.d.H., Germany
- Faculty of Health Sciences Brandenburg, Brandenburg Medical School Theodor Fontane, 14770, Brandenburg a.d.H., Germany
- Romain Seil
- Department of Orthopaedic Surgery, Centre Hospitalier Luxembourg and Luxembourg Institute of Health, Luxembourg, Luxembourg
- Thomas Tischer
- Clinic for Orthopaedics and Trauma Surgery, Malteser Waldkrankenhaus St. Marien, Erlangen, Germany
- Kristian Samuelsson
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Orthopaedics, Sahlgrenska University Hospital, Mölndal, Sweden
- Robert Feldt
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
134
Vandevenne MM, Favuzza E, Veta M, Lucenteforte E, Berendschot TT, Mencucci R, Nuijts RM, Virgili G, Dickman MM. Artificial intelligence for detecting keratoconus. Cochrane Database Syst Rev 2023; 11:CD014911. [PMID: 37965960] [PMCID: PMC10646985] [DOI: 10.1002/14651858.cd014911.pub2]
Abstract
BACKGROUND Keratoconus remains difficult to diagnose, especially in the early stages. It is a progressive disorder of the cornea that starts at a young age. Diagnosis is based on clinical examination and corneal imaging; though in the early stages, when there are no clinical signs, diagnosis depends on the interpretation of corneal imaging (e.g. topography and tomography) by trained cornea specialists. Using artificial intelligence (AI) to analyse the corneal images and detect cases of keratoconus could help prevent visual acuity loss and even corneal transplantation. However, a missed diagnosis in people seeking refractive surgery could lead to weakening of the cornea and keratoconus-like ectasia. There is a need for a reliable overview of the accuracy of AI for detecting keratoconus and the applicability of this automated method to the clinical setting. OBJECTIVES To assess the diagnostic accuracy of artificial intelligence (AI) algorithms for detecting keratoconus in people presenting with refractive errors, especially those whose vision can no longer be fully corrected with glasses, those seeking corneal refractive surgery, and those suspected of having keratoconus. AI could help ophthalmologists, optometrists, and other eye care professionals to make decisions on referral to cornea specialists. Secondary objectives: to assess the following potential causes of heterogeneity in diagnostic performance across studies. • Different AI algorithms (e.g. neural networks, decision trees, support vector machines) • Index test methodology (preprocessing techniques, core AI method, and postprocessing techniques) • Sources of input to train algorithms (topography and tomography images from Placido disc system, Scheimpflug system, slit-scanning system, or optical coherence tomography (OCT); number of training and testing cases/images; label/endpoint variable used for training) • Study setting • Study design • Ethnicity, or geographic area as its proxy • Different index test positivity criteria provided by the topography or tomography device • Reference standard, topography or tomography, one or two cornea specialists • Definition of keratoconus • Mean age of participants • Recruitment of participants • Severity of keratoconus (clinically manifest or subclinical) SEARCH METHODS We searched CENTRAL (which contains the Cochrane Eyes and Vision Trials Register), Ovid MEDLINE, Ovid Embase, OpenGrey, the ISRCTN registry, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP). There were no date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 29 November 2022. SELECTION CRITERIA We included cross-sectional and diagnostic case-control studies that investigated AI for the diagnosis of keratoconus using topography, tomography, or both. We included studies that diagnosed manifest keratoconus, subclinical keratoconus, or both. The reference standard was the interpretation of topography or tomography images by at least two cornea specialists. DATA COLLECTION AND ANALYSIS Two review authors independently extracted the study data and assessed the quality of studies using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. When an article contained multiple AI algorithms, we selected the algorithm with the highest Youden's index. We assessed the certainty of evidence using the GRADE approach. MAIN RESULTS We included 63 studies, published between 1994 and 2022, that developed and investigated the accuracy of AI for the diagnosis of keratoconus. There were three different units of analysis in the studies: eyes, participants, and images. Forty-four studies analysed 23,771 eyes, four studies analysed 3843 participants, and 15 studies analysed 38,832 images. Fifty-four articles evaluated the detection of manifest keratoconus, defined as a cornea that showed any clinical sign of keratoconus. The accuracy of AI seems almost perfect, with a summary sensitivity of 98.6% (95% confidence interval (CI) 97.6% to 99.1%) and a summary specificity of 98.3% (95% CI 97.4% to 98.9%). However, accuracy varied across studies and the certainty of the evidence was low. Twenty-eight articles evaluated the detection of subclinical keratoconus, although the definition of subclinical varied. We grouped subclinical keratoconus, forme fruste, and very asymmetrical eyes together. The tests showed good accuracy, with a summary sensitivity of 90.0% (95% CI 84.5% to 93.8%) and a summary specificity of 95.5% (95% CI 91.9% to 97.5%). However, the certainty of the evidence was very low for sensitivity and low for specificity. In both groups, we graded most studies at high risk of bias, with high applicability concerns, in the domain of patient selection, since most were case-control studies. Moreover, we graded the certainty of evidence as low to very low due to selection bias, inconsistency, and imprecision. We could not explain the heterogeneity between the studies. The sensitivity analyses based on study design, AI algorithm, imaging technique (topography versus tomography), and data source (parameters versus images) showed no differences in the results. AUTHORS' CONCLUSIONS AI appears to be a promising triage tool in ophthalmologic practice for diagnosing keratoconus. Test accuracy was very high for manifest keratoconus and slightly lower for subclinical keratoconus, indicating a higher chance of missing a diagnosis in people without clinical signs. This could lead to progression of keratoconus or an erroneous indication for refractive surgery, which would worsen the disease. We are unable to draw clear and reliable conclusions due to the high risk of bias, the unexplained heterogeneity of the results, and high applicability concerns, all of which reduced our confidence in the evidence. Greater standardization in future research would increase the quality of studies and improve comparability between studies.
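The review's selection rule, keeping the algorithm with the highest Youden's index (J = sensitivity + specificity − 1) when an article reported several, can be sketched as follows. The algorithm names and performance figures below are invented for illustration only.

```python
# Youden's index J = sensitivity + specificity - 1; the review kept the
# algorithm with the highest J per article. All values are illustrative.

def youden(sensitivity, specificity):
    return sensitivity + specificity - 1.0

algorithms = {                      # (sensitivity, specificity), hypothetical
    "neural_network": (0.986, 0.983),
    "decision_tree":  (0.930, 0.940),
    "svm":            (0.960, 0.910),
}

best = max(algorithms, key=lambda name: youden(*algorithms[name]))
print(best)  # "neural_network" (J = 0.969 vs 0.870 for the other two)
```

J weights sensitivity and specificity equally, so this rule favours balanced classifiers over ones that trade one error type for the other.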
Affiliation(s)
- Magali Ms Vandevenne
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
- Eleonora Favuzza
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Mitko Veta
- Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Ersilia Lucenteforte
- Department of Statistics, Computer Science and Applications «G. Parenti», University of Florence, Florence, Italy
- Tos Tjm Berendschot
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
- Rita Mencucci
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Rudy Mma Nuijts
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
- Gianni Virgili
- Department of Neurosciences, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Queen's University Belfast, Belfast, UK
- Mor M Dickman
- University Eye Clinic Maastricht, Maastricht University Medical Center (MUMC+), Maastricht, Netherlands
135
Jayaram H, Kolko M, Friedman DS, Gazzard G. Glaucoma: now and beyond. Lancet 2023; 402:1788-1801. [PMID: 37742700] [DOI: 10.1016/s0140-6736(23)01289-8]
Abstract
The glaucomas are a group of conditions leading to irreversible sight loss and characterised by progressive loss of retinal ganglion cells. Although not always elevated, intraocular pressure is the only modifiable risk factor demonstrated by large clinical trials. Glaucoma remains the leading cause of irreversible blindness, but timely treatment to lower intraocular pressure is effective at slowing the rate of vision loss. Methods for lowering intraocular pressure include laser treatments, topical medications, and surgery. Although modern surgical innovations aim to be less invasive, many have been introduced with little supporting evidence from randomised controlled trials. Many cases remain undiagnosed until the advanced stages of disease due to the limitations of screening and poor access to opportunistic case finding. Future research aims to generate evidence for intraocular pressure-independent neuroprotective treatments, personalised treatment through genetic risk profiling, and exploration of potential advanced cellular and gene therapies.
Affiliation(s)
- Hari Jayaram
- Glaucoma Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK; UCL Institute of Ophthalmology, London, UK; National Institute for Health and Care Research Moorfields Biomedical Research Centre, London, UK
- Miriam Kolko
- Copenhagen University Hospital, Rigshospitalet, Glostrup, Denmark; University of Copenhagen, Department of Drug Design and Pharmacology, Copenhagen, Denmark
- David S Friedman
- Massachusetts Eye and Ear Hospital, Glaucoma Center of Excellence, Boston, MA, USA; Harvard University, Boston, MA, USA
- Gus Gazzard
- Glaucoma Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK; UCL Institute of Ophthalmology, London, UK; National Institute for Health and Care Research Moorfields Biomedical Research Centre, London, UK
136
Wang Y, Liu L, Wang C. Trends in using deep learning algorithms in biomedical prediction systems. Front Neurosci 2023; 17:1256351. [PMID: 38027475] [PMCID: PMC10665494] [DOI: 10.3389/fnins.2023.1256351]
Abstract
Deep learning (DL) has attained remarkable achievements across diverse domains, and state-of-the-art DL methodologies have become central to medical and healthcare prediction systems. The integration of DL with health and medical prediction systems enables real-time analysis of vast and intricate datasets, yielding insights that significantly enhance healthcare outcomes and operational efficiency in the industry. This comprehensive literature review systematically investigates the latest DL solutions for the challenges encountered in medical healthcare, with a specific emphasis on DL applications in the medical domain. By categorizing cutting-edge DL approaches into distinct categories, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), long short-term memory (LSTM) models, support vector machines (SVMs), and hybrid models, this study examines their underlying principles, merits, limitations, methodologies, simulation environments, and datasets. Notably, the majority of the reviewed articles were published in 2022, underscoring the contemporaneous nature of the research. Moreover, this review highlights the forefront advancements in DL techniques and their practical applications within medical prediction systems, while also addressing the challenges that hinder the widespread implementation of DL in image segmentation within the medical healthcare domains. These insights motivate future studies aimed at the progressive advancement of DL-based methods in medical and health prediction systems. The evaluation metrics employed across the reviewed articles span a broad range, including accuracy, precision, specificity, F-score, adoptability, adaptability, and scalability.
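Several of the evaluation metrics this review tracks (accuracy, precision, specificity, F-score) derive directly from a binary confusion matrix. A minimal sketch with invented counts, not drawn from any reviewed study:

```python
# Standard binary-classification metrics from confusion-matrix counts:
# tp = true positives, fp = false positives, fn = false negatives,
# tn = true negatives. The counts below are purely illustrative.

def metrics(tp, fp, fn, tn):
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)                  # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    f_score     = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "specificity": specificity, "f_score": f_score}

m = metrics(tp=80, fp=10, fn=20, tn=90)
print(m)  # accuracy 0.85, precision 8/9, specificity 0.9, F-score 16/19
```

Reporting several of these together matters because accuracy alone can look strong on imbalanced medical datasets while precision or specificity is poor.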
Affiliation(s)
- Yanbu Wang
- School of Strength and Conditioning, Beijing Sport University, Beijing, China
- Linqing Liu
- Department of Physical Education, Peking University, Beijing, China
- Chao Wang
- Institute of Competitive Sports, Beijing Sport University, Beijing, China
137
Mohammadzadeh V, Vepa A, Li C, Wu S, Chew L, Mahmoudinezhad G, Maltz E, Sahin S, Mylavarapu A, Edalati K, Martinyan J, Yalzadeh D, Scalzo F, Caprioli J, Nouri-Mahdavi K. Prediction of Central Visual Field Measures From Macular OCT Volume Scans With Deep Learning. Transl Vis Sci Technol 2023; 12:5. [PMID: 37917086] [PMCID: PMC10627306] [DOI: 10.1167/tvst.12.11.5]
Abstract
Purpose To predict central 10° global and local visual field (VF) measurements from macular optical coherence tomography (OCT) volume scans with deep learning (DL). Methods This study included 1121 OCT volume scans and 10-2 VFs from 289 eyes (257 patients). Macular scans were used to estimate 10-2 VF mean deviation (MD), threshold sensitivities (TS), and total deviation (TD) values at 68 locations. A three-dimensional (3D) convolutional neural network based on the 3D DenseNet121 architecture was used for prediction. We compared DL predictions to those from baseline linear models. We carried out 10-fold stratified cross-validation to optimize generalizability. The performance of the DL and baseline models was compared based on correlations between ground truth and predicted VF measures and on mean absolute error (MAE; ground truth minus predicted values). Results Average (SD) MD was -9.3 (7.7) dB. Average (SD) correlations between predicted and ground truth MD and MD MAE were 0.74 (0.09) and 3.5 (0.4) dB, respectively. Estimation accuracy deteriorated with worsening MD. Average (SD) Pearson correlations between predicted and ground truth TS and MAEs for the DL and baseline models were 0.71 (0.05) and 0.52 (0.05) (P < 0.001) and 6.5 (0.6) and 7.5 (0.5) dB (P < 0.001), respectively. For TD, correlation (SD) and MAE (SD) for the DL and baseline models were 0.69 (0.02) and 0.48 (0.05) (P < 0.001) and 6.1 (0.5) and 7.8 (0.5) dB (P < 0.001), respectively. Conclusions Macular OCT volume scans can be used to predict global central VF parameters with clinically relevant accuracy. Translational Relevance Macular OCT imaging may be used to confirm and supplement central VF findings using deep learning.
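The two comparison criteria used above, mean absolute error and Pearson correlation between predicted and ground-truth VF measures, can be sketched with the Python standard library. The sensitivity values below are made up for illustration and are not the study's data.

```python
# MAE and Pearson correlation between ground-truth and predicted values,
# as used to compare the DL model against the baseline linear models.
from statistics import mean, stdev

def mae(truth, pred):
    """Mean absolute error between paired measurements."""
    return mean(abs(t - p) for t, p in zip(truth, pred))

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

truth = [30.0, 28.0, 25.0, 20.0, 15.0]   # hypothetical sensitivities, dB
pred  = [29.0, 27.5, 26.0, 18.0, 16.0]   # hypothetical DL predictions, dB

print(mae(truth, pred))                  # 1.1 dB
print(pearson(truth, pred))              # high positive correlation
```

The two criteria are complementary: correlation captures whether predictions track the spatial pattern of loss, while MAE captures absolute calibration in dB.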
Affiliation(s)
- Vahid Mohammadzadeh
- Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Arvind Vepa
- Department of Computer Science, University of California Los Angeles, Los Angeles, CA, USA
- Chuanlong Li
- Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Sean Wu
- Department of Computer Science, Pepperdine University, Malibu, CA, USA
- Leila Chew
- Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Golnoush Mahmoudinezhad
- Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Evan Maltz
- Department of Chemistry and Biochemistry, University of California Los Angeles, Los Angeles, CA, USA
- Serhat Sahin
- Department of Computer Science, University of California Los Angeles, Los Angeles, CA, USA
- Apoorva Mylavarapu
- Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Kiumars Edalati
- Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Jack Martinyan
- Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Dariush Yalzadeh
- Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Fabien Scalzo
- Department of Computer Science, University of California Los Angeles, Los Angeles, CA, USA
- Joseph Caprioli
- Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Kouros Nouri-Mahdavi
- Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
138
Loewenstein A, Berger A, Daly A, Creuzot-Garcher C, Gale R, Ricci F, Zarranz-Ventura J, Guymer R. Save our Sight (SOS): a collective call-to-action for enhanced retinal care across health systems in high income countries. Eye (Lond) 2023; 37:3351-3359. [PMID: 37280350] [PMCID: PMC10630379] [DOI: 10.1038/s41433-023-02540-w]
Abstract
With a growing aging population, the prevalence of age-related eye disease and associated eye care is expected to increase. The anticipated growth in demand, coupled with recent medical advances that have transformed eye care for people living with retinal diseases, particularly neovascular age-related macular degeneration (nAMD) and diabetic eye disease, has presented an opportunity for health systems to proactively manage the expected burden of these diseases. To do so, we must take collective action to address existing and anticipated capacity limitations by designing and implementing sustainable strategies that enable health systems to provide an optimal standard of care. Sufficient capacity will enable us to streamline and personalize the patient experience, reduce treatment burden, enable more equitable access to care and ensure optimal health outcomes. Through a multi-modal approach that gathered unbiased perspectives from clinical experts and patient advocates from eight high-income countries, substantiated perspectives with evidence from the published literature and validated findings with the broader eye care community, we have exposed capacity challenges that are motivating the community to take action and advocate for change. Herein, we propose a collective call-to-action for the future management of retinal diseases and potential strategies to achieve better health outcomes for individuals at risk of, or living with, retinal disease.
Affiliation(s)
- Anat Loewenstein
- Ophthalmology Division, Tel Aviv Medical Center, Tel Aviv University, Tel Aviv, Israel
- Alan Berger
- St. Michael's Hospital, University of Toronto, Toronto, ON, Canada
- Toronto Retina Institute, Toronto, ON, Canada
- Richard Gale
- Hull York Medical School, University of York, York, UK
- York and Scarborough Teaching Hospitals NHS Foundation Trust, York, UK
- Federico Ricci
- Dept. Experimental Medicine - University Tor Vergata of Rome, Rome, Italy
- Javier Zarranz-Ventura
- Hospital Clinic of Barcelona, University of Barcelona, Barcelona, Spain
- August Pi and Sunyer Biomedical Research Institute, University of Barcelona, Barcelona, Spain
- Robyn Guymer
- Centre for Eye Research, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, VIC, Australia
139
Cui T, Lin D, Yu S, Zhao X, Lin Z, Zhao L, Xu F, Yun D, Pang J, Li R, Xie L, Zhu P, Huang Y, Huang H, Hu C, Huang W, Liang X, Lin H. Deep Learning Performance of Ultra-Widefield Fundus Imaging for Screening Retinal Lesions in Rural Locales. JAMA Ophthalmol 2023; 141:1045-1051. [PMID: 37856107] [PMCID: PMC10587822] [DOI: 10.1001/jamaophthalmol.2023.4650]
Abstract
Importance Retinal diseases are the leading cause of irreversible blindness worldwide, and timely detection contributes to prevention of permanent vision loss, especially for patients in rural areas with limited medical resources. Deep learning systems (DLSs) based on fundus images with a 45° field of view have been extensively applied in population screening, while the feasibility of using ultra-widefield (UWF) fundus image-based DLSs to detect retinal lesions in patients in rural areas warrants exploration. Objective To explore the performance of a DLS for multiple retinal lesion screening using UWF fundus images from patients in rural areas. Design, Setting, and Participants In this diagnostic study, a previously developed DLS based on UWF fundus images was used to screen for 5 retinal lesions (retinal exudates or drusen, glaucomatous optic neuropathy, retinal hemorrhage, lattice degeneration or retinal breaks, and retinal detachment) in 24 villages of Yangxi County, China, between November 17, 2020, and March 30, 2021. Interventions The captured images were analyzed by the DLS and ophthalmologists. Main Outcomes and Measures The performance of the DLS in rural screening was compared with that of the internal validation in the previous model development stage. The image quality, lesion proportion, and complexity of lesion composition were compared between the model development stage and the rural screening stage. Results A total of 6222 eyes in 3149 participants (1685 women [53.5%]; mean [SD] age, 70.9 [9.1] years) were screened. The DLS achieved a mean (SD) area under the receiver operating characteristic curve (AUC) of 0.918 (0.021) (95% CI, 0.892-0.944) for detecting 5 retinal lesions in the entire data set when applied to patients in rural areas, which was lower than that reported at the model development stage (AUC, 0.998 [0.002] [95% CI, 0.995-1.000]; P < .001). Compared with the fundus images in the model development stage, the fundus images in this rural screening study had an increased frequency of poor quality (13.8% [860 of 6222] vs 0%), increased variation in lesion proportions (0.1% [6 of 6222]-36.5% [2271 of 6222] vs 14.0% [2793 of 19 891]-21.3% [3433 of 16 138]), and an increased complexity of lesion composition. Conclusions and Relevance This diagnostic study suggests that the DLS exhibited excellent performance using UWF fundus images as a screening tool for 5 retinal lesions in patients in a rural setting. However, poor image quality, diverse lesion proportions, and a complex set of lesions may have reduced the performance of the DLS; these factors in targeted screening scenarios should be taken into consideration in the model development stage to ensure good performance.
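The AUC reported above admits a simple probabilistic reading: it is the probability that a randomly chosen eye with a lesion receives a higher model score than a randomly chosen lesion-free eye. A stdlib-only sketch of that rank-based computation, with invented scores:

```python
# AUC via the Mann-Whitney interpretation: count how often a positive
# case outscores a negative one, with ties counted as half. The scores
# below are purely illustrative, not the study's outputs.

def auc(pos_scores, neg_scores):
    """Probabilistic AUC: P(score_pos > score_neg), ties count half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.7, 0.4]   # hypothetical scores for eyes with a lesion
neg = [0.5, 0.3, 0.2]        # hypothetical scores for lesion-free eyes

print(auc(pos, neg))  # 11 of 12 pairs ranked correctly ~ 0.917
```

A drop in AUC like the one observed in rural screening means the model ranks diseased above healthy eyes less reliably, regardless of any particular decision threshold.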
Collapse
Affiliation(s)
- Tingxin Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xinyu Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, China
- Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Jianyu Pang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Liqiong Xie
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Pengzhi Zhu
- Greater Bay Area Center for Medical Device Evaluation and Inspection of National Medical Products Administration, Shenzhen, China
- Yuzhe Huang
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Hongxin Huang
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Changming Hu
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Wenyong Huang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xiaoling Liang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
140
Daich Varela M, Sen S, De Guimaraes TAC, Kabiri N, Pontikos N, Balaskas K, Michaelides M. Artificial intelligence in retinal disease: clinical application, challenges, and future directions. Graefes Arch Clin Exp Ophthalmol 2023; 261:3283-3297. [PMID: 37160501 PMCID: PMC10169139 DOI: 10.1007/s00417-023-06052-x] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 03/20/2023] [Accepted: 03/24/2023] [Indexed: 05/11/2023] Open
Abstract
Retinal diseases are a leading cause of blindness in developed countries, accounting for the largest share of visually impaired children, working-age adults (inherited retinal disease), and elderly individuals (age-related macular degeneration). These conditions need specialised clinicians to interpret multimodal retinal imaging, with diagnosis and intervention potentially delayed. With an increasing and ageing population, this is becoming a global health priority. One solution is the development of artificial intelligence (AI) software to facilitate rapid data processing. Herein, we review research offering decision support for the diagnosis, classification, monitoring, and treatment of retinal disease using AI. We have prioritised diabetic retinopathy, age-related macular degeneration, inherited retinal disease, and retinopathy of prematurity. There is cautious optimism that these algorithms will be integrated into routine clinical practice to facilitate access to vision-saving treatments, improve efficiency of healthcare systems, and assist clinicians in processing the ever-increasing volume of multimodal data, thereby also liberating time for doctor-patient interaction and co-development of personalised management plans.
Affiliation(s)
- Malena Daich Varela
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
- Nikolas Pontikos
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
- Michel Michaelides
- UCL Institute of Ophthalmology, London, UK.
- Moorfields Eye Hospital, London, UK.
141
Kololgi SP, Lahari CS. Harnessing the Power of Artificial Intelligence in Dermatology: A Comprehensive Commentary. Indian J Dermatol 2023; 68:678-681. [PMID: 38371574 PMCID: PMC10868991 DOI: 10.4103/ijd.ijd_581_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/20/2024] Open
Abstract
This special article provides a comprehensive commentary on the significant role of artificial intelligence (AI) in the field of dermatology. It explores the potential of AI in various aspects of dermatologic practice, including diagnosis, treatment planning, research and patient management. The article discusses the current state of AI in dermatology, its challenges and the ethical considerations surrounding its implementation. It highlights the transformative impact of AI on dermatologic care and offers insights into the future directions of AI in the field.
Affiliation(s)
- Shreyas P. Kololgi
- From the Department of Dermatology, Venerology and Leprosy, SS Institute of Medical Sciences and Research Centre, Bengaluru, Karnataka, India
- CS Lahari
- From the Department of Dermatology, Venerology and Leprosy, SS Institute of Medical Sciences and Research Centre, Bengaluru, Karnataka, India
142
Rashidisabet H, Sethi A, Jindarak P, Edmonds J, Chan RVP, Leiderman YI, Vajaranant TS, Yi D. Validating the Generalizability of Ophthalmic Artificial Intelligence Models on Real-World Clinical Data. Transl Vis Sci Technol 2023; 12:8. [PMID: 37922149 PMCID: PMC10629532 DOI: 10.1167/tvst.12.11.8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Accepted: 08/21/2023] [Indexed: 11/05/2023] Open
Abstract
Purpose This study aims to investigate generalizability of deep learning (DL) models trained on commonly used public fundus images to an instance of real-world data (RWD) for glaucoma diagnosis. Methods We used Illinois Eye and Ear Infirmary fundus data set as an instance of RWD in addition to six publicly available fundus data sets. We compared the performance of DL-trained models on public data and RWD for glaucoma classification and optic disc (OD) segmentation tasks. For each task, we created models trained on each data set, respectively, and each model was tested on both data sets. We further examined each model's decision-making process and learned embeddings for the glaucoma classification task. Results Using public data for the test set, public-trained models outperformed RWD-trained models in OD segmentation and glaucoma classification with a mean intersection over union of 96.3% and mean area under the receiver operating characteristic curve of 95.0%, respectively. Using the RWD test set, the performance of public models decreased by 8.0% and 18.4% to 85.6% and 76.6% for OD segmentation and glaucoma classification tasks, respectively. RWD models outperformed public models on RWD test sets by 2.0% and 9.5%, respectively, in OD segmentation and glaucoma classification tasks. Conclusions DL models trained on commonly used public data have limited ability to generalize to RWD for classifying glaucoma. They perform similarly to RWD models for OD segmentation. Translational Relevance RWD is a potential solution for improving generalizability of DL models and enabling clinical translations in the care of prevalent blinding ophthalmic conditions, such as glaucoma.
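The generalizability check above amounts to computing the same metric on an in-distribution and an out-of-distribution test set. A toy sketch of the optic-disc segmentation side of that comparison, with tiny synthetic binary masks standing in for fundus segmentations:

```python
# Intersection over union (IoU) for binary masks, compared across a
# "public-like" and a shifted "real-world-like" target mask.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0

pred   = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True    # predicted disc
public = np.zeros((8, 8), dtype=bool); public[2:6, 2:6] = True  # matches training distribution
rwd    = np.zeros((8, 8), dtype=bool); rwd[3:7, 3:7] = True     # shifted real-world mask

print(iou(pred, public))  # 1.0 (in-distribution)
print(iou(pred, rwd))     # lower, reflecting distribution shift
```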
Affiliation(s)
- Homa Rashidisabet
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, USA
- Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
- Abhishek Sethi
- Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
- Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
- Ponpawee Jindarak
- Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
- James Edmonds
- Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
- Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
- R V Paul Chan
- Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
- Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
- Yannek I Leiderman
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, USA
- Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
- Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
- Thasarat Sutabutr Vajaranant
- Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
- Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
- Darvin Yi
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, USA
- Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
- Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
143
Karlin J, Gai L, LaPierre N, Danesh K, Farajzadeh J, Palileo B, Taraszka K, Zheng J, Wang W, Eskin E, Rootman D. Ensemble neural network model for detecting thyroid eye disease using external photographs. Br J Ophthalmol 2023; 107:1722-1729. [PMID: 36126104 DOI: 10.1136/bjo-2022-321833] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 08/22/2022] [Indexed: 11/03/2022]
Abstract
PURPOSE To describe an artificial intelligence platform that detects thyroid eye disease (TED). DESIGN Development of a deep learning model. METHODS 1944 photographs from a clinical database were used to train a deep learning model. 344 additional images ('test set') were used to calculate performance metrics. Receiver operating characteristic, precision-recall curves and heatmaps were generated. From the test set, 50 images were randomly selected ('survey set') and used to compare model performance with ophthalmologist performance. 222 images obtained from a separate clinical database were used to assess model recall and to quantitate model performance with respect to disease stage and grade. RESULTS The model achieved test set accuracy of 89.2%, specificity 86.9%, recall 93.4%, precision 79.7% and an F1 score of 86.0%. Heatmaps demonstrated that the model identified pixels corresponding to clinical features of TED. On the survey set, the ensemble model achieved accuracy, specificity, recall, precision and F1 score of 86%, 84%, 89%, 77% and 82%, respectively. 27 ophthalmologists achieved mean performance of 75%, 82%, 63%, 72% and 66%, respectively. On the second test set, the model achieved recall of 91.9%, with higher recall for moderate to severe (98.2%, n=55) and active disease (98.3%, n=60), as compared with mild (86.8%, n=68) or stable disease (85.7%, n=63). CONCLUSIONS The deep learning classifier is a novel approach to identify TED and is a first step in the development of tools to improve diagnostic accuracy and lower barriers to specialist evaluation.
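All five reported metrics (accuracy, specificity, recall, precision, F1) derive from a single confusion matrix. A small sketch with invented counts, not the paper's data, makes the relationships explicit:

```python
# Standard binary-classification metrics from confusion-matrix counts.
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # a.k.a. sensitivity
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "specificity": tn / (tn + fp),
        "recall": recall,
        "precision": precision,
        "f1": 2 * precision * recall / (precision + recall),  # harmonic mean
    }

# Hypothetical counts for illustration only.
m = classification_metrics(tp=85, fp=20, tn=210, fn=6)
print(m)
```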
Affiliation(s)
- Justin Karlin
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
- Lisa Gai
- Department of Computer Science, University of California, Los Angeles, California, USA
- Nathan LaPierre
- Department of Computer Science, University of California, Los Angeles, California, USA
- Kayla Danesh
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
- Justin Farajzadeh
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
- Bea Palileo
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
- Kodi Taraszka
- Department of Computer Science, University of California, Los Angeles, California, USA
- Jie Zheng
- Department of Computer Science, University of California, Los Angeles, California, USA
- Wei Wang
- Department of Computer Science, University of California, Los Angeles, California, USA
- Eleazar Eskin
- Department of Computer Science, University of California, Los Angeles, California, USA
- Department of Human Genetics, University of California, Los Angeles, California, USA
- Daniel Rootman
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
144
Korot E, Gonçalves MB, Huemer J, Beqiri S, Khalid H, Kelly M, Chia M, Mathijs E, Struyven R, Moussa M, Keane PA. Clinician-Driven AI: Code-Free Self-Training on Public Data for Diabetic Retinopathy Referral. JAMA Ophthalmol 2023; 141:1029-1036. [PMID: 37856110 PMCID: PMC10587830 DOI: 10.1001/jamaophthalmol.2023.4508] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2023] [Accepted: 08/23/2023] [Indexed: 10/20/2023]
Abstract
Importance Democratizing artificial intelligence (AI) enables model development by clinicians with a lack of coding expertise, powerful computing resources, and large, well-labeled data sets. Objective To determine whether resource-constrained clinicians can use self-training via automated machine learning (ML) and public data sets to design high-performing diabetic retinopathy classification models. Design, Setting, and Participants This diagnostic quality improvement study was conducted from January 1, 2021, to December 31, 2021. A self-training method without coding was used on 2 public data sets with retinal images from patients in France (Messidor-2 [n = 1748]) and the UK and US (EyePACS [n = 58 689]) and externally validated on 1 data set with retinal images from patients of a private Egyptian medical retina clinic (Egypt [n = 210]). An AI model was trained to classify referable diabetic retinopathy as an exemplar use case. Messidor-2 images were assigned adjudicated labels available on Kaggle; 4 images were deemed ungradable and excluded, leaving 1744 images. A total of 300 images randomly selected from the EyePACS data set were independently relabeled by 3 blinded retina specialists using the International Classification of Diabetic Retinopathy protocol for diabetic retinopathy grade and diabetic macular edema presence; 19 images were deemed ungradable, leaving 281 images. Data analysis was performed from February 1 to February 28, 2021. Exposures Using public data sets, a teacher model was trained with labeled images using supervised learning. Next, the resulting predictions, termed pseudolabels, were used on an unlabeled public data set. Finally, a student model was trained with the existing labeled images and the additional pseudolabeled images. Main Outcomes and Measures The analyzed metrics for the models included the area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, and F1 score. 
The Fisher exact test was performed, and 2-tailed P values were calculated for failure case analysis. Results For the internal validation data sets, AUROC values for performance ranged from 0.886 to 0.939 for the teacher model and from 0.916 to 0.951 for the student model. For external validation of automated ML model performance, AUROC values and accuracy were 0.964 and 93.3% for the teacher model, 0.950 and 96.7% for the student model, and 0.890 and 94.3% for the manually coded bespoke model, respectively. Conclusions and Relevance These findings suggest that self-training using automated ML is an effective method to increase both model performance and generalizability while decreasing the need for costly expert labeling. This approach advances the democratization of AI by enabling clinicians without coding expertise or access to large, well-labeled private data sets to develop their own AI models.
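The teacher-student recipe above can be sketched in a few lines. Synthetic tabular data and logistic regression stand in for fundus images and the automated-ML models; only the pseudolabeling workflow itself is the point:

```python
# Self-training: fit a teacher on labeled data, pseudolabel the unlabeled
# pool with its predictions, then fit a student on the combined set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_lab, y_lab = X[:200], y[:200]   # small labeled pool
X_unlab = X[200:500]              # unlabeled pool
X_test, y_test = X[500:], y[500:]

teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
pseudo = teacher.predict(X_unlab)  # pseudolabels

student = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_lab, X_unlab]), np.concatenate([y_lab, pseudo]))

print(teacher.score(X_test, y_test), student.score(X_test, y_test))
```

In the study this loop was executed in a code-free automated-ML platform rather than written by hand, which is precisely the "democratization" argument.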
Affiliation(s)
- Edward Korot
- Retina Specialists of Michigan, Grand Rapids
- Moorfields Eye Hospital, London, United Kingdom
- Stanford University Byers Eye Institute, Palo Alto, California
- Mariana Batista Gonçalves
- Moorfields Eye Hospital, London, United Kingdom
- Federal University of Sao Paulo, Sao Paulo, Brazil
- Instituto da Visão, Sao Paulo, Brazil
- Sara Beqiri
- Moorfields Eye Hospital, London, United Kingdom
- University College London Medical School, London, United Kingdom
- Hagar Khalid
- Moorfields Eye Hospital, London, United Kingdom
- Ophthalmology Department, Faculty of Medicine, Tanta University Hospital, Tanta, Gharbia, Egypt
- Madeline Kelly
- Moorfields Eye Hospital, London, United Kingdom
- University College London Medical School, London, United Kingdom
- UCL Centre for Medical Image Computing, London, United Kingdom
- Mark Chia
- Moorfields Eye Hospital, London, United Kingdom
- Emily Mathijs
- Michigan State University College of Osteopathic Medicine, East Lansing
- Magdy Moussa
- Ophthalmology Department, Faculty of Medicine, Tanta University Hospital, Tanta, Gharbia, Egypt
145
An Y, Cao B, Li K, Xu Y, Zhao W, Zhao D, Ke J. A Prediction Model for Sight-Threatening Diabetic Retinopathy Based on Plasma Adipokines among Patients with Mild Diabetic Retinopathy. J Diabetes Res 2023; 2023:8831609. [PMID: 37920605 PMCID: PMC10620016 DOI: 10.1155/2023/8831609] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 08/13/2023] [Accepted: 08/24/2023] [Indexed: 11/04/2023] Open
Abstract
Background Accumulating evidence has suggested a link between adipokines and diabetic retinopathy (DR). This study is aimed at investigating the risk factors for sight-threatening DR (STDR) and establishing a prognostic model for predicting STDR among a high-risk population of patients with type 2 diabetes mellitus (T2DM). Methods Plasma concentrations of adipokines were determined by enzyme-linked immunosorbent assay. In the case-control set, principal component analysis (PCA) was performed to select optimal predictive cytokines for STDR, involving severe nonproliferative DR (NPDR) and proliferative DR. Support vector machine (SVM) was used to examine the possible combination of baseline plasma adipokines to discriminate the patients with mild NPDR who will later develop STDR. An individual prospective cohort with a follow-up period of 3 years was used for the external validation. Results In both training and testing sets, involving 306 patients with T2DM, median levels of plasma adiponectin (APN), leptin, and fatty acid-binding protein 4 (FABP4) were significantly higher in the STDR group than those in mild NPDR. Except for adipsin, the other three adipokines, FABP4, APN, and leptin, were selected by PCA and integrated into SVM. The accuracy of the multivariate SVM classification model was acceptable in both the training set (AUC = 0.81, sensitivity = 71%, and specificity = 91%) and the testing set (AUC = 0.77, sensitivity = 61%, and specificity = 92%). 110 T2DM patients with mild NPDR, the high-risk population of STDR, were enrolled for external validation. Based on the SVM, the risk of each patient was calculated. More STDR occurred in the high-risk group than in the low-risk group, which were grouped by the median value of APN, FABP4, and leptin, respectively. The model was validated in an individual cohort using SVM with the AUC, sensitivity, and specificity reaching 0.77, 64%, and 91%, respectively. 
Conclusions Adiponectin, leptin, and FABP4 were demonstrated to be associated with the severity of DR and may be good predictors of STDR, suggesting that adipokines may play an important role in the pathophysiology of DR development.
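A hedged sketch of the modeling pipeline described above (standardize, reduce with PCA, classify with an SVM), using synthetic plasma values for the three retained adipokines (APN, leptin, FABP4) rather than the study's measurements:

```python
# Scale -> PCA -> SVM pipeline producing a per-patient STDR risk score.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 300
y = rng.integers(0, 2, size=n)  # 1 = progressed to STDR (synthetic)
# Synthetic plasma levels for 3 adipokines; STDR group shifted upward,
# mimicking the higher median levels reported in the abstract.
X = rng.normal(0, 1, size=(n, 3)) + y[:, None] * 1.5

model = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(probability=True))
model.fit(X, y)
risk = model.predict_proba(X)[:, 1]  # per-patient risk score
print(risk[:5])
```

The study's external validation step corresponds to calling `predict_proba` on a held-out cohort and stratifying patients by the resulting scores.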
Affiliation(s)
- Yaxin An
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China
- Bin Cao
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China
- Kun Li
- Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
- Yongsong Xu
- Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
- Wenying Zhao
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China
- Dong Zhao
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China
- Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
- Jing Ke
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China
146
Wang M, Lin T, Wang L, Lin A, Zou K, Xu X, Zhou Y, Peng Y, Meng Q, Qian Y, Deng G, Wu Z, Chen J, Lin J, Zhang M, Zhu W, Zhang C, Zhang D, Goh RSM, Liu Y, Pang CP, Chen X, Chen H, Fu H. Uncertainty-inspired open set learning for retinal anomaly identification. Nat Commun 2023; 14:6757. [PMID: 37875484 PMCID: PMC10598011 DOI: 10.1038/s41467-023-42444-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Accepted: 10/11/2023] [Indexed: 10/26/2023] Open
Abstract
Failure to recognize samples from classes unseen during training is a major limitation of artificial intelligence in real-world recognition and classification of retinal anomalies. We establish an uncertainty-inspired open set (UIOS) model, which is trained with fundus images of 9 retinal conditions. Besides assessing the probability of each category, UIOS also calculates an uncertainty score to express its confidence. With a thresholding strategy, our UIOS model achieves F1 scores of 99.55%, 97.01% and 91.91% on the internal testing set, the external target categories (TC)-JSIEC dataset and the TC-unseen testing set, respectively, compared with F1 scores of 92.20%, 80.69% and 64.74% for the standard AI model. Furthermore, UIOS correctly assigns high uncertainty scores, which would prompt a manual check, to datasets of non-target-category retinal diseases, low-quality fundus images, and non-fundus images. UIOS provides a robust method for real-world screening of retinal anomalies.
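The thresholding strategy described above can be sketched as follows. Normalized predictive entropy serves as the uncertainty score here, which is an assumption (the UIOS model derives its score differently), but the triage logic is the same: confident predictions are labeled, uncertain ones are routed to a manual check.

```python
# Open-set triage: label the image only when predictive uncertainty is low.
import numpy as np

def entropy_uncertainty(probs: np.ndarray) -> float:
    """Predictive entropy normalized to [0, 1]."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum() / np.log(len(p)))

THRESHOLD = 0.5  # hypothetical operating point

def triage(probs: np.ndarray) -> str:
    if entropy_uncertainty(probs) > THRESHOLD:
        return "manual check"          # out of scope / poor quality / unseen class
    return f"class {int(np.argmax(probs))}"

confident = np.array([0.95, 0.02, 0.01, 0.01, 0.01])  # clear fundus finding
ambiguous = np.array([0.25, 0.22, 0.20, 0.18, 0.15])  # near-uniform probabilities

print(triage(confident))  # class 0
print(triage(ambiguous))  # manual check
```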
Affiliation(s)
- Meng Wang
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Tian Lin
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Lianyu Wang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, 211100, Nanjing, Jiangsu, China
- Laboratory of Brain-Machine Intelligence Technology, Ministry of Education Nanjing University of Aeronautics and Astronautics, 211106, Nanjing, Jiangsu, China
- Aidi Lin
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Ke Zou
- National Key Laboratory of Fundamental Science on Synthetic Vision and the College of Computer Science, Sichuan University, 610065, Chengdu, Sichuan, China
- Xinxing Xu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Yi Zhou
- School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
- Yuanyuan Peng
- School of Biomedical Engineering, Anhui Medical University, 230032, Hefei, Anhui, China
- Qingquan Meng
- School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
- Yiming Qian
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Guoyao Deng
- National Key Laboratory of Fundamental Science on Synthetic Vision and the College of Computer Science, Sichuan University, 610065, Chengdu, Sichuan, China
- Zhiqun Wu
- Longchuan People's Hospital, 517300, Heyuan, Guangdong, China
- Junhong Chen
- Puning People's Hospital, 515300, Jieyang, Guangdong, China
- Jianhong Lin
- Haifeng PengPai Memory Hospital, 516400, Shanwei, Guangdong, China
- Mingzhi Zhang
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Weifang Zhu
- School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
- Changqing Zhang
- College of Intelligence and Computing, Tianjin University, 300350, Tianjin, China
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, 211100, Nanjing, Jiangsu, China
- Laboratory of Brain-Machine Intelligence Technology, Ministry of Education Nanjing University of Aeronautics and Astronautics, 211106, Nanjing, Jiangsu, China
- Rick Siow Mong Goh
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Yong Liu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Chi Pui Pang
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, 999077, Hong Kong, China
- Xinjian Chen
- School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China.
- State Key Laboratory of Radiation Medicine and Protection, Soochow University, 215006, Suzhou, China.
- Haoyu Chen
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China.
- Huazhu Fu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore.
147
Pelayo C, Hoang J, Mora Pinzón M, Lock LJ, Fowlkes C, Stevens CL, Jacobson NA, Channa R, Liu Y. Perspectives of Latinx Patients with Diabetes on Teleophthalmology, Artificial Intelligence-Based Image Interpretation, and Virtual Care: A Qualitative Study. TELEMEDICINE REPORTS 2023; 4:317-326. [PMID: 37908628 PMCID: PMC10615055 DOI: 10.1089/tmr.2023.0045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 09/28/2023] [Indexed: 11/02/2023]
Abstract
Background Latinx populations in the United States bear a disproportionate burden of diabetic eye disease. Teleophthalmology with and without artificial intelligence (AI)-based image interpretation are validated methods for diabetic eye screening, but limited literature exists on patient perspectives. This study aimed at understanding the perspectives of Latinx patients with diabetes on teleophthalmology, AI-based image interpretation, and general virtual care to prevent avoidable blindness in this population. Methods We conducted semi-structured, individual interviews with 20 Latinx patients with diabetes at an urban, federally qualified health center in Madison, WI. Interviews were transcribed verbatim, professionally translated from Spanish to English, and analyzed using both inductive open coding and deductive coding. Results Most participants had no prior experience with teleophthalmology but did have experience with virtual care. Participants expressed a preference for teleophthalmology compared with traditional in-person dilated eye exams but were willing to obtain whichever method of screening was recommended by their primary care clinician. They also strongly preferred having human physician oversight in image review compared with having images interpreted solely using AI. Many participants preferred in-person clinic visits to virtual health care due to the ability to have a more thorough physical exam, as well as for improved non-verbal communication with their clinician. Discussion Leveraging primary care providers' recommendations, human oversight of AI-based image interpretation, and improving communication may enhance acceptance and utilization of teleophthalmology, AI, and virtual care by Latinx patients. Conclusions Understanding Latinx patient perspectives may contribute toward the development of more effective telemedicine interventions to enhance health equity in Latinx communities.
Affiliation(s)
- Christian Pelayo: Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Johnson Hoang: Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Maria Mora Pinzón: Division of Geriatrics and Gerontology, Department of Medicine, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Loren J. Lock: Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Christiana Fowlkes: Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Chloe L. Stevens: Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Nora A. Jacobson: Institute for Clinical and Translational Research, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA; School of Nursing, Madison, Wisconsin, USA
- Roomasa Channa: Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Yao Liu: Department of Ophthalmology and Visual Sciences, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
148
Wen R, Wang M, Bian W, Zhu H, Xiao Y, He Q, Wang Y, Liu X, Shi Y, Hong Z, Xu B. Machine learning-based prediction of symptomatic intracerebral hemorrhage after intravenous thrombolysis for stroke: a large multicenter study. Front Neurol 2023; 14:1247492. [PMID: 37928151 PMCID: PMC10624225 DOI: 10.3389/fneur.2023.1247492] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/26/2023] [Accepted: 09/28/2023] [Indexed: 11/07/2023] Open
Abstract
Background: This study aimed to compare the performance of different machine learning models in predicting symptomatic intracranial hemorrhage (sICH) after thrombolysis treatment for ischemic stroke.
Methods: This multicenter study utilized the Shenyang Stroke Emergency Map database, comprising 8,924 acute ischemic stroke patients from 29 comprehensive hospitals who underwent thrombolysis between January 2019 and December 2021. An independent testing cohort of 1,921 patients from the First People's Hospital of Shenyang was also established. The structured dataset encompassed 15 variables, including clinical and therapeutic metrics. The primary outcome was sICH occurrence post-thrombolysis. Models were developed using an 80/20 split for training and internal validation. Performance was assessed for several classifiers: logistic regression with lasso regularization, support vector machine (SVM), random forest, gradient-boosted decision tree (GBDT), and multilayer perceptron (MLP). The model with the highest area under the curve (AUC) was used to assess feature importance.
Results: Baseline characteristics were compared between the training cohort (n = 6,369) and the external validation cohort (n = 1,921); sICH incidence was slightly higher in the training cohort (1.6%) than in the validation cohort (1.1%). Among the evaluated models, logistic regression with lasso regularization achieved the highest AUC of 0.87 (95% confidence interval [CI]: 0.79-0.95; p < 0.001), followed by the MLP model with an AUC of 0.766 (95% CI: 0.637-0.894; p = 0.04). The reference model and SVM showed AUCs of 0.575 and 0.582, respectively, while the random forest and GBDT models performed less well, with AUCs of 0.536 and 0.436, respectively. Decision curve analysis revealed net benefits primarily for the SVM and MLP models. Feature importance from the logistic regression model identified anticoagulation therapy as the strongest negative predictor (coefficient: -2.0833) and recombinant tissue plasminogen activator as the principal positive predictor (coefficient: 0.5082).
Conclusion: Based on decision curve analysis, the MLP model is recommended for predicting the risk of symptomatic hemorrhage post-thrombolysis in ischemic stroke patients, as it demonstrated enhanced discriminative ability compared with the reference. This model may serve as a valuable tool for clinicians, aiding treatment planning and enabling more precise forecasting of patient outcomes.
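The evaluation criterion in this abstract, ranking candidate classifiers by area under the ROC curve (AUC), can be illustrated with a minimal sketch. This is not the study's code, and the labels and risk scores below are made-up toy data; AUC is computed here directly as the Mann-Whitney probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties counting as half):

```python
def auc(labels, scores):
    """AUC = probability a random positive scores higher than a random
    negative, with ties counted as 0.5 (the Mann-Whitney definition)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy comparison of two hypothetical models scoring the same outcomes.
labels  = [0, 0, 0, 1, 0, 1, 0, 1]
model_a = [0.1, 0.2, 0.3, 0.9, 0.2, 0.8, 0.4, 0.7]  # well-separated scores
model_b = [0.5, 0.6, 0.4, 0.5, 0.7, 0.6, 0.5, 0.4]  # near-random scores

ranked = sorted([("A", auc(labels, model_a)), ("B", auc(labels, model_b))],
                key=lambda t: t[1], reverse=True)
print(ranked)  # model A ranks first
```

With real data one would typically call a library routine such as scikit-learn's roc_auc_score, but the pairwise definition above makes the ranking criterion used in studies like this one explicit.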
Affiliation(s)
- Rui Wen: Shenyang Tenth People’s Hospital, Shenyang, China
- Miaoran Wang: Affiliated Central Hospital of Shenyang Medical College, Shenyang Medical College, Shenyang, China
- Wei Bian: Shenyang First People’s Hospital, Shenyang Medical College, Shenyang, China
- Haoyue Zhu: Shenyang First People’s Hospital, Shenyang Medical College, Shenyang, China
- Ying Xiao: Shenyang First People’s Hospital, Shenyang Medical College, Shenyang, China
- Qian He: Shenyang Tenth People’s Hospital, Shenyang, China
- Yu Wang: Shenyang Tenth People’s Hospital, Shenyang, China
- Xiaoqing Liu: Shenyang Tenth People’s Hospital, Shenyang, China
- Yangdi Shi: Shenyang Tenth People’s Hospital, Shenyang, China
- Zhe Hong: Shenyang First People’s Hospital, Shenyang Medical College, Shenyang, China
- Bing Xu: Shenyang Tenth People’s Hospital, Shenyang, China
149
Tan TF, Chang SYH, Ting DSW. Deep learning for precision medicine: Guiding laser therapy in ischemic retinal diseases. Cell Rep Med 2023; 4:101239. [PMID: 37852186 PMCID: PMC10591061 DOI: 10.1016/j.xcrm.2023.101239] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/19/2023] [Revised: 09/19/2023] [Accepted: 09/20/2023] [Indexed: 10/20/2023]
Abstract
In this issue of Cell Reports Medicine, Zhao and colleagues [1] report a multi-tasking artificial intelligence system that can assist the whole process of fundus fluorescein angiography (FFA) imaging and reduce the reliance on retinal specialists in FFA examination.
Affiliation(s)
- Ting Fang Tan: Singapore National Eye Center, Singapore Eye Research Institute, Singapore, Singapore
- Shelley Yin-Hsi Chang: Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Daniel Shu Wei Ting: Singapore National Eye Center, Singapore Eye Research Institute, Singapore, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore, Singapore; Byers Eye Institute, Stanford University, Palo Alto, CA, USA
150
Guan Z, Li H, Liu R, Cai C, Liu Y, Li J, Wang X, Huang S, Wu L, Liu D, Yu S, Wang Z, Shu J, Hou X, Yang X, Jia W, Sheng B. Artificial intelligence in diabetes management: Advancements, opportunities, and challenges. Cell Rep Med 2023; 4:101213. [PMID: 37788667 PMCID: PMC10591058 DOI: 10.1016/j.xcrm.2023.101213] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Received: 02/16/2023] [Revised: 08/07/2023] [Accepted: 09/08/2023] [Indexed: 10/05/2023]
Abstract
The increasing prevalence of diabetes, high avoidable morbidity and mortality due to diabetes and diabetic complications, and related substantial economic burden make diabetes a significant health challenge worldwide. A shortage of diabetes specialists, uneven distribution of medical resources, low adherence to medications, and improper self-management contribute to poor glycemic control in patients with diabetes. Recent advancements in digital health technologies, especially artificial intelligence (AI), provide a significant opportunity to achieve better efficiency in diabetes care, which may diminish the increase in diabetes-related health-care expenditures. Here, we review the recent progress in the application of AI in the management of diabetes and then discuss the opportunities and challenges of AI application in clinical practice. Furthermore, we explore the possibility of combining and expanding upon existing digital health technologies to develop an AI-assisted digital health-care ecosystem that includes the prevention and management of diabetes.
Affiliation(s)
- Zhouyu Guan: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Huating Li: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Ruhan Liu: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Furong Laboratory, Changsha, Hunan 41000, China
- Chun Cai: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Yuexing Liu: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Jiajia Li: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xiangning Wang: Department of Ophthalmology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai 200233, China
- Shan Huang: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Liang Wu: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Dan Liu: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Shujie Yu: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Zheyuan Wang: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Jia Shu: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xuhong Hou: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Xiaokang Yang: MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Weiping Jia: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Bin Sheng: Shanghai International Joint Laboratory of Intelligent Prevention and Treatment for Metabolic Diseases, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China; MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China