1
An S, Teo K, McConnell MV, Marshall J, Galloway C, Squirrell D. AI explainability in oculomics: How it works, its role in establishing trust, and what still needs to be addressed. Prog Retin Eye Res 2025; 106:101352. PMID: 40086660. DOI: 10.1016/j.preteyeres.2025.101352.
Abstract
Recent developments in artificial intelligence (AI) have seen a proliferation of algorithms that are now capable of predicting a range of systemic diseases from retinal images. Unlike traditional retinal disease detection AI models which are trained on well-recognised retinal biomarkers, systemic disease detection or "oculomics" models use a range of often poorly characterised retinal biomarkers to arrive at their predictions. As the retinal phenotype that oculomics models use may not be intuitive, clinicians have to rely on the developers' explanations of how these algorithms work in order to understand them. The discipline of understanding how AI algorithms work employs two similar but distinct terms: Explainable AI and Interpretable AI (iAI). Explainable AI describes the holistic functioning of an AI system, including its impact and potential biases. Interpretable AI concentrates solely on examining and understanding the workings of the AI algorithm itself. iAI tools are therefore what the clinician must rely on if they are to understand how the algorithm works and whether its predictions are reliable. The iAI tools that developers use can be delineated into two broad categories: Intrinsic methods that improve transparency through architectural changes and post-hoc methods that explain trained models via external algorithms. Currently post-hoc methods, class activation maps in particular, are far more widely used than other techniques but they have their limitations especially when applied to oculomics AI models. Aimed at clinicians, we examine how the key iAI methods work, what they are designed to do and what their limitations are when applied to oculomics AI. We conclude by discussing how combining existing iAI techniques with novel approaches could allow AI developers to better explain how their oculomics models work and reassure clinicians that the results issued are reliable.
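Class activation maps, the post-hoc technique the abstract singles out, can be illustrated with a short Grad-CAM sketch. The snippet below is a generic illustration in PyTorch, not the authors' implementation: the backbone (a stock ImageNet ResNet-18), the hooked layer, and the placeholder input tensor are all assumptions made purely for demonstration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in CNN; a real oculomics model would replace this backbone.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

feats, grads = {}, {}

def fwd_hook(module, inp, out):
    feats["value"] = out.detach()          # activations of the hooked layer

def bwd_hook(module, grad_in, grad_out):
    grads["value"] = grad_out[0].detach()  # gradients flowing back into that layer

# Hook the last convolutional block; Grad-CAM weights its activations by their
# pooled gradients to localise the image regions driving the prediction.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)            # placeholder for a preprocessed fundus image
logits = model(x)
score = logits[0, logits.argmax()]         # explain the top predicted class
model.zero_grad()
score.backward()

weights = grads["value"].mean(dim=(2, 3), keepdim=True)        # pooled gradients per channel
cam = F.relu((weights * feats["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)       # [0, 1] heatmap over the image
```

The resulting heatmap is what is typically overlaid on the retinal photograph; as the review argues, such maps show where a model looked, not why, which is part of their limitation for oculomics.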
Affiliation(s)
- Songyang An: School of Optometry and Vision Science, University of Auckland, Auckland, New Zealand; Toku Eyes Limited NZ, 110 Carlton Gore Road, Newmarket, Auckland, 1023, New Zealand
- Kelvin Teo: Singapore Eye Research Institute, The Academia, 20 College Road Discovery Tower Level 6, 169856, Singapore; Singapore National University, Singapore
- Michael V McConnell: Division of Cardiovascular Medicine, Stanford University School of Medicine, Stanford, CA, USA; Toku Eyes Limited NZ, 110 Carlton Gore Road, Newmarket, Auckland, 1023, New Zealand
- John Marshall: Institute of Ophthalmology, University College London, 11-43 Bath Street, London, EC1V 9EL, UK
- Christopher Galloway: Department of Business and Communication, Massey University, East Precinct Albany Expressway, SH17, Albany, Auckland, 0632, New Zealand
- David Squirrell: Department of Ophthalmology, University of the Sunshine Coast, Queensland, Australia; Toku Eyes Limited NZ, 110 Carlton Gore Road, Newmarket, Auckland, 1023, New Zealand
2
Chen S, Bai W. Artificial intelligence technology in ophthalmology public health: current applications and future directions. Front Cell Dev Biol 2025; 13:1576465. PMID: 40313720. PMCID: PMC12044197. DOI: 10.3389/fcell.2025.1576465.
Abstract
Global eye health has become a critical public health challenge, with the prevalence of blindness and visual impairment expected to rise significantly in the coming decades. Traditional ophthalmic public health systems face numerous obstacles, including the uneven distribution of medical resources, insufficient training for primary healthcare workers, and limited public awareness of eye health. Addressing these challenges requires urgent, innovative solutions. Artificial intelligence (AI) has demonstrated substantial potential in enhancing ophthalmic public health across various domains. AI offers significant improvements in ophthalmic data management, disease screening and monitoring, risk prediction and early warning systems, medical resource allocation, and health education and patient management. These advancements substantially improve the quality and efficiency of healthcare, particularly in preventing and treating prevalent eye conditions such as cataracts, diabetic retinopathy, glaucoma, and myopia. Additionally, telemedicine and mobile applications have expanded access to healthcare services and enhanced the capabilities of primary healthcare providers. However, there are challenges in integrating AI into ophthalmic public health. Key issues include interoperability with electronic health records (EHR), data security and privacy, data quality and bias, algorithm transparency, and ethical and regulatory frameworks. Heterogeneous data formats and the lack of standardized metadata hinder seamless integration, while privacy risks necessitate advanced techniques such as anonymization. Data biases, stemming from racial or geographic disparities, and the "black box" nature of AI models, limit reliability and clinical trust. Ethical issues, such as ensuring accountability for AI-driven decisions and balancing innovation with patient safety, further complicate implementation. The future of ophthalmic public health lies in overcoming these barriers to fully harness the potential of AI, ensuring that advancements in technology translate into tangible benefits for patients worldwide.
Affiliation(s)
- Wen Bai: The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China; The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
3
Teo KYC, Eldem B, Joussen A, Koh A, Korobelnik JF, Li X, Loewenstein A, Lövestam-Adrian M, Navarro R, Okada AA, Pearce I, Rodríguez F, Wong D, Wu L, Zur D, Zarranz-Ventura J, Mitchell P, Chaudhary V, Lanzetta P. Treatment regimens for optimising outcomes in patients with neovascular age-related macular degeneration. Eye (Lond) 2025; 39:860-869. PMID: 39379523. PMCID: PMC11933311. DOI: 10.1038/s41433-024-03370-0.
Abstract
Practice patterns for neovascular age-related macular degeneration (nAMD) have evolved from the landmark registration trials of vascular endothelial growth factor (VEGF) inhibitors. Non-monthly regimens like treat-and-extend (T&E) have become popular due to their effectiveness in clinical practice. T&E regimens attempt to limit the burden of visits and treatments by allowing progressively longer treatment intervals, but in so doing may incur the expense of treating quiescent disease. This is acceptable to many patients and their ophthalmologists but can still be problematic in the real world. Recent studies have further refined the T&E approach by allowing for quicker and longer extension of treatment intervals when less severe disease is detected. With newer drugs offering increased durability, a shift to longer regular intervals may emerge as a new practice pattern for VEGF inhibitor therapy. This review aims to consolidate the current literature on the most effective treatment patterns and update treatment guidelines based on options that are now available. It also summarises new aspects of nAMD management that may help to further refine current practice.
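For readers who think in code, the core treat-and-extend idea described above reduces to a small interval-adjustment rule: extend when the macula is quiescent, shorten when activity recurs. The sketch below is a toy illustration only; the 2-week step and the 4 to 16 week bounds are assumptions for demonstration, not values taken from this review or from any guideline.

```python
# Toy treat-and-extend (T&E) scheduling rule.
def next_interval_weeks(current_weeks: int, disease_active: bool,
                        step: int = 2, min_weeks: int = 4, max_weeks: int = 16) -> int:
    if disease_active:
        # Activity detected: shorten the interval (never below the minimum).
        return max(min_weeks, current_weeks - step)
    # Quiescent disease: extend the interval (never above the maximum).
    return min(max_weeks, current_weeks + step)

print(next_interval_weeks(8, disease_active=False))  # 10: quiescent, so extend
print(next_interval_weeks(8, disease_active=True))   # 6: recurrence, so shorten
```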
Affiliation(s)
- Bora Eldem: Department of Ophthalmology, Hacettepe University, School of Medicine, Ankara, Turkey
- Adrian Koh: Camden Medical Centre, Singapore, Singapore
- Jean-François Korobelnik: Service d'ophtalmologie, CHU Bordeaux, Bordeaux, France; University of Bordeaux, INSERM, BPH, UMR1219, F-33000, Bordeaux, France
- Xiaoxin Li: Xiamen Eye Center, Xiamen University, Xiamen, China
- Anat Loewenstein: Division of Ophthalmology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Rafael Navarro: Retina and Vitreous Department, Institute of Ocular Microsurgery, Barcelona, Spain
- Annabelle A Okada: Department of Ophthalmology, Kyorin University School of Medicine, Tokyo, Japan
- Ian Pearce: Royal Liverpool University Hospital, Liverpool, UK
- Francisco Rodríguez: Fundación Oftalmológica Nacional, Escuela de Medicina y Ciencias de la Salud, Universidad del Rosario, Bogotá, Colombia
- David Wong: Unity Health Toronto - St. Michael's Hospital, University of Toronto, Toronto, ON, Canada
- Lihteh Wu: Macula, Vitreous and Retina Associates of Costa Rica, San José, Costa Rica
- Dinah Zur: Division of Ophthalmology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Paul Mitchell: Department of Ophthalmology, Centre for Vision Research, Westmead Institute for Medical Research, the University of Sydney, Sydney, Australia
- Varun Chaudhary: Hamilton Regional Eye Institute, St. Joseph's Healthcare Hamilton, Hamilton, ON, Canada; Department of Surgery, McMaster University, Hamilton, ON, Canada
- Paolo Lanzetta: Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
4
Frank-Publig S, Birner K, Riedl S, Reiter GS, Schmidt-Erfurth U. Artificial intelligence in assessing progression of age-related macular degeneration. Eye (Lond) 2025; 39:262-273. PMID: 39558093. PMCID: PMC11751489. DOI: 10.1038/s41433-024-03460-z.
Abstract
The human population is steadily growing, and increased life expectancy is raising the prevalence of age-dependent diseases, including age-related macular degeneration (AMD). Health care systems face an increasing burden as patient numbers rise and therapeutic approaches continue to evolve. Concurrent advances in imaging modalities provide eye care professionals with a large amount of data for each patient. Furthermore, with continuous progress in therapeutics, there is an unmet need for reliable structural and functional biomarkers in clinical trials and practice to optimize personalized patient care and evaluate individual responses to treatment. A fast and objective solution is artificial intelligence (AI), which has revolutionized the assessment of AMD across all disease stages. Reliable and validated AI algorithms can help manage the growing number of patients, visits and necessary treatments, as well as maximize the benefits of multimodal imaging in clinical trials. There are therefore ongoing efforts to develop and validate automated algorithms that unlock more information from datasets, allowing automated assessment of disease activity and disease progression. This review presents selected AI algorithms, their development, applications and challenges regarding assessment and prediction of AMD progression.
Affiliation(s)
- Sophie Frank-Publig: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Klaudia Birner: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Sophie Riedl: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Gregor S Reiter: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Ursula Schmidt-Erfurth: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
5
Kiruthika M, Malathi G. A comprehensive review on early detection of drusen patterns in age-related macular degeneration using deep learning models. Photodiagnosis Photodyn Ther 2025; 51:104454. PMID: 39716627. DOI: 10.1016/j.pdpdt.2024.104454.
Abstract
Age-related macular degeneration (AMD) is a leading cause of visual impairment and blindness that affects people aged fifty-five and older. It affects the retina, the light-sensitive layer of the eye. In early AMD, yellowish deposits called drusen form under the retina, which can result in distortion and gradual blurring of vision. The presence of drusen is the first sign of early dry AMD. As the disease progresses, more and larger deposits develop, and blood vessels grow from beneath the retina, leading to leakage of blood that damages the retina. In advanced AMD, peripheral vision may remain, but central (straight-ahead) vision is lost. Detecting AMD early is crucial, but treatments are limited, and nutritional supplements such as the AREDS2 formula may only slow disease progression. AMD diagnosis is primarily achieved through drusen identification, a process involving fundus photography assessed by ophthalmologists, but the early stages of AMD make this task challenging because drusen regions are ambiguous. Furthermore, existing models have difficulty predicting drusen regions correctly because of the limited resolution of fundus images, for which a deep learning-based model is proposed as a solution. Performance can be optimized by employing both local and global information while AMD is still in its early phases. The retinal areas where drusen form were identified by image segmentation, and these deposits were then automatically recognized through pattern recognition techniques.
Affiliation(s)
- Kiruthika M: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
- Malathi G: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
6
Yang Z, Tian D, Zhao X, Zhang L, Xu Y, Lu X, Chen Y. Evolutionary patterns and research frontiers of artificial intelligence in age-related macular degeneration: a bibliometric analysis. Quant Imaging Med Surg 2025; 15:813-830. PMID: 39839014. PMCID: PMC11744182. DOI: 10.21037/qims-24-1406.
Abstract
Background Age-related macular degeneration (AMD) represents a significant clinical concern, particularly in aging populations, and recent advancements in artificial intelligence (AI) have catalyzed substantial research interest in this domain. Despite the growing body of literature, there remains a need for a comprehensive, quantitative analysis to delineate key trends and emerging areas in the field of AI applications in AMD. This bibliometric analysis sought to systematically evaluate the landscape of AI-focused research on AMD to illuminate publication patterns, influential contributors, and focal research trends. Methods Using the Web of Science Core Collection (WoSCC), a search was conducted to retrieve relevant publications from 1992 to 2023. This analysis involved an array of bibliometric indicators to map the evolution of AI research in AMD, assessing parameters such as publication volume, national/regional and institutional contributions, journal impact, author influence, and emerging research hotspots. Visualization tools, including Bibliometrix, CiteSpace and VOSviewer, were employed to generate comprehensive assessments of the data. Results A total of 1,721 publications were identified, with the USA leading in publication output and the University of Melbourne as the most prolific institution. The journal Investigative Ophthalmology & Visual Science published the highest number of articles, and Schmidt-Erfurth emerged as the most active author. Keyword and clustering analyses, along with citation burst detection, revealed three distinct research stages within the field from 1992 to 2023. Presently, research efforts are concentrated on developing deep learning (DL) models for AMD diagnosis and progression prediction. Prominent emerging themes include early detection, risk stratification, and treatment efficacy prediction. The integration of large language models (LLMs) and vision-language models (VLMs) for enhanced image processing also represents a novel research frontier. Conclusions This bibliometric analysis provides a structured overview of prevailing research trends and emerging directions in AI applications for AMD. These findings furnish valuable insights to guide future research and foster collaborative advancements in this evolving field.
Affiliation(s)
- Zuyi Yang: Eight-year Medical Doctor Program, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Department of Ophthalmology, Key Lab of Ocular Fundus Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Dianzhe Tian: Eight-year Medical Doctor Program, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Department of Liver Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xinyu Zhao: Department of Ophthalmology, Key Lab of Ocular Fundus Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Lei Zhang: Department of Liver Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yiyao Xu: Department of Liver Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xin Lu: Department of Liver Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Youxin Chen: Department of Ophthalmology, Key Lab of Ocular Fundus Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
7
Zeng L, Zhang J, Chen W, Ding Y. tdCoxSNN: Time-dependent Cox survival neural network for continuous-time dynamic prediction. J R Stat Soc Ser C Appl Stat 2025; 74:187-203. PMID: 39807175. PMCID: PMC11725344. DOI: 10.1093/jrsssc/qlae051.
Abstract
The aim of dynamic prediction is to provide individualized risk predictions over time, which are updated as new data become available. In pursuit of constructing a dynamic prediction model for a progressive eye disorder, age-related macular degeneration (AMD), we propose a time-dependent Cox survival neural network (tdCoxSNN) to predict its progression using longitudinal fundus images. tdCoxSNN builds upon the time-dependent Cox model by utilizing a neural network to capture the nonlinear effect of time-dependent covariates on the survival outcome. Moreover, by concurrently integrating a convolutional neural network with the survival network, tdCoxSNN can directly take longitudinal images as input. We evaluate and compare our proposed method with joint modelling and landmarking approaches through extensive simulations. We applied the proposed approach to two real datasets. One is a large AMD study, the Age-Related Eye Disease Study, in which more than 50,000 fundus images were captured over a period of 12 years for more than 4,000 participants. The other is a public dataset on primary biliary cirrhosis, in which multiple laboratory tests were collected longitudinally to predict the time to liver transplant. Our approach demonstrates commendable predictive performance in both simulation studies and the analysis of the two real datasets.
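To make the modelling idea concrete, the sketch below shows one way a time-dependent Cox partial likelihood can be paired with a small neural network on counting-process records, in the spirit of tdCoxSNN. It is a minimal illustration, not the authors' code: the architecture, layer sizes, and toy data are assumptions, and the published model additionally couples a convolutional feature extractor so that longitudinal fundus images can be fed in directly.

```python
import torch
import torch.nn as nn

# Risk network: maps time-dependent covariates (e.g., image-derived features plus
# clinical variables) to a log relative hazard. Layer sizes are illustrative.
class RiskNet(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # one log hazard ratio per record

def cox_td_loss(log_hr, start, stop, event):
    """Negative partial likelihood for counting-process (start, stop] records.

    A record j is at risk at event time t if start_j < t <= stop_j, so
    time-dependent covariates enter through the record that covers t.
    """
    loss = log_hr.sum() * 0.0  # keeps the graph alive even with few events
    event_idx = torch.nonzero(event, as_tuple=True)[0]
    for i in event_idx:
        t = stop[i]
        at_risk = (start < t) & (stop >= t)
        loss = loss - (log_hr[i] - torch.logsumexp(log_hr[at_risk], dim=0))
    return loss / max(len(event_idx), 1)

# Toy usage with random longitudinal records (one row per visit interval).
x = torch.randn(50, 8)
start = torch.rand(50) * 2.0
stop = start + torch.rand(50) * 2.0 + 0.1
event = (torch.rand(50) < 0.3).float()

net = RiskNet(8)
loss = cox_td_loss(net(x), start, stop, event)
loss.backward()
```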
Affiliation(s)
- Lang Zeng: Department of Biostatistics and Health Data Science, School of Public Health, University of Pittsburgh, Pittsburgh, PA, USA
- Jipeng Zhang: Department of Biostatistics and Health Data Science, School of Public Health, University of Pittsburgh, Pittsburgh, PA, USA
- Wei Chen: Department of Biostatistics and Health Data Science, School of Public Health, University of Pittsburgh, Pittsburgh, PA, USA; Department of Pediatrics, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Ying Ding: Department of Biostatistics and Health Data Science, School of Public Health, University of Pittsburgh, Pittsburgh, PA, USA
8
Pachade S, Porwal P, Kokare M, Deshmukh G, Sahasrabuddhe V, Luo Z, Han F, Sun Z, Qihan L, Kamata SI, Ho E, Wang E, Sivajohan A, Youn S, Lane K, Chun J, Wang X, Gu Y, Lu S, Oh YT, Park H, Lee CY, Yeh H, Cheng KW, Wang H, Ye J, He J, Gu L, Müller D, Soto-Rey I, Kramer F, Arai H, Ochi Y, Okada T, Giancardo L, Quellec G, Mériaudeau F. RFMiD: Retinal Image Analysis for multi-Disease Detection challenge. Med Image Anal 2025; 99:103365. PMID: 39395210. DOI: 10.1016/j.media.2024.103365.
Abstract
In the last decades, many publicly available large fundus image datasets have been collected for diabetic retinopathy, glaucoma, age-related macular degeneration, and a few other frequent pathologies. These publicly available datasets were used to develop computer-aided disease diagnosis systems by training deep learning models to detect these frequent pathologies. One challenge limiting the adoption of such systems by ophthalmologists is that computer-aided disease diagnosis systems ignore sight-threatening rare pathologies, such as central retinal artery occlusion or anterior ischemic optic neuropathy, that ophthalmologists currently detect. Aiming to advance the state of the art in automatic ocular disease classification of frequent diseases along with rare pathologies, a grand challenge on "Retinal Image Analysis for multi-Disease Detection" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2021). This paper reports the challenge organization, dataset, top-performing participants' solutions, evaluation measures, and results based on a new "Retinal Fundus Multi-disease Image Dataset" (RFMiD). There were two principal sub-challenges: disease screening (i.e., presence versus absence of pathology, a binary classification problem) and disease/pathology classification (a 28-class multi-label classification problem). The challenge received a positive response from the scientific community, with 74 submissions by individuals/teams that effectively entered the challenge. The top-performing methodologies utilized a blend of data preprocessing, data augmentation, pre-trained models, and model ensembling. This multi-disease (frequent and rare pathologies) detection will enable the development of generalizable models for screening the retina, unlike previous efforts that focused on the detection of specific diseases.
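As a concrete illustration of the model-ensembling ingredient mentioned among the top-performing solutions, the sketch below averages per-label probabilities from several classifiers for a 28-label multi-label task. The member models are placeholders (simple linear layers), and the averaging-plus-threshold scheme is a generic pattern for demonstration, not any specific team's method.

```python
import torch

def ensemble_predict(models_list, inputs, threshold=0.5):
    """Average per-label sigmoid probabilities across ensemble members."""
    with torch.no_grad():
        probs = torch.stack(
            [torch.sigmoid(m(inputs)) for m in models_list], dim=0
        ).mean(dim=0)                               # (batch, n_labels) averaged probabilities
    return probs, (probs >= threshold).int()        # probabilities and binary decisions

# Toy usage: two stand-in "models" over flattened 32-dim inputs, 28 disease labels.
toy_models = [torch.nn.Linear(32, 28), torch.nn.Linear(32, 28)]
batch = torch.randn(4, 32)
probs, preds = ensemble_predict(toy_models, batch)
```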
Affiliation(s)
- Samiksha Pachade: Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded 431606, India
- Prasanna Porwal: Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded 431606, India
- Manesh Kokare: Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded 431606, India
- Vivek Sahasrabuddhe: Department of Ophthalmology, Shankarrao Chavan Government Medical College, Nanded 431606, India
- Zhengbo Luo: Graduate School of Information Production and Systems, Waseda University, Japan
- Feng Han: University of Shanghai for Science and Technology, Shanghai, China
- Zitang Sun: Graduate School of Information Production and Systems, Waseda University, Japan
- Li Qihan: Graduate School of Information Production and Systems, Waseda University, Japan
- Sei-Ichiro Kamata: Graduate School of Information Production and Systems, Waseda University, Japan
- Edward Ho: Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Edward Wang: Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Asaanth Sivajohan: Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Saerom Youn: Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Kevin Lane: Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Jin Chun: Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Xinliang Wang: Beihang University School of Computer Science, China
- Yunchao Gu: Beihang University School of Computer Science, China
- Sixu Lu: Beijing Normal University School of Artificial Intelligence, China
- Young-Tack Oh: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Hyunjin Park: Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea; School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Chia-Yen Lee: Department of Electrical Engineering, National United University, Miaoli 360001, Taiwan, ROC
- Hung Yeh: Department of Electrical Engineering, National United University, Miaoli 360001, Taiwan, ROC; Institute of Biomedical Engineering, National Yang Ming Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, Taiwan, ROC
- Kai-Wen Cheng: Department of Electrical Engineering, National United University, Miaoli 360001, Taiwan, ROC
- Haoyu Wang: School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Jin Ye: ShenZhen Key Lab of Computer Vision and Pattern Recognition, Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Junjun He: School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China; ShenZhen Key Lab of Computer Vision and Pattern Recognition, Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lixu Gu: School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Dominik Müller: IT-Infrastructure for Translational Medical Research, University of Augsburg, Germany; Medical Data Integration Center, University Hospital Augsburg, Germany
- Iñaki Soto-Rey: IT-Infrastructure for Translational Medical Research, University of Augsburg, Germany; Medical Data Integration Center, University Hospital Augsburg, Germany
- Frank Kramer: IT-Infrastructure for Translational Medical Research, University of Augsburg, Germany
- Yuma Ochi: National Institute of Technology, Kisarazu College, Japan
- Takami Okada: Institute of Industrial Ecological Sciences, University of Occupational and Environmental Health, Japan
- Luca Giancardo: Center for Precision Health, School of Biomedical Informatics, University of Texas Health Science Center at Houston (UTHealth), Houston, TX 77030, USA
9
Sripunya A, Chittasupho C, Mangmool S, Angerhofer A, Imaram W. Gallic Acid-Encapsulated PAMAM Dendrimers as an Antioxidant Delivery System for Controlled Release and Reduced Cytotoxicity against ARPE-19 Cells. Bioconjug Chem 2024; 35:1959-1969. PMID: 39641479. DOI: 10.1021/acs.bioconjchem.4c00475.
Abstract
Poly(amidoamine) (PAMAM) dendrimers have gained significant attention in various research fields, particularly in medicinal compound delivery. Their versatility lies in their ability to conjugate with functional molecules on their surfaces and encapsulate small molecules, making them suitable for diverse applications. Gallic acid is a potent antioxidant compound that has garnered considerable interest in recent years. Our research aims to investigate if the gallic acid-encapsulated PAMAM dendrimer generations 4 (G4(OH)-Ga) and 5 (G5(OH)-Ga) could enhance radical scavenging, which could potentially slow down the progression of age-related macular degeneration (AMD). Encapsulation of gallic acid in PAMAM dendrimers is a feasible alternative to prevent its degradation and toxicity. In vitro investigation of antioxidant activity was carried out using the DPPH and ABTS radical scavenging assays, as well as the FRAP assay. The IC50 values for DPPH and ABTS assays were determined through nonlinear dose-response curves, correlating the inhibition percentage with the concentration (μg/mL) of the sample and the concentration (μM) of gallic acid within each sample. G4(OH)-Ga and G5(OH)-Ga possess significant antioxidant activities as determined by the DPPH, ABTS, and FRAP assays. Moreover, gallic acid-encapsulated PAMAM dendrimers inhibit H2O2-induced reactive oxygen species (ROS) production in the human retinal pigment epithelium ARPE-19 cells, thereby improving antioxidant characteristics and potentially retarding AMD progression caused by ROS. In an evaluation of cell viability of ARPE-19 cells using the MTT assay, G4(OH)-Ga was found to reduce cytotoxic effects on ARPE-19 cells.
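The IC50 estimation described above (a nonlinear dose-response fit of inhibition percentage against concentration) can be sketched with a four-parameter logistic curve fit. The concentrations and inhibition values below are invented for illustration; only the fitting procedure, not the data, reflects the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic (Hill) curve in log-concentration space; the IC50 is
# the concentration giving half-maximal inhibition.
def four_pl(log_conc, bottom, top, log_ic50, hill):
    return bottom + (top - bottom) / (1.0 + np.exp(-hill * (log_conc - log_ic50)))

conc = np.array([1, 3, 10, 30, 100, 300], dtype=float)        # ug/mL (hypothetical)
inhibition = np.array([12, 25, 48, 70, 86, 93], dtype=float)  # % radical scavenging (hypothetical)

p0 = [0.0, 100.0, 1.0, 1.0]                                   # rough starting values; log10(IC50) ~ 1
params, _ = curve_fit(four_pl, np.log10(conc), inhibition, p0=p0, maxfev=10000)
ic50 = 10 ** params[2]
print(f"Estimated IC50 ~ {ic50:.1f} ug/mL")
```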
Affiliation(s)
- Aorada Sripunya: Department of Chemistry, Faculty of Science, Kasetsart University, Bangkok 10900, Thailand
- Chuda Chittasupho: Department of Pharmaceutical Sciences, Faculty of Pharmacy, Chiang Mai University, Mueang, Chiang Mai 50200, Thailand
- Supachoke Mangmool: Department of Pharmaceutical Care, Faculty of Pharmacy, Chiang Mai University, Mueang, Chiang Mai 50200, Thailand
- Alexander Angerhofer: Department of Chemistry, University of Florida, Gainesville, Florida 32611, United States
- Witcha Imaram: Department of Chemistry and Center of Excellence for Innovation in Chemistry, Faculty of Science, Kasetsart University, Bangkok 10900, Thailand; Special Research Unit for Advanced Magnetic Resonance, Department of Chemistry, Faculty of Science, Kasetsart University, Bangkok 10900, Thailand
10
Agrón E, Domalpally A, Chen Q, Lu Z, Chew EY, Keenan TDL. An Updated Simplified Severity Scale for Age-Related Macular Degeneration Incorporating Reticular Pseudodrusen: Age-Related Eye Disease Study Report Number 42. Ophthalmology 2024; 131:1164-1174. PMID: 38657840. PMCID: PMC11416341. DOI: 10.1016/j.ophtha.2024.04.011.
Abstract
PURPOSE To update the Age-Related Eye Disease Study (AREDS) simplified severity scale for risk of late age-related macular degeneration (AMD), including incorporation of reticular pseudodrusen (RPD), and to perform external validation on the Age-Related Eye Disease Study 2 (AREDS2). DESIGN Post hoc analysis of 2 clinical trial cohorts: AREDS and AREDS2. PARTICIPANTS Participants with no late AMD in either eye at baseline in AREDS (n = 2719) and AREDS2 (n = 1472). METHODS Five-year rates of progression to late AMD were calculated according to levels 0 to 4 on the simplified severity scale after 2 updates: (1) noncentral geographic atrophy (GA) considered part of the outcome, rather than a risk feature, and (2) scale separation according to RPD status (determined by validated deep learning grading of color fundus photographs). MAIN OUTCOME MEASURES Five-year rate of progression to late AMD (defined as neovascular AMD or any GA). RESULTS In the AREDS, after the first scale update, the 5-year rates of progression to late AMD for levels 0 to 4 were 0.3%, 4.5%, 12.9%, 32.2%, and 55.6%, respectively. As the final simplified severity scale, the 5-year progression rates for levels 0 to 4 were 0.3%, 4.3%, 11.6%, 26.7%, and 50.0%, respectively, for participants without RPD at baseline and 2.8%, 8.0%, 29.0%, 58.7%, and 72.2%, respectively, for participants with RPD at baseline. In external validation on the AREDS2, for levels 2 to 4, the progression rates were similar: 15.0%, 27.7%, and 45.7% (RPD absent) and 26.2%, 46.0%, and 73.0% (RPD present), respectively. CONCLUSIONS The AREDS AMD simplified severity scale has been modernized with 2 important updates. The new scale for individuals without RPD has 5-year progression rates of approximately 0.5%, 4%, 12%, 25%, and 50%, such that the rates on the original scale remain accurate. The new scale for individuals with RPD has 5-year progression rates of approximately 3%, 8%, 30%, 60%, and 70%, that is, approximately double for most levels. This scale fits updated definitions of late AMD, has increased prognostic accuracy, seems generalizable to similar populations, but remains simple for broad risk categorization. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
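In software terms, the updated scale reduces to a small lookup table keyed by severity level and RPD status. The sketch below encodes the 5-year progression rates reported in this abstract for the AREDS cohort; it is illustrative only and not a clinical tool.

```python
# Five-year rates of progression to late AMD by updated simplified severity
# scale level (0-4), stratified by reticular pseudodrusen (RPD) status, as
# quoted in the abstract above (AREDS cohort).
FIVE_YEAR_RISK = {
    False: {0: 0.003, 1: 0.043, 2: 0.116, 3: 0.267, 4: 0.500},  # RPD absent
    True:  {0: 0.028, 1: 0.080, 2: 0.290, 3: 0.587, 4: 0.722},  # RPD present
}

def five_year_late_amd_risk(severity_level: int, rpd_present: bool) -> float:
    """Look up the 5-year risk of progression to late AMD."""
    return FIVE_YEAR_RISK[rpd_present][severity_level]

print(five_year_late_amd_risk(3, rpd_present=True))  # 0.587
```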
Affiliation(s)
- Elvira Agrón: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Amitha Domalpally: Department of Ophthalmology and Visual Sciences, University of Wisconsin-Madison School of Medicine and Public Health, Madison, Wisconsin
- Qingyu Chen: National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland; Biomedical Informatics and Data Science, School of Medicine, Yale University, New Haven, Connecticut
- Zhiyong Lu: National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Emily Y Chew: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Tiarnan D L Keenan: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
11
Goh KL, Abbott CJ, Campbell TG, Cohn AC, Ong DN, Wickremasinghe SS, Hodgson LAB, Guymer RH, Wu Z. Clinical performance of predicting late age-related macular degeneration development using multimodal imaging. Clin Exp Ophthalmol 2024; 52:774-782. PMID: 38812454. DOI: 10.1111/ceo.14405.
Abstract
BACKGROUND To examine whether the clinical performance of predicting late age-related macular degeneration (AMD) development is improved through using multimodal imaging (MMI) compared to using colour fundus photography (CFP) alone, and how this compares with a basic prediction model using well-established AMD risk factors. METHODS Individuals with AMD in this study underwent MMI, including optical coherence tomography (OCT), fundus autofluorescence, near-infrared reflectance and CFP at baseline, and then at 6-monthly intervals for 3-years to determine MMI-defined late AMD development. Four retinal specialists independently assessed the likelihood that each eye at baseline would progress to MMI-defined late AMD over 3-years with CFP, and then with MMI. Predictive performance with CFP and MMI were compared to each other, and to a basic prediction model using age, presence of pigmentary abnormalities, and OCT-based drusen volume. RESULTS The predictive performance of the clinicians using CFP [AUC = 0.75; 95% confidence interval (CI) = 0.68-0.82] improved when using MMI (AUC = 0.79; 95% CI = 0.72-0.85; p = 0.034). However, a basic prediction model outperformed clinicians using either CFP or MMI (AUC = 0.85; 95% CI = 0.78-0.91; p ≤ 0.002). CONCLUSIONS Clinical performance for predicting late AMD development was improved by using MMI compared to CFP. However, a basic prediction model using well-established AMD risk factors outperformed retinal specialists, suggesting that such a model could further improve personalised counselling and monitoring of individuals with the early stages of AMD in clinical practice.
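The "basic prediction model" compared against the clinicians combines three tabular predictors, which is essentially a small regression exercise. The sketch below shows the general pattern (logistic regression plus AUC evaluation) on synthetic data; the simulated variables, coefficients, and choice of logistic regression are assumptions for illustration, not the study's actual model or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400

# Synthetic stand-ins for the three predictors named in the abstract: age,
# presence of pigmentary abnormalities, and OCT-based drusen volume.
age = rng.normal(72, 8, n)
pigment = rng.integers(0, 2, n)
drusen_vol = rng.lognormal(mean=-2.0, sigma=0.8, size=n)

# Synthetic outcome loosely tied to the predictors (illustrative only).
logit = -14 + 0.15 * age + 1.0 * pigment + 3.0 * drusen_vol
late_amd = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, pigment, drusen_vol])
X_tr, X_te, y_tr, y_te = train_test_split(X, late_amd, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}")
```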
Affiliation(s)
- Kai Lyn Goh: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Carla J Abbott: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Thomas G Campbell: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Amy C Cohn: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Dai Ni Ong: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Sanjeewa S Wickremasinghe: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Lauren A B Hodgson: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Robyn H Guymer: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Zhichao Wu: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
12
Holste G, Lin M, Zhou R, Wang F, Liu L, Yan Q, Van Tassel SH, Kovacs K, Chew EY, Lu Z, Wang Z, Peng Y. Harnessing the power of longitudinal medical imaging for eye disease prognosis using Transformer-based sequence modeling. NPJ Digit Med 2024; 7:216. PMID: 39152209. PMCID: PMC11329720. DOI: 10.1038/s41746-024-01207-4.
Abstract
Deep learning has enabled breakthroughs in automated diagnosis from medical imaging, with many successful applications in ophthalmology. However, standard medical image classification approaches only assess disease presence at the time of acquisition, neglecting the common clinical setting of longitudinal imaging. For slow, progressive eye diseases like age-related macular degeneration (AMD) and primary open-angle glaucoma (POAG), patients undergo repeated imaging over time to track disease progression, and forecasting the future risk of developing a disease is critical to properly plan treatment. Our proposed Longitudinal Transformer for Survival Analysis (LTSA) enables dynamic disease prognosis from longitudinal medical imaging, modeling the time to disease from sequences of fundus photography images captured over long, irregular time periods. Using longitudinal imaging data from the Age-Related Eye Disease Study (AREDS) and Ocular Hypertension Treatment Study (OHTS), LTSA significantly outperformed a single-image baseline in 19/20 head-to-head comparisons on late AMD prognosis and 18/20 comparisons on POAG prognosis. A temporal attention analysis also suggested that, while the most recent image is typically the most influential, prior imaging still provides additional prognostic value.
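A minimal sketch of this kind of Transformer-based sequence model is given below: per-visit image embeddings and their acquisition times are encoded jointly, attention runs across visits, and a linear head produces a risk score per visit. The dimensions, the simple time encoding, and the use of precomputed CNN features are illustrative assumptions, not the published LTSA architecture.

```python
import torch
import torch.nn as nn

class LongitudinalRiskModel(nn.Module):
    def __init__(self, feat_dim=256, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim + 1, d_model)   # +1 for the visit time (years)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, visit_feats, visit_times, pad_mask):
        # Concatenate each visit's image embedding with its acquisition time,
        # attend across the (irregularly spaced) visit sequence, and score each visit.
        x = torch.cat([visit_feats, visit_times.unsqueeze(-1)], dim=-1)
        h = self.encoder(self.proj(x), src_key_padding_mask=pad_mask)
        return self.head(h).squeeze(-1)                # one risk score per visit

# Toy batch: 2 eyes, up to 5 visits each, 256-dim image features per visit.
feats = torch.randn(2, 5, 256)
times = torch.tensor([[0.0, 0.5, 1.0, 1.5, 2.0], [0.0, 1.0, 2.5, 0.0, 0.0]])
pad = torch.tensor([[False] * 5, [False, False, False, True, True]])  # True = padded visit
risk = LongitudinalRiskModel()(feats, times, pad)      # shape (2, 5)
```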
Affiliation(s)
- Gregory Holste: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA; Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA
- Mingquan Lin: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA; Department of Surgery, University of Minnesota, Minneapolis, MN, USA
- Ruiwen Zhou: Center for Biostatistics and Data Science, Washington University School of Medicine, St. Louis, MO, USA
- Fei Wang: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Lei Liu: Center for Biostatistics and Data Science, Washington University School of Medicine, St. Louis, MO, USA
- Qi Yan: Department of Obstetrics & Gynecology, Columbia University Irving Medical Center, New York, NY, USA
- Sarah H Van Tassel: Israel Englander Department of Ophthalmology, Weill Cornell Medicine, New York, NY, USA
- Kyle Kovacs: Israel Englander Department of Ophthalmology, Weill Cornell Medicine, New York, NY, USA
- Emily Y Chew: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health (NIH), Bethesda, MD, USA
- Zhiyong Lu: National Center for Biotechnology Information (NCBI), National Library of Medicine (NLM), National Institutes of Health (NIH), Bethesda, MD, USA
- Zhangyang Wang: Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA
- Yifan Peng: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
13
Mathieu A, Ajana S, Korobelnik JF, Le Goff M, Gontier B, Rougier MB, Delcourt C, Delyfer MN. DeepAlienorNet: A deep learning model to extract clinical features from colour fundus photography in age-related macular degeneration. Acta Ophthalmol 2024; 102:e823-e830. PMID: 38345159. DOI: 10.1111/aos.16660.
Abstract
OBJECTIVE This study aimed to develop a deep learning (DL) model, named 'DeepAlienorNet', to automatically extract clinical signs of age-related macular degeneration (AMD) from colour fundus photography (CFP). METHODS AND ANALYSIS The ALIENOR Study is a cohort of French individuals 77 years of age or older. A multi-label DL model was developed to grade the presence of 7 clinical signs: large soft drusen (>125 μm), intermediate soft (63-125 μm), large area of soft drusen (total area >500 μm), presence of central soft drusen (large or intermediate), hyperpigmentation, hypopigmentation, and advanced AMD (defined as neovascular or atrophic AMD). Prediction performances were evaluated using cross-validation and the expert human interpretation of the clinical signs as the ground truth. RESULTS A total of 1178 images were included in the study. Averaging the 7 clinical signs' detection performances, DeepAlienorNet achieved an overall sensitivity, specificity, and AUROC of 0.77, 0.83, and 0.87, respectively. The model demonstrated particularly strong performance in predicting advanced AMD and large areas of soft drusen. It can also generate heatmaps, highlighting the relevant image areas for interpretation. CONCLUSION DeepAlienorNet demonstrates promising performance in automatically identifying clinical signs of AMD from CFP, offering several notable advantages. Its high interpretability reduces the black box effect, addressing ethical concerns. Additionally, the model can be easily integrated to automate well-established and validated AMD progression scores, and the user-friendly interface further enhances its usability. The main value of DeepAlienorNet lies in its ability to assist in precise severity scoring for further adapted AMD management, all while preserving interpretability.
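The multi-label grading task described above (seven clinical signs that can co-occur in the same photograph) maps naturally onto a CNN with one sigmoid output per sign and a binary cross-entropy loss. The sketch below shows that generic pattern; the ResNet-50 backbone, optimiser, and training details are assumptions, not the DeepAlienorNet configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

N_SIGNS = 7  # large soft drusen, intermediate soft drusen, large drusen area,
             # central soft drusen, hyperpigmentation, hypopigmentation, advanced AMD

# Backbone with a 7-way output head; in practice one would start from
# ImageNet-pretrained weights rather than weights=None.
backbone = models.resnet50(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, N_SIGNS)

criterion = nn.BCEWithLogitsLoss()   # independent binary loss per clinical sign
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

# One toy training step on random tensors standing in for fundus photographs.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4, N_SIGNS)).float()

logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

probs = torch.sigmoid(logits)        # per-sign probabilities for each image
```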
Affiliation(s)
- Alexis Mathieu: Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France; Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Soufiane Ajana: Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Jean-François Korobelnik: Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France; Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Mélanie Le Goff: Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France
- Brigitte Gontier: Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Cécile Delcourt: Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France; Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Marie-Noëlle Delyfer: Inserm, Bordeaux Population Health Research Center, UMR 1219, University of Bordeaux, Bordeaux, France; Service d'Ophtalmologie, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France; FRCRnet/FCRIN Network, Bordeaux, France
14
Holste G, Lin M, Zhou R, Wang F, Liu L, Yan Q, Van Tassel SH, Kovacs K, Chew EY, Lu Z, Wang Z, Peng Y. Harnessing the power of longitudinal medical imaging for eye disease prognosis using Transformer-based sequence modeling. ArXiv 2024; arXiv:2405.08780v2. PMID: 39371086. PMCID: PMC11451643.
Abstract
Deep learning has enabled breakthroughs in automated diagnosis from medical imaging, with many successful applications in ophthalmology. However, standard medical image classification approaches only assess disease presence at the time of acquisition, neglecting the common clinical setting of longitudinal imaging. For slow, progressive eye diseases like age-related macular degeneration (AMD) and primary open-angle glaucoma (POAG), patients undergo repeated imaging over time to track disease progression, and forecasting the future risk of developing a disease is critical to properly plan treatment. Our proposed Longitudinal Transformer for Survival Analysis (LTSA) enables dynamic disease prognosis from longitudinal medical imaging, modeling the time to disease from sequences of fundus photography images captured over long, irregular time periods. Using longitudinal imaging data from the Age-Related Eye Disease Study (AREDS) and Ocular Hypertension Treatment Study (OHTS), LTSA significantly outperformed a single-image baseline in 19/20 head-to-head comparisons on late AMD prognosis and 18/20 comparisons on POAG prognosis. A temporal attention analysis also suggested that, while the most recent image is typically the most influential, prior imaging still provides additional prognostic value.
Affiliation(s)
- Gregory Holste: Department of Population Health Sciences, Weill Cornell Medicine, NY, USA; Department of Electrical and Computer Engineering, The University of Texas at Austin, TX, USA
- Mingquan Lin: Department of Electrical and Computer Engineering, The University of Texas at Austin, TX, USA
- Ruiwen Zhou: Center for Biostatistics and Data Science, Washington University School of Medicine, St. Louis, MO, USA
- Fei Wang: Department of Population Health Sciences, Weill Cornell Medicine, NY, USA
- Lei Liu: Center for Biostatistics and Data Science, Washington University School of Medicine, St. Louis, MO, USA
- Qi Yan: Department of Obstetrics & Gynecology, Columbia University, New York, NY, USA
- Kyle Kovacs: Department of Ophthalmology, Weill Cornell Medicine, New York, USA
- Emily Y. Chew: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health (NIH), Bethesda, MD, USA
- Zhiyong Lu: National Center for Biotechnology Information (NCBI), National Library of Medicine (NLM), National Institutes of Health (NIH), Bethesda, MD, USA
- Zhangyang Wang: Department of Electrical and Computer Engineering, The University of Texas at Austin, TX, USA
- Yifan Peng: Department of Population Health Sciences, Weill Cornell Medicine, NY, USA
15
Lim JI, Rachitskaya AV, Hallak JA, Gholami S, Alam MN. Artificial intelligence for retinal diseases. Asia Pac J Ophthalmol (Phila) 2024; 13:100096. PMID: 39209215. DOI: 10.1016/j.apjo.2024.100096.
Abstract
PURPOSE To discuss the worldwide applications and potential impact of artificial intelligence (AI) for the diagnosis, management and analysis of treatment outcomes of common retinal diseases. METHODS We performed an online literature review, using PubMed Central (PMC), of AI applications to evaluate and manage retinal diseases. Search terms included AI for screening, diagnosis, monitoring, management, and treatment outcomes for age-related macular degeneration (AMD), diabetic retinopathy (DR), retinal surgery, retinal vascular disease, retinopathy of prematurity (ROP) and sickle cell retinopathy (SCR). Additional search terms included AI and color fundus photographs, optical coherence tomography (OCT), and OCT angiography (OCTA). We included original research articles and review articles. RESULTS Research studies have investigated and shown the utility of AI for screening for diseases such as DR, AMD, ROP, and SCR. Research studies using validated and labeled datasets confirmed AI algorithms could predict disease progression and response to treatment. Studies showed AI facilitated rapid and quantitative interpretation of retinal biomarkers seen on OCT and OCTA imaging. Research articles suggest AI may be useful for planning and performing robotic surgery. Studies suggest AI holds the potential to help lessen the impact of socioeconomic disparities on the outcomes of retinal diseases. CONCLUSIONS AI applications for retinal diseases can assist the clinician, not only by disease screening and monitoring for disease recurrence but also in quantitative analysis of treatment outcomes and prediction of treatment response. The public health impact on the prevention of blindness from DR, AMD, and other retinal vascular diseases remains to be determined.
Affiliation(s)
- Jennifer I Lim: University of Illinois at Chicago, College of Medicine, Department of Ophthalmology and Visual Sciences, Chicago, IL, United States
- Aleksandra V Rachitskaya: Department of Ophthalmology at Case Western Reserve University, Cleveland Clinic Lerner College of Medicine, Cleveland Clinic Cole Eye Institute, United States
- Joelle A Hallak: University of Illinois at Chicago, College of Medicine, Department of Ophthalmology and Visual Sciences, Chicago, IL, United States; Department of Ophthalmology and Visual Sciences, College of Medicine, University of Illinois at Chicago, Chicago, IL, United States
- Sina Gholami: University of North Carolina at Charlotte, United States
- Minhaj N Alam: University of North Carolina at Charlotte, United States
16
Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous 2024; 10:36. PMID: 38654344. PMCID: PMC11036694. DOI: 10.1186/s40942-024-00554-4.
Abstract
BACKGROUND Applications for artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay in posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged for fundoscopy to accomplish core tasks including segmentation, classification, and prediction. MAIN BODY In this article we provide a review of AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations. SHORT CONCLUSION As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Audrey Yan: Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Amrish Selvam: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Joshua Ong: Michigan Medicine, University of Michigan, Ann Arbor, USA
- Jay Chhablani: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
17
Kananen F, Immonen I. Retinal pigment epithelium-Bruch's membrane volume in grading of age-related macular degeneration. Int J Ophthalmol 2023; 16:1827-1831. PMID: 38028508. PMCID: PMC10626359. DOI: 10.18240/ijo.2023.11.14.
Abstract
AIM To assess the agreement of optical coherence tomography (OCT) algorithm-based retinal pigment epithelium-Bruch's membrane complex volume (RBV) with fundus photograph-based age-related macular degeneration (AMD) grading. METHODS Digital color fundus photographs (CFPs) and spectral domain OCT images were acquired from 96 elderly subjects. CFPs were graded according to the Age-Related Eye Disease Study (AREDS) classification. OCT image segmentation and RBV data calculation were done with Orion™ software. Univariate and multivariate analyses were performed to find out whether AMD lesion features were associated with higher RBVs. RESULTS RBV correlated with AMD grading (rs=0.338, P=0.001); the correlation was slightly stronger in early AMD (n=52; rs=0.432, P=0.001). RBV was higher in subjects with early AMD compared with those with no AMD lesions evident in fundus photographs (1.05±0.20 vs 0.96±0.13 mm3, P=0.023). In multivariate analysis, higher RBVs were associated significantly with higher total drusen (β=0.388, P=0.027) and pigmentation areas (β=0.319, P=0.020) in fundus photographs, whereas depigmentation area (β=-0.295, P=0.015) was associated with lower RBV. CONCLUSION RBV correlates with AMD grading status, with a stronger association in patients with moderate, non-late AMD grades. This effect is driven mostly by lesions with drusen or pigmentation; lesions with depigmentation tend to have lower values. RBV is a more comprehensive measurement of the key area of AMD pathogenesis than sole drusen volume analysis. RBV measurements are independent of grader variation and offer a possibility to quantify early and intermediate grade AMD lesions in a research setting, but may not substitute for fundus photograph-based grading across the whole AMD spectrum.
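The agreement statistic used here is a Spearman rank correlation between RBV and AREDS grade; a minimal sketch of that computation is shown below with invented values, purely to illustrate the call.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical RBV measurements (mm3) and AREDS grades for ten eyes.
rbv_mm3 = np.array([0.92, 0.95, 1.01, 0.98, 1.10, 1.05, 1.20, 0.99, 1.15, 1.08])
areds_grade = np.array([1, 1, 2, 1, 3, 2, 4, 2, 3, 3])

rs, p_value = spearmanr(rbv_mm3, areds_grade)   # rank correlation and its p-value
print(f"rs = {rs:.3f}, p = {p_value:.3f}")
```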
Collapse
Affiliation(s)
- Fabian Kananen
- Department of Ophthalmology, Örebro University Hospital, Örebro 70185, Sweden
- Department of Ophthalmology and Otorhinolaryngology, Helsinki University, Helsinki 00014, Finland
| | - Ilkka Immonen
- Department of Ophthalmology and Otorhinolaryngology, Helsinki University, Helsinki 00014, Finland
- Department of Ophthalmology, Helsinki University Central Hospital, Helsinki 00014, Finland
| |
Collapse
|
18
|
Daich Varela M, Sen S, De Guimaraes TAC, Kabiri N, Pontikos N, Balaskas K, Michaelides M. Artificial intelligence in retinal disease: clinical application, challenges, and future directions. Graefes Arch Clin Exp Ophthalmol 2023; 261:3283-3297. [PMID: 37160501 PMCID: PMC10169139 DOI: 10.1007/s00417-023-06052-x] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 03/20/2023] [Accepted: 03/24/2023] [Indexed: 05/11/2023] Open
Abstract
Retinal diseases are a leading cause of blindness in developed countries, accounting for the largest share of visually impaired children, working-age adults (inherited retinal disease), and elderly individuals (age-related macular degeneration). These conditions need specialised clinicians to interpret multimodal retinal imaging, with diagnosis and intervention potentially delayed. With an increasing and ageing population, this is becoming a global health priority. One solution is the development of artificial intelligence (AI) software to facilitate rapid data processing. Herein, we review research offering decision support for the diagnosis, classification, monitoring, and treatment of retinal disease using AI. We have prioritised diabetic retinopathy, age-related macular degeneration, inherited retinal disease, and retinopathy of prematurity. There is cautious optimism that these algorithms will be integrated into routine clinical practice to facilitate access to vision-saving treatments, improve efficiency of healthcare systems, and assist clinicians in processing the ever-increasing volume of multimodal data, thereby also liberating time for doctor-patient interaction and co-development of personalised management plans.
Collapse
Affiliation(s)
- Malena Daich Varela
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
| | | | | | | | - Nikolas Pontikos
- UCL Institute of Ophthalmology, London, UK
- Moorfields Eye Hospital, London, UK
| | | | - Michel Michaelides
- UCL Institute of Ophthalmology, London, UK.
- Moorfields Eye Hospital, London, UK.
| |
Collapse
|
19
|
Liu TYA, Ling C, Hahn L, Jones CK, Boon CJ, Singh MS. Prediction of visual impairment in retinitis pigmentosa using deep learning and multimodal fundus images. Br J Ophthalmol 2023; 107:1484-1489. [PMID: 35896367 PMCID: PMC10579177 DOI: 10.1136/bjo-2021-320897] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2021] [Accepted: 06/25/2022] [Indexed: 11/03/2022]
Abstract
BACKGROUND The efficiency of clinical trials for retinitis pigmentosa (RP) treatment is limited by the screening burden and lack of reliable surrogate markers for functional end points. Automated methods to determine visual acuity (VA) may help address these challenges. We aimed to determine if VA could be estimated using confocal scanning laser ophthalmoscopy (cSLO) imaging and deep learning (DL). METHODS Snellen corrected VA and cSLO imaging were obtained retrospectively. The Johns Hopkins University (JHU) dataset was used for 10-fold cross-validations and internal testing. The Amsterdam University Medical Centers (AUMC) dataset was used for external independent testing. Both datasets had the same exclusion criteria: visually significant media opacities and images not centred on the central macula. The JHU dataset included patients with RP with and without molecular confirmation. The AUMC dataset only included molecularly confirmed patients with RP. Using transfer learning, three versions of the ResNet-152 neural network were trained: infrared (IR), optical coherence tomography (OCT) and combined image (CI). RESULTS In internal testing (JHU dataset, 2569 images, 462 eyes, 231 patients), the area under the curve (AUC) for the binary classification task of distinguishing between Snellen VA 20/40 or better and worse than Snellen VA 20/40 was 0.83, 0.87 and 0.85 for IR, OCT and CI, respectively. In external testing (AUMC dataset, 349 images, 166 eyes, 83 patients), the AUC was 0.78, 0.87 and 0.85 for IR, OCT and CI, respectively. CONCLUSIONS Our algorithm showed robust performance in predicting visual impairment in patients with RP, thus providing proof-of-concept for predicting structure-function correlation based solely on cSLO imaging in patients with RP.
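The methods describe transfer learning with a ResNet-152 backbone to classify imaging into better versus worse than Snellen 20/40. The sketch below shows a generic way such a binary transfer-learning classifier can be configured in PyTorch; it is an illustration under assumed input sizes and hyperparameters, not the authors' implementation, and the data pipeline is replaced by random tensors.

```python
# Minimal transfer-learning sketch (PyTorch): ResNet-152 backbone with a
# two-class head, as one might configure for a "VA 20/40 or better vs worse"
# image classification task. Illustrative only; no claim to match the paper.
import torch
import torch.nn as nn
from torchvision import models

def build_binary_resnet152(freeze_backbone: bool = True) -> nn.Module:
    weights = models.ResNet152_Weights.IMAGENET1K_V1   # pretrained ImageNet weights
    model = models.resnet152(weights=weights)
    if freeze_backbone:
        for param in model.parameters():
            param.requires_grad = False
    # Replace the final fully connected layer with a 2-class head.
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

model = build_binary_resnet152()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

# One illustrative training step on a random batch (stand-in for real images).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(f"loss = {loss.item():.4f}")
```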
Collapse
Affiliation(s)
- Tin Yan Alvin Liu
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, Maryland, USA
| | - Carlthan Ling
- Department of Ophthalmology, University of Maryland Medical System, Baltimore, Maryland, USA
| | - Leo Hahn
- Department of Ophthalmology, Amsterdam UMC Locatie AMC, Amsterdam, The Netherlands
| | - Craig K Jones
- Malone Center for Engineering in Healthcare, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA
| | - Camiel Jf Boon
- Department of Ophthalmology, Amsterdam UMC Locatie AMC, Amsterdam, The Netherlands
- Department of Ophthalmology, Leiden University Medical Center, Leiden, The Netherlands
| | - Mandeep S Singh
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, Maryland, USA
- Department of Genetic Medicine, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
| |
Collapse
|
20
|
Rivail A, Vogl WD, Riedl S, Grechenig C, Coulibaly LM, Reiter GS, Guymer RH, Wu Z, Schmidt-Erfurth U, Bogunović H. Deep survival modeling of longitudinal retinal OCT volumes for predicting the onset of atrophy in patients with intermediate AMD. BIOMEDICAL OPTICS EXPRESS 2023; 14:2449-2464. [PMID: 37342683 PMCID: PMC10278641 DOI: 10.1364/boe.487206] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Revised: 03/30/2023] [Accepted: 04/10/2023] [Indexed: 06/23/2023]
Abstract
In patients with age-related macular degeneration (AMD), the risk of progression to late stages is highly heterogeneous, and the prognostic imaging biomarkers remain unclear. We propose a deep survival model to predict progression towards the late atrophic stage of AMD. The model combines the advantages of survival modelling, accounting for time-to-event and censoring, with the advantages of deep learning, generating predictions from raw 3D OCT scans without the need to extract a predefined set of quantitative biomarkers. In an extensive set of evaluations based on two large longitudinal datasets, with 231 eyes from 121 patients for internal evaluation and 280 eyes from 140 patients for external evaluation, we demonstrate that this model improves risk estimation performance over standard deep learning classification models.
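The model described couples a deep image encoder with survival modelling that accounts for time-to-event data and censoring. As an illustration of the survival component only, the sketch below implements a negative Cox partial log-likelihood over network-predicted risk scores; this is a generic formulation with a placeholder encoder, not the authors' 3D OCT architecture.

```python
# Sketch of a Cox partial-likelihood loss for deep survival modelling with
# right-censoring. The "encoder" is a toy stand-in for a 3D OCT feature extractor.
import torch
import torch.nn as nn

def neg_cox_partial_log_likelihood(risk: torch.Tensor,
                                   time: torch.Tensor,
                                   event: torch.Tensor) -> torch.Tensor:
    """risk: (N,) predicted log-risk scores; time: (N,) follow-up times;
    event: (N,) 1 if conversion to atrophy observed, 0 if censored."""
    order = torch.argsort(time, descending=True)   # sort so risk sets are cumulative
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)   # log-sum-exp of risk over each risk set
    return -torch.sum((risk - log_cumsum) * event) / event.sum().clamp(min=1)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 16, 64), nn.ReLU(), nn.Linear(64, 1))

volumes = torch.randn(12, 16, 32, 32)          # toy stand-in for OCT volumes
time = torch.rand(12) * 5.0                    # years of follow-up
event = torch.randint(0, 2, (12,)).float()     # 1 = atrophy onset observed

risk = encoder(volumes).squeeze(-1)
loss = neg_cox_partial_log_likelihood(risk, time, event)
loss.backward()
print(f"negative partial log-likelihood = {loss.item():.4f}")
```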
Collapse
Affiliation(s)
- Antoine Rivail
- Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Wolf-Dieter Vogl
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Sophie Riedl
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Christoph Grechenig
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Leonard M. Coulibaly
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Gregor S. Reiter
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Robyn H. Guymer
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
| | - Zhichao Wu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
| | - Ursula Schmidt-Erfurth
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - Hrvoje Bogunović
- Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| |
Collapse
|
21
|
Huang Z, Zhao X, Ziv O, Laurita KR, Rollins AM, Hendon CP. Automated analysis framework for in vivo cardiac ablation therapy monitoring with optical coherence tomography. BIOMEDICAL OPTICS EXPRESS 2023; 14:1228-1242. [PMID: 36950243 PMCID: PMC10026573 DOI: 10.1364/boe.480943] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 02/12/2023] [Accepted: 02/16/2023] [Indexed: 06/18/2023]
Abstract
Radiofrequency ablation (RFA) is a minimally invasive procedure that is commonly used for the treatment of atrial fibrillation. However, it is associated with a significant risk of arrhythmia recurrence and complications owing to the lack of direct visualization of cardiac substrates and real-time feedback on ablation lesion transmurality. Within this manuscript, we present an automated deep learning framework for in vivo intracardiac optical coherence tomography (OCT) analysis of swine left atria. Our model can accurately identify cardiac substrates, monitor catheter-tissue contact stability, and assess lesion transmurality on both OCT intensity and polarization-sensitive OCT data. To the best of our knowledge, we have developed the first automatic framework for in vivo cardiac OCT analysis, which holds promise for real-time monitoring and guidance of cardiac RFA therapy.
Collapse
Affiliation(s)
- Ziyi Huang
- Department of Electrical Engineering, Columbia University, New York, NY, USA
| | - Xiaowei Zhao
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Ohad Ziv
- School of Medicine, Case Western Reserve University, Cleveland, OH, USA
- Heart and Vascular Research Center, MetroHealth Campus, Case Western Reserve University, Cleveland, OH, USA
| | - Kenneth R. Laurita
- Heart and Vascular Research Center, MetroHealth Campus, Case Western Reserve University, Cleveland, OH, USA
| | - Andrew M. Rollins
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Christine P. Hendon
- Department of Electrical Engineering, Columbia University, New York, NY, USA
| |
Collapse
|
22
|
Xie L, Vaghefi E, Yang S, Han D, Marshall J, Squirrell D. Automation of Macular Degeneration Classification in the AREDS Dataset, Using a Novel Neural Network Design. Clin Ophthalmol 2023; 17:455-469. [PMID: 36755888 PMCID: PMC9901462 DOI: 10.2147/opth.s396537] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Accepted: 01/12/2023] [Indexed: 02/04/2023] Open
Abstract
Purpose To create an ensemble of Convolutional Neural Networks (CNNs) capable of detecting and stratifying the risk of progressive age-related macular degeneration (AMD) from retinal photographs. Design Retrospective cohort study. Methods Three individual CNNs were developed and trained to accurately detect 1) advanced AMD, 2) drusen size, and 3) the presence or absence of pigmentary abnormalities from macula-centered retinal images. The CNNs were then arranged in a "cascading" architecture to calculate the Age-Related Eye Disease Study (AREDS) Simplified 5-level Severity Score (Risk Score 0 - Risk Score 4) for test images. The process was repeated to create a simplified binary classification of "low risk" (Risk Scores 0-2) and "high risk" (Risk Scores 3-4). Participants There were a total of 188,006 images, of which 118,254 were deemed gradable, representing 4591 patients from the AREDS1 dataset. The gradable images were split 50%/25%/25% for training, validation, and testing. Main Outcome Measures The ability of the ensemble of CNNs, using retinal images, to predict an individual's risk of AMD progression based on the AREDS 5-step Simplified Severity Scale. Results When assessed against the 5-step Simplified Severity Scale, the ensemble of CNNs achieved an accuracy of 80.43% (quadratic kappa 0.870). When assessed against the simplified binary (low risk/high risk) classification, an accuracy of 98.08%, sensitivity of ≥85%, and specificity of ≥99% were achieved. Conclusion We have created an ensemble of neural networks, trained on the AREDS1 dataset, that is able to accurately calculate an individual's score on the AREDS 5-step Simplified Severity Scale for AMD. If the results presented were replicated, this ensemble of CNNs could be used as a screening tool with the potential to significantly improve health outcomes by identifying asymptomatic individuals who would benefit from AREDS2 macular supplements.
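The ensemble described arranges three CNN outputs in a cascade that yields the AREDS Simplified Severity Score and a binary low/high-risk label. The sketch below illustrates how such a cascade's decision logic could be composed once per-eye predictions are available; the scoring rules encoded here follow my reading of the published AREDS simplified scale (one risk factor per eye for large drusen and one for pigmentary abnormalities, advanced AMD counting as two factors, and bilateral intermediate drusen counting as one) and should be checked against the paper before reuse. The EyePrediction fields stand in for the three CNN outputs.

```python
# Illustrative cascade (hypothetical helper names): combine per-eye CNN
# predictions into an AREDS Simplified Severity Score (0-4) and a binary
# low/high-risk label. The scoring rules below are my reading of the AREDS
# simplified scale and should be verified against the source.
from dataclasses import dataclass

@dataclass
class EyePrediction:
    advanced_amd: bool          # output of CNN 1
    drusen: str                 # output of CNN 2: "none", "intermediate", or "large"
    pigment_abnormality: bool   # output of CNN 3

def simplified_severity_score(right: EyePrediction, left: EyePrediction) -> int:
    score = 0
    for eye in (right, left):
        if eye.advanced_amd:
            score += 2                      # advanced AMD counted as two risk factors (assumption)
            continue
        score += int(eye.drusen == "large")
        score += int(eye.pigment_abnormality)
    # Bilateral intermediate drusen (without large drusen) adds one factor (assumption).
    if (right.drusen == "intermediate" and left.drusen == "intermediate"
            and not (right.advanced_amd or left.advanced_amd)):
        score += 1
    return min(score, 4)

def risk_group(score: int) -> str:
    return "high risk" if score >= 3 else "low risk"   # binary split used in the abstract

od = EyePrediction(advanced_amd=False, drusen="large", pigment_abnormality=True)
os_ = EyePrediction(advanced_amd=False, drusen="intermediate", pigment_abnormality=False)
score = simplified_severity_score(od, os_)
print(score, risk_group(score))   # e.g. 2 low risk
```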
Collapse
Affiliation(s)
- Li Xie
- Toku Eyes Limited, Auckland, New Zealand
| | - Ehsan Vaghefi
- Toku Eyes Limited, Auckland, New Zealand,School of Optometry and Vision Science, The University of Auckland, Auckland, New Zealand
| | - Song Yang
- Toku Eyes Limited, Auckland, New Zealand
| | - David Han
- Toku Eyes Limited, Auckland, New Zealand,School of Optometry and Vision Science, The University of Auckland, Auckland, New Zealand
| | | | - David Squirrell
- Toku Eyes Limited, Auckland, New Zealand,Department of Ophthalmology, Auckland District Health Board, Auckland, New Zealand
| |
Collapse
|
23
|
Sivaprasad S, Chandra S, Kwon J, Khalid N, Chong V. Perspectives from clinical trials: is geographic atrophy one disease? Eye (Lond) 2023; 37:402-407. [PMID: 35641821 PMCID: PMC9905504 DOI: 10.1038/s41433-022-02115-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Revised: 04/27/2022] [Accepted: 05/13/2022] [Indexed: 11/09/2022] Open
Abstract
Geographic atrophy (GA) is currently an untreatable condition. Emerging evidence from recent clinical trials shows that anti-complement therapy may be a successful treatment option. However, several trials in this therapy area have also failed. This raises several questions. Firstly, does complement therapy work for all patients with GA? Secondly, is GA one disease? Are these failed clinical trials due to ineffective interventions, or are they due to flawed clinical trial designs, heterogeneity in GA progression rates, or differences in study cohorts? In this article we try to answer these questions by providing an overview of the challenges of designing and interpreting outcomes of randomised controlled trials (RCTs) in GA. These include differing inclusion-exclusion criteria, heterogeneous progression rates of the disease, outcome choices, and confounders.
Collapse
Affiliation(s)
- Sobha Sivaprasad
- National Institute of Health Research Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK.
- University College London, Institute of Ophthalmology, London, UK.
| | - Shruti Chandra
- National Institute of Health Research Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- University College London, Institute of Ophthalmology, London, UK
| | - Jeha Kwon
- Oxford University Hospitals NHS Trust, Oxford, UK
| | | | - Victor Chong
- University College London, Institute of Ophthalmology, London, UK
| |
Collapse
|
24
|
The Need for Artificial Intelligence Based Risk Factor Analysis for Age-Related Macular Degeneration: A Review. Diagnostics (Basel) 2022; 13:diagnostics13010130. [PMID: 36611422 PMCID: PMC9818762 DOI: 10.3390/diagnostics13010130] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Revised: 12/16/2022] [Accepted: 12/22/2022] [Indexed: 01/04/2023] Open
Abstract
In epidemiology, a risk factor is a variable associated with increased disease risk. Understanding the role of risk factors is significant for developing a strategy to improve global health. There is strong evidence that risk factors such as smoking, alcohol consumption, previous cataract surgery, age, high-density lipoprotein (HDL) cholesterol, BMI, female gender, and focal hyper-pigmentation are independently associated with age-related macular degeneration (AMD). Currently, statistical techniques such as logistic regression and multivariable logistic regression are used in the literature to identify AMD risk factors from numerical/categorical data. However, artificial intelligence (AI) techniques have not yet been used in the literature to identify risk factors for AMD. AI-based tools can anticipate when a person is at risk of developing chronic diseases such as cancer, dementia, and asthma, thereby supporting personalized care. AI-based techniques can employ numerical/categorical and/or image data, enabling multimodal data analysis, which argues for the use of AI-based tools for risk factor analysis in ophthalmology. This review summarizes the statistical techniques used to identify various risk factors and the additional benefits that AI techniques provide for AMD-related disease prediction. Additional studies are required to review different techniques for risk factor identification for other ophthalmic diseases such as glaucoma, diabetic macular edema, retinopathy of prematurity, cataract, and diabetic retinopathy.
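Since the review contrasts conventional multivariable logistic regression with AI-based approaches to risk-factor analysis, the sketch below shows the conventional baseline on hypothetical tabular data: fitting a logistic model for AMD status against a few of the risk factors named above and reading odds ratios off the coefficients. The variables and simulated data are illustrative only.

```python
# Baseline statistical approach referenced in the review: multivariable
# logistic regression on tabular risk factors (hypothetical data), with
# odds ratios derived from the fitted coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
age = rng.normal(70, 8, n)
smoking = rng.integers(0, 2, n)
bmi = rng.normal(27, 4, n)
female = rng.integers(0, 2, n)

# Simulated outcome: AMD more likely with age and smoking (illustrative only).
logit = -12 + 0.15 * age + 0.8 * smoking + 0.02 * bmi + 0.1 * female
amd = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, smoking, bmi, female]))
model = sm.Logit(amd, X).fit(disp=False)
odds_ratios = np.exp(model.params)
for name, or_ in zip(["intercept", "age", "smoking", "bmi", "female"], odds_ratios):
    print(f"{name}: OR = {or_:.2f}")
```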
Collapse
|
25
|
Ganjdanesh A, Zhang J, Yan S, Chen W, Huang H. Multimodal Genotype and Phenotype Data Integration to Improve Partial Data-Based Longitudinal Prediction. J Comput Biol 2022; 29:1324-1345. [PMID: 36383766 PMCID: PMC9835299 DOI: 10.1089/cmb.2022.0378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Multimodal data analysis has recently attracted ever-increasing attention in the computational biology and bioinformatics community. However, existing multimodal learning approaches need all data modalities to be available at both the training and prediction stages, so they cannot be applied to many real-world biomedical applications, which often face a missing-modality problem because collecting all modalities is prohibitively costly. Meanwhile, two diagnosis-related pieces of information are of main interest during the examination of a subject with a chronic, longitudinally progressing disease: their current status (diagnosis) and how it will change before the next visit (longitudinal outcome). Correct responses to these queries can identify susceptible individuals and provide the means for early intervention. In this article, we develop a novel adversarial mutual learning framework for longitudinal disease progression prediction, allowing us to leverage multiple data modalities available for training to train a performant model that uses a single modality for prediction. Specifically, in our framework, a single-modal model (which utilizes the main modality) learns from a pretrained multimodal model (which accepts both main and auxiliary modalities as input) in a mutual learning manner to (1) infer outcome-related representations of the auxiliary modalities based on its own representations of the main modality during adversarial training and (2) successfully combine them to predict the longitudinal outcome. We apply our method to analyze retinal imaging genetics for the early diagnosis of age-related macular degeneration (AMD), that is, simultaneous assessment of the severity of AMD at the current visit and the prognosis of the condition at the subsequent visit. Our experiments using the Age-Related Eye Disease Study dataset show that our method is more effective than baselines at classifying patients' current AMD severity and forecasting their future severity.
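The core idea, training a model that needs only the main modality at prediction time while borrowing information from auxiliary modalities during training, is related to teacher-student knowledge transfer. The sketch below shows a much-simplified, non-adversarial version of that idea (a single-modality student matching both the labels and a frozen multimodal teacher's soft predictions); it is not the authors' adversarial mutual learning framework, only a generic illustration of cross-modal transfer with invented dimensions.

```python
# Simplified cross-modal knowledge transfer (not the paper's adversarial
# scheme): a student that sees only the imaging modality is trained to match
# both the labels and the soft predictions of a frozen multimodal teacher.
import torch
import torch.nn as nn
import torch.nn.functional as F

img_dim, gen_dim, n_classes = 64, 32, 3

teacher = nn.Sequential(nn.Linear(img_dim + gen_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))
student = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU(), nn.Linear(128, n_classes))
teacher.eval()                      # pretend the multimodal teacher is already trained
for p in teacher.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

imaging = torch.randn(16, img_dim)      # main modality (e.g. image-derived features)
genotype = torch.randn(16, gen_dim)     # auxiliary modality, available only at training
labels = torch.randint(0, n_classes, (16,))

with torch.no_grad():
    teacher_logits = teacher(torch.cat([imaging, genotype], dim=1))

student_logits = student(imaging)
task_loss = F.cross_entropy(student_logits, labels)
distill_loss = F.kl_div(F.log_softmax(student_logits, dim=1),
                        F.softmax(teacher_logits, dim=1),
                        reduction="batchmean")
loss = task_loss + 0.5 * distill_loss
loss.backward()
optimizer.step()
print(f"task = {task_loss.item():.3f}, distill = {distill_loss.item():.3f}")
```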
Collapse
Affiliation(s)
- Alireza Ganjdanesh
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| | - Jipeng Zhang
- Department of Biostatistics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| | - Sarah Yan
- West Windsor-Plainsboro High School South, Princeton Junction, New Jersey, USA
| | - Wei Chen
- Department of Biostatistics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Department of Pediatrics, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Department of Human Genetics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| | - Heng Huang
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| |
Collapse
|
26
|
Primary Open-Angle Glaucoma Diagnosis From Optic Disc Photographs Using a Siamese Network. OPHTHALMOLOGY SCIENCE 2022; 2:100209. [PMID: 36531584 PMCID: PMC9754976 DOI: 10.1016/j.xops.2022.100209] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 08/01/2022] [Accepted: 08/05/2022] [Indexed: 11/20/2022]
Abstract
Purpose Primary open-angle glaucoma (POAG) is one of the leading causes of irreversible blindness in the United States and worldwide. Although deep learning methods have been proposed to diagnose POAG, these methods all used a single image as input. Contrastingly, glaucoma specialists typically compare the follow-up image with the baseline image to diagnose incident glaucoma. To simulate this process, we proposed a Siamese neural network, POAGNet, to detect POAG from optic disc photographs. Design The POAGNet, an algorithm for glaucoma diagnosis, is developed using optic disc photographs. Participants The POAGNet was trained and evaluated on 2 data sets: (1) 37 339 optic disc photographs from 1636 Ocular Hypertension Treatment Study (OHTS) participants and (2) 3684 optic disc photographs from the Sequential fundus Images for Glaucoma (SIG) data set. Gold standard labels were obtained using reading center grades. Methods We proposed a Siamese network model, POAGNet, to simulate the clinical process of identifying POAG from optic disc photographs. The POAGNet consists of 2 side outputs for deep supervision and uses convolution to measure the similarity between 2 networks. Main Outcome Measures The main outcome measures are the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity. Results In POAG diagnosis, extensive experiments show that POAGNet performed better than the best state-of-the-art model on the OHTS test set (area under the curve [AUC] 0.9587 versus 0.8750). It also outperformed the baseline models on the SIG test set (AUC 0.7518 versus 0.6434). To assess the transferability of POAGNet, we also validated the impact of cross-data set variability on our model. The model trained on OHTS achieved an AUC of 0.7490 on SIG, comparable to the previous model trained on the same data set. When using the combination of SIG and OHTS for training, our model achieved superior AUC to the single-data model (AUC 0.8165 versus 0.7518). These demonstrate the relative generalizability of POAGNet. Conclusions By simulating the clinical grading process, POAGNet demonstrated high accuracy in POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. The POAGNet is publicly available on https://github.com/bionlplab/poagnet.
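POAGNet is described as a Siamese network that compares a follow-up optic disc photograph against its baseline. The sketch below shows the generic Siamese pattern that description implies, a shared CNN encoder applied to both images with the paired features fused for a binary decision; the published model's convolution-based similarity measure and deep-supervision branches are not reproduced here, and the backbone choice is an assumption.

```python
# Generic Siamese pattern: one shared encoder applied to a baseline and a
# follow-up image, with the paired features fused for a binary prediction.
# Illustrative only; not the published POAGNet architecture.
import torch
import torch.nn as nn
from torchvision import models

class SiameseDiscClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                 # expose 512-d features
        self.encoder = backbone                     # shared weights for both images
        self.head = nn.Sequential(nn.Linear(512 * 2, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, baseline: torch.Tensor, follow_up: torch.Tensor) -> torch.Tensor:
        f_base = self.encoder(baseline)
        f_follow = self.encoder(follow_up)
        return self.head(torch.cat([f_base, f_follow], dim=1)).squeeze(-1)

model = SiameseDiscClassifier()
baseline = torch.randn(4, 3, 224, 224)
follow_up = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,)).float()          # 1 = incident POAG (toy labels)

logits = model(baseline, follow_up)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()
print(f"loss = {loss.item():.4f}")
```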
Collapse
|
27
|
Lee J, Wanyan T, Chen Q, Keenan TDL, Glicksberg BS, Chew EY, Lu Z, Wang F, Peng Y. Predicting Age-related Macular Degeneration Progression with Longitudinal Fundus Images Using Deep Learning. MACHINE LEARNING IN MEDICAL IMAGING. MLMI (WORKSHOP) 2022; 13583:11-20. [PMID: 36656604 PMCID: PMC9842432 DOI: 10.1007/978-3-031-21014-3_2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Accurately predicting a patient's risk of progressing to late age-related macular degeneration (AMD) is difficult but crucial for personalized medicine. While existing risk prediction models for progression to late AMD are useful for triaging patients, none utilizes longitudinal color fundus photographs (CFPs) in a patient's history to estimate the risk of late AMD in a given subsequent time interval. In this work, we seek to evaluate how deep neural networks capture the sequential information in longitudinal CFPs and improve the prediction of 2-year and 5-year risk of progression to late AMD. Specifically, we proposed two deep learning models, CNN-LSTM and CNN-Transformer, which use a Long Short-Term Memory (LSTM) network and a Transformer, respectively, with convolutional neural networks (CNNs) to capture the sequential information in longitudinal CFPs. We evaluated our models against baselines on the Age-Related Eye Disease Study, one of the largest longitudinal AMD cohorts with CFPs. The proposed models outperformed baseline models that utilized only single-visit CFPs to predict the risk of late AMD (AUC 0.879 vs 0.868 for 2-year prediction, and 0.879 vs 0.862 for 5-year prediction). Further experiments showed that utilizing longitudinal CFPs over a longer time period helped deep learning models predict the risk of late AMD. We made the source code available at https://github.com/bionlplab/AMD_prognosis_mlmi2022 to catalyze future work that seeks to develop deep learning models for late AMD prediction.
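The models described pair a CNN image encoder with a sequence model over a patient's longitudinal fundus photographs. The sketch below shows the CNN-LSTM variant in skeletal form: encode each visit's image, run the sequence of embeddings through an LSTM, and classify from the final hidden state. It is a generic pattern under assumed tensor shapes and backbone, not the authors' implementation.

```python
# Skeletal CNN-LSTM for longitudinal fundus images: per-visit CNN embeddings
# are fed to an LSTM, and the last hidden state predicts late-AMD risk.
import torch
import torch.nn as nn
from torchvision import models

class CNNLSTMRiskModel(nn.Module):
    def __init__(self, embed_dim: int = 512, hidden_dim: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                  # 512-d visit embedding
        self.cnn = backbone
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, visits: torch.Tensor) -> torch.Tensor:
        # visits: (batch, n_visits, 3, H, W)
        b, t = visits.shape[:2]
        feats = self.cnn(visits.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.classifier(h_n[-1]).squeeze(-1)  # logit for progression risk

model = CNNLSTMRiskModel()
visits = torch.randn(2, 4, 3, 224, 224)              # 2 patients, 4 visits each
labels = torch.tensor([0.0, 1.0])                    # toy late-AMD outcomes
logits = model(visits)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()
print(logits.shape, loss.item())
```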
Collapse
Affiliation(s)
- Junghwan Lee
- Columbia University, New York, USA,Weill Cornell Medicine, New York, USA
| | - Tingyi Wanyan
- Indiana University, Bloomington, USA,Ichan School of Medicine at Mount Sinai, New York, USA,Weill Cornell Medicine, New York, USA
| | - Qingyu Chen
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, USA
| | | | | | - Emily Y. Chew
- National Eye Institute, National Institutes of Health, Bethesda, USA
| | - Zhiyong Lu
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, USA
| | - Fei Wang
- Weill Cornell Medicine, New York, USA
| | | |
Collapse
|
28
|
Charng J, Alam K, Swartz G, Kugelman J, Alonso-Caneiro D, Mackey DA, Chen FK. Deep learning: applications in retinal and optic nerve diseases. Clin Exp Optom 2022:1-10. [PMID: 35999058 DOI: 10.1080/08164622.2022.2111201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022] Open
Abstract
Deep learning (DL) represents a paradigm-shifting, burgeoning field of research with emerging clinical applications in optometry. Unlike traditional programming, which relies on human-set rules, DL works by exposing the algorithm to a large amount of annotated data and allowing the software to develop its own set of rules (i.e. learn) by adjusting the parameters inside the model (network) during a training process, in order to complete the task on its own. One major limitation of traditional programming is that, with complex tasks, it may require an extensive set of rules to accurately complete the assignment. Additionally, traditional programming can be susceptible to human bias arising from programmer experience. With the dramatic increase in the amount and complexity of clinical data, DL has been utilised to automate data analysis and thus assist clinicians in patient management. This review presents the latest advances in DL for managing posterior eye diseases, as well as DL-based solutions for patients with vision loss.
Collapse
Affiliation(s)
- Jason Charng
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia.,Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
| | - Khyber Alam
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
| | - Gavin Swartz
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
| | - Jason Kugelman
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
| | - David Alonso-Caneiro
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia.,School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
| | - David A Mackey
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia.,Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia.,Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
| | - Fred K Chen
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia.,Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia.,Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia.,Department of Ophthalmology, Royal Perth Hospital, Western Australia, Perth, Australia
| |
Collapse
|
29
|
Lin M, Hou B, Liu L, Gordon M, Kass M, Wang F, Van Tassel SH, Peng Y. Automated diagnosing primary open-angle glaucoma from fundus image by simulating human's grading with deep learning. Sci Rep 2022; 12:14080. [PMID: 35982106 PMCID: PMC9388536 DOI: 10.1038/s41598-022-17753-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Accepted: 07/30/2022] [Indexed: 11/09/2022] Open
Abstract
Primary open-angle glaucoma (POAG) is a leading cause of irreversible blindness worldwide. Although deep learning methods have been proposed to diagnose POAG, it remains challenging to develop a robust and explainable algorithm to automatically facilitate the downstream diagnostic tasks. In this study, we present an automated classification algorithm, GlaucomaNet, to identify POAG using variable fundus photographs from different populations and settings. GlaucomaNet consists of two convolutional neural networks that simulate the human grading process: learning the discriminative features and fusing the features for grading. We evaluated GlaucomaNet on two datasets: Ocular Hypertension Treatment Study (OHTS) participants and the Large-scale Attention-based Glaucoma (LAG) dataset. GlaucomaNet achieved the highest AUCs of 0.904 and 0.997 for POAG diagnosis on the OHTS and LAG datasets, respectively. An ensemble of network architectures further improved diagnostic accuracy. By simulating the human grading process, GlaucomaNet demonstrated high accuracy with increased transparency in POAG diagnosis (comprehensiveness scores of 97% and 36%). These methods also address two well-known challenges in the field: the need for increased image data diversity and the heavy reliance on perimetry for POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. GlaucomaNet is publicly available at https://github.com/bionlplab/GlaucomaNet.
Collapse
Affiliation(s)
- Mingquan Lin
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
| | - Bojian Hou
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
| | - Lei Liu
- Institute for Public Health, Washington University School of Medicine, St. Louis, MO, USA
| | - Mae Gordon
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, USA
| | - Michael Kass
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, USA
| | - Fei Wang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA.
| | | | - Yifan Peng
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA.
| |
Collapse
|
30
|
Chen S, Liu G, Liu X, Wang Y, He F, Nie D, Liu X, Liu X. RNA-seq analysis reveals differentially expressed inflammatory chemokines in a rat retinal degeneration model induced by sodium iodate. J Int Med Res 2022; 50:3000605221119376. [PMID: 36036255 PMCID: PMC9434683 DOI: 10.1177/03000605221119376] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Accepted: 07/22/2022] [Indexed: 11/23/2022] Open
Abstract
OBJECTIVE Retinal degeneration (RD) is a group of serious blinding eye diseases characterized by photoreceptor cell apoptosis and progressive degeneration of retinal neurons. However, the underlying mechanism of its pathogenesis remains unclear. METHODS In this study, retinal tissues from sodium iodate (NaIO3)-induced RD and control rats were collected for transcriptome analysis using RNA-sequencing (RNA-seq). Analysis of white blood cell-related parameters was conducted in patients with retinitis pigmentosa (RP) and age-related cataract (ARC) patients. RESULTS In total, 334 mRNAs, 77 long non-coding RNAs (lncRNAs), and 20 other RNA types were identified as differentially expressed in the retinas of NaIO3-induced RD rats. Gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses showed that differentially expressed mRNAs were mainly enriched in signaling pathways related to immune inflammation. Moreover, we found that the neutrophil-to-lymphocyte ratio was significantly higher in RP patients than in ARC patients. CONCLUSION Overall, this study suggests that multiple chemokines participating in systemic inflammation may contribute to RD pathogenesis.
Collapse
Affiliation(s)
- Sheng Chen
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, Guangdong, China
| | - Guo Liu
- The Sichuan Provincial Key Laboratory for Human Disease Gene Study, Sichuan Provincial People’s Hospital, School of Medicine, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
| | - Xin Liu
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, Guangdong, China
| | - Yun Wang
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, Guangdong, China
| | - Fen He
- Shenzhen Aier Eye Hospital Affiliated to Jinan University, Shenzhen, Guangdong, China
| | - Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, Guangdong, China
| | - Xinhua Liu
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, Guangdong, China
| | - Xuyang Liu
- Xiamen Eye Center, Xiamen University, Xiamen, Fujian, China
- Department of Ophthalmology, Shenzhen People’s Hospital, the 2nd Clinical Medical College, Jinan University, Shenzhen, China
| |
Collapse
|
31
|
Alexopoulos P, Madu C, Wollstein G, Schuman JS. The Development and Clinical Application of Innovative Optical Ophthalmic Imaging Techniques. Front Med (Lausanne) 2022; 9:891369. [PMID: 35847772 PMCID: PMC9279625 DOI: 10.3389/fmed.2022.891369] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 05/23/2022] [Indexed: 11/22/2022] Open
Abstract
The field of ophthalmic imaging has grown substantially over the last years. Massive improvements in image processing and computer hardware have allowed the emergence of multiple imaging techniques of the eye that can transform patient care. The purpose of this review is to describe the most recent advances in eye imaging and explain how new technologies and imaging methods can be utilized in a clinical setting. The introduction of optical coherence tomography (OCT) was a revolution in eye imaging and has since become the standard of care for a plethora of conditions. Its most recent iterations, OCT angiography and visible-light OCT, as well as imaging modalities such as fluorescence lifetime imaging ophthalmoscopy, would allow a more thorough evaluation of patients and provide additional information on disease processes. Toward that goal, the application of adaptive optics (AO) and full-field scanning to a variety of eye imaging techniques has further allowed the histologic study of single cells in the retina and anterior segment. Toward the goal of remote eye care and more accessible eye imaging, methods such as handheld OCT devices and imaging through smartphones have emerged. Finally, incorporating artificial intelligence (AI) into eye imaging has the potential to become a new milestone for the field while also contributing to the social aspects of eye care.
Collapse
Affiliation(s)
- Palaiologos Alexopoulos
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
| | - Chisom Madu
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
| | - Gadi Wollstein
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
- Center for Neural Science, College of Arts & Science, New York University, New York, NY, United States
| | - Joel S. Schuman
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
- Center for Neural Science, College of Arts & Science, New York University, New York, NY, United States
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
| |
Collapse
|
32
|
Liu TYA, Wu JH. The Ethical and Societal Considerations for the Rise of Artificial Intelligence and Big Data in Ophthalmology. Front Med (Lausanne) 2022; 9:845522. [PMID: 35836952 PMCID: PMC9273876 DOI: 10.3389/fmed.2022.845522] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2021] [Accepted: 06/10/2022] [Indexed: 01/09/2023] Open
Abstract
Medical specialties with access to a large amount of imaging data, such as ophthalmology, have been at the forefront of the artificial intelligence (AI) revolution in medicine, driven by deep learning (DL) and big data. With the rise of AI and big data, there has also been increasing concern about the issues of bias and privacy, which can be partially addressed by low-shot learning, generative DL, federated learning, and a "model-to-data" approach, as demonstrated by various groups of investigators in ophthalmology. However, to adequately tackle the ethical and societal challenges associated with the rise of AI in ophthalmology, a more comprehensive approach is preferable. Specifically, AI should be viewed as sociotechnical, meaning this technology shapes, and is shaped by, social phenomena.
Collapse
Affiliation(s)
- T. Y. Alvin Liu
- Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, United States,*Correspondence: T. Y. Alvin Liu
| | - Jo-Hsuan Wu
- Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, CA, United States
| |
Collapse
|
33
|
García-Layana A, López-Gálvez M, García-Arumí J, Arias L, Gea-Sánchez A, Marín-Méndez JJ, Sayar-Beristain O, Sedano-Gil G, Aslam TM, Minnella AM, Ibáñez IL, de Dios Hernández JM, Seddon JM. A Screening Tool for Self-Evaluation of Risk for Age-Related Macular Degeneration: Validation in a Spanish Population. Transl Vis Sci Technol 2022; 11:23. [PMID: 35749108 PMCID: PMC9234358 DOI: 10.1167/tvst.11.6.23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
Purpose The objectives of this study were the creation and validation of a screening tool for age-related macular degeneration (AMD) for routine assessment by primary care physicians, ophthalmologists, other healthcare professionals, and the general population. Methods A simple, self-administered questionnaire (Simplified Théa AMD Risk-Assessment Scale [STARS] version 4.0), which included well-established risk factors for AMD such as family history, smoking, and dietary factors, was administered to patients during ophthalmology visits. A fundus examination was performed to determine the presence of large soft drusen, pigmentary abnormalities, or late AMD. Based on data from the questionnaire and the clinical examination, predictive models were developed to estimate the probability of the Age-Related Eye Disease Study (AREDS) score (categorized as low risk/high risk). The models were evaluated by area under the receiver operating characteristic curve analysis. Results A total of 3854 subjects completed the questionnaire and underwent a fundus examination. Early/intermediate and late AMD were detected in 15.9% and 23.8% of the patients, respectively. A predictive model was developed with training, validation, and test datasets. The model in the test set had an area under the curve of 0.745 (95% confidence interval [CI] = 0.705-0.784), a positive predictive value of 0.500 (95% CI = 0.449-0.557), and a negative predictive value of 0.810 (95% CI = 0.770-0.844). Conclusions The STARS questionnaire version 4.0 and the model identify patients at high risk of developing late AMD. Translational Relevance The screening instrument described could be useful to evaluate the risk of late AMD in patients older than 55 years without an eye examination, which could lead to more timely referrals and encourage lifestyle changes.
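The validation above is summarised by AUC, positive predictive value, and negative predictive value for a binary high/low-risk output. The sketch below shows how those quantities follow from predicted probabilities and true labels using scikit-learn and a confusion matrix; the data are random stand-ins, not the STARS cohort.

```python
# Computing AUC, PPV, and NPV for a binary risk classifier (random stand-in
# data, not the STARS validation cohort).
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, 1000)                                   # 1 = high-risk per fundus exam
y_prob = np.clip(0.35 * y_true + rng.uniform(0, 0.7, 1000), 0, 1)   # toy model output
y_pred = (y_prob >= 0.5).astype(int)

auc = roc_auc_score(y_true, y_prob)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"AUC = {auc:.3f}, PPV = {ppv:.3f}, NPV = {npv:.3f}")
```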
Collapse
Affiliation(s)
- Alfredo García-Layana
- Retinal Pathologies and New Therapies Group, Experimental Ophthalmology Laboratory, Department of Ophthalmology, Clínica Universidad de Navarra, Pamplona, Spain,Navarra Institute for Health Research, IdiSNA, Pamplona, Spain,Red Temática de Investigación Cooperativa Sanitaria en Enfermedades Oculares (Oftared), Instituto de Salud Carlos III, Madrid, Spain
| | - Maribel López-Gálvez
- Red Temática de Investigación Cooperativa Sanitaria en Enfermedades Oculares (Oftared), Instituto de Salud Carlos III, Madrid, Spain,Retina Group, IOBA, Campus Miguel Delibes, Valladolid, Spain,Grupo de Ingeniería Biomédica, Universidad de Valladolid, Campus Miguel Delibes. Valladolid, Spain,Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, Valladolid, Spain
| | - José García-Arumí
- Department of Ophthalmology, Vall d'Hebron University Hospital, Barcelona, Spain
| | - Luis Arias
- Department of Ophthalmology, Bellvitge University Hospital, University of Barcelona, Barcelona, Spain
| | - Alfredo Gea-Sánchez
- Preventive Medicine and Public Health, School of Medicine, University of Navarra, Pamplona, Spain
| | | | | | | | - Tariq M. Aslam
- School of Pharmacy and Optometry, University of Manchester and Manchester Royal Eye Hospital, Manchester, UK
| | - Angelo M. Minnella
- UOC Oculistica, Università Cattolica del S. Cuore, Fondazione Policlinico Universitario A. Gemelli-IRCCS, Rome, Italy
| | - Isabel López Ibáñez
- Department of Family and Community Medicine, Centro de Salud Nápoles y Sicilia, Valencia, Spain
| | | | - Johanna M. Seddon
- Department of Ophthalmology and Visual Sciences, University of Massachusetts Medical School, Worcester, Massachusetts, USA
| |
Collapse
|
34
|
Dow ER, Keenan TDL, Lad EM, Lee AY, Lee CS, Loewenstein A, Eydelman MB, Chew EY, Keane PA, Lim JI. From Data to Deployment: The Collaborative Community on Ophthalmic Imaging Roadmap for Artificial Intelligence in Age-Related Macular Degeneration. Ophthalmology 2022; 129:e43-e59. [PMID: 35016892 PMCID: PMC9859710 DOI: 10.1016/j.ophtha.2022.01.002] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Revised: 12/16/2021] [Accepted: 01/04/2022] [Indexed: 01/25/2023] Open
Abstract
OBJECTIVE Health care systems worldwide are challenged to provide adequate care for the 200 million individuals with age-related macular degeneration (AMD). Artificial intelligence (AI) has the potential to make a significant, positive impact on the diagnosis and management of patients with AMD; however, the development of effective AI devices for clinical care faces numerous considerations and challenges, a fact evidenced by a current absence of Food and Drug Administration (FDA)-approved AI devices for AMD. PURPOSE To delineate the state of AI for AMD, including current data, standards, achievements, and challenges. METHODS Members of the Collaborative Community on Ophthalmic Imaging Working Group for AI in AMD attended an inaugural meeting on September 7, 2020, to discuss the topic. Subsequently, they undertook a comprehensive review of the medical literature relevant to the topic. Members engaged in meetings and discussion through December 2021 to synthesize the information and arrive at a consensus. RESULTS Existing infrastructure for robust AI development for AMD includes several large, labeled data sets of color fundus photography and OCT images; however, image data often do not contain the metadata necessary for the development of reliable, valid, and generalizable models. Data sharing for AMD model development is made difficult by restrictions on data privacy and security, although potential solutions are under investigation. Computing resources may be adequate for current applications, but knowledge of machine learning development may be scarce in many clinical ophthalmology settings. Despite these challenges, researchers have produced promising AI models for AMD for screening, diagnosis, prediction, and monitoring. Future goals include defining benchmarks to facilitate regulatory authorization and subsequent clinical setting generalization. CONCLUSIONS Delivering an FDA-authorized, AI-based device for clinical care in AMD involves numerous considerations, including the identification of an appropriate clinical application; acquisition and development of a large, high-quality data set; development of the AI architecture; training and validation of the model; and functional interactions between the model output and clinical end user. The research efforts undertaken to date represent starting points for the medical devices that eventually will benefit providers, health care systems, and patients.
Collapse
Affiliation(s)
- Eliot R Dow
- Byers Eye Institute, Stanford University, Palo Alto, California
| | - Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
| | - Eleonora M Lad
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
| | - Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | - Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | - Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Medical Center, Tel Aviv, Israel
| | - Malvina B Eydelman
- Office of Health Technology 1, Center of Devices and Radiological Health, Food and Drug Administration, Silver Spring, Maryland
| | - Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland.
| | - Pearse A Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom.
| | - Jennifer I Lim
- Department of Ophthalmology, University of Illinois at Chicago, Chicago, Illinois.
| |
Collapse
|
35
|
Ganjdanesh A, Zhang J, Chew EY, Ding Y, Huang H, Chen W. LONGL-Net: temporal correlation structure guided deep learning model to predict longitudinal age-related macular degeneration severity. PNAS NEXUS 2022; 1:pgab003. [PMID: 35360552 PMCID: PMC8962776 DOI: 10.1093/pnasnexus/pgab003] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Accepted: 11/15/2021] [Indexed: 01/28/2023]
Abstract
Age-related macular degeneration (AMD) is the principal cause of blindness in developed countries, and its prevalence is projected to reach 288 million people by 2040. Automated grading and prediction methods can therefore be highly beneficial for recognizing subjects susceptible to late AMD and enabling clinicians to start preventive actions for them. Clinically, AMD severity is quantified from Color Fundus Photographs (CFPs) of the retina, and many machine-learning-based methods have been proposed for grading AMD severity. However, few models have been developed to predict the longitudinal progression status, i.e. predicting future late-AMD risk based on the current CFP, which is more clinically interesting. In this paper, we propose a new deep-learning-based classification model (LONGL-Net) that can simultaneously grade the current CFP and predict the longitudinal outcome, i.e. whether the subject will have late AMD at a future time-point. We design a new temporal-correlation-structure-guided Generative Adversarial Network model that learns the interrelations of temporal changes in CFPs at consecutive time-points and provides interpretability for the classifier's decisions by forecasting AMD symptoms in future CFPs. We used about 30,000 CFP images from 4,628 participants in the Age-Related Eye Disease Study. Our classifier showed an average AUC of 0.905 (95% CI: 0.886-0.922) and accuracy of 0.762 (95% CI: 0.733-0.792) on the 3-class classification problem of simultaneously grading the current time-point's AMD condition and predicting late-AMD progression at the future time-point. We further validated our model on the UK Biobank dataset, where it showed an average accuracy of 0.905 and sensitivity of 0.797 in grading 300 CFP images.
Collapse
Affiliation(s)
- Alireza Ganjdanesh
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
| | - Jipeng Zhang
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA 15213, USA
| | - Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
| | - Ying Ding
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA 15213, USA
| | - Heng Huang
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
| | - Wei Chen
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Division of Pulmonary Medicine, Department of Pediatrics, UPMC Children's Hospital of Pittsburgh, University of Pittsburgh, Pittsburgh, PA 15219, USA
| |
Collapse
|
36
|
Ghahramani G, Brendel M, Lin M, Chen Q, Keenan T, Chen K, Chew E, Lu Z, Peng Y, Wang F. Multi-task deep learning-based survival analysis on the prognosis of late AMD using the longitudinal data in AREDS. AMIA ... ANNUAL SYMPOSIUM PROCEEDINGS. AMIA SYMPOSIUM 2022; 2021:506-515. [PMID: 35308963 PMCID: PMC8861665] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
Age-related macular degeneration (AMD) is the leading cause of vision loss. Some patients experience vision loss over a delayed timeframe, others at a rapid pace. Physicians analyze time-of-visit fundus photographs to predict a patient's risk of developing late AMD, the most severe form of AMD. Our study hypothesizes that 1) incorporating historical data improves the predictive strength for developing late AMD and 2) state-of-the-art deep-learning techniques extract more predictive image features than clinicians do. We incorporate longitudinal data from the Age-Related Eye Disease Studies and deep-learning-extracted image features in survival settings to predict the development of late AMD. To extract image features, we used multi-task learning frameworks to train convolutional neural networks. Our findings show that 1) incorporating longitudinal data improves prediction of late AMD when using clinical standard features, but only the current visit is informative when using complex features, and 2) "deep features" are more informative than clinician-derived features. We make our code publicly available at https://github.com/bionlplab/AMD_prognosis_amia2021.
Collapse
Affiliation(s)
- Gregory Ghahramani
- Department of Physiology, Biophysics, and Systems Biology, Weill Cornell Medicine, New York, NY USA
| | - Matthew Brendel
- Department of Physiology, Biophysics, and Systems Biology, Weill Cornell Medicine, New York, NY USA
| | - Mingquan Lin
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY USA
| | - Qingyu Chen
- National Center for Biotechnology Information (NCBI), National Library of Medicine (NLM), National Institutes of Health (NIH), Bethesda, MD USA
| | - Tiarnan Keenan
- National Eye Institute (NEI), National Institutes of Health (NIH), Bethesda, MD USA
| | - Kun Chen
- Department of Statistics, University of Connecticut, Storrs, CT USA
| | - Emily Chew
- National Eye Institute (NEI), National Institutes of Health (NIH), Bethesda, MD USA
| | - Zhiyong Lu
- National Center for Biotechnology Information (NCBI), National Library of Medicine (NLM), National Institutes of Health (NIH), Bethesda, MD USA
| | - Yifan Peng
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY USA
| | - Fei Wang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY USA
| |
Collapse
|
37
|
Kumar H, Goh KL, Guymer RH, Wu Z. A clinical perspective on the expanding role of artificial intelligence in age-related macular degeneration. Clin Exp Optom 2022; 105:674-679. [PMID: 35073498 DOI: 10.1080/08164622.2021.2022961] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
Abstract
In recent years, there has been intense development of artificial intelligence (AI) techniques, which have the potential to improve the clinical management of age-related macular degeneration (AMD) and facilitate the prevention of irreversible vision loss from this condition. Such AI techniques could be used as clinical decision support tools to: (i) improve the detection of AMD by community eye health practitioners, (ii) enhance risk stratification to enable personalised monitoring strategies for those with the early stages of AMD, and (iii) enable early detection of signs indicative of possible choroidal neovascularisation allowing triaging of patients requiring urgent review. This review discusses the latest developments in AI techniques that show promise for these tasks, as well as how they may help in the management of patients being treated for choroidal neovascularisation and in accelerating the discovery of new treatments in AMD.
Collapse
Affiliation(s)
- Himeesh Kumar
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
| | - Kai Lyn Goh
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
| | - Robyn H Guymer
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
| | - Zhichao Wu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
| |
Collapse
|
38
|
Liu TYA, Wei J, Zhu H, Subramanian PS, Myung D, Yi PH, Hui FK, Unberath M, Ting DSW, Miller NR. Detection of Optic Disc Abnormalities in Color Fundus Photographs Using Deep Learning. J Neuroophthalmol 2021; 41:368-374. [PMID: 34415271 PMCID: PMC10637344 DOI: 10.1097/wno.0000000000001358] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
BACKGROUND To date, deep learning-based detection of optic disc abnormalities in color fundus photographs has mostly been limited to the field of glaucoma. However, many life-threatening systemic and neurological conditions can manifest as optic disc abnormalities. In this study, we aimed to extend the application of deep learning (DL) in optic disc analyses to detect a spectrum of nonglaucomatous optic neuropathies. METHODS Using transfer learning, we trained a ResNet-152 deep convolutional neural network (DCNN) to distinguish between normal and abnormal optic discs in color fundus photographs (CFPs). Our training data set included 944 deidentified CFPs (abnormal 364; normal 580). Our testing data set included 151 deidentified CFPs (abnormal 71; normal 80). Both the training and testing data sets contained a wide range of optic disc abnormalities, including but not limited to ischemic optic neuropathy, atrophy, compressive optic neuropathy, hereditary optic neuropathy, hypoplasia, papilledema, and toxic optic neuropathy. The standard measures of performance (sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC)) were used for evaluation. RESULTS In 10-fold cross-validation, our DCNN for distinguishing between normal and abnormal optic discs achieved the following mean performance: AUC-ROC 0.99 (95% CI: 0.98-0.99), sensitivity 94% (95% CI: 91%-97%), and specificity 96% (95% CI: 93%-99%). When evaluated against the external testing data set, our model achieved the following mean performance: AUC-ROC 0.87, sensitivity 90%, and specificity 69%. CONCLUSION In summary, we have developed a deep learning algorithm that is capable of detecting a spectrum of optic disc abnormalities in color fundus photographs, with a focus on neuro-ophthalmological etiologies. As the next step, we plan to validate our algorithm prospectively as a focused screening tool in the emergency department, which, if successful, could be valuable because current practice patterns and training trends predict a near-future shortage of neuro-ophthalmologists, and of ophthalmologists in general.
(A hedged transfer-learning sketch along these lines follows this entry.)
Collapse
Affiliation(s)
- T Y Alvin Liu
- Department of Ophthalmology (TYAL, NRM), Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland; Department of Biomedical Engineering (JW), Johns Hopkins University, Baltimore, Maryland; Malone Center for Engineering in Healthcare (HZ, MU), Johns Hopkins University, Baltimore, Maryland; Department of Radiology (PHY, FKH), Johns Hopkins University, Baltimore, Maryland; Singapore Eye Research Institute (DSWT), Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore; Department of Ophthalmology (PSS), University of Colorado School of Medicine, Aurora, Colorado; and Department of Ophthalmology (DM), Byers Eye Institute, Stanford University, Palo Alto, California
| | | | | | | | | | | | | | | | | | | |
Collapse
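The abstract above describes transfer learning of a ResNet-152 to separate normal from abnormal optic discs in color fundus photographs, evaluated with sensitivity, specificity, and AUC-ROC. Below is a minimal sketch of that recipe, not the authors' implementation: the folder layout (cfp/train and cfp/test with 0_normal and 1_abnormal subfolders), batch size, learning rate, and epoch count are all assumptions.

```python
# A minimal transfer-learning sketch (assumed data layout and hyperparameters),
# not the authors' code: fine-tune an ImageNet-pretrained ResNet-152 to output a
# single normal-vs-abnormal logit, then report AUC-ROC, sensitivity, specificity.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import roc_auc_score, confusion_matrix

device = "cuda" if torch.cuda.is_available() else "cpu"

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folders: cfp/train/{0_normal,1_abnormal}, cfp/test/{0_normal,1_abnormal}
train_ds = datasets.ImageFolder("cfp/train", transform=tf)
test_ds = datasets.ImageFolder("cfp/test", transform=tf)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)
test_dl = DataLoader(test_ds, batch_size=16)

# Transfer learning: start from ImageNet weights and replace the 1000-way
# classifier with a single logit for the normal-vs-abnormal decision.
model = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)
model = model.to(device)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):                       # assumed epoch count
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.float().to(device)
        optimizer.zero_grad()
        loss = criterion(model(x).squeeze(1), y)
        loss.backward()
        optimizer.step()

# Evaluation on the held-out test set: AUC-ROC plus sensitivity and specificity
# at a 0.5 probability threshold, the metrics reported in the abstract.
model.eval()
probs, labels = [], []
with torch.no_grad():
    for x, y in test_dl:
        probs += torch.sigmoid(model(x.to(device)).squeeze(1)).cpu().tolist()
        labels += y.tolist()
preds = [int(p >= 0.5) for p in probs]
tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
print("AUC-ROC:", roc_auc_score(labels, probs))
print("Sensitivity:", tp / (tp + fn), "Specificity:", tn / (tn + fp))
```

Because torchvision's ImageFolder assigns class indices alphabetically, naming the subfolders 0_normal and 1_abnormal makes "abnormal" the positive class, so the sensitivity printed here is sensitivity for detecting an abnormal disc.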
|