1
Weinreb RN, Lee AY, Baxter SL, Lee RWJ, Leng T, McConnell MV, El-Nimri NW, Rhew DC. Application of Artificial Intelligence to Deliver Healthcare From the Eye. JAMA Ophthalmol 2025:2833592. PMID: 40338607; DOI: 10.1001/jamaophthalmol.2025.0881.
Abstract
Importance Oculomics is the science of analyzing ocular data to identify, diagnose, and manage systemic disease. This article focuses on prescreening: the use of retinal images analyzed by artificial intelligence (AI) to identify ocular or systemic disease, or potential disease, in asymptomatic individuals. The implementation of prescreening in a coordinated care system, defined as Healthcare From the Eye prescreening, has the potential to improve the access, affordability, equity, quality, and safety of health care on a global level. Stakeholders include physicians, payers, policymakers, regulators, and representatives from industry, government, and data privacy sectors. Observations The combination of AI analysis of ocular data with automated technologies that capture images during routine eye examinations enables prescreening of large populations for chronic disease. Retinal images can be acquired either during a routine eye examination or in settings outside of eye care with readily accessible, safe, quick, and noninvasive retinal imaging devices. The outcome of such an examination can then be digitally communicated across relevant stakeholders in a coordinated fashion to direct a patient to screening and monitoring services. Such an approach offers the opportunity to transform health care delivery: earlier disease detection, improved access to care, enhanced equity, especially in rural and underserved communities, and reduced costs. Conclusions and Relevance With effective implementation and collaboration among key stakeholders, this approach has the potential to contribute to an equitable and effective health care system.
Affiliation(s)
- Robert N Weinreb
- Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla
- Shiley Eye Institute, University of California, San Diego, La Jolla
- Aaron Y Lee
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle
- Sally L Baxter
- Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla
- Shiley Eye Institute, University of California, San Diego, La Jolla
- Division of Biomedical Informatics, Department of Medicine, University of California, San Diego, La Jolla
- Richard W J Lee
- National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Theodore Leng
- Department of Ophthalmology, Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Michael V McConnell
- Division of Cardiovascular Medicine, Stanford University School of Medicine, Stanford, California
- David C Rhew
- Health & Life Sciences, Microsoft, Seattle, Washington
- Division of Primary Care & Population Health, Department of Medicine, Stanford University School of Medicine, Stanford, California
2
Al Younis SM, Ghosh SK, Raja H, Alskafi FA, Yousefi S, Khandoker AH. Prediction of heart failure risk factors from retinal optical imaging via explainable machine learning. Front Med (Lausanne) 2025; 12:1551557. PMID: 40166058; PMCID: PMC11955505; DOI: 10.3389/fmed.2025.1551557. Received 01/06/2025; accepted 03/03/2025.
Abstract
Over 64 million people worldwide are affected by heart failure (HF), a condition that significantly raises mortality and medical expenses. In this study, we explore the potential of retinal optical coherence tomography (OCT) features as non-invasive biomarkers for the classification of heart failure subtypes: left ventricular heart failure (LVHF), congestive heart failure (CHF), and unspecified heart failure (UHF). By analyzing retinal measurements from the left eye, right eye, and both eyes, we aim to investigate the relationship between ocular indicators and heart failure using machine learning (ML) techniques. We conducted nine classification experiments to compare normal individuals against LVHF, CHF, and UHF patients, using retinal OCT features from each eye separately and in combination. Our analysis revealed that retinal thickness metrics, particularly ISOS-RPE and macular thickness in various regions, were significantly reduced in heart failure patients. Logistic regression, CatBoost, and XGBoost models demonstrated robust performance, with notable accuracy and area under the curve (AUC) scores, especially in classifying CHF and UHF. Feature importance analysis highlighted key retinal parameters, such as inner segment-outer segment to retinal pigment epithelium (ISOS-RPE) and inner nuclear layer to the external limiting membrane (INL-ELM) thickness, as crucial indicators for heart failure detection. The integration of explainable artificial intelligence further enhanced model interpretability, shedding light on the biological mechanisms linking retinal changes to heart failure pathology. Our findings suggest that retinal OCT features, particularly when derived from both eyes, have significant potential as non-invasive tools for early detection and classification of heart failure. 
These insights may aid in developing wearable, portable diagnostic systems, providing scalable solutions for personalized healthcare, and improving clinical outcomes for heart failure patients.
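The classification pipeline the abstract describes (tabular retinal features in, a binary heart-failure label out, performance reported as AUC) can be illustrated with a minimal sketch. Everything below is a synthetic stand-in, not the study's data or models: a hand-rolled logistic regression stands in for the logistic regression/CatBoost/XGBoost models, the feature effects are invented, and AUC is computed via the rank (Mann-Whitney U) formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for retinal OCT thickness features; the
# effect sizes and data are invented, not taken from the study.
n = 400
X = rng.normal(size=(n, 4))
# In this toy setup, more negative feature values (thinner layers)
# raise the probability of the positive (HF) label.
latent = -1.5 * X[:, 0] - 0.8 * X[:, 1]
y = (latent + rng.normal(size=n) > 0).astype(float)

def fit_logreg(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression (no regularisation)."""
    Xb = np.c_[np.ones(len(X)), X]          # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))   # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)   # average log-loss gradient
    return w

def predict_proba(w, X):
    Xb = np.c_[np.ones(len(X)), X]
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def auc(y_true, scores):
    """AUC via the rank (Mann-Whitney U) formulation."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

w = fit_logreg(X, y)
score = auc(y, predict_proba(w, X))
```

In practice one would also hold out a test set and report confidence intervals, as the study does; the sketch only shows the fit-then-score loop.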
Affiliation(s)
- Sona M. Al Younis
- Department of Biomedical Engineering and Biotechnology, Healthcare Engineering Innovation Group (HEIG), Khalifa University, Abu Dhabi, United Arab Emirates
- Samit Kumar Ghosh
- Department of Biomedical Engineering and Biotechnology, Healthcare Engineering Innovation Group (HEIG), Khalifa University, Abu Dhabi, United Arab Emirates
- Hina Raja
- Department of Mathematics and Computer Science, Fisk University, Nashville, TN, United States
- Feryal A. Alskafi
- Department of Biomedical Engineering and Biotechnology, Healthcare Engineering Innovation Group (HEIG), Khalifa University, Abu Dhabi, United Arab Emirates
- Siamak Yousefi
- Department of Mathematics and Computer Science, Fisk University, Nashville, TN, United States
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, United States
- Ahsan H. Khandoker
- Department of Biomedical Engineering and Biotechnology, Healthcare Engineering Innovation Group (HEIG), Khalifa University, Abu Dhabi, United Arab Emirates
3
An S, Teo K, McConnell MV, Marshall J, Galloway C, Squirrell D. AI explainability in oculomics: How it works, its role in establishing trust, and what still needs to be addressed. Prog Retin Eye Res 2025; 106:101352. PMID: 40086660; DOI: 10.1016/j.preteyeres.2025.101352. Received 11/03/2024; revised 03/07/2025; accepted 03/10/2025.
Abstract
Recent developments in artificial intelligence (AI) have seen a proliferation of algorithms that are now capable of predicting a range of systemic diseases from retinal images. Unlike traditional retinal disease detection AI models, which are trained on well-recognised retinal biomarkers, systemic disease detection or "oculomics" models use a range of often poorly characterised retinal biomarkers to arrive at their predictions. As the retinal phenotype that oculomics models use may not be intuitive, clinicians have to rely on the developers' explanations of how these algorithms work in order to understand them. The discipline of understanding how AI algorithms work employs two similar but distinct terms: explainable AI and interpretable AI (iAI). Explainable AI describes the holistic functioning of an AI system, including its impact and potential biases. Interpretable AI concentrates solely on examining and understanding the workings of the AI algorithm itself. iAI tools are therefore what the clinician must rely on to understand how an algorithm works and whether its predictions are reliable. The iAI tools that developers use can be delineated into two broad categories: intrinsic methods, which improve transparency through architectural changes, and post-hoc methods, which explain trained models via external algorithms. Currently, post-hoc methods, class activation maps in particular, are far more widely used than other techniques, but they have limitations, especially when applied to oculomics AI models. Writing for clinicians, we examine how the key iAI methods work, what they are designed to do, and what their limitations are when applied to oculomics AI. We conclude by discussing how combining existing iAI techniques with novel approaches could allow AI developers to better explain how their oculomics models work and reassure clinicians that the results issued are reliable.
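As an illustration of the post-hoc technique the review singles out, here is a minimal sketch of the original class activation map (CAM) computation: a weighted sum of the final convolutional layer's feature maps, using the classifier weights of the target class, rescaled to [0, 1]. The toy feature maps and weights are invented for the example; a real oculomics model would supply them from a trained network, and variants such as Grad-CAM derive the weights differently.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM: weight each feature map by the target class's classifier
    weight, sum over maps, keep positive evidence, rescale to [0, 1].
    feature_maps: (K, H, W); class_weights: (K,)."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)            # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()           # normalise for display
    return cam

# Toy example: 3 feature maps on a 4x4 grid. Map 0 fires strongly in
# the top-left corner and carries most of the class weight, so the
# CAM should peak there.
fm = np.zeros((3, 4, 4))
fm[0, 0, 0] = 5.0
fm[1, 2, 2] = 1.0
weights = np.array([1.0, 0.2, 0.0])
cam = class_activation_map(fm, weights)
```

The review's caveat applies directly: the heatmap says where the evidence is, not what the retinal feature at that location means.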
Affiliation(s)
- Songyang An
- School of Optometry and Vision Science, University of Auckland, Auckland, New Zealand; Toku Eyes Limited NZ, 110 Carlton Gore Road, Newmarket, Auckland, 1023, New Zealand
- Kelvin Teo
- Singapore Eye Research Institute, The Academia, 20 College Road Discovery Tower Level 6, 169856, Singapore; Singapore National University, Singapore
- Michael V McConnell
- Division of Cardiovascular Medicine, Stanford University School of Medicine, Stanford, CA, USA; Toku Eyes Limited NZ, 110 Carlton Gore Road, Newmarket, Auckland, 1023, New Zealand
- John Marshall
- Institute of Ophthalmology, University College London, 11-43 Bath Street, London, EC1V 9EL, UK
- Christopher Galloway
- Department of Business and Communication, Massey University, East Precinct Albany Expressway, SH17, Albany, Auckland, 0632, New Zealand
- David Squirrell
- Department of Ophthalmology, University of the Sunshine Coast, Queensland, Australia; Toku Eyes Limited NZ, 110 Carlton Gore Road, Newmarket, Auckland, 1023, New Zealand
4
Zhu Z, Wang Y, Qi Z, Hu W, Zhang X, Wagner SK, Wang Y, Ran AR, Ong J, Waisberg E, Masalkhi M, Suh A, Tham YC, Cheung CY, Yang X, Yu H, Ge Z, Wang W, Sheng B, Liu Y, Lee AG, Denniston AK, Wijngaarden PV, Keane PA, Cheng CY, He M, Wong TY. Oculomics: Current concepts and evidence. Prog Retin Eye Res 2025; 106:101350. PMID: 40049544; DOI: 10.1016/j.preteyeres.2025.101350. Received 11/22/2024; revised 03/03/2025; accepted 03/03/2025.
Abstract
The eye provides novel insights into general health, as well as into the pathogenesis and development of systemic diseases. In the past decade, growing evidence has demonstrated that the eye's structure and function mirror multiple systemic health conditions, especially cardiovascular diseases, neurodegenerative disorders, and kidney impairment. This has given rise to the field of oculomics: the application of ophthalmic biomarkers to understand mechanisms and to detect and predict disease. The development of this field has been accelerated by three major advances: 1) the availability and widespread clinical adoption of high-resolution, non-invasive ophthalmic imaging ("hardware"); 2) the availability of large studies to interrogate associations ("big data"); and 3) the development of novel analytical methods, including artificial intelligence (AI) ("software"). Oculomics offers an opportunity to enhance our understanding of the interplay between the eye and the body, while supporting the development of innovative diagnostic, prognostic, and therapeutic tools. These advances have been further accelerated by developments in AI, coupled with large-scale datasets linking ocular imaging with systemic health data. Oculomics also enables the detection, screening, diagnosis, and monitoring of many systemic health conditions. Furthermore, oculomics with AI allows prediction of the risk of systemic diseases, enabling risk stratification and opening new avenues for individualized prediction and prevention, thereby facilitating personalized medicine. In this review, we summarise current concepts and evidence in the field of oculomics, highlighting the progress that has been made, the remaining challenges, and the opportunities for future research.
Affiliation(s)
- Zhuoting Zhu
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia.
- Yueye Wang
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Ziyi Qi
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Diseases, Shanghai, China
- Wenyi Hu
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia
- Xiayin Zhang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Siegfried K Wagner
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Yujie Wang
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Joshua Ong
- Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, USA
- Ethan Waisberg
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Mouayad Masalkhi
- University College Dublin School of Medicine, Belfield, Dublin, Ireland
- Alex Suh
- Tulane University School of Medicine, New Orleans, LA, USA
- Yih Chung Tham
- Department of Ophthalmology and Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Xiaohong Yang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Honghua Yu
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Zongyuan Ge
- Monash e-Research Center, Faculty of Engineering, Airdoc Research, Nvidia AI Technology Research Center, Monash University, Melbourne, VIC, Australia
- Wei Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yun Liu
- Google Research, Mountain View, CA, USA
- Andrew G Lee
- Center for Space Medicine and the Department of Ophthalmology, Baylor College of Medicine, Houston, USA; Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, USA; The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, USA; Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, USA; Department of Ophthalmology, University of Texas Medical Branch, Galveston, USA; University of Texas MD Anderson Cancer Center, Houston, USA; Texas A&M College of Medicine, Bryan, USA; Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, USA
- Alastair K Denniston
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK; National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre (BRC), University Hospital Birmingham and University of Birmingham, Birmingham, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
- Peter van Wijngaarden
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, VIC, Australia; Florey Institute of Neuroscience and Mental Health, University of Melbourne, Parkville, VIC, Australia
- Pearse A Keane
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Ching-Yu Cheng
- Department of Ophthalmology and Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Mingguang He
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong, China
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China
5
Eid P, Bourredjem A, Anwer A, Creuzot-Garcher C, Keane PA, Zhou Y, Wagner S, Meriaudeau F, Arnould L. Retinal Microvascular Biomarker Assessment With Automated Algorithm and Semiautomated Software in the Montrachet Dataset. Transl Vis Sci Technol 2025; 14:13. PMID: 40072417; PMCID: PMC11918093; DOI: 10.1167/tvst.14.3.13. Received 12/03/2024; accepted 02/07/2025.
Abstract
Purpose To compare automated and semiautomated methods for the measurement of retinal microvascular biomarkers: the automated retinal vascular morphology (AutoMorph) algorithm and the Singapore "I" Vessel Assessment (SIVA) software. Methods We analyzed retinal fundus photographs centered on the optic disc from the population-based Montrachet Study of adults aged 75 years and older. Agreement between SIVA and AutoMorph measures of the central retinal venular and arteriolar equivalents, the arteriolar-venular ratio, and the fractal dimension was evaluated with intraclass correlation coefficients (ICCs). Results Overall, 1069 fundus photographs were included in this study. The mean age of the patients was 80.04 ± 3.94 years. After the image quality grading process with an optimal threshold, the lowest rejection rate was 51.17% for the AutoMorph analysis (n = 522). The agreement between SIVA and AutoMorph retinal microvascular biomarkers showed a good correlation for vascular complexity (ICC, 0.77-0.47), a poor correlation for vascular calibers (ICC, 0.36-0.23), and no correlation for vascular tortuosity. Significant associations between retinal biomarkers and systemic variables (age, history of stroke, and systolic blood pressure) were consistent between SIVA and AutoMorph. Conclusions In this dataset, AutoMorph presented a substantial rejection rate. SIVA and AutoMorph provided well-correlated measurements of vascular complexity and caliber with consistent clinical associations. Further comparisons are needed before a transition is made from semiautomated to automated algorithms for the analysis of retinal microvascular biomarkers. Translational Relevance Open-source software needs to be compared with established semiautomated software for retinal microvascular biomarker assessment before it is adopted in daily clinical practice and collaborative research.
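The agreement statistic used throughout this study, the intraclass correlation coefficient, can be sketched in a few lines. Below is a two-way random-effects, single-measure agreement ICC, i.e. ICC(2,1); whether this is the exact ICC variant the authors used is an assumption, and the simulated "semiautomated" and "automated" readings are invented purely to show the behaviour of the statistic.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, single measure, absolute
    agreement. ratings: (n_subjects, k_raters) array."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)          # per-subject means
    col_means = ratings.mean(axis=0)          # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
truth = rng.normal(150, 15, size=200)           # e.g. a vessel calibre, in um
rater_a = truth + rng.normal(0, 3, size=200)    # low-noise reading
rater_b = truth + rng.normal(0, 3, size=200)    # another low-noise reading
high = icc_2_1(np.c_[rater_a, rater_b])         # should be close to 1

noisy_b = truth + rng.normal(0, 25, size=200)   # much noisier reading
low = icc_2_1(np.c_[rater_a, noisy_b])          # agreement degrades
```

The study's pattern (good ICCs for complexity, poor for calibers, none for tortuosity) corresponds to the `high` versus `low` regimes here.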
Affiliation(s)
- Pétra Eid
- Ophthalmology Department, Dijon University Hospital, Dijon, France
- Centre des Sciences du Goût et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne, Dijon, France
- Abderrahmane Bourredjem
- CIC 1432, Epidémiologie Clinique, Centre Hospitalier Universitaire Dijon-Bourgogne, Dijon, France
- Atif Anwer
- Institut de Chimie Moléculaire Université de Bourgogne (ICMUB), Imagerie Fonctionnelle et moléculaire et Traitement des Images Médicales (IFTIM), Burgundy University, Dijon, France
- Catherine Creuzot-Garcher
- Ophthalmology Department, Dijon University Hospital, Dijon, France
- Centre des Sciences du Goût et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne, Dijon, France
- Pearse Andrew Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
- Yukun Zhou
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
- Siegfried Wagner
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
- Fabrice Meriaudeau
- Institut de Chimie Moléculaire Université de Bourgogne (ICMUB), Imagerie Fonctionnelle et moléculaire et Traitement des Images Médicales (IFTIM), Burgundy University, Dijon, France
- Louis Arnould
- Ophthalmology Department, Dijon University Hospital, Dijon, France
- Pathophysiology and Epidemiology of Cerebro-Cardiovascular Diseases (PEC2), (EA 7460), Faculty of Health Sciences, Université de Bourgogne, Dijon, France
6
Hu W, Lin Z, Clark M, Henwood J, Shang X, Chen R, Kiburg K, Zhang L, Ge Z, van Wijngaarden P, Zhu Z, He M. Real-world feasibility, accuracy and acceptability of automated retinal photography and AI-based cardiovascular disease risk assessment in Australian primary care settings: a pragmatic trial. NPJ Digit Med 2025; 8:122. PMID: 39994433; PMCID: PMC11850881; DOI: 10.1038/s41746-025-01436-1. Received 08/29/2024; accepted 01/03/2025.
Abstract
We aim to assess the real-world accuracy (primary outcome), feasibility and acceptability (secondary outcomes) of an automated retinal photography and artificial intelligence (AI)-based cardiovascular disease (CVD) risk assessment system (rpCVD) in Australian primary care settings. Participants aged 45-70 years who had recently undergone all or part of a CVD risk assessment were recruited from two general practice clinics in Victoria, Australia. After consenting, participants underwent retinal imaging using an automated fundus camera, and an rpCVD risk score was generated by a deep learning algorithm. This score was compared against the World Health Organisation (WHO) CVD risk score, which incorporates age, sex, and other clinical risk factors. The predictive accuracy of the rpCVD and WHO CVD risk scores for 10-year incident CVD events was evaluated using data from the UK Biobank, with the accuracy of each system assessed through the area under the receiver operating characteristic curve (AUC). Participant satisfaction was assessed through a survey, and the imaging success rate was determined by the percentage of individuals with images of sufficient quality to produce an rpCVD risk score. Of the 361 participants, 339 received an rpCVD risk score, resulting in a 93.9% imaging success rate. The rpCVD risk scores showed a moderate correlation with the WHO CVD risk scores (Pearson correlation coefficient [PCC] = 0.526, 95% CI: 0.444-0.599). Despite this, the rpCVD system, which relies solely on retinal images, demonstrated a similar level of accuracy in predicting 10-year incident CVD (AUC = 0.672, 95% CI: 0.658-0.686) compared to the WHO CVD risk score (AUC = 0.693, 95% CI: 0.680-0.707). High satisfaction rates were reported, with 92.5% of participants and 87.5% of general practitioners (GPs) expressing satisfaction with the system. 
The automated rpCVD system, using only retinal photographs, demonstrated predictive accuracy comparable to the WHO CVD risk score, which incorporates multiple clinical factors including age, the most heavily weighted factor for CVD prediction. This underscores the potential of the rpCVD approach as a faster, easier, and non-invasive alternative for CVD risk assessment in primary care settings, avoiding the need for more complex clinical procedures.
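The headline comparison above (a Pearson correlation between the two risk scores, plus an AUC for each score against 10-year incident events) can be sketched on synthetic data. The latent-risk simulation below is invented for illustration and does not reproduce the study's data, cohort, or effect sizes; AUC is computed via the rank (Mann-Whitney U) formulation.

```python
import numpy as np

def auc(y, s):
    """AUC via the rank (Mann-Whitney U) formulation."""
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    pos = y == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

rng = np.random.default_rng(2)
n = 2000
risk = rng.normal(size=n)                         # latent 10-year CVD risk
events = (risk + rng.normal(size=n) > 1.0).astype(int)

# Two imperfect measurements of the same latent risk: a stand-in for
# the clinical-factor (WHO-style) score and the retinal-photo score.
who_score = risk + rng.normal(0, 0.8, size=n)
rp_score = risk + rng.normal(0, 0.9, size=n)

pearson = np.corrcoef(who_score, rp_score)[0, 1]  # moderate correlation
auc_who = auc(events, who_score)
auc_rp = auc(events, rp_score)                    # similar discrimination
```

The point the simulation makes is the same one the trial reports: two scores can be only moderately correlated with each other yet achieve very similar discrimination for the outcome, because both track the same underlying risk through different noise.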
Affiliation(s)
- Wenyi Hu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Australia
- Zhihong Lin
- The AIM for Health Lab, Monash University, Melbourne, Australia
- Faculty of Engineering, Monash University, Melbourne, Australia
- Malcolm Clark
- Department of General Practice, The University of Melbourne, Melbourne, Australia
- Jacqueline Henwood
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Australia
- Xianwen Shang
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Australia
- School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Ruiye Chen
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Australia
- Katerina Kiburg
- Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Australia
- Lei Zhang
- Clinical Medical Research Center, Children's Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, 210008, China
- Artificial Intelligence and Modelling in Epidemiology Program, Melbourne Sexual Health Centre, Alfred Health, Melbourne, Australia
- Central Clinical School, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Zongyuan Ge
- The AIM for Health Lab, Monash University, Melbourne, Australia
- Faculty of Information Technology, Monash University, Melbourne, Australia
- Peter van Wijngaarden
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Australia
- The Florey Institute of Neuroscience and Mental Health, Melbourne, Australia
- Zhuoting Zhu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Australia
- Mingguang He
- Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Australia
- School of Optometry, The Hong Kong Polytechnic University, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
7
Dumitrascu OM, Li X, Zhu W, Woodruff BK, Nikolova S, Sobczak J, Youssef A, Saxena S, Andreev J, Caselli RJ, Chen JJ, Wang Y. Color Fundus Photography and Deep Learning Applications in Alzheimer Disease. Mayo Clin Proc Digit Health 2024; 2:548-558. PMID: 39748801; PMCID: PMC11695061; DOI: 10.1016/j.mcpdig.2024.08.005.
Abstract
Objective To report the development and performance of 2 distinct deep learning models trained exclusively on retinal color fundus photographs to classify Alzheimer disease (AD). Patients and Methods Two independent datasets (UK Biobank and our tertiary academic institution) of good-quality retinal photographs derived from patients with AD and controls were used to build 2 deep learning models between April 1, 2021, and January 30, 2024. ADVAS is a U-Net-based architecture that uses retinal vessel segmentation. ADRET is a bidirectional encoder representations from transformers (BERT)-style self-supervised convolutional neural network pretrained on a large dataset of retinal color photographs from UK Biobank. The models' performance in distinguishing AD from non-AD was determined using mean accuracy, sensitivity, specificity, and receiver operating characteristic curves. The generated attention heatmaps were analyzed for distinctive features. Results The self-supervised ADRET model had superior accuracy when compared with ADVAS, in both the UK Biobank (98.27% vs 77.20%; P<.001) and our institutional testing datasets (98.90% vs 94.17%; P=.04). No major differences were noted between the original and binary vessel segmentation, or between the both-eyes and single-eye models. Attention heatmaps obtained from patients with AD highlighted regions surrounding small vascular branches as the areas most relevant to the model's decision making. Conclusion A BERT-style self-supervised convolutional neural network pretrained on a large dataset of retinal color photographs alone can screen for symptomatic AD with high accuracy, better than U-Net-pretrained models. To be translated into clinical practice, this methodology requires further validation in larger and more diverse populations, together with integrated techniques to harmonize fundus photographs and attenuate imaging-associated noise.
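The reported metrics (accuracy, sensitivity, specificity) all derive from the same binary confusion matrix. A minimal sketch, with toy labels and predictions invented for the example:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from hard 0/1 predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))   # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))   # false negatives
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # recall on the AD class
        "specificity": tn / (tn + fp),   # recall on the control class
    }

# Toy cohort: 4 AD cases (label 1), 6 controls (label 0).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
m = binary_metrics(y_true, y_pred)
```

Reporting all three together matters in screening settings like this one, where the class balance can make accuracy alone misleading.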
Affiliation(s)
- Oana M. Dumitrascu
- Department of Neurology, Mayo Clinic, Scottsdale, AZ
- Department of Ophthalmology, Mayo Clinic, Scottsdale, AZ
| | - Xin Li
- School of Computed and Augmented Intelligence, Arizona State University, Tempe, AZ
| | - Wenhui Zhu
- School of Computed and Augmented Intelligence, Arizona State University, Tempe, AZ
| | | | | | - Jacob Sobczak
- Department of Neurology, Mayo Clinic, Scottsdale, AZ
| | - Amal Youssef
- Department of Neurology, Mayo Clinic, Scottsdale, AZ
| | | | | | | | - John J. Chen
- Department of Ophthalmology, Mayo Clinic Rochester, MN
- Department of Neurology, Mayo Clinic Rochester, MN
- Yalin Wang
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ
8
An S, Squirrell D. Validation of neuron activation patterns for artificial intelligence models in oculomics. Sci Rep 2024; 14:20940. [PMID: 39251780 PMCID: PMC11383926 DOI: 10.1038/s41598-024-71517-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2024] [Accepted: 08/28/2024] [Indexed: 09/11/2024] Open
Abstract
Recent advancements in artificial intelligence (AI) have prompted researchers to expand into the field of oculomics: the study of associations between the retina and systemic health. Unlike conventional AI models trained on well-recognized retinal features, the retinal phenotypes that most oculomics models use are more subtle. Consequently, applying conventional tools, such as saliency maps, to understand how oculomics models arrive at their inference is problematic and open to bias. We hypothesized that neuron activation patterns (NAPs) could be an alternative way to interpret oculomics models, but currently, most existing implementations focus on failure diagnosis. In this study, we designed a novel NAP framework to interpret an oculomics model. We then applied our framework to an AI model predicting systolic blood pressure from fundus images in the UK Biobank dataset. We found that the NAP generated from our framework correlated with the clinically relevant endpoint of cardiovascular risk. Our NAP was also able to discern two biologically distinct groups among participants who were assigned the same predicted systolic blood pressure. These results demonstrate the feasibility of our proposed NAP framework for gaining deeper insights into the functioning of oculomics models. Further work is required to validate these results on external datasets.
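The abstract does not detail how its NAPs are built, but the generic idea behind a neuron activation pattern can be illustrated: snapshot which units in a layer fire for a given input, then compare those binary patterns across inputs. This sketch is a simplified illustration of that idea, not the authors' framework:

```python
def activation_pattern(activations, threshold=0.0):
    """Binarize one layer's activations: 1 if the neuron fired above threshold."""
    return tuple(1 if a > threshold else 0 for a in activations)

def pattern_similarity(p, q):
    """Jaccard similarity between two binary activation patterns."""
    either = sum(1 for a, b in zip(p, q) if a or b)
    both = sum(1 for a, b in zip(p, q) if a and b)
    return both / either if either else 1.0

# Toy activations for one layer on two different inputs
p = activation_pattern([0.9, -0.1, 0.3, 0.0])  # -> (1, 0, 1, 0)
q = activation_pattern([0.8, 0.2, 0.0, 0.0])   # -> (1, 1, 0, 0)
sim = pattern_similarity(p, q)                 # 1/3: one shared neuron of three fired
```

Clustering such patterns across participants is one way a NAP could separate biologically distinct groups that received the same model prediction, as the abstract reports.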
Affiliation(s)
- Songyang An
- School of Optometry and Vision Science, The University of Auckland, 85 Park Rd, Grafton, Auckland, 1023, New Zealand.
- Toku Eyes Limited NZ, Auckland, New Zealand.
9
Martin E, Cook AG, Frost SM, Turner AW, Chen FK, McAllister IL, Nolde JM, Schlaich MP. Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs. Eye (Lond) 2024; 38:2581-2588. [PMID: 38734746 PMCID: PMC11385472 DOI: 10.1038/s41433-024-03085-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2023] [Revised: 04/03/2024] [Accepted: 04/11/2024] [Indexed: 05/13/2024] Open
Abstract
BACKGROUND/OBJECTIVES Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may hold unrealized screening potential, arising from signals that persist despite training or from ambiguous signals such as biomarker overlap or high comorbidity. The study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using diabetic deep learning algorithms. SUBJECTS/METHODS Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. A single 45° colour fundus photograph selected for each of the 433 imaged participants was processed by three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. RESULTS Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between the severity of hypertensive retinopathy and misclassified diabetic retinopathy. CONCLUSIONS The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. The observation that models trained for fewer diseases captured more incidental pathology increases confidence in the signalling hypotheses and supports using self-supervised learning to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
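The positive correlation the abstract reports between hypertensive retinopathy severity and misclassified diabetic retinopathy can be computed with any standard correlation statistic. The abstract does not name the statistic used, so the choice of Pearson here, and the severity grades, are illustrative assumptions:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-participant grades: hypertensive retinopathy severity
# vs. the diabetic-retinopathy grade the model (wrongly) assigned
htn_severity = [0, 0, 1, 1, 2, 2, 3, 3]
dr_output    = [0, 1, 0, 1, 1, 2, 2, 3]
r = pearson(htn_severity, dr_output)  # positive: more severe HTN, higher DR output
```

With ordinal grades like these, a rank-based statistic (Spearman) would be an equally reasonable choice; the mechanics are the same Pearson formula applied to ranks.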
Affiliation(s)
- Eve Martin
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia.
- School of Population and Global Health, The University of Western Australia, Crawley, Australia.
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia.
- Australian e-Health Research Centre, Floreat, WA, Australia.
- Angus G Cook
- School of Population and Global Health, The University of Western Australia, Crawley, Australia
- Shaun M Frost
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia
- Australian e-Health Research Centre, Floreat, WA, Australia
- Angus W Turner
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Fred K Chen
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, East Melbourne, VIC, Australia
- Ophthalmology Department, Royal Perth Hospital, Perth, Australia
- Ian L McAllister
- Lions Eye Institute, Nedlands, WA, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Janis M Nolde
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
- Markus P Schlaich
- Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
- Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
10
Vaghefi E, An S, Corbett R, Squirrell D. Association of retinal image-based, deep learning cardiac BioAge with telomere length and cardiovascular biomarkers. Optom Vis Sci 2024; 101:464-469. [PMID: 38935034 PMCID: PMC11462873 DOI: 10.1097/opx.0000000000002158] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/28/2024] Open
Abstract
SIGNIFICANCE Our retinal image-based deep learning (DL) cardiac biological age (BioAge) model could facilitate fast, accurate, noninvasive screening for cardiovascular disease (CVD) in novel community settings and thus improve outcomes for those with limited access to health care services. PURPOSE This study aimed to determine whether the results issued by our DL cardiac BioAge model are consistent with the known trends of CVD risk and the biomarker leukocyte telomere length (LTL) in a cohort of individuals from the UK Biobank. METHODS A cross-sectional cohort study was conducted using those individuals in the UK Biobank who had LTL data. These individuals were divided by sex, ranked by LTL, and then grouped into deciles. The retinal images were then presented to the DL model, and each individual's cardiac BioAge was determined. Individuals within each LTL decile were then ranked by cardiac BioAge, and the mean of the CVD risk biomarkers in the top and bottom quartiles was compared. The relationship between an individual's cardiac BioAge, the CVD biomarkers, and LTL was determined using traditional correlation statistics. RESULTS The DL cardiac BioAge model was able to accurately stratify individuals by the traditional CVD risk biomarkers; for both males and females, those issued a cardiac BioAge in the top quartile of their chronological peer group had significantly higher mean systolic blood pressure, hemoglobin A1c, and 10-year Pooled Cohort Equation CVD risk scores than those in the bottom quartile (p<0.001). Cardiac BioAge was associated with LTL shortening for both males and females (males: -0.22, r2 = 0.04; females: -0.18, r2 = 0.03). CONCLUSIONS In this cross-sectional cohort study, increasing CVD risk, whether assessed by traditional biomarkers, CVD risk scoring, or our DL cardiac BioAge model, was inversely related to LTL. At a population level, our data support the growing body of evidence that LTL shortening is a surrogate marker for increasing CVD risk and that this risk can be captured by our novel DL cardiac BioAge model.
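The quartile comparison described in the Methods (rank individuals by cardiac BioAge within a group, then compare biomarker means in the top vs bottom quartile) can be sketched as follows; the function name and toy values are illustrative, not from the study:

```python
def quartile_mean_gap(scores, biomarker):
    """Rank individuals by score; return the mean biomarker value in the
    bottom and top quartiles of that ranking."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    q = max(1, len(order) // 4)
    bottom = [biomarker[i] for i in order[:q]]   # lowest-scored quartile
    top = [biomarker[i] for i in order[-q:]]     # highest-scored quartile
    return sum(bottom) / len(bottom), sum(top) / len(top)

# Toy cardiac BioAge scores and systolic BP values for one LTL decile
bioage = [52, 61, 48, 70, 55, 66, 59, 73]
sys_bp = [118, 130, 115, 142, 121, 135, 126, 145]
low_mean, high_mean = quartile_mean_gap(bioage, sys_bp)  # 116.5 vs 143.5
```

Running this within each sex-specific LTL decile, as the study does, controls for telomere length before asking whether a higher BioAge still tracks with worse biomarker values.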
Affiliation(s)
- Ehsan Vaghefi
- Department of Optometry and Ophthalmology, University of Auckland, Auckland, New Zealand
- Department of Ophthalmology, Greenlane Clinical Centre, Auckland District Health Board, Auckland, New Zealand
- Songyang An
- Department of Optometry and Ophthalmology, University of Auckland, Auckland, New Zealand
- Toku Eyes, Auckland, New Zealand
- David Squirrell
- Toku Eyes, Auckland, New Zealand
- Department of Ophthalmology, Greenlane Clinical Centre, Auckland District Health Board, Auckland, New Zealand