1. Domalpally A, Slater R, Linderman RE, Balaji R, Bogost J, Voland R, Pak J, Blodi BA, Channa R, Fong D, Chew EY. Strong versus Weak Data Labeling for Artificial Intelligence Algorithms in the Measurement of Geographic Atrophy. Ophthalmol Sci 2024;4:100477. PMID: 38827491; PMCID: PMC11141255; DOI: 10.1016/j.xops.2024.100477
Abstract
Purpose: To understand the data labeling requirements for training deep learning models that measure geographic atrophy (GA) on fundus autofluorescence (FAF) images.
Design: Evaluation of artificial intelligence (AI) algorithms.
Subjects: Age-Related Eye Disease Study 2 (AREDS2) images were used for training and cross-validation, and GA clinical trial images were used for testing.
Methods: Training data consisted of two sets of FAF images: one with area measurements only and no indication of GA location (Weakly labeled) and a second with GA segmentation masks (Strongly labeled).
Main Outcome Measures: Bland-Altman plots and scatter plots were used to compare GA area measurements between ground truth and AI. The Dice coefficient was used to assess the segmentation accuracy of the Strongly labeled model.
Results: In the cross-validation AREDS2 dataset (n = 601), the mean (standard deviation [SD]) GA area measured by human grader, Weakly labeled AI model, and Strongly labeled AI model was 6.65 (6.3) mm², 6.83 (6.29) mm², and 6.58 (6.24) mm², respectively. The mean difference between ground truth and AI was 0.18 mm² (95% confidence interval [CI], -7.57 to 7.92) for the Weakly labeled model and -0.07 mm² (95% CI, -1.61 to 1.47) for the Strongly labeled model. With GlaxoSmithKline testing data (n = 156), the mean (SD) GA area was 9.79 (5.6) mm², 8.82 (4.61) mm², and 9.55 (5.66) mm² for human grader, Strongly labeled AI model, and Weakly labeled AI model, respectively. The mean difference between ground truth and AI for the two models was -0.97 mm² (95% CI, -4.36 to 2.41) and -0.24 mm² (95% CI, -4.98 to 4.49), respectively. The Dice coefficient was 0.99 for intergrader agreement, 0.89 for the cross-validation data, and 0.92 for the testing data.
Conclusions: Deep learning models can achieve reasonable accuracy even with Weakly labeled data. Training methods that integrate large volumes of Weakly labeled images with a small number of Strongly labeled images offer a promising way to reduce the cost and time burden of data labeling.
Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
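The Dice coefficient used in this study to grade segmentation agreement can be computed directly from a pair of binary masks. A minimal numpy sketch (the function name and toy masks below are illustrative, not from the study):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity 2|A∩B| / (|A| + |B|) between two binary GA masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / total)

# toy example: grader marks 4 pixels, model marks 4 pixels, 2 overlap
grader = np.zeros((4, 4), dtype=bool)
model = np.zeros((4, 4), dtype=bool)
grader[0, 0:4] = True   # grader marks the whole top row (4 px)
model[0, 2:4] = True    # model agrees on 2 of those pixels...
model[1, 0:2] = True    # ...and marks 2 pixels the grader did not
print(dice_coefficient(grader, model))  # 2*2 / (4+4) = 0.5
```

A Dice of 1.0 means identical masks, so the 0.99 intergrader agreement above indicates near-perfect overlap between human graders.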
Affiliation(s)
- Amitha Domalpally
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Robert Slater
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Rachel E. Linderman
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Rohit Balaji
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Jacob Bogost
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Rick Voland
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Jeong Pak
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Barbara A. Blodi
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Roomasa Channa
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Emily Y. Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
2. Chen D, Geevarghese A, Lee S, Plovnick C, Elgin C, Zhou R, Oermann E, Aphinyonaphongs Y, Al-Aswad LA. Transparency in Artificial Intelligence Reporting in Ophthalmology: A Scoping Review. Ophthalmol Sci 2024;4:100471. PMID: 38591048; PMCID: PMC11000111; DOI: 10.1016/j.xops.2024.100471
Abstract
Topic: This scoping review summarizes artificial intelligence (AI) reporting in the ophthalmology literature with respect to model development and validation. We characterize the state of transparency in reporting of studies prospectively validating models for disease classification.
Clinical Relevance: Understanding which elements authors currently describe about their AI models may aid future standardization of reporting. This review highlights the need for transparency to facilitate critical appraisal of models before clinical implementation and to minimize bias and inappropriate use. Transparent reporting can improve effective and equitable use in clinical settings.
Methods: Eligible articles (as of January 2022) from PubMed, Embase, Web of Science, and CINAHL were independently screened by two reviewers. All observational and clinical trial studies evaluating the performance of an AI model for disease classification of ophthalmic conditions were included. Studies were evaluated for reporting of parameters derived from reporting guidelines (CONSORT-AI, MI-CLAIM) and our previously published editorial on model cards. The reporting of these factors, including basic model and dataset details (source, demographics) and prospective validation outcomes, was summarized.
Results: Thirty-seven prospective validation studies were included in the scoping review. Eleven additional associated training and/or retrospective validation studies were included when this information could not be determined from the primary articles. These 37 studies validated 27 unique AI models; multiple studies evaluated the same algorithms (EyeArt, IDx-DR, and Medios AI). Details of model development were variably reported: 18 of 27 models described training dataset annotation, and 10 of 27 reported training data distribution. Demographic information for training data was rarely reported: 7 of the 27 unique models reported age and gender, and only 2 reported race and/or ethnicity. At the level of prospective clinical validation, age and gender of the study populations were more consistently reported (29 and 28 of 37 studies, respectively), but only 9 studies reported race and/or ethnicity data. Scope of use was difficult to discern for the majority of models; fifteen studies did not state or imply the intended primary users.
Conclusion: Our scoping review demonstrates variable reporting of information related to both model development and validation. The intention of our study was not to assess the quality of the factors we examined but to characterize what information is, and is not, regularly reported. Our results suggest the need for greater transparency in reporting the information necessary to determine the appropriateness and fairness of these tools before clinical use.
Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
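The model cards discussed in this review are, in effect, structured checklists of exactly the fields found to be under-reported here (training data source and demographics, validation outcomes, intended users). A hypothetical sketch of a minimal machine-readable card and a completeness check; all field names below are illustrative, not a published schema:

```python
# Hypothetical minimal "model card"; the field names are illustrative,
# chosen to mirror items this review found under-reported.
model_card = {
    "model_name": "example-fundus-classifier",  # placeholder, not a real model
    "task": "disease classification",
    "training_data": {
        "source": "unspecified",
        "annotation": "unspecified",
        "demographics": {"age": None, "gender": None, "race_ethnicity": None},
    },
    "validation": {"design": "prospective", "outcomes": []},
    "intended_users": [],
}

def missing_fields(card, prefix=""):
    """Return dotted paths of fields left empty, None, or 'unspecified'."""
    missing = []
    for key, value in card.items():
        path = prefix + key
        if isinstance(value, dict):
            missing.extend(missing_fields(value, path + "."))
        elif value in (None, "", [], "unspecified"):
            missing.append(path)
    return missing

print(missing_fields(model_card))
```

Running the check on this deliberately incomplete card flags the training-data source, annotation, all three demographic fields, the validation outcomes, and the intended users.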
Affiliation(s)
- Dinah Chen
- Department of Ophthalmology, NYU Langone Health, New York, New York
- Samuel Lee
- Department of Neurosurgery, NYU Grossman School of Medicine, New York, New York
- Cansu Elgin
- Department of Ophthalmology, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Raymond Zhou
- Department of Neurosurgery, Vanderbilt School of Medicine, Nashville, Tennessee
- Eric Oermann
- Department of Neurosurgery, NYU Grossman School of Medicine, New York, New York
- Department of Neurosurgery, NYU Langone Health, New York, New York
- Yindalon Aphinyonaphongs
- Department of Medicine, NYU Langone Health, New York, New York
- Department of Population Health, NYU Grossman School of Medicine, New York, New York
- Lama A. Al-Aswad
- Department of Ophthalmology, NYU Langone Health, New York, New York
- Department of Population Health, NYU Grossman School of Medicine, New York, New York
3. Chaurasia AK, MacGregor S, Craig JE, Mackey DA, Hewitt AW. Assessing the Efficacy of Synthetic Optic Disc Images for Detecting Glaucomatous Optic Neuropathy Using Deep Learning. Transl Vis Sci Technol 2024;13:1. PMID: 38829624; DOI: 10.1167/tvst.13.6.1
Abstract
Purpose: Deep learning architectures can automatically learn complex features and patterns associated with glaucomatous optic neuropathy (GON), but developing robust algorithms requires large datasets. We trained an adversarial model to generate high-quality optic disc images from a large, diverse dataset, and then assessed the performance of models trained on the generated synthetic images for detecting GON.
Methods: A total of 17,060 fundus images (6874 glaucomatous and 10,186 healthy) were used to train deep convolutional generative adversarial networks (DCGANs) to synthesize disc images for both classes. We then trained two models to detect GON: one solely on synthetic images and another on a mixed dataset of synthetic and real clinical images. Both models were externally validated on a dataset not used for training. Classification metrics were evaluated with 95% confidence intervals, and the models' decision-making processes were assessed using gradient-weighted class activation mapping (Grad-CAM).
Results: Receiver operating characteristic (ROC) curve analysis identified an optimal cup-to-disc ratio threshold of 0.619 for detecting GON in the training data. DCGANs generated high-quality synthetic disc images for healthy and glaucomatous eyes. When trained on the mixed dataset, the model's area under the ROC curve reached 99.85% on internal validation and 86.45% on external validation. Grad-CAM saliency maps were primarily centered on the optic nerve head, indicating a precise and clinically relevant attention area of the fundus image.
Conclusions: Although the model trained on synthetic data alone performed well, training on the mixed dataset yielded better performance and generalization. Integrating synthetic and real clinical images can optimize the performance of a deep learning model for glaucoma detection.
Translational Relevance: Deep learning models for glaucoma detection can be improved and better generalized to clinical practice by supplementing real-world clinical data with DCGAN-generated synthetic images.
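The study reports an optimal cup-to-disc ratio threshold of 0.619 from ROC analysis. One common criterion for picking such a threshold is Youden's J statistic (sensitivity + specificity − 1); the paper does not state which criterion it used, so the sketch below is an assumption, and the toy values are made up:

```python
import numpy as np

def youden_threshold(scores, labels):
    """Return (threshold, J) maximizing Youden's J = sensitivity + specificity - 1.

    scores: continuous values (e.g. cup-to-disc ratios); labels: 1 = disease.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):          # candidate thresholds at observed values
        sens = np.mean(scores[labels == 1] >= t)  # true positive rate at t
        spec = np.mean(scores[labels == 0] < t)   # true negative rate at t
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# toy cup-to-disc ratios: glaucomatous eyes tend to have larger ratios
cdr = [0.3, 0.4, 0.5, 0.7, 0.8, 0.9]
gon = [0, 0, 0, 1, 1, 1]
print(youden_threshold(cdr, gon))  # (0.7, 1.0) on this perfectly separable toy set
```

On real, overlapping distributions the maximum J is well below 1, and the chosen threshold trades sensitivity against specificity.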
Affiliation(s)
- Abadh K Chaurasia
- Menzies Institute for Medical Research, University of Tasmania, Tasmania, Australia
- Stuart MacGregor
- QIMR Berghofer Medical Research Institute, Brisbane, Australia
- School of Medicine, University of Queensland, Brisbane, Australia
- Jamie E Craig
- Department of Ophthalmology, Flinders University, Flinders Medical Centre, Bedford Park, Australia
- David A Mackey
- Lions Eye Institute, Centre for Ophthalmology and Visual Science, University of Western Australia, Perth, Australia
- Alex W Hewitt
- Menzies Institute for Medical Research, University of Tasmania, Tasmania, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
4. Iorga RE, Costin D, Munteanu-Dănulescu RS, Rezuș E, Moraru AD. Non-Invasive Retinal Vessel Analysis as a Predictor for Cardiovascular Disease. J Pers Med 2024;14:501. PMID: 38793083; PMCID: PMC11122007; DOI: 10.3390/jpm14050501
Abstract
Cardiovascular disease (CVD) is the most frequent cause of death worldwide, and alterations in the microcirculation may predict cardiovascular mortality. The retinal vasculature can serve as a model for studying vascular alterations associated with CVD, and fundus images can be taken and analysed to quantify microvascular changes non-invasively. The central retinal arteriolar equivalent (CRAE), the central retinal venular equivalent (CRVE), and the arteriolar-to-venular ratio (AVR) can be used as biomarkers of cardiovascular mortality risk: a narrower CRAE, wider CRVE, and lower AVR have been associated with increased cardiovascular events. Dynamic retinal vessel analysis (DRVA) allows quantification of retinal changes from digital image sequences in response to flicker-light stimulation. This article is not only a review of the current literature; it also discusses methodological benefits and identifies research gaps, highlighting the potential use of microvascular biomarkers for screening and treatment monitoring in CVD. Artificial intelligence (AI) systems such as Quantitative Analysis of Retinal vessel Topology and size (QUARTZ) and the SIVA deep learning system (SIVA-DLS) are efficient at extracting information from fundus photographs, and they can increase diagnostic accuracy and improve patient care by complementing the role of physicians. Retinal vascular imaging using AI may help identify cardiovascular risk and is an important tool in primary CVD prevention. Further research should explore the clinical application of retinal microvascular biomarkers to assess systemic vascular health and predict cardiovascular events.
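CRAE and CRVE are summary calibres computed by iteratively pairing measured vessel widths; the widely used revised (Knudtson) formulas combine a pair as W = c·√(w₁² + w₂²), with branching coefficient c ≈ 0.88 for arterioles and 0.95 for venules, and AVR is then CRAE/CRVE. A sketch under those assumptions (the vessel widths below are made-up numbers, not study data):

```python
import math

def knudtson_summary(widths, factor):
    """Iteratively pair widest with narrowest: W = factor * sqrt(w1^2 + w2^2)."""
    ws = sorted(widths)
    while len(ws) > 1:
        w_small, w_big = ws.pop(0), ws.pop(-1)
        ws.append(factor * math.sqrt(w_small**2 + w_big**2))
        ws.sort()
    return ws[0]

def avr(arteriole_widths, venule_widths):
    """Arteriolar-to-venular ratio from the six largest vessels of each type."""
    crae = knudtson_summary(arteriole_widths, 0.88)  # arteriolar coefficient
    crve = knudtson_summary(venule_widths, 0.95)     # venular coefficient
    return crae / crve, crae, crve

# made-up widths in micrometers; venules are typically wider than arterioles
ratio, crae, crve = avr([100, 110, 120, 130, 140, 150],
                        [140, 150, 160, 170, 180, 190])
print(f"CRAE={crae:.1f}, CRVE={crve:.1f}, AVR={ratio:.3f}")
```

A healthy AVR is typically below but near 1; generalized arteriolar narrowing pushes it lower.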
Affiliation(s)
- Raluca Eugenia Iorga
- Department of Surgery II, Discipline of Ophthalmology, “Grigore T. Popa” University of Medicine and Pharmacy, Strada Universitatii No. 16, 700115 Iași, Romania; (R.E.I.); (A.D.M.)
- Damiana Costin
- Doctoral School, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iași, Romania
- Elena Rezuș
- Department of Internal Medicine II, Discipline of Reumathology, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iași, Romania;
- Andreea Dana Moraru
- Department of Surgery II, Discipline of Ophthalmology, “Grigore T. Popa” University of Medicine and Pharmacy, Strada Universitatii No. 16, 700115 Iași, Romania; (R.E.I.); (A.D.M.)
5. Zago Ribeiro L, Nakayama LF, Malerbi FK, Regatieri CVS. Automated machine learning model for fundus image classification by health-care professionals with no coding experience. Sci Rep 2024;14:10395. PMID: 38710726; PMCID: PMC11074250; DOI: 10.1038/s41598-024-60807-y
Abstract
To assess the feasibility of code-free deep learning (CFDL) platforms for predicting binary outcomes from fundus images in ophthalmology, two distinct online platforms (Google Vertex and Amazon Rekognition) were evaluated on two distinct datasets. Two publicly available datasets, Messidor-2 and BRSET, were used for model development: Messidor-2 consists of fundus photographs from diabetic patients, and BRSET is a multi-label dataset. The CFDL platforms were used by a single ophthalmologist without coding expertise to create deep learning models, with no preprocessing of the images. The models were evaluated using F1 score, area under the curve (AUC), precision, and recall. Performance for referable diabetic retinopathy and macular edema was above 0.9 on both tasks for both CFDL platforms. The Google Vertex models outperformed the Amazon models, with the BRSET dataset achieving the highest accuracy (AUC of 0.994). Multi-classification tasks using only BRSET achieved similar overall performance between platforms, with Google Vertex reaching an AUC of 0.994 for laterality, 0.942 for age grouping, 0.779 for genetic sex identification, 0.857 for optic, and 0.837 for normality. The study demonstrates the feasibility of using automated machine learning platforms to predict binary outcomes from fundus images in ophthalmology, highlights the high accuracy the models achieved on some tasks, and shows the potential of CFDL as an entry-friendly way for ophthalmologists to familiarize themselves with machine learning concepts.
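The F1 score, precision, and recall these platforms report all reduce to simple ratios of confusion-matrix counts. A minimal sketch with made-up screening counts:

```python
def classification_metrics(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0   # of flagged, how many real
    recall = tp / (tp + fn) if tp + fn else 0.0      # of real, how many flagged
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of the two
    return {"precision": precision, "recall": recall, "f1": f1}

# toy referable-DR screen: 90 true positives, 10 false alarms, 6 missed cases
print(classification_metrics(tp=90, fp=10, fn=6))
```

Because F1 is the harmonic mean, it is dragged toward whichever of precision or recall is lower, which is why it is preferred over accuracy on the imbalanced datasets common in screening.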
Affiliation(s)
- Lucas Zago Ribeiro
- Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, SP, Brazil.
- Luis Filipe Nakayama
- Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, SP, Brazil
- Massachusetts Institute of Technology, Institute for Medical Engineering and Science, Cambridge, MA, USA
- Fernando Korn Malerbi
- Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, SP, Brazil
6. Touma S, Hammou BA, Antaki F, Boucher MC, Duval R. Comparing code-free deep learning models to expert-designed models for detecting retinal diseases from optical coherence tomography. Int J Retina Vitreous 2024;10:37. PMID: 38671486; PMCID: PMC11055378; DOI: 10.1186/s40942-024-00555-3
Abstract
BACKGROUND: Code-free deep learning (CFDL) is a novel tool in artificial intelligence (AI). This study directly compared the discriminative performance of CFDL models designed by ophthalmologists without coding experience against bespoke models designed by AI experts in detecting retinal pathologies from optical coherence tomography (OCT) videos and fovea-centered images.
METHODS: Using the same internal dataset of 1,173 OCT macular videos and fovea-centered images, model development was performed simultaneously but independently by an ophthalmology resident (CFDL models) and a postdoctoral researcher with expertise in AI (bespoke models). We designed a multi-class model to categorize videos and fovea-centered images into five labels: normal retina, macular hole, epiretinal membrane, wet age-related macular degeneration, and diabetic macular edema. We qualitatively compared point estimates of the performance metrics of the CFDL and bespoke models.
RESULTS: For videos, the CFDL model demonstrated excellent discriminative performance, even outperforming the bespoke model on some metrics: the area under the precision-recall curve was 0.984 (vs. 0.901), precision and sensitivity were both 94.1% (vs. 94.2%), and accuracy was 94.1% (vs. 96.7%). The fovea-centered CFDL model performed better overall than the video-based model and was as accurate as the best bespoke model.
CONCLUSION: This comparative study demonstrated that code-free models created by clinicians without coding expertise classify various retinal pathologies from OCT videos and images as accurately as expert-designed bespoke models. CFDL represents a step toward the democratization of AI in medicine, although its numerous limitations must be carefully addressed to ensure effective application in healthcare.
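The area under the precision-recall curve compared above can be estimated from ranked predictions as the mean precision at the rank of each positive case, one common "average precision" convention. A numpy sketch under that assumption (the toy scores are illustrative, not study data):

```python
import numpy as np

def average_precision(scores, labels):
    """Average precision: mean precision at each positive, ranked by score."""
    order = np.argsort(-np.asarray(scores, dtype=float))  # descending score
    y = np.asarray(labels, dtype=int)[order]
    ranks = np.arange(1, len(y) + 1)
    precision_at_k = np.cumsum(y) / ranks   # precision among the top-k items
    return float(precision_at_k[y == 1].mean())

# toy ranking of 4 scans: positives land at ranks 1 and 3
print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0]))  # (1 + 2/3)/2 ≈ 0.833
```

Unlike ROC AUC, this metric ignores true negatives, so it is more sensitive to how a model ranks the rare diseased cases.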
Affiliation(s)
- Samir Touma
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est- de-l'Île-de-Montréal, 5415 boulevard de l'Assomption, H1T 2M4, Montreal, QC, Canada
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada
- Badr Ait Hammou
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est- de-l'Île-de-Montréal, 5415 boulevard de l'Assomption, H1T 2M4, Montreal, QC, Canada
- Fares Antaki
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est- de-l'Île-de-Montréal, 5415 boulevard de l'Assomption, H1T 2M4, Montreal, QC, Canada
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada
- The CHUM School of Artificial Intelligence in Healthcare (SAIH), Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada
- Marie Carole Boucher
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est- de-l'Île-de-Montréal, 5415 boulevard de l'Assomption, H1T 2M4, Montreal, QC, Canada
- Renaud Duval
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada.
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est- de-l'Île-de-Montréal, 5415 boulevard de l'Assomption, H1T 2M4, Montreal, QC, Canada.
7. Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous 2024;10:36. PMID: 38654344; PMCID: PMC11036694; DOI: 10.1186/s40942-024-00554-4
Abstract
BACKGROUND: Applications of artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay of posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged in fundoscopy to accomplish core tasks including segmentation, classification, and prediction.
MAIN BODY: In this article we review AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations.
SHORT CONCLUSION: As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations in order to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Audrey Yan
- Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Amrish Selvam
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, USA
- Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA.
8. Mbagwu M, Chu Z, Borkar D, Koshta A, Shah N, Torres A, Kalvaria H, Lum F, Leng T. Feasibility of cross-vendor linkage of ophthalmic images with electronic health record data: an analysis from the IRIS Registry. JAMIA Open 2024;7:ooae005. PMID: 38283883; PMCID: PMC10811449; DOI: 10.1093/jamiaopen/ooae005
Abstract
Purpose: To link compliant, universal Digital Imaging and Communications in Medicine (DICOM) ophthalmic imaging data at the individual patient level with the American Academy of Ophthalmology IRIS® Registry (Intelligent Research in Sight).
Design: A retrospective study using de-identified EHR registry data.
Subjects, Participants, and Controls: IRIS Registry records.
Materials and Methods: DICOM files of several imaging modalities were acquired from two large retina ophthalmology practices. Metadata tags were extracted and harmonized to facilitate linkage to the IRIS Registry using a proprietary, heuristic patient-matching algorithm adhering to HITRUST guidelines. Linked patients and images were assessed by image type and clinical diagnosis. Reasons for failed linkage were assessed by examining patients' records.
Main Outcome Measures: Success rate of linking clinicoimaging and EHR data at the patient level.
Results: A total of 2,287,839 DICOM files from 54,896 unique patients were available. Of these, 1,937,864 images from 46,196 unique patients were successfully linked to existing patients in the registry. After removing records with abnormal patient names and invalid birthdates, the linkage success rate was 93.3% for images, and 88.2% of all patients at the participating practices were linked to at least one image.
Conclusions and Relevance: Using identifiers from DICOM metadata, we created an automated pipeline that comprehensively and accurately connects longitudinal real-world clinical data to various imaging modalities from multiple manufacturers at the patient and visit levels. The process has produced an enriched, multimodal IRIS Registry, bridging the gap between basic research and clinical care by enabling future artificial intelligence algorithm development that requires large linked clinicoimaging datasets.
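DICOM encodes PatientName as a caret-delimited person name ("Family^Given^Middle…", the PN value representation in PS3.5) and PatientBirthDate as an 8-digit YYYYMMDD string. The study's matching algorithm is proprietary, so the sketch below only illustrates the kind of tag harmonization such a linkage needs; the helper and its key format are hypothetical:

```python
def harmonize_patient_key(patient_name, birth_date):
    """Build a normalized match key from DICOM PatientName and PatientBirthDate.

    PatientName components are caret-delimited (DICOM PS3.5 PN value
    representation); PatientBirthDate is a YYYYMMDD string. The "name|date"
    key format here is an illustrative choice, not the study's algorithm.
    """
    parts = [p.strip().lower() for p in patient_name.split("^") if p.strip()]
    return " ".join(parts) + "|" + birth_date.strip()

# vendors often differ in casing and trailing empty name components,
# so both of these should map to the same key
print(harmonize_patient_key("DOE^JANE^^", "19500101"))  # doe jane|19500101
print(harmonize_patient_key("Doe^Jane", "19500101"))    # doe jane|19500101
```

Normalizing before matching is what makes cross-vendor linkage feasible: the same patient's files from different device manufacturers rarely carry byte-identical demographics.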
Affiliation(s)
- Michael Mbagwu
- Verana Health, San Francisco, CA 94107, United States
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, CA 94303, United States
- Zhongdi Chu
- Verana Health, San Francisco, CA 94107, United States
- Durga Borkar
- Verana Health, San Francisco, CA 94107, United States
- Duke Eye Center, Duke University School of Medicine, Durham, NC 27705, United States
- Alex Koshta
- Verana Health, San Francisco, CA 94107, United States
- Nisarg Shah
- Verana Health, San Francisco, CA 94107, United States
- Flora Lum
- American Academy of Ophthalmology, San Francisco, CA 94109, United States
- Theodore Leng
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, CA 94303, United States
9. McCormick I, Butcher R, Ramke J, Bolster NM, Limburg H, Chroston H, Bastawrous A, Burton MJ, Mactaggart I. The Rapid Assessment of Avoidable Blindness survey: Review of the methodology and protocol for the seventh version (RAAB7). Wellcome Open Res 2024;9:133. PMID: 38828387; PMCID: PMC11143406; DOI: 10.12688/wellcomeopenres.20907.1
Abstract
The Rapid Assessment of Avoidable Blindness (RAAB) is a population-based cross-sectional survey methodology used to collect data on the prevalence and causes of vision impairment, and on eye care service indicators, among the population 50 years and older. RAAB has been used for over 20 years, with modifications to the protocol over time reflected in changing version numbers; this paper describes the latest version of the methodology, RAAB7. RAAB7 is a collaborative project between the International Centre for Eye Health and Peek Vision, with guidance from a steering group of global eye health stakeholders. We have fully digitised RAAB, allowing fast, accurate and secure data collection. A bespoke Android mobile application automatically synchronises data to a secure Amazon Web Services virtual private cloud when devices are online, so users can monitor data collection in real time. Vision is screened using Peek Vision's digital visual acuity test for mobile devices, and uncorrected, corrected and pinhole visual acuity are collected. An optional module on disability is available. We have rebuilt the RAAB data repository as the end point of RAAB7's digital data workflow, including a front-end website (https://www.raab.world) that provides access to the past 20 years of RAAB surveys worldwide and hosts open-access RAAB data to support the advocacy and research efforts of the global eye health community. Active research sub-projects are finalising three new components in 2024-2025: 1) near vision screening, to address data gaps on near vision impairment and effective refractive error coverage; 2) an optional health economics module, to assess the affordability of eye care services and the productivity losses associated with vision impairment; and 3) an optional health systems data collection module, to support RAAB's primary aim of informing eye health service planning by helping users integrate eye care facility data with population data.
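Prevalence surveys like RAAB use cluster sampling, so confidence intervals are usually widened by a design effect rather than computed as if the sample were simple random. A simplified normal-approximation sketch (the design-effect value and counts below are made up for illustration, not RAAB defaults):

```python
import math

def prevalence_ci(cases, examined, deff=1.5, z=1.96):
    """Point prevalence with a design-effect-adjusted 95% CI (normal approx.).

    deff inflates the simple-random-sampling variance to account for
    clustering; z = 1.96 gives a two-sided 95% interval.
    """
    p = cases / examined
    se = math.sqrt(p * (1 - p) / examined * deff)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# e.g. 60 people found blind among 3000 examined aged 50+
p, low, high = prevalence_ci(60, 3000)
print(f"prevalence {p:.1%}, 95% CI {low:.1%} to {high:.1%}")
```

With deff = 1 this collapses to the ordinary binomial normal approximation; clustering (similar outcomes within a village or enumeration area) makes deff exceed 1 and the interval wider.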
Affiliation(s)
- Ian McCormick
- International Centre for Eye Health, London School of Hygiene & Tropical Medicine, London, UK
- Robert Butcher
- International Centre for Eye Health, London School of Hygiene & Tropical Medicine, London, UK
- Clinical Research Department, London School of Hygiene & Tropical Medicine, London, UK
- Jacqueline Ramke
- International Centre for Eye Health, London School of Hygiene & Tropical Medicine, London, UK
- School of Optometry and Vision Science, The University of Auckland, Auckland, New Zealand
- Nigel M Bolster
- International Centre for Eye Health, London School of Hygiene & Tropical Medicine, London, UK
- Peek Vision, London, UK
- Hans Limburg
- Independent consultant, Grootebroek, The Netherlands
- Hannah Chroston
- International Centre for Eye Health, London School of Hygiene & Tropical Medicine, London, UK
- Andrew Bastawrous
- International Centre for Eye Health, London School of Hygiene & Tropical Medicine, London, UK
- Peek Vision, London, UK
- Matthew J Burton
- International Centre for Eye Health, London School of Hygiene & Tropical Medicine, London, UK
- National Institute for Health Research Biomedical Research Centre for Ophthalmology at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Islay Mactaggart
- International Centre for Eye Health, London School of Hygiene & Tropical Medicine, London, UK
10. Wong CYT, O'Byrne C, Taribagil P, Liu T, Antaki F, Keane PA. Comparing code-free and bespoke deep learning approaches in ophthalmology. Graefes Arch Clin Exp Ophthalmol 2024. PMID: 38446200; DOI: 10.1007/s00417-024-06432-x
Abstract
AIM Code-free deep learning (CFDL) allows clinicians without coding expertise to build high-quality artificial intelligence (AI) models without writing code. In this review, we comprehensively examine the advantages that CFDL offers over bespoke, expert-designed deep learning (DL). As exemplars, we use the following tasks: (1) diabetic retinopathy screening, (2) retinal multi-disease classification, (3) surgical video classification, (4) oculomics and (5) resource management. METHODS We searched MEDLINE (through PubMed) from inception to June 25, 2023, for studies reporting CFDL applications in ophthalmology, using the keywords 'autoML' AND 'ophthalmology'. After identifying 5 CFDL studies addressing our target tasks, we performed a subsequent search for corresponding bespoke DL studies focused on the same tasks. Only English-language articles with full text available were included. Reviews, editorials, protocols, and case reports or case series were excluded. Ten relevant studies were included in this review. RESULTS Overall, studies were optimistic about CFDL's advantages over bespoke DL in the five ophthalmological tasks. However, much of this discussion was one-dimensional and left wide applicability gaps. A high-quality assessment of whether CFDL is preferable to bespoke DL warrants a context-specific, weighted assessment of clinician intent, patient acceptance and cost-effectiveness. We conclude that CFDL and bespoke DL each have unique strengths and cannot replace one another; their benefits are valued differently on a case-by-case basis. Future studies are warranted to perform a multidimensional analysis of both techniques and to address the limitations of suboptimal dataset quality, poorly characterised applicability and non-regulated study designs. CONCLUSION For clinicians without DL expertise or easy access to AI experts, CFDL allows the prototyping of novel clinical AI systems. CFDL models can complement bespoke models, depending on the task at hand. A multidimensional, weighted evaluation of the factors involved in implementing these models for a designated task is warranted.
Affiliation(s)
Carolyn Yu Tung Wong
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
Ciara O'Byrne
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
Priyal Taribagil
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
Timing Liu
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
Fares Antaki
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- The CHUM School of Artificial Intelligence in Healthcare, Montreal, QC, Canada
Pearse Andrew Keane
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- NIHR Moorfields Biomedical Research Centre, London, UK

11
Veritti D, Rubinato L, Sarao V, De Nardin A, Foresti GL, Lanzetta P. Behind the mask: a critical perspective on the ethical, moral, and legal implications of AI in ophthalmology. Graefes Arch Clin Exp Ophthalmol 2024; 262:975-982. [PMID: 37747539] [PMCID: PMC10907411] [DOI: 10.1007/s00417-023-06245-4] [Received: 01/19/2023] [Revised: 07/24/2023] [Accepted: 09/15/2023] [Indexed: 09/26/2023]
Abstract
PURPOSE This narrative review aims to provide an overview of the dangers, controversial aspects, and implications of artificial intelligence (AI) use in ophthalmology and other fields of medicine. METHODS We conducted a decade-long comprehensive search (January 2013-May 2023) of both academic and grey literature, focusing on the application of AI in ophthalmology and healthcare. This search included key web-based academic databases, non-traditional sources, and targeted searches of specific organizations and institutions. We reviewed and selected documents for relevance to AI, healthcare, ethics, and guidelines, aiming for a critical analysis of the ethical, moral, and legal implications of AI in healthcare. RESULTS Six main issues were identified, analyzed, and discussed. These include bias and clinical safety, cybersecurity, health data and AI algorithm ownership, the "black-box" problem, medical liability, and the risk of widening inequality in healthcare. CONCLUSION Solutions to address these issues include collecting high-quality data representative of the target population, incorporating stronger security measures, using explainable AI algorithms and ensemble methods, and making AI-based solutions accessible to everyone. With careful oversight and regulation, AI-based systems can be used to supplement physician decision-making and improve patient care and outcomes.
Affiliation(s)
Daniele Veritti
- Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
Leopoldo Rubinato
- Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
Valentina Sarao
- Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare - IEMO, Udine, Italy
Axel De Nardin
- Department of Mathematics, Informatics and Physics, University of Udine, Udine, Italy
Gian Luca Foresti
- Department of Mathematics, Informatics and Physics, University of Udine, Udine, Italy
Paolo Lanzetta
- Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare - IEMO, Udine, Italy

12
Shi D, He S, Yang J, Zheng Y, He M. One-shot Retinal Artery and Vein Segmentation via Cross-modality Pretraining. OPHTHALMOLOGY SCIENCE 2024; 4:100363. [PMID: 37868792] [PMCID: PMC10585631] [DOI: 10.1016/j.xops.2023.100363] [Received: 01/07/2023] [Revised: 06/29/2023] [Accepted: 06/30/2023] [Indexed: 10/24/2023]
Abstract
Purpose To perform one-shot retinal artery and vein segmentation with cross-modality artery-vein (AV) soft-label pretraining. Design Cross-sectional study. Subjects The study included 6479 color fundus photography (CFP) and arterial-venous fundus fluorescein angiography (FFA) pairs from 1964 participants for pretraining, and 6 AV segmentation data sets with various image sources (RITE, HRF, LES-AV, AV-WIDE, PortableAV, and DRSplusAV) for one-shot finetuning and testing. Methods We structurally matched the arterial and venous phases of FFA with CFP. AV soft labels were generated automatically from the fluorescein intensity difference between the arterial- and venous-phase FFA images, and these soft labels were then used to train a generative adversarial network to generate AV soft segmentations from CFP input. We then finetuned the pretrained model to perform AV segmentation using only one image from each of the AV segmentation data sets and tested on the remainder. To investigate the effect and reliability of one-shot finetuning, we conducted experiments without finetuning and, under the same experimental setting, finetuned the pretrained model on an iteratively different single image for each data set, testing the models on the remaining images. Main Outcome Measures AV segmentation was assessed by area under the receiver operating characteristic curve (AUC), accuracy, Dice score, sensitivity, and specificity. Results After FFA-AV soft-label pretraining, our method required only one exemplar image from each camera or modality and achieved performance similar to full-data training, with AUC ranging from 0.901 to 0.971, accuracy from 0.959 to 0.980, Dice score from 0.585 to 0.773, sensitivity from 0.574 to 0.763, and specificity from 0.981 to 0.991. Compared with no finetuning, segmentation performance improved after one-shot finetuning. When finetuned on a different image in each data set, the standard deviation of the segmentation results across models ranged from 0.001 to 0.10. Conclusions This study presents the first one-shot approach to retinal artery and vein segmentation. The proposed labeling method is time-saving and efficient, demonstrating a promising direction for retinal-vessel segmentation with the potential for widespread application. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
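The Dice score used here as the main segmentation outcome has a simple closed form, 2·|A∩B| / (|A| + |B|); a minimal NumPy sketch (the function name and toy masks are illustrative, not taken from the study's code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# toy 4x4 vessel masks: 4 predicted pixels, 3 ground-truth pixels, 3 overlap
a = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
b = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(float(dice(a, b)), 3))  # 2*3/(4+3) -> 0.857
```

The same formula applies per class (artery, vein) when masks are multi-class; intergrader agreement can be computed identically by treating one grader's mask as the "prediction".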
Affiliation(s)
Danli Shi
- Centre for Eye and Vision Research (CEVR), Hong Kong SAR, China
- The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
Shuang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
Jiancheng Yang
- Swiss Federal Institute of Technology in Lausanne (EPFL), Lausanne, Switzerland
Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
Mingguang He
- Centre for Eye and Vision Research (CEVR), Hong Kong SAR, China
- The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China

13
Milad D, Antaki F, Bernstein A, Touma S, Duval R. Automated Machine Learning versus Expert-Designed Models in Ocular Toxoplasmosis: Detection and Lesion Localization Using Fundus Images. Ocul Immunol Inflamm 2024:1-7. [PMID: 38411944] [DOI: 10.1080/09273948.2024.2319281] [Received: 10/17/2023] [Accepted: 02/11/2024] [Indexed: 02/28/2024]
Abstract
PURPOSE Automated machine learning (AutoML) allows clinicians without coding experience to build their own deep learning (DL) models. This study assesses the performance of AutoML in detecting and localizing ocular toxoplasmosis (OT) lesions in fundus images and compares it to expert-designed models. METHODS Ophthalmology trainees without coding experience designed AutoML models using 304 labelled fundus images: a binary classification model to differentiate OT from normal, and an object detection model to visually identify OT lesions. RESULTS The AutoML classification model had an area under the precision-recall curve (AuPRC) of 0.945, sensitivity of 100%, specificity of 83% and accuracy of 93.5% (vs. 94%, 86% and 91%, respectively, for the bespoke models). The AutoML object detection model had an AuPRC of 0.600, with a precision of 93.3% and recall of 56%. Using a diversified external validation dataset, our model correctly labeled 15 normal fundus images (100%) and 15 OT fundus images (100%), with mean confidence scores of 0.965 and 0.963, respectively. CONCLUSION AutoML models created by ophthalmologists without coding experience were comparable to or better than expert-designed bespoke models trained on the same dataset. By creatively using AutoML to identify OT lesions on fundus images, our approach brings the whole spectrum of DL model design into the hands of clinicians.
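The sensitivity, specificity and accuracy figures reported above follow directly from confusion-matrix counts; a small sketch with illustrative counts (not the study's data):

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity (recall), specificity, precision and accuracy
    from confusion-matrix counts for a binary classifier."""
    return {
        "sensitivity": tp / (tp + fn),          # true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
        "precision": tp / (tp + fp),            # positive predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# illustrative counts for a toy screening run: no missed cases (fn=0)
m = binary_metrics(tp=30, fp=5, tn=25, fn=0)
print(m)
```

With fn = 0 the sensitivity is 1.0 (100%), mirroring how a model can report perfect sensitivity while specificity remains below 100% because of false positives.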
Affiliation(s)
Daniel Milad
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montreal, Quebec, Canada
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
Fares Antaki
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montreal, Quebec, Canada
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
- The CHUM School of Artificial Intelligence in Healthcare (SAIH), Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
Allison Bernstein
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montreal, Quebec, Canada
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
Samir Touma
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montreal, Quebec, Canada
- Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
Renaud Duval
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montreal, Quebec, Canada

14
Bahr T, Vu TA, Tuttle JJ, Iezzi R. Deep Learning and Machine Learning Algorithms for Retinal Image Analysis in Neurodegenerative Disease: Systematic Review of Datasets and Models. Transl Vis Sci Technol 2024; 13:16. [PMID: 38381447] [PMCID: PMC10893898] [DOI: 10.1167/tvst.13.2.16] [Received: 08/30/2023] [Accepted: 11/26/2023] [Indexed: 02/22/2024]
Abstract
Purpose Retinal images contain rich biomarker information for neurodegenerative disease. Recently, deep learning models have been used for automated neurodegenerative disease diagnosis and risk prediction using retinal images, with good results. Methods In this review, we systematically report studies with datasets of retinal images from patients with neurodegenerative diseases, including Alzheimer's disease, Huntington's disease, Parkinson's disease, amyotrophic lateral sclerosis, and others. We also review and characterize the models in the current literature that have been used for classification, regression, or segmentation problems using retinal images in patients with neurodegenerative diseases. Results Our review found several existing datasets and models across various imaging modalities, primarily in patients with Alzheimer's disease, with most datasets on the order of tens to a few hundred images. We found limited data available for the other neurodegenerative diseases. Although cross-sectional imaging data for Alzheimer's disease are becoming more abundant, datasets with longitudinal imaging of any disease are lacking. Conclusions The use of bilateral and multimodal imaging together with metadata seems to improve model performance; thus, multimodal, bilateral image datasets with patient metadata are needed. We identified several deep learning tools that have been useful in this context, including feature extraction algorithms specifically for retinal images, retinal image preprocessing techniques, transfer learning, feature fusion, and attention mapping. Importantly, we also consider the limitations common to these models in real-world clinical applications. Translational Relevance This systematic review evaluates the deep learning models and retinal features relevant to the evaluation of retinal images of patients with neurodegenerative disease.
Affiliation(s)
Tyler Bahr
- Mayo Clinic, Department of Ophthalmology, Rochester, MN, USA
Truong A. Vu
- University of the Incarnate Word, School of Osteopathic Medicine, San Antonio, TX, USA
Jared J. Tuttle
- University of Texas Health Science Center at San Antonio, Joe R. and Teresa Lozano Long School of Medicine, San Antonio, TX, USA
Raymond Iezzi
- Mayo Clinic, Department of Ophthalmology, Rochester, MN, USA

15
Gonçalves MB, Nakayama LF, Ferraz D, Faber H, Korot E, Malerbi FK, Regatieri CV, Maia M, Celi LA, Keane PA, Belfort R. Image quality assessment of retinal fundus photographs for diabetic retinopathy in the machine learning era: a review. Eye (Lond) 2024; 38:426-433. [PMID: 37667028] [PMCID: PMC10858054] [DOI: 10.1038/s41433-023-02717-3] [Received: 03/08/2023] [Revised: 06/26/2023] [Accepted: 08/25/2023] [Indexed: 09/06/2023]
Abstract
This study aimed to evaluate the image quality assessment (IQA) practices and quality criteria employed in publicly available datasets for diabetic retinopathy (DR). A literature search strategy was used to identify relevant datasets, and 20 datasets were included in the analysis. Of these, 12 datasets mentioned performing IQA, but only eight specified the quality criteria used. The reported quality criteria varied widely across datasets, and accessing the information was often challenging. Given the importance of IQA for developing, validating, and implementing deep learning (DL) algorithms, this information should be reported in a clear, specific, and accessible way whenever possible; at the same time, strict data quality standards must not limit data sharing. Automated quality assessments are a valid alternative to the traditional manual labeling process, and quality standards should be determined according to population characteristics, clinical use, and research purpose.
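Automated quality assessment of the kind this review recommends can start from simple heuristics; the sketch below, assuming grayscale fundus images as NumPy arrays and using illustrative thresholds (not criteria from any of the reviewed datasets), flags under-illuminated images by mean intensity and out-of-focus images by the variance of a Laplacian response:

```python
import numpy as np

# 3x3 discrete Laplacian: responds strongly to edges, weakly to flat regions
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def convolve2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Tiny 'valid'-mode 2-D correlation for a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (img[i:i + 3, j:j + 3] * kernel).sum()
    return out

def quality_flags(img: np.ndarray, min_mean=30.0, min_focus=50.0) -> dict:
    """Illumination via mean intensity; focus via Laplacian variance.
    Thresholds are illustrative and would need tuning per camera."""
    focus = convolve2d(img.astype(float), LAPLACIAN).var()
    return {"well_lit": img.mean() >= min_mean,
            "in_focus": focus >= min_focus}

rng = np.random.default_rng(0)
sharp = rng.integers(0, 255, (64, 64)).astype(float)  # high-frequency content
blurry = np.full((64, 64), 120.0)                     # flat image, no edges
print(quality_flags(sharp), quality_flags(blurry))
```

Real pipelines would add checks for image field and artifacts, but even these two measures separate an edge-rich image from a featureless one.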
Affiliation(s)
Mariana Batista Gonçalves
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
Luis Filipe Nakayama
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Massachusetts Institute of Technology, Laboratory for Computational Physiology, Cambridge, MA, USA
Daniel Ferraz
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
Hanna Faber
- Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Department of Ophthalmology, University of Tuebingen, Tuebingen, Germany
Edward Korot
- Retina Specialists of Michigan, Grand Rapids, MI, USA
- Stanford University Byers Eye Institute Palo Alto, Palo Alto, CA, USA
Mauricio Maia
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
Leo Anthony Celi
- Massachusetts Institute of Technology, Laboratory for Computational Physiology, Cambridge, MA, USA
- Harvard TH Chan School of Public Health, Department of Biostatistics, Boston, MA, USA
- Beth Israel Deaconess Medical Center, Department of Medicine, Boston, MA, USA
Pearse A Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
Rubens Belfort
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil

16
Nakayama LF, Restrepo D, Matos J, Ribeiro LZ, Malerbi FK, Celi LA, Regatieri CS. BRSET: A Brazilian Multilabel Ophthalmological Dataset of Retina Fundus Photos. medRxiv 2024:2024.01.23.24301660. [PMID: 38343827] [PMCID: PMC10854338] [DOI: 10.1101/2024.01.23.24301660] [Indexed: 02/17/2024]
Abstract
Introduction The Brazilian Multilabel Ophthalmological Dataset (BRSET) addresses the scarcity of publicly available ophthalmological datasets in Latin America. BRSET comprises 16,266 color fundus retinal photos from 8,524 Brazilian patients and aims to enhance data representativeness, serving as a research and teaching tool. It contains sociodemographic information, enabling investigation of differential model performance across demographic groups. Methods Data from three São Paulo outpatient centers yielded demographic and medical information from electronic records, including nationality, age, sex, clinical history, insulin use, and duration of diabetes diagnosis. A retinal specialist labeled images for anatomical features (optic disc, blood vessels, macula), quality control (focus, illumination, image field, artifacts), and pathologies (e.g., diabetic retinopathy). Diabetic retinopathy was graded using the International Clinical Diabetic Retinopathy scale and Scottish Diabetic Retinopathy Grading. Validation used Dino V2 Base for feature extraction, with 70% training and 30% testing subsets. Support vector machines (SVM) and logistic regression (LR) were employed with weighted training. Performance metrics included area under the receiver operating characteristic curve (AUC) and macro F1-score. Results BRSET comprises 65.1% Canon CR2 and 34.9% Nikon NF5050 images; 61.8% of patients are female, and the average age is 57.6 years. Diabetic retinopathy affected 15.8% of patients, across a spectrum of disease severity. Anatomically, 20.2% of images showed an abnormal optic disc, 4.9% abnormal blood vessels, and 28.8% an abnormal macula. Models were trained on BRSET for three prediction tasks: diabetes diagnosis, sex classification, and diabetic retinopathy diagnosis. Discussion BRSET is the first multilabel ophthalmological dataset in Brazil and Latin America. It provides an opportunity for investigating model biases by evaluating performance across demographic groups. The performance of models on the three prediction tasks demonstrates the dataset's value for external validation and for teaching medical computer vision to learners in Latin America using locally relevant data sources.
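The validation pipeline described above (frozen feature extractor, 70/30 split, weighted training, AUC and macro F1) can be sketched with scikit-learn; the synthetic 64-dimensional features below merely stand in for Dino V2 Base embeddings, and the prevalence, seed, and shift values are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic "embeddings" standing in for Dino V2 features; labels are
# imbalanced, roughly like diabetic retinopathy in BRSET (~15.8%).
rng = np.random.default_rng(42)
n = 1000
y = (rng.random(n) < 0.16).astype(int)
X = rng.normal(size=(n, 64)) + y[:, None] * 0.8  # class-shifted features

# 70/30 split and class-weighted training, as described in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
macro_f1 = f1_score(y_te, clf.predict(X_te), average="macro")
print(f"AUC={auc:.3f}  macro-F1={macro_f1:.3f}")
```

`class_weight="balanced"` reweights each class inversely to its frequency, which matters when the positive class (disease present) is a small minority, as here.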
Affiliation(s)
Luis Filipe Nakayama
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
David Restrepo
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Telematics Department, University of Cauca, Popayán, Cauca, Colombia
João Matos
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Faculty of Engineering of University of Porto, Porto, Portugal
Lucas Zago Ribeiro
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
Fernando Korn Malerbi
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
Caio Saito Regatieri
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil

17
Tochel C, Pead E, McTrusty A, Buckmaster F, MacGillivray T, Tatham AJ, Strang NC, Dhillon B, Bernabeu MO. Novel linkage approach to join community-acquired and national data. BMC Med Res Methodol 2024; 24:13. [PMID: 38233744] [PMCID: PMC10792819] [DOI: 10.1186/s12874-024-02143-3] [Received: 04/24/2023] [Accepted: 01/05/2024] [Indexed: 01/19/2024]
Abstract
BACKGROUND Community optometrists in Scotland have provided regular, free-at-point-of-care eye examinations for all for over 15 years. Eye examinations include retinal imaging, but image storage is fragmented and the images are not used for research. The Scottish Collaborative Optometry-Ophthalmology Network e-research project aimed to collect these images and create a repository linked to routinely collected healthcare data, supporting the development of pre-symptomatic diagnostic tools. METHODS As the image record was usually separate from the patient record and contained minimal patient information, we developed an efficient matching algorithm, using a combination of deterministic and probabilistic steps that minimised the risk of false positives, to facilitate national health record linkage. We visited two practices and assessed the data contained in their imaging devices and practice management systems. Practice activities were explored to understand the context of data collection processes. Iteratively, we tested a series of matching rules which captured a high proportion of true-positive records compared with manual matching. The approach was validated by testing manual matching against the automated steps in three further practices. RESULTS A sequence of deterministic rules successfully matched 95% of records in the three test practices compared with manual matching. Adding two probabilistic rules to the algorithm raised this to 99% of records. CONCLUSIONS The potential value of community-acquired retinal images can be harnessed only if they are linked to centrally held healthcare data. Despite the lack of interoperability between systems within optometry practices and inconsistent use of unique identifiers, data linkage is possible using robust, almost entirely automated processes.
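A deterministic-then-probabilistic matching cascade of the kind described can be sketched in a few lines; the record fields, threshold, toy data, and acceptance rules below are illustrative, not the project's actual rules:

```python
from difflib import SequenceMatcher

# Illustrative records: image-device metadata vs. practice-management system
images = [{"id": "img1", "surname": "MacDonald", "dob": "1954-03-02"},
          {"id": "img2", "surname": "Mcdonald",  "dob": "1954-03-02"},
          {"id": "img3", "surname": "Smith",     "dob": "1961-07-19"}]
patients = [{"id": "p1", "surname": "MacDonald", "dob": "1954-03-02"},
            {"id": "p2", "surname": "Smyth",     "dob": "1961-07-19"}]

def link(images, patients, threshold=0.8):
    """Deterministic pass (exact surname + DOB), then a probabilistic
    pass (DOB must agree; fuzzy surname similarity above threshold).
    Ambiguous candidates are rejected to minimise false positives."""
    matches = {}
    for img in images:
        # deterministic rule: exact, case-insensitive surname + exact DOB
        exact = [p for p in patients
                 if p["surname"].lower() == img["surname"].lower()
                 and p["dob"] == img["dob"]]
        if len(exact) == 1:
            matches[img["id"]] = exact[0]["id"]
            continue
        # probabilistic rule: DOB agrees, surname similar enough
        scored = [(SequenceMatcher(None, p["surname"].lower(),
                                   img["surname"].lower()).ratio(), p)
                  for p in patients if p["dob"] == img["dob"]]
        scored = [(s, p) for s, p in scored if s >= threshold]
        if len(scored) == 1:  # accept only unambiguous candidates
            matches[img["id"]] = scored[0][1]["id"]
    return matches

print(link(images, patients))  # {'img1': 'p1', 'img2': 'p1', 'img3': 'p2'}
```

Running the deterministic rule first and only falling back to the fuzzy rule mirrors the paper's 95%-then-99% cascade: cheap exact rules capture most records, and the probabilistic step recovers near-misses such as spelling variants.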
Affiliation(s)
Claire Tochel
- Centre for Medical Informatics, University of Edinburgh, Edinburgh, UK
Emma Pead
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
Alice McTrusty
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
Fiona Buckmaster
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
Tom MacGillivray
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
Andrew J Tatham
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Princess Alexandra Eye Pavilion, NHS Lothian, Edinburgh, UK
Niall C Strang
- Department of Vision Sciences, Glasgow Caledonian University, Glasgow, UK
Baljean Dhillon
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Princess Alexandra Eye Pavilion, NHS Lothian, Edinburgh, UK
Miguel O Bernabeu
- Centre for Medical Informatics, University of Edinburgh, Edinburgh, UK

18
Zhang J, Zou H. Insights into artificial intelligence in myopia management: from a data perspective. Graefes Arch Clin Exp Ophthalmol 2024; 262:3-17. [PMID: 37231280] [PMCID: PMC10212230] [DOI: 10.1007/s00417-023-06101-5] [Received: 11/26/2022] [Revised: 03/23/2023] [Accepted: 05/06/2023] [Indexed: 05/27/2023]
Abstract
Given the high incidence and prevalence of myopia, the current healthcare system is struggling to handle the task of myopia management, a problem worsened by home quarantine during the ongoing COVID-19 pandemic. The use of artificial intelligence (AI) in ophthalmology is thriving, yet its application in myopia remains limited. AI can serve as a solution for the myopia pandemic, with application potential in early identification, risk stratification, progression prediction, and timely intervention. The datasets used for developing AI models are the foundation and determine the upper limit of performance. Data generated from clinical practice in managing myopia can be categorized into clinical data and imaging data, and different AI methods can be used for their analysis. In this review, we comprehensively survey the current application status of AI in myopia, with an emphasis on the data modalities used for developing AI models. We propose that establishing large, high-quality public datasets, enhancing models' capability to handle multimodal input, and exploring novel data modalities could be of great significance for the further application of AI in myopia.
Affiliation(s)
Juzhao Zhang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
Haidong Zou
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Eye Diseases Prevention & Treatment Center, Shanghai Eye Hospital, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China

19
Soh ZD, Tan M, Nongpiur ME, Xu BY, Friedman D, Zhang X, Leung C, Liu Y, Koh V, Aung T, Cheng CY. Assessment of angle closure disease in the age of artificial intelligence: A review. Prog Retin Eye Res 2024; 98:101227. [PMID: 37926242] [DOI: 10.1016/j.preteyeres.2023.101227] [Received: 05/31/2023] [Revised: 11/02/2023] [Accepted: 11/02/2023] [Indexed: 11/07/2023]
Abstract
Primary angle closure glaucoma is a visually debilitating disease that is under-detected worldwide. Many of the challenges in managing primary angle closure disease (PACD) are related to the lack of convenient and precise tools for clinic-based disease assessment and monitoring. Artificial intelligence (AI)-assisted tools to detect and assess PACD have proliferated in recent years with encouraging results. Machine learning (ML) algorithms that utilize clinical data have been developed to categorize angle closure eyes by disease mechanism. Other ML algorithms that utilize image data have demonstrated good performance in detecting angle closure. Nonetheless, deep learning (DL) algorithms trained directly on image data generally outperformed traditional ML algorithms in detecting PACD, were able to accurately differentiate between angle status (open, narrow, closed), and automated the measurement of quantitative parameters. However, more work is required to expand the capabilities of these AI algorithms and for deployment into real-world practice settings. This includes the need for real-world evaluation, establishing the use case for different algorithms, and evaluating the feasibility of deployment while considering other clinical, economic, social, and policy-related factors.
Affiliation(s)
- Zhi Da Soh
- Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, 169856, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, 21 Lower Kent Ridge Road, 119077, Singapore.
- Mingrui Tan
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*Star), 1 Fusionopolis Way, 138632, Singapore.
- Monisha Esther Nongpiur
- Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, 169856, Singapore; Ophthalmology & Visual Sciences Academic Clinical Programme, Academic Medicine, Duke-NUS Medical School, 8 College Road, 169857, Singapore.
- Benjamin Yixing Xu
- Roski Eye Institute, Keck School of Medicine, University of Southern California, 1450 San Pablo St #4400, Los Angeles, CA, 90033, USA.
- David Friedman
- Department of Ophthalmology, Harvard Medical School, 25 Shattuck Street, Boston, MA, 02115, USA; Massachusetts Eye and Ear, Mass General Brigham, 243 Charles Street, Boston, MA, 02114, USA.
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat Sen University, No. 54 Xianlie South Road, Yuexiu District, Guangzhou, China.
- Christopher Leung
- Department of Ophthalmology, School of Clinical Medicine, The University of Hong Kong, Cyberport 4, 100 Cyberport Road, Hong Kong; Department of Ophthalmology, Queen Mary Hospital, 102 Pok Fu Lam Road, Hong Kong.
- Yong Liu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*Star), 1 Fusionopolis Way, 138632, Singapore.
- Victor Koh
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, 21 Lower Kent Ridge Road, 119077, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, 1E Kent Ridge Road, NUHS Tower Block, Level 7, 119228, Singapore.
- Tin Aung
- Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, 169856, Singapore; Ophthalmology & Visual Sciences Academic Clinical Programme, Academic Medicine, Duke-NUS Medical School, 8 College Road, 169857, Singapore.
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, 169856, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, 21 Lower Kent Ridge Road, 119077, Singapore; Ophthalmology & Visual Sciences Academic Clinical Programme, Academic Medicine, Duke-NUS Medical School, 8 College Road, 169857, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, 1E Kent Ridge Road, NUHS Tower Block, Level 7, 119228, Singapore.
20
Kolasa K, Admassu B, Hołownia-Voloskova M, Kędzior KJ, Poirrier JE, Perni S. Systematic reviews of machine learning in healthcare: a literature review. Expert Rev Pharmacoecon Outcomes Res 2024; 24:63-115. [PMID: 37955147] [DOI: 10.1080/14737167.2023.2279107]
Abstract
INTRODUCTION The increasing availability of data and computing power has made machine learning (ML) a viable approach to faster, more efficient healthcare delivery. METHODS A systematic literature review (SLR) of published SLRs evaluating ML applications in healthcare settings, published between 1 January 2010 and 27 March 2023, was conducted. RESULTS In total, 220 SLRs covering 10,462 ML algorithms were reviewed. The main applications of ML in medicine were clinical prediction and disease prognosis in oncology and neurology, mostly using imaging data. Accuracy, specificity, and sensitivity were provided in 56%, 28%, and 25% of SLRs, respectively. Internal and external validation were reported in 53% and less than 1% of cases, respectively. The most common modeling approach was neural networks (2,454 ML algorithms), followed by support vector machines and random forests/decision trees (1,578 and 1,522 ML algorithms, respectively). EXPERT OPINION The review indicated considerable gaps in the reporting of ML performance and of both internal and external validation. Greater accessibility to healthcare data for developers can ensure the faster adoption of ML algorithms into clinical practice.
Affiliation(s)
- Katarzyna Kolasa
- Division of Health Economics and Healthcare Management, Kozminski University, Warsaw, Poland
- Bisrat Admassu
- Division of Health Economics and Healthcare Management, Kozminski University, Warsaw, Poland
21
Brandao-de-Resende C, Melo M, Lee E, Jindal A, Neo YN, Sanghi P, Freitas JR, Castro PV, Rosa VO, Valentim GF, Higino MLO, Hay GR, Keane PA, Vasconcelos-Santos DV, Day AC. A machine learning system to optimise triage in an adult ophthalmic emergency department: a model development and validation study. EClinicalMedicine 2023; 66:102331. [PMID: 38089860] [PMCID: PMC10711497] [DOI: 10.1016/j.eclinm.2023.102331]
Abstract
Background A substantial proportion of attendances to ophthalmic emergency departments are for non-urgent presentations. We developed and evaluated a machine learning system (DemDx Ophthalmology Triage System: DOTS) to optimise triage, with the aim of reducing inappropriate emergency attendances and streamlining case referral when necessary. Methods DOTS was built using retrospective tabular data from 11,315 attendances between July 1, 2021, and June 15, 2022, at Moorfields Eye Hospital Emergency Department (MEH) in London, UK. Demographic and clinical features were used as inputs and a triage recommendation was given ("see immediately", "see within a week", or "see electively"). DOTS was validated temporally and compared with triage nurses' performance (1269 attendances at MEH) and validated externally (761 attendances at the Federal University of Minas Gerais - UFMG, Brazil). It was also tested for biases and robustness to variations in disease incidences. All attendances from patients aged at least 18 years with at least one confirmed diagnosis were included in the study. Findings For identifying ophthalmic emergency attendances, on temporal validation, DOTS had a sensitivity of 94.5% [95% CI 92.3-96.1] and a specificity of 42.4% [38.8-46.1]. For comparison within the same dataset, triage nurses had a sensitivity of 96.4% [94.5-97.7] and a specificity of 25.1% [22.0-28.5]. On external validation at UFMG, DOTS had a sensitivity of 95.2% [92.5-97.0] and a specificity of 32.2% [27.4-37.0]. In simulated scenarios with varying disease incidences, the sensitivity was ≥92.2% and the specificity was ≥36.8%. No differences in sensitivity were found across subgroups of the index of multiple deprivation, but the specificity was higher for Q2 than for Q4 (Q4 being less deprived than Q2).
Interpretation At MEH, DOTS had sensitivity similar to that of triage nurses in determining attendance priority; however, its specificity was 17.3% higher, so DOTS triaged fewer patients to be seen immediately at emergency. DOTS showed consistent performance in temporal and external validation and across sociodemographic subgroups, and was robust to varying relative disease incidences. Further trials are necessary to validate these findings. This system will be prospectively evaluated, considering human-computer interaction, in a clinical trial. Funding The Artificial Intelligence in Health and Care Award (AI_AWARD01671) of the NHS AI Lab under the National Institute for Health and Care Research (NIHR) and the Accelerated Access Collaborative (AAC).
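The sensitivity and specificity figures above reduce to simple ratios over a 2×2 confusion table. A minimal sketch, with hypothetical counts chosen only so the ratios reproduce DOTS's reported temporal-validation figures (the study's actual counts are not given here):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 189 of 200 true emergencies flagged "see immediately",
# 424 of 1000 non-urgent attendances correctly deferred.
sens, spec = sensitivity_specificity(tp=189, fn=11, tn=424, fp=576)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```

The reported 17.3% specificity gap between DOTS and nurses is simply the difference of two such proportions computed on the same attendances.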
Affiliation(s)
- Camilo Brandao-de-Resende
- Institute of Ophthalmology, University College London (UCL), London, UK
- NIHR Moorfields Clinical Research Facility, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Research Department, DemDX Ltd, London, UK
- Mariane Melo
- NIHR Moorfields Clinical Research Facility, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Research Department, DemDX Ltd, London, UK
- Elsa Lee
- Institute of Ophthalmology, University College London (UCL), London, UK
- NIHR Moorfields Clinical Research Facility, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Research Department, DemDX Ltd, London, UK
- Anish Jindal
- Institute of Ophthalmology, University College London (UCL), London, UK
- Accident and Emergency Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Yan N. Neo
- Accident and Emergency Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Priyanka Sanghi
- Accident and Emergency Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Joao R. Freitas
- Research Department, DemDX Ltd, London, UK
- University of Sao Paulo (USP), Sao Paulo, Brazil
- Paulo V.I.P. Castro
- Hospital Sao Geraldo, Federal University of Minas Gerais (UFMG), Belo Horizonte, Brazil
- Victor O.M. Rosa
- Hospital Sao Geraldo, Federal University of Minas Gerais (UFMG), Belo Horizonte, Brazil
- Maria Luisa O. Higino
- Hospital Sao Geraldo, Federal University of Minas Gerais (UFMG), Belo Horizonte, Brazil
- Gordon R. Hay
- Institute of Ophthalmology, University College London (UCL), London, UK
- Accident and Emergency Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Pearse A. Keane
- Institute of Ophthalmology, University College London (UCL), London, UK
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Alexander C. Day
- Institute of Ophthalmology, University College London (UCL), London, UK
- Accident and Emergency Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
22
Abdelfattah S, Baza M, Mahmoud M, Fouda MM, Abualsaud K, Yaacoub E, Alsabaan M, Guizani M. Lightweight Multi-Class Support Vector Machine-Based Medical Diagnosis System with Privacy Preservation. Sensors (Basel) 2023; 23:9033. [PMID: 38005421] [PMCID: PMC10674529] [DOI: 10.3390/s23229033]
Abstract
Machine learning, powered by cloud servers, has found application in medical diagnosis, enhancing the capabilities of smart healthcare services. The research literature shows that the support vector machine (SVM) consistently achieves remarkable accuracy in medical diagnosis. Nonetheless, safeguarding patients' health data privacy and preserving the intellectual property of diagnosis models is of paramount importance. This concern arises from the common practice of outsourcing these models to third-party cloud servers that may not be entirely trustworthy. Few studies in the literature have delved into addressing these issues within SVM-based diagnosis systems. These studies, however, typically demand substantial communication and computational resources and may fail to conceal classification results and protect model intellectual property. This paper aims to tackle these limitations within a multi-class SVM medical diagnosis system. To achieve this, we have introduced modifications to an inner product encryption cryptosystem and incorporated it into our medical diagnosis framework. Notably, our cryptosystem proves to be more efficient than the Paillier and multi-party computation cryptography methods employed in previous research. Although we focus on a medical application in this paper, our approach can also be used for other applications that need the evaluation of machine learning models in a privacy-preserving way, such as electricity theft detection in the smart grid, electric vehicle charging coordination, and vehicular social networks. To assess the performance and security of our approach, we conducted comprehensive analyses and experiments. Our findings demonstrate that our proposed method successfully fulfills our security and privacy objectives while maintaining high classification accuracy and minimizing communication and computational overhead.
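The privacy argument above rests on the fact that a linear SVM's decision values are inner products, which is exactly what an inner-product (functional) encryption scheme can evaluate without exposing the query or the weights. A plaintext sketch of the computation being protected, with illustrative weights rather than the authors' model or cryptosystem:

```python
import numpy as np

def multiclass_svm_predict(W, b, x):
    """One-vs-rest linear SVM: score_k = <w_k, x> + b_k; predict the argmax.
    Under inner-product encryption each <w_k, x> would be evaluated on
    ciphertexts; here everything is in the clear for illustration."""
    scores = W @ x + b  # each row of W is one class's weight vector
    return scores, int(np.argmax(scores))

# Illustrative 3-class model over 4 features (not from the paper).
W = np.array([[ 0.5, -0.2, 0.1,  0.0],
              [-0.3,  0.4, 0.0,  0.2],
              [ 0.1,  0.1, 0.3, -0.5]])
b = np.array([0.0, -0.1, 0.2])
x = np.array([1.0, 2.0, 0.5, 0.0])
scores, label = multiclass_svm_predict(W, b, x)
```

Because only the per-class inner products and the final argmax are needed, the server never has to see the raw feature vector or the individual class scores.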
Affiliation(s)
- Sherif Abdelfattah
- Department of Computer Science and Information Systems, Bradley University, Peoria, IL 61625, USA;
- Mohamed Baza
- Department of Computer Science, College of Charleston, Charleston, SC 29424, USA;
- Mohamed Mahmoud
- Department of Electrical and Computer Engineering, Tennessee Technological University, Cookeville, TN 38505, USA;
- Mostafa M. Fouda
- Department of Electrical and Computer Engineering, College of Science and Engineering, Idaho State University, Pocatello, ID 83209, USA;
- Center for Advanced Energy Studies (CAES), Idaho Falls, ID 83401, USA
- Khalid Abualsaud
- Department of Computer Science and Engineering, Qatar University, Doha 2713, Qatar;
- Elias Yaacoub
- Department of Computer Science and Engineering, Qatar University, Doha 2713, Qatar;
- Maazen Alsabaan
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia;
- Mohsen Guizani
- Machine Learning Department, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi P.O. Box 131818, United Arab Emirates;
23
Ye X, He S, Zhong X, Yu J, Yang S, Shen Y, Chen Y, Wang Y, Huang X, Shen L. OIMHS: An Optical Coherence Tomography Image Dataset Based on Macular Hole Manual Segmentation. Sci Data 2023; 10:769. [PMID: 37932307] [PMCID: PMC10628143] [DOI: 10.1038/s41597-023-02675-1]
Abstract
Macular holes, one of the most common macular diseases, require timely treatment. The morphological changes visible on optical coherence tomography (OCT) images provide an opportunity for direct observation of the disease, and accurate segmentation is needed to identify and quantify the lesions. Development of such algorithms has been obstructed by a lack of high-quality datasets (OCT images with corresponding gold-standard macular hole segmentation labels), especially for supervised learning-based segmentation algorithms. In this context, we established a large OCT image macular hole segmentation (OIMHS) dataset with 3859 B-scan images of 119 patients, each image providing four segmentation labels: retina, macular hole, intraretinal cysts, and choroid. This dataset offers an excellent opportunity for investigating the accuracy and reliability of different segmentation algorithms for macular holes, and new insight for further clinical research on macular diseases in which the retina, lesions, and choroid are quantitatively analysed.
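Segmentation models benchmarked on a dataset like OIMHS are conventionally scored with the Dice coefficient, 2|A∩B|/(|A|+|B|), between predicted and gold-standard masks. A minimal sketch with toy binary masks (not code from the dataset authors):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient for binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

pred = np.array([[0, 1, 1],
                 [0, 1, 0]])
truth = np.array([[0, 1, 0],
                  [0, 1, 1]])
score = dice(pred, truth)  # intersection 2, mask sizes 3 and 3 -> 4/6
```

In a multi-label setting such as OIMHS, the same function would be applied once per label (retina, macular hole, intraretinal cysts, choroid) and the results averaged or reported separately.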
Affiliation(s)
- Xin Ye
- Center for Rehabilitation Medicine, Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Shucheng He
- Center for Rehabilitation Medicine, Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Xiaxing Zhong
- Wenzhou Medical University, Wenzhou, Zhejiang, China
- Jiafeng Yu
- Center for Rehabilitation Medicine, Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Yingjiao Shen
- Center for Rehabilitation Medicine, Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Yiqi Chen
- Center for Rehabilitation Medicine, Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Yaqi Wang
- College of Media Engineering, Communication University of Zhejiang, Hangzhou, China
- Xingru Huang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK.
- Lijun Shen
- Center for Rehabilitation Medicine, Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China.
24
Arora A, Alderman JE, Palmer J, Ganapathi S, Laws E, McCradden MD, Oakden-Rayner L, Pfohl SR, Ghassemi M, McKay F, Treanor D, Rostamzadeh N, Mateen B, Gath J, Adebajo AO, Kuku S, Matin R, Heller K, Sapey E, Sebire NJ, Cole-Lewis H, Calvert M, Denniston A, Liu X. The value of standards for health datasets in artificial intelligence-based applications. Nat Med 2023; 29:2929-2938. [PMID: 37884627] [PMCID: PMC10667100] [DOI: 10.1038/s41591-023-02608-w]
Abstract
Artificial intelligence as a medical device is increasingly being applied to healthcare for diagnosis, risk stratification and resource allocation. However, a growing body of evidence has highlighted the risk of algorithmic bias, which may perpetuate existing health inequity. This problem arises in part because of systemic inequalities in dataset curation, unequal opportunity to participate in research and inequalities of access. This study aims to explore existing standards, frameworks and best practices for ensuring adequate data diversity in health datasets. Exploring the body of existing literature and expert views is an important step towards the development of consensus-based guidelines. The study comprises two parts: a systematic review of existing standards, frameworks and best practices for healthcare datasets; and a survey and thematic analysis of stakeholder views of bias, health equity and best practices for artificial intelligence as a medical device. We found that the need for dataset diversity was well described in literature, and experts generally favored the development of a robust set of guidelines, but there were mixed views about how these could be implemented practically. The outputs of this study will be used to inform the development of standards for transparency of data diversity in health datasets (the STANDING Together initiative).
Affiliation(s)
- Anmol Arora
- School of Clinical Medicine, University of Cambridge, Cambridge, UK
- Joseph E Alderman
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Joanne Palmer
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Elinor Laws
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Melissa D McCradden
- Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
- Genetics and Genome Biology, Peter Gilgan Centre for Research and Learning, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, Toronto, Ontario, Canada
- Lauren Oakden-Rayner
- The Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Marzyeh Ghassemi
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Institute for Medical Engineering & Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Vector Institute, Toronto, Ontario, Canada
- Francis McKay
- The Ethox Centre and the Wellcome Centre for Ethics and Humanities, Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Darren Treanor
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- University of Leeds, Leeds, UK
- Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Bilal Mateen
- Institute for Health Informatics, University College London, London, UK
- Wellcome Trust, London, UK
- Jacqui Gath
- Patient and Public Involvement and Engagement (PPIE) Group, STANDING Together, Birmingham, UK
- Adewole O Adebajo
- Patient and Public Involvement and Engagement (PPIE) Group, STANDING Together, Birmingham, UK
- Rubeta Matin
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Elizabeth Sapey
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- PIONEER, HDR UK Hub in Acute Care, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK
- Neil J Sebire
- National Institute for Health and Care Research, Great Ormond Street Hospital Biomedical Research Centre, London, UK
- Great Ormond Street Institute of Child Health, University Hospital London, London, UK
- Melanie Calvert
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
- Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- National Institute for Health and Care Research Applied Research Collaboration West Midlands, University of Birmingham, Birmingham, UK
- National Institute for Health and Care Research Birmingham-Oxford Blood and Transplant Research Unit in Precision Transplant and Cellular Therapeutics, University of Birmingham, Birmingham, UK
- DEMAND Hub, University of Birmingham, Birmingham, UK
- UK SPINE, University of Birmingham, Birmingham, UK
- Alastair Denniston
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
- National Institute for Health and Care Research Biomedical Research Centre, Moorfields Eye Hospital/University College London, London, UK
- Xiaoxuan Liu
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK.
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK.
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK.
25
Liu C, Liu Z, Holmes J, Zhang L, Zhang L, Ding Y, Shu P, Wu Z, Dai H, Li Y, Shen D, Liu N, Li Q, Li X, Zhu D, Liu T, Liu W. Artificial general intelligence for radiation oncology. Meta-Radiology 2023; 1:100045. [PMID: 38344271] [PMCID: PMC10857824] [DOI: 10.1016/j.metrad.2023.100045]
Abstract
The emergence of artificial general intelligence (AGI) is transforming radiation oncology. As prominent vanguards of AGI, large language models (LLMs) such as GPT-4 and PaLM 2 can process extensive texts and large vision models (LVMs) such as the Segment Anything Model (SAM) can process extensive imaging data to enhance the efficiency and precision of radiation therapy. This paper explores full-spectrum applications of AGI across radiation oncology including initial consultation, simulation, treatment planning, treatment delivery, treatment verification, and patient follow-up. The fusion of vision data with LLMs also creates powerful multimodal models that elucidate nuanced clinical patterns. Together, AGI promises to catalyze a shift towards data-driven, personalized radiation therapy. However, these models should complement human expertise and care. This paper provides an overview of how AGI can transform radiation oncology to elevate the standard of patient care in radiation oncology, with the key insight being AGI's ability to exploit multimodal clinical data at scale.
Affiliation(s)
- Chenbin Liu
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, Guangdong, China
- Jason Holmes
- Department of Radiation Oncology, Mayo Clinic, USA
- Lu Zhang
- Department of Computer Science and Engineering, The University of Texas at Arlington, USA
- Lian Zhang
- Department of Radiation Oncology, Mayo Clinic, USA
- Yuzhen Ding
- Department of Radiation Oncology, Mayo Clinic, USA
- Peng Shu
- School of Computing, University of Georgia, USA
- Zihao Wu
- School of Computing, University of Georgia, USA
- Haixing Dai
- School of Computing, University of Georgia, USA
- Yiwei Li
- School of Computing, University of Georgia, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, China
- Shanghai United Imaging Intelligence Co., Ltd, China
- Shanghai Clinical Research and Trial Center, China
- Ninghao Liu
- School of Computing, University of Georgia, USA
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Xiang Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Dajiang Zhu
- Department of Computer Science and Engineering, The University of Texas at Arlington, USA
- Wei Liu
- Department of Radiation Oncology, Mayo Clinic, USA
26
Cho SI, Navarrete-Dechent C, Daneshjou R, Cho HS, Chang SE, Kim SH, Na JI, Han SS. Generation of a Melanoma and Nevus Data Set From Unstandardized Clinical Photographs on the Internet. JAMA Dermatol 2023; 159:1223-1231. [PMID: 37792351] [PMCID: PMC10551819] [DOI: 10.1001/jamadermatol.2023.3521]
Abstract
Importance Artificial intelligence (AI) training for diagnosing dermatologic images requires large amounts of clean data. Dermatologic images have different compositions, and many are inaccessible due to privacy concerns, which hinder the development of AI. Objective To build a training data set for discriminative and generative AI from unstandardized internet images of melanoma and nevus. Design, Setting, and Participants In this diagnostic study, a total of 5619 (CAN5600 data set) and 2006 (CAN2000 data set; a manually revised subset of CAN5600) cropped lesion images of either melanoma or nevus were semiautomatically annotated from approximately 500 000 photographs on the internet using convolutional neural networks (CNNs), region-based CNNs, and large mask inpainting. For unsupervised pretraining, 132 673 possible lesions (LESION130k data set) were also created with diversity by collecting images from 18 482 websites in approximately 80 countries. A total of 5000 synthetic images (GAN5000 data set) were generated using the generative adversarial network (StyleGAN2-ADA; training, CAN2000 data set; pretraining, LESION130k data set). Main Outcomes and Measures The area under the receiver operating characteristic curve (AUROC) for determining malignant neoplasms was analyzed. In each test, 1 of the 7 preexisting public data sets (total of 2312 images; including Edinburgh, an SNU subset, Asan test, Waterloo, 7-point criteria evaluation, PAD-UFES-20, and MED-NODE) was used as the test data set. Subsequently, a comparative study was conducted between the performance of the EfficientNet Lite0 CNN on the proposed data set and that trained on the remaining 6 preexisting data sets. 
Results The EfficientNet Lite0 CNN trained on the annotated or synthetic images achieved mean (SD) AUROCs higher than or equivalent to those of the EfficientNet Lite0 trained on the pathologically confirmed public data sets combined (0.809 [0.063]): CAN5600, 0.874 (0.042; P = .02); CAN2000, 0.848 (0.027; P = .08); and GAN5000, 0.838 (0.040; P = .31; Wilcoxon signed rank test), benefiting from the increased size of the training data set. Conclusions and Relevance The synthetic data set in this diagnostic study was created from internet images using various AI technologies. A neural network trained on the created data set (CAN5600) performed better than the same network trained on the preexisting data sets combined. Both the annotated (CAN5600 and LESION130k) and synthetic (GAN5000) data sets could be shared for AI training and consensus between physicians.
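AUROC, the headline metric in this comparison, equals the probability that a randomly chosen positive (melanoma) receives a higher score than a randomly chosen negative (nevus), with ties counted as half. A small self-contained sketch with toy scores (in practice a library routine such as scikit-learn's roc_auc_score would be used):

```python
def auroc(labels, scores):
    """Rank-based AUROC: fraction of (positive, negative) pairs the
    classifier orders correctly, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy melanoma (1) vs nevus (0) scores, illustrative only.
auc = auroc(labels=[1, 1, 1, 0, 0, 0],
            scores=[0.9, 0.8, 0.4, 0.7, 0.3, 0.2])  # 8 of 9 pairs correct
```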
Affiliation(s)
- Roxana Daneshjou
- Department of Dermatology, Stanford University, Stanford, California
- Hye Soo Cho
- Department of Dermatology, Asan Medical Center, Ulsan University College of Medicine, Seoul, Korea
- Sung Eun Chang
- Department of Dermatology, Asan Medical Center, Ulsan University College of Medicine, Seoul, Korea
- Seong Hwan Kim
- Department of Plastic and Reconstructive Surgery, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
- Jung-Im Na
- Department of Dermatology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seoul, Korea
- Seung Seog Han
- Department of Dermatology, I Dermatology Clinic, Seoul, Korea
- IDerma Inc, Seoul, Korea
27
Korot E, Gonçalves MB, Huemer J, Beqiri S, Khalid H, Kelly M, Chia M, Mathijs E, Struyven R, Moussa M, Keane PA. Clinician-Driven AI: Code-Free Self-Training on Public Data for Diabetic Retinopathy Referral. JAMA Ophthalmol 2023; 141:1029-1036. [PMID: 37856110] [PMCID: PMC10587830] [DOI: 10.1001/jamaophthalmol.2023.4508]
Abstract
Importance Democratizing artificial intelligence (AI) enables model development by clinicians who lack coding expertise, powerful computing resources, and large, well-labeled data sets. Objective To determine whether resource-constrained clinicians can use self-training via automated machine learning (ML) and public data sets to design high-performing diabetic retinopathy classification models. Design, Setting, and Participants This diagnostic quality improvement study was conducted from January 1, 2021, to December 31, 2021. A self-training method without coding was used on 2 public data sets with retinal images from patients in France (Messidor-2 [n = 1748]) and the UK and US (EyePACS [n = 58 689]) and externally validated on 1 data set with retinal images from patients of a private Egyptian medical retina clinic (Egypt [n = 210]). An AI model was trained to classify referable diabetic retinopathy as an exemplar use case. Messidor-2 images were assigned adjudicated labels available on Kaggle; 4 images were deemed ungradable and excluded, leaving 1744 images. A total of 300 images randomly selected from the EyePACS data set were independently relabeled by 3 blinded retina specialists using the International Classification of Diabetic Retinopathy protocol for diabetic retinopathy grade and diabetic macular edema presence; 19 images were deemed ungradable, leaving 281 images. Data analysis was performed from February 1 to February 28, 2021. Exposures Using public data sets, a teacher model was trained with labeled images using supervised learning. Next, the resulting predictions, termed pseudolabels, were used on an unlabeled public data set. Finally, a student model was trained with the existing labeled images and the additional pseudolabeled images. Main Outcomes and Measures The analyzed metrics for the models included the area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, and F1 score.
The Fisher exact test was performed, and 2-tailed P values were calculated for failure case analysis. Results For the internal validation data sets, AUROC values for performance ranged from 0.886 to 0.939 for the teacher model and from 0.916 to 0.951 for the student model. For external validation of automated ML model performance, AUROC values and accuracy were 0.964 and 93.3% for the teacher model, 0.950 and 96.7% for the student model, and 0.890 and 94.3% for the manually coded bespoke model, respectively. Conclusions and Relevance These findings suggest that self-training using automated ML is an effective method to increase both model performance and generalizability while decreasing the need for costly expert labeling. This approach advances the democratization of AI by enabling clinicians without coding expertise or access to large, well-labeled private data sets to develop their own AI models.
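The teacher–student self-training loop described under Exposures can be sketched end to end. The 1-D threshold "model" and the data below are toy stand-ins, not the study's automated ML platform; they are chosen only to make the three steps concrete:

```python
def train_threshold(points, labels):
    """Fit a 1-D threshold classifier: midpoint of the two class means."""
    pos = [p for p, y in zip(points, labels) if y == 1]
    neg = [p for p, y in zip(points, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(threshold, points):
    return [1 if p >= threshold else 0 for p in points]

# Step 1: train the teacher on the labeled set (supervised learning).
labeled_x = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
labeled_y = [0, 0, 0, 1, 1, 1]
teacher = train_threshold(labeled_x, labeled_y)

# Step 2: pseudolabel an unlabeled set with the teacher's predictions.
unlabeled_x = [0.15, 0.4, 0.6, 0.85]
pseudo_y = predict(teacher, unlabeled_x)

# Step 3: train the student on labeled + pseudolabeled data combined.
student = train_threshold(labeled_x + unlabeled_x, labeled_y + pseudo_y)
```

The student sees a larger effective training set than the teacher, which is the mechanism the study credits for the AUROC gains.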
Affiliation(s)
- Edward Korot
- Retina Specialists of Michigan, Grand Rapids
- Moorfields Eye Hospital, London, United Kingdom
- Stanford University Byers Eye Institute, Palo Alto, California
- Mariana Batista Gonçalves
- Moorfields Eye Hospital, London, United Kingdom
- Federal University of Sao Paulo, Sao Paulo, Brazil
- Instituto da Visão, Sao Paulo, Brazil
- Sara Beqiri
- Moorfields Eye Hospital, London, United Kingdom
- University College London Medical School, London, United Kingdom
- Hagar Khalid
- Moorfields Eye Hospital, London, United Kingdom
- Ophthalmology Department, Faculty of Medicine, Tanta University Hospital, Tanta, Gharbia, Egypt
- Madeline Kelly
- Moorfields Eye Hospital, London, United Kingdom
- University College London Medical School, London, United Kingdom
- UCL Centre for Medical Image Computing, London, United Kingdom
- Mark Chia
- Moorfields Eye Hospital, London, United Kingdom
- Emily Mathijs
- Michigan State University College of Osteopathic Medicine, East Lansing
- Magdy Moussa
- Ophthalmology Department, Faculty of Medicine, Tanta University Hospital, Tanta, Gharbia, Egypt
28
Thirunavukarasu AJ, Elangovan K, Gutierrez L, Li Y, Tan I, Keane PA, Korot E, Ting DSW. Democratizing Artificial Intelligence Imaging Analysis With Automated Machine Learning: Tutorial. J Med Internet Res 2023; 25:e49949. [PMID: 37824185 PMCID: PMC10603560 DOI: 10.2196/49949]
Abstract
Deep learning-based clinical imaging analysis underlies diagnostic artificial intelligence (AI) models, which can match or even exceed the performance of clinical experts, having the potential to revolutionize clinical practice. A wide variety of automated machine learning (autoML) platforms lower the technical barrier to entry to deep learning, extending AI capabilities to clinicians with limited technical expertise, and even to autonomous foundation models such as multimodal large language models. Here, we provide a technical overview of autoML with descriptions of how autoML may be applied in education, research, and clinical practice. Each stage of the process of conducting an autoML project is outlined, with an emphasis on ethical and technical best practices. Specifically, data acquisition, data partitioning, model training, model validation, analysis, and model deployment are considered. The strengths and limitations of available code-free, code-minimal, and code-intensive autoML platforms are considered. AutoML has great potential to democratize AI in medicine, improving AI literacy by enabling "hands-on" education. AutoML may serve as a useful adjunct in research by facilitating rapid testing and benchmarking before significant computational resources are committed. AutoML may also be applied in clinical contexts, provided regulatory requirements are met. The abstraction by autoML of arduous aspects of AI engineering promotes prioritization of data set curation, supporting the transition from conventional model-driven approaches to data-centric development. To fulfill its potential, clinicians must be educated on how to apply these technologies ethically, rigorously, and effectively; this tutorial represents a comprehensive summary of relevant considerations.
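Of the stages listed (data acquisition, partitioning, training, validation, analysis, deployment), partitioning is where data leakage most often enters when one patient contributes several images. A minimal sketch of a patient-level split, with hypothetical record fields and an arbitrary seed:

```python
import random

def split_by_patient(records, seed=42, train=0.7, val=0.15):
    """Partition image records so that no patient spans two splits."""
    patients = sorted({r["patient_id"] for r in records})
    random.Random(seed).shuffle(patients)  # deterministic, reproducible
    n = len(patients)
    n_train = int(n * train)
    n_val = int(n * val)
    groups = {
        "train": set(patients[:n_train]),
        "val": set(patients[n_train:n_train + n_val]),
        "test": set(patients[n_train + n_val:]),
    }
    return {
        name: [r for r in records if r["patient_id"] in ids]
        for name, ids in groups.items()
    }

# two images per hypothetical patient, e.g. right and left eye
records = [{"patient_id": i, "image": f"p{i}_{eye}.png"}
           for i in range(20) for eye in ("od", "os")]
splits = split_by_patient(records)
```

Splitting at the image level instead would let a patient's right eye land in training and left eye in testing, inflating validation metrics.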
Affiliation(s)
- Arun James Thirunavukarasu
- University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Kabilan Elangovan
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Laura Gutierrez
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Yong Li
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Iris Tan
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Pearse A Keane
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Edward Korot
- Byers Eye Institute, Stanford University, Palo Alto, CA, United States
- Retina Specialists of Michigan, Grand Rapids, MI, United States
- Daniel Shu Wei Ting
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Byers Eye Institute, Stanford University, Palo Alto, CA, United States
- Singapore National Eye Centre, Singapore, Singapore
29
Restrepo D, Quion J, Vásquez-Venegas C, Villanueva C, Anthony Celi L, Nakayama LF. A scoping review of the landscape of health-related open datasets in Latin America. PLOS Digital Health 2023; 2:e0000368. [PMID: 37878549 PMCID: PMC10599518 DOI: 10.1371/journal.pdig.0000368]
Abstract
Artificial intelligence (AI) algorithms have the potential to revolutionize healthcare, but their successful translation into clinical practice has been limited. One crucial factor is the data used to train these algorithms, which must be representative of the population. However, most healthcare databases are derived from high-income countries, leading to non-representative models and potentially exacerbating health inequities. This review focuses on the landscape of health-related open datasets in Latin America, aiming to identify existing datasets, examine data-sharing frameworks, techniques, platforms, and formats, and identify best practices in Latin America. The review found 61 datasets from 23 countries, with the DATASUS dataset from Brazil contributing to the majority of articles. The analysis revealed a dearth of datasets created by the authors themselves, indicating a reliance on existing open datasets. The findings underscore the importance of promoting open data in Latin America. We provide recommendations for enhancing data sharing in the region.
Affiliation(s)
- David Restrepo
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Telematics Department, University of Cauca, Popayán, Cauca, Colombia
- Justin Quion
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Constanza Vásquez-Venegas
- Scientific Image Analysis Lab, Integrative Biology Program, Biomedical Sciences Institute (ICBM), Faculty of Medicine, Universidad de Chile, Santiago, Chile
- Cleva Villanueva
- Instituto Politécnico Nacional, Escuela Superior de Medicina, Ciudad de Mexico, Mexico
- Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Luis Filipe Nakayama
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
30
Ait Hammou B, Antaki F, Boucher MC, Duval R. MBT: Model-Based Transformer for retinal optical coherence tomography image and video multi-classification. Int J Med Inform 2023; 178:105178. [PMID: 37657204 DOI: 10.1016/j.ijmedinf.2023.105178]
Abstract
BACKGROUND AND OBJECTIVE The detection of retinal diseases using optical coherence tomography (OCT) images and videos is a concrete example of a data classification problem. In recent years, Transformer architectures have been successfully applied to solve a variety of real-world classification problems. Although they have shown impressive discriminative abilities compared to other state-of-the-art models, improving their performance is essential, especially in healthcare-related problems. METHODS This paper presents an effective technique named model-based transformer (MBT). It builds on popular pre-trained transformer models, particularly the Vision Transformer and Swin Transformer for OCT image classification and the Multiscale Vision Transformer for OCT video classification. The proposed approach represents OCT data by taking advantage of an approximate sparse representation technique, then estimates the optimal features and performs data classification. RESULTS The experiments were carried out using three real-world retinal datasets. The experimental results on OCT image and OCT video datasets show that the proposed method outperforms existing state-of-the-art deep learning approaches in terms of classification accuracy, precision, recall, F1 score, kappa, AUC-ROC, and AUC-PR. It can also boost the performance of existing transformer models, including the Vision Transformer and Swin Transformer for OCT image classification and the Multiscale Vision Transformer for OCT video classification. CONCLUSIONS This work presents an approach for the automated detection of retinal diseases. Although deep neural networks have proven great potential in ophthalmology applications, our findings demonstrate for the first time a new way to identify retinal pathologies using OCT videos instead of images. Moreover, our proposal can help researchers enhance the discriminative capacity of a variety of powerful deep learning models presented in published papers. This can be valuable for future directions in medical research and clinical practice.
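The metrics reported above (accuracy, precision, recall, F1) all derive from the binary confusion matrix; a self-contained sketch with hypothetical predictions:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / len(y_true)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# hypothetical ground truth and predictions on 10 scans
m = classification_metrics([1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                           [1, 1, 1, 0, 1, 0, 0, 0, 0, 0])
```

For multi-class OCT grading, these quantities are typically averaged one-versus-rest across classes (macro or weighted).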
Affiliation(s)
- Badr Ait Hammou
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montréal, Québec, Canada
- Fares Antaki
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montréal, Québec, Canada; Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
- Marie-Carole Boucher
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montréal, Québec, Canada
- Renaud Duval
- Department of Ophthalmology, Université de Montréal, Montreal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l'Est-de-l'Île-de-Montréal, Montréal, Québec, Canada
31
Ye X, Gao K, He S, Zhong X, Shen Y, Wang Y, Shao H, Shen L. Artificial Intelligence-Based Quantification of Central Macular Fluid Volume and VA Prediction for Diabetic Macular Edema Using OCT Images. Ophthalmol Ther 2023; 12:2441-2452. [PMID: 37318706 PMCID: PMC10441848 DOI: 10.1007/s40123-023-00746-5]
Abstract
INTRODUCTION We studied the correlation of central macular fluid volume (CMFV) and central subfield thickness (CST) with best-corrected visual acuity (BCVA) in treatment-naïve eyes with diabetic macular edema (DME) 1 month after anti-vascular endothelial growth factor (VEGF) therapy. METHODS This retrospective cohort study investigated eyes that received anti-VEGF therapy. All participants underwent comprehensive examinations and optical coherence tomography (OCT) volume scans at baseline (M0) and 1 month after the first treatment (M1). Two deep learning models were separately developed to automatically measure the CMFV and the CST. Correlations were analyzed between the CMFV and the logMAR BCVA at M0 and logMAR BCVA at M1. The area under the receiver operating characteristic curve (AUROC) of CMFV and CST for predicting eyes with BCVA [Formula: see text] 20/40 at M1 was analyzed. RESULTS This study included 156 DME eyes from 89 patients. The median CMFV decreased from 0.272 (0.061-0.568) mm3 at M0 to 0.096 (0.018-0.307) mm3 at M1. The CST decreased from 414 (293-575) to 322 (252-430) μm. The logMAR BCVA decreased from 0.523 (0.301-0.817) to 0.398 (0.222-0.699). Multivariate analysis demonstrated that the CMFV was the only significant factor for logMAR BCVA at both M0 (β = 0.199, p = 0.047) and M1 (β = 0.279, p = 0.004). The AUROC of CMFV for predicting eyes with BCVA [Formula: see text] 20/40 at M1 was 0.72, and the AUROC of CST was 0.69. CONCLUSIONS Anti-VEGF therapy is an effective treatment for DME. Automated measured CMFV is a more accurate prognostic factor than CST for the initial anti-VEGF treatment outcome of DME.
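The AUROC values quoted for CMFV (0.72) and CST (0.69) can be read as the probability that a randomly chosen eye reaching the good visual outcome is ranked ahead of one that does not. A sketch using the pairwise (Mann–Whitney) formulation, with made-up fluid volumes; since lower fluid volume should predict better acuity, the score is negated:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the probability that a positive outranks a negative
    (ties count half) -- the Mann-Whitney U formulation."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# hypothetical CMFV values (mm3): eyes with the good outcome vs without
good_outcome = [0.05, 0.08, 0.12, 0.30]
poor_outcome = [0.10, 0.25, 0.40, 0.55, 0.60]
# lower fluid volume predicts the good outcome, so negate the score
auc = auroc([-v for v in good_outcome], [-v for v in poor_outcome])
```

This O(n²) form is fine for small cohorts; rank-based implementations scale to larger ones.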
Affiliation(s)
- Xin Ye
- Department of Ophthalmology, Center for Rehabilitation Medicine, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Kun Gao
- Jiaxing Key Laboratory of Visual Big Data and Artificial Intelligence, Yangtze Delta Region Institute of Tsinghua University, Zhejiang, China
- Shucheng He
- Wenzhou Medical University, Wenzhou, Zhejiang, China
- Yaqi Wang
- College of Media Engineering, Communication University of Zhejiang, Hangzhou, China
- Hang Shao
- Jiaxing Key Laboratory of Visual Big Data and Artificial Intelligence, Yangtze Delta Region Institute of Tsinghua University, Zhejiang, China
- Lijun Shen
- Department of Ophthalmology, Center for Rehabilitation Medicine, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Wenzhou Medical University, Wenzhou, Zhejiang, China
32
Paik KE, Hicklen R, Kaggwa F, Puyat CV, Nakayama LF, Ong BA, Shropshire JNI, Villanueva C. Digital Determinants of Health: Health data poverty amplifies existing health disparities-A scoping review. PLOS Digital Health 2023; 2:e0000313. [PMID: 37824445 PMCID: PMC10569513 DOI: 10.1371/journal.pdig.0000313]
Abstract
Artificial intelligence (AI) and machine learning (ML) have an immense potential to transform healthcare, as already demonstrated in various medical specialties. This scoping review focuses on the factors that influence health data poverty by conducting a literature review, analysis, and appraisal of results. Health data poverty is often an unseen factor that perpetuates or exacerbates health disparities. Improvements or failures in addressing health data poverty will directly impact the effectiveness of AI/ML systems. The potential causes are complex and may enter anywhere along the development process. The initial results highlighted studies with common themes of health disparities (72%), AI/ML bias (28%), and biases in input data (18%). To properly evaluate the disparities that exist, we recommend a strengthened effort to generate unbiased, equitable data, improved understanding of the limitations of AI/ML tools, and rigorous regulation with continuous monitoring of the clinical outcomes of deployed tools.
Affiliation(s)
- Kenneth Eugene Paik
- MIT Critical Data, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Rachel Hicklen
- Research Medical Library, MD Anderson Cancer Center, Houston, Texas, United States of America
- Fred Kaggwa
- Department of Computer Science, Mbarara University of Science & Technology, Mbarara, Uganda
- Luis Filipe Nakayama
- MIT Critical Data, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Ophthalmology, São Paulo Federal University, São Paulo, Brazil
- Bradley Ashley Ong
- Department of Neurology, Neurological Institute, Cleveland Clinic, Cleveland, Ohio, United States of America
- Cleva Villanueva
- Instituto Politécnico Nacional, Escuela Superior de Medicina, Mexico City, Mexico
33
Dobrzycka M, Sulewska A, Biecek P, Charkiewicz R, Karabowicz P, Charkiewicz A, Golaszewska K, Milewska P, Michalska-Falkowska A, Nowak K, Niklinski J, Konopińska J. miRNA Studies in Glaucoma: A Comprehensive Review of Current Knowledge and Future Perspectives. Int J Mol Sci 2023; 24:14699. [PMID: 37834147 PMCID: PMC10572595 DOI: 10.3390/ijms241914699]
Abstract
Glaucoma, a neurodegenerative disorder that leads to irreversible blindness, remains a challenge because of its complex nature. MicroRNAs (miRNAs) are crucial regulators of gene expression and are associated with glaucoma and other diseases. We aimed to review and discuss the advantages and disadvantages of miRNA-focused molecular studies in glaucoma through discussing their potential as biomarkers for early detection and diagnosis; offering insights into molecular pathways and mechanisms; and discussing their potential utility with respect to personalized medicine, their therapeutic potential, and non-invasive monitoring. Limitations, such as variability, small sample sizes, sample specificity, and limited accessibility to ocular tissues, are also addressed, underscoring the need for robust protocols and collaboration. Reproducibility and validation are crucial to establish the credibility of miRNA research findings, and the integration of bioinformatics tools for miRNA database creation is a valuable component of a comprehensive approach to investigate miRNA aberrations in patients with glaucoma. Overall, miRNA research in glaucoma has provided significant insights into the molecular mechanisms of the disease, offering potential biomarkers, diagnostic tools, and therapeutic targets. However, addressing challenges such as variability and limited tissue accessibility is essential, and further investigations and validation will contribute to a deeper understanding of the functional significance of miRNAs in glaucoma.
Affiliation(s)
- Margarita Dobrzycka
- Department of Ophthalmology, Medical University of Bialystok, 15-276 Bialystok, Poland
- Anetta Sulewska
- Department of Clinical Molecular Biology, Medical University of Bialystok, 15-269 Bialystok, Poland
- Przemyslaw Biecek
- Faculty of Mathematics and Information Science, Warsaw University of Technology, 00-662 Warsaw, Poland
- Radoslaw Charkiewicz
- Center of Experimental Medicine, Medical University of Bialystok, 15-369 Bialystok, Poland
- Biobank, Medical University of Bialystok, 15-269 Bialystok, Poland
- Piotr Karabowicz
- Biobank, Medical University of Bialystok, 15-269 Bialystok, Poland
- Angelika Charkiewicz
- Department of Clinical Molecular Biology, Medical University of Bialystok, 15-269 Bialystok, Poland
- Kinga Golaszewska
- Department of Ophthalmology, Medical University of Bialystok, 15-276 Bialystok, Poland
- Patrycja Milewska
- Biobank, Medical University of Bialystok, 15-269 Bialystok, Poland
- Karolina Nowak
- Department of Obstetrics and Gynecology, C.S. Mott Center for Human Growth and Development, School of Medicine, Wayne State University, Detroit, MI 48201, USA
- Jacek Niklinski
- Department of Clinical Molecular Biology, Medical University of Bialystok, 15-269 Bialystok, Poland
- Joanna Konopińska
- Department of Ophthalmology, Medical University of Bialystok, 15-276 Bialystok, Poland
34
Batool S, Gilani SO, Waris A, Iqbal KF, Khan NB, Khan MI, Eldin SM, Awwad FA. Deploying efficient net batch normalizations (BNs) for grading diabetic retinopathy severity levels from fundus images. Sci Rep 2023; 13:14462. [PMID: 37660096 PMCID: PMC10475020 DOI: 10.1038/s41598-023-41797-9]
Abstract
Diabetic retinopathy (DR) is one of the main causes of blindness worldwide. Early diagnosis and treatment of DR can be accomplished through large, regular screening programs; still, timely detection is difficult because the disease may show no signs in its early stages. Given the drastic increase in diabetic patients, there is an urgent need for efficient DR detection systems. Past deep learning (DL) approaches to DR classification have used auto-encoders, sparse coding, and restricted Boltzmann machines, and convolutional neural networks (CNNs) have been identified as a promising solution for detecting and classifying DR. We employ EfficientNet batch normalization (BN) pre-trained models to automatically acquire discriminative features from fundus images. In this paper, we improved the accuracy and F1 score of the EfficientNet BN pre-trained models on the EYE-PACS dataset by applying a Gaussian smoothing filter and data augmentation transforms, achieving F1 scores above 80% on all EfficientNet BN models, better than previous studies; F1 scores were also calculated for a second dataset, DeepDRiD. Using our proposed technique, we achieved F1 scores of 84% and 87% for EYE-PACS and DeepDRiD, respectively.
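The Gaussian smoothing step applied before augmentation can be illustrated with a small pure-Python convolution; the 3×3 kernel and σ = 1 here are illustrative choices, not the paper's settings:

```python
import math

def gaussian_kernel3(sigma=1.0):
    """Normalized 3x3 Gaussian kernel."""
    k = [[math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
          for dx in (-1, 0, 1)] for dy in (-1, 0, 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def smooth(img, kernel):
    """Convolve a grayscale image, replicating border pixels at the edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += kernel[dy + 1][dx + 1] * img[yy][xx]
            out[y][x] = acc
    return out

# sanity check: a flat image stays flat because the kernel sums to 1
flat = [[5.0] * 4 for _ in range(4)]
smoothed = smooth(flat, gaussian_kernel3())
```

In a real pipeline this would be done with vectorized library routines; the point is only that smoothing suppresses sensor noise while the normalized kernel preserves overall intensity.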
Affiliation(s)
- Summiya Batool
- National University of Sciences and Technology, Islamabad, 44000, Pakistan
- Syed Omer Gilani
- National University of Sciences and Technology, Islamabad, 44000, Pakistan
- Asim Waris
- National University of Sciences and Technology, Islamabad, 44000, Pakistan
- Niaz B Khan
- National University of Sciences and Technology, Islamabad, 44000, Pakistan
- Mechanical Engineering Department, College of Engineering, University of Bahrain, Isa Town, 32038, Bahrain
- M Ijaz Khan
- Department of Mechanical Engineering, Lebanese American University, Kraytem, Beirut, 1102-2801, Lebanon
- Department of Mathematics and Statistics, Riphah International University I-14, Islamabad, 44000, Pakistan
- Department of Mechanics and Engineering Science, Peking University, Beijing, 100871, China
- Sayed M Eldin
- Faculty of Engineering, Center of Research, Future University in Egypt, New Cairo, 11835, Egypt
- Fuad A Awwad
- Department of Quantitative Analysis, College of Business Administration, King Saud University, P.O. Box 71115, 11587, Riyadh, Saudi Arabia
35
Chew EY. Publication of Datasets, a Step toward Advancing Data Science. Ophthalmology Science 2023; 3:100381. [PMID: 37810588 PMCID: PMC10556280 DOI: 10.1016/j.xops.2023.100381]
36
Tan TF, Thirunavukarasu AJ, Jin L, Lim J, Poh S, Teo ZL, Ang M, Chan RVP, Ong J, Turner A, Karlström J, Wong TY, Stern J, Ting DSW. Artificial intelligence and digital health in global eye health: opportunities and challenges. Lancet Glob Health 2023; 11:e1432-e1443. [PMID: 37591589 DOI: 10.1016/s2214-109x(23)00323-6]
Abstract
Global eye health is defined as the degree to which vision, ocular health, and function are maximised worldwide, thereby optimising overall wellbeing and quality of life. Improving eye health is a global priority as a key to unlocking human potential by reducing the morbidity burden of disease, increasing productivity, and supporting access to education. Although extraordinary progress fuelled by global eye health initiatives has been made over the last decade, there remain substantial challenges impeding further progress. The accelerated development of digital health and artificial intelligence (AI) applications provides an opportunity to transform eye health, from facilitating and increasing access to eye care to supporting clinical decision making with an objective, data-driven approach. Here, we explore the opportunities and challenges presented by digital health and AI in global eye health and describe how these technologies could be leveraged to improve global eye health. AI, telehealth, and emerging technologies have great potential, but require specific work to overcome barriers to implementation. We suggest that a global digital eye health task force could facilitate coordination of funding, infrastructural development, and democratisation of AI and digital health to drive progress forwards in this domain.
Affiliation(s)
- Ting Fang Tan
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Arun J Thirunavukarasu
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Corpus Christi College, University of Cambridge, Cambridge, UK; School of Clinical Medicine, University of Cambridge, Cambridge, UK
- Liyuan Jin
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
- Joshua Lim
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Stanley Poh
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Zhen Ling Teo
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Marcus Ang
- Singapore National Eye Centre, Singapore General Hospital, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
- R V Paul Chan
- Illinois Eye and Ear Infirmary, University of Illinois College of Medicine, Urbana-Champaign, IL, USA
- Jasmine Ong
- Pharmacy Department, Singapore General Hospital, Singapore
- Angus Turner
- Lions Eye Institute, University of Western Australia, Nedlands, WA, Australia
- Jonas Karlström
- Duke-NUS Medical School, National University of Singapore, Singapore
- Tien Yin Wong
- Singapore National Eye Centre, Singapore General Hospital, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
- Jude Stern
- The International Agency for the Prevention of Blindness, London, UK
- Daniel Shu-Wei Ting
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
37
Nath S, Rahimy E, Kras A, Korot E. Toward safer ophthalmic artificial intelligence via distributed validation on real-world data. Curr Opin Ophthalmol 2023; 34:459-463. [PMID: 37459329 DOI: 10.1097/icu.0000000000000986]
Abstract
PURPOSE OF REVIEW The current article provides an overview of the present approaches to algorithm validation, which are variable and largely self-determined, as well as solutions to address inadequacies. RECENT FINDINGS In the last decade alone, numerous machine learning applications have been proposed for ophthalmic diagnosis or disease monitoring. Remarkably, of these, less than 15 have received regulatory approval for implementation into clinical practice. Although there exists a vast pool of structured and relatively clean datasets from which to develop and test algorithms in the computational 'laboratory', real-world validation remains key to allow for safe, equitable, and clinically reliable implementation. Bottlenecks in the validation process stem from a striking paucity of regulatory guidance surrounding safety and performance thresholds, lack of oversight on critical postdeployment monitoring and context-specific recalibration, and inherent complexities of heterogeneous disease states and clinical environments. Implementation of secure, third-party, unbiased, pre and postdeployment validation offers the potential to address existing shortfalls in the validation process. SUMMARY Given the criticality of validation to the algorithm pipeline, there is an urgent need for developers, machine learning researchers, and end-user clinicians to devise a consensus approach, allowing for the rapid introduction of safe, equitable, and clinically valid machine learning implementations.
Affiliation(s)
- Siddharth Nath
- Department of Ophthalmology and Visual Sciences, McGill University, Montréal, Québec, Canada
- Ehsan Rahimy
- Byers Eye Institute, Stanford University, Palo Alto, California, USA
- Ashley Kras
- Save Sight Institute, Sydney University, Sydney, Australia
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Edward Korot
- Byers Eye Institute, Stanford University, Palo Alto, California, USA
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Retina Specialists of Michigan, Grand Rapids, Michigan, USA

38
Chou YB, Kale AU, Lanzetta P, Aslam T, Barratt J, Danese C, Eldem B, Eter N, Gale R, Korobelnik JF, Kozak I, Li X, Li X, Loewenstein A, Ruamviboonsuk P, Sakamoto T, Ting DS, van Wijngaarden P, Waldstein SM, Wong D, Wu L, Zapata MA, Zarranz-Ventura J. Current status and practical considerations of artificial intelligence use in screening and diagnosing retinal diseases: Vision Academy retinal expert consensus. Curr Opin Ophthalmol 2023; 34:403-413. [PMID: 37326222] [PMCID: PMC10399944] [DOI: 10.1097/icu.0000000000000979]
Abstract
PURPOSE OF REVIEW The application of artificial intelligence (AI) technologies in screening and diagnosing retinal diseases may play an important role in telemedicine and has potential to shape modern healthcare ecosystems, including within ophthalmology. RECENT FINDINGS In this article, we examine the latest publications relevant to AI in retinal disease and discuss the currently available algorithms. We summarize four key requirements underlining the successful application of AI algorithms in real-world practice: processing massive data; practicability of an AI model in ophthalmology; policy compliance and the regulatory environment; and balancing profit and cost when developing and maintaining AI models. SUMMARY The Vision Academy recognizes the advantages and disadvantages of AI-based technologies and gives insightful recommendations for future directions.
Affiliation(s)
- Yu-Bai Chou
- Department of Ophthalmology, Taipei Veterans General Hospital
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Aditya U. Kale
- Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Paolo Lanzetta
- Department of Medicine – Ophthalmology, University of Udine
- Istituto Europeo di Microchirurgia Oculare, Udine, Italy
- Tariq Aslam
- Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester School of Health Sciences, Manchester, UK
- Jane Barratt
- International Federation on Ageing, Toronto, Canada
- Carla Danese
- Department of Medicine – Ophthalmology, University of Udine
- Department of Ophthalmology, AP-HP Hôpital Lariboisière, Université Paris Cité, Paris, France
- Bora Eldem
- Department of Ophthalmology, Hacettepe University, Ankara, Turkey
- Nicole Eter
- Department of Ophthalmology, University of Münster Medical Center, Münster, Germany
- Richard Gale
- Department of Ophthalmology, York Teaching Hospital NHS Foundation Trust, York, UK
- Jean-François Korobelnik
- Service d’ophtalmologie, CHU Bordeaux
- University of Bordeaux, INSERM, BPH, UMR1219, F-33000 Bordeaux, France
- Igor Kozak
- Moorfields Eye Hospital Centre, Abu Dhabi, UAE
- Xiaorong Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin
- Xiaoxin Li
- Xiamen Eye Center, Xiamen University, Xiamen, China
- Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Paisan Ruamviboonsuk
- Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
- Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University, Kagoshima, Japan
- Daniel S.W. Ting
- Singapore National Eye Center, Duke-NUS Medical School, Singapore
- Peter van Wijngaarden
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- David Wong
- Unity Health Toronto – St. Michael's Hospital, University of Toronto, Toronto, Canada
- Lihteh Wu
- Macula, Vitreous and Retina Associates of Costa Rica, San José, Costa Rica

39
Berk A, Ozturan G, Delavari P, Maberley D, Yılmaz Ö, Oruc I. Learning from small data: Classifying sex from retinal images via deep learning. PLoS One 2023; 18:e0289211. [PMID: 37535591] [PMCID: PMC10399793] [DOI: 10.1371/journal.pone.0289211]
Abstract
Deep learning (DL) techniques have seen tremendous interest in medical imaging, particularly in the use of convolutional neural networks (CNNs) for the development of automated diagnostic tools. The facility of its non-invasive acquisition makes retinal fundus imaging particularly amenable to such automated approaches. Recent work in the analysis of fundus images using CNNs relies on access to massive datasets for training and validation, composed of hundreds of thousands of images. However, data residency and data privacy restrictions stymie the applicability of this approach in medical settings where patient confidentiality is a mandate. Here, we showcase results for the performance of DL on small datasets to classify patient sex from fundus images, a trait thought not to be present or quantifiable in fundus images until recently. Specifically, we fine-tune a Resnet-152 model whose last layer has been modified to a fully-connected layer for binary classification. We carried out several experiments to assess performance in the small dataset context using one private (DOVS) and one public (ODIR) data source. Our models, developed using approximately 2500 fundus images, achieved test AUC scores of up to 0.72 (95% CI: [0.67, 0.77]). This corresponds to a mere 25% decrease in performance despite a nearly 1000-fold decrease in the dataset size compared to prior results in the literature. Our results show that binary classification, even with a hard task such as sex categorization from retinal fundus images, is possible with very small datasets. Our domain adaptation results show that models trained with one distribution of images may generalize well to an independent external source, as in the case of models trained on DOVS and tested on ODIR. Our results also show that eliminating poor quality images may hamper training of the CNN due to reducing the already small dataset size even further. Nevertheless, using high quality images may be an important factor as evidenced by superior generalizability of results in the domain adaptation experiments. Finally, our work shows that ensembling is an important tool in maximizing performance of deep CNNs in the context of small development datasets.
Affiliation(s)
- Aaron Berk
- Department of Mathematics & Statistics, McGill University, Montréal, Canada
- Gulcenur Ozturan
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Parsa Delavari
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- David Maberley
- Department of Ophthalmology, University of Ottawa, Ottawa, Canada
- Özgür Yılmaz
- Department of Mathematics, University of British Columbia, Vancouver, Canada
- Ipek Oruc
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada

40
Jacoba CMP, Doan D, Salongcay RP, Aquino LAC, Silva JPY, Salva CMG, Zhang D, Alog GP, Zhang K, Locaylocay KLRB, Saunar AV, Ashraf M, Sun JK, Peto T, Aiello LP, Silva PS. Performance of Automated Machine Learning for Diabetic Retinopathy Image Classification from Multi-field Handheld Retinal Images. Ophthalmol Retina 2023; 7:703-712. [PMID: 36924893] [DOI: 10.1016/j.oret.2023.03.003]
Abstract
PURPOSE To create and validate code-free automated deep learning models (AutoML) for diabetic retinopathy (DR) classification from handheld retinal images. DESIGN Prospective development and validation of AutoML models for DR image classification. PARTICIPANTS A total of 17 829 deidentified retinal images from 3566 eyes with diabetes, acquired using handheld retinal cameras in a community-based DR screening program. METHODS AutoML models were generated based on previously acquired 5-field (macula-centered, disc-centered, superior, inferior, and temporal macula) handheld retinal images. Each individual image was labeled using the International DR and diabetic macular edema (DME) Classification Scale by 4 certified graders at a centralized reading center under oversight by a senior retina specialist. Images for model development were split 8-1-1 for training, optimization, and testing to detect referable DR ([refDR], defined as moderate nonproliferative DR or worse or any level of DME). Internal validation was performed using a published image set from the same patient population (N = 450 images from 225 eyes). External validation was performed using a publicly available retinal imaging data set from the Asia Pacific Tele-Ophthalmology Society (N = 3662 images). MAIN OUTCOME MEASURES Area under the precision-recall curve (AUPRC), sensitivity (SN), specificity (SP), positive predictive value (PPV), negative predictive value (NPV), accuracy, and F1 scores. RESULTS Referable DR was present in 17.3%, 39.1%, and 48.0% of the training set, internal validation, and external validation sets, respectively. The model's AUPRC was 0.995 with a precision and recall of 97% using a score threshold of 0.5. Internal validation showed that SN, SP, PPV, NPV, accuracy, and F1 scores were 0.96 (95% confidence interval [CI], 0.884-0.99), 0.98 (95% CI, 0.937-0.995), 0.96 (95% CI, 0.884-0.99), 0.98 (95% CI, 0.937-0.995), 0.97, and 0.96, respectively. External validation showed that SN, SP, PPV, NPV, accuracy, and F1 scores were 0.94 (95% CI, 0.929-0.951), 0.97 (95% CI, 0.957-0.974), 0.96 (95% CI, 0.952-0.971), 0.95 (95% CI, 0.935-0.956), 0.97, and 0.96, respectively. CONCLUSIONS This study demonstrates the accuracy and feasibility of code-free AutoML models for identifying refDR developed using handheld retinal imaging in a community-based screening program. Potentially, the use of AutoML may increase access to machine learning models that may be adapted for specific programs that are guided by the clinical need to rapidly address disparities in health care delivery. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found after the references.
Affiliation(s)
- Cris Martin P Jacoba
- Beetham Eye Institute, Joslin Diabetes Center, Boston, Massachusetts; Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
- Duy Doan
- Beetham Eye Institute, Joslin Diabetes Center, Boston, Massachusetts
- Recivall P Salongcay
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines; Centre for Public Health, Queen's University Belfast, United Kingdom; Eyes and Vision Institute, the Medical City, Pasig City, Philippines
- Lizzie Anne C Aquino
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines
- Joseph Paolo Y Silva
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines
- Dean Zhang
- Beetham Eye Institute, Joslin Diabetes Center, Boston, Massachusetts
- Glenn P Alog
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines; Eyes and Vision Institute, the Medical City, Pasig City, Philippines
- Kexin Zhang
- Beetham Eye Institute, Joslin Diabetes Center, Boston, Massachusetts
- Kaye Lani Rea B Locaylocay
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines; Eyes and Vision Institute, the Medical City, Pasig City, Philippines
- Aileen V Saunar
- Philippine Eye Research Institute, University of the Philippines, Manila, Philippines; Eyes and Vision Institute, the Medical City, Pasig City, Philippines
- Mohamed Ashraf
- Beetham Eye Institute, Joslin Diabetes Center, Boston, Massachusetts; Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
- Jennifer K Sun
- Beetham Eye Institute, Joslin Diabetes Center, Boston, Massachusetts; Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
- Tunde Peto
- Centre for Public Health, Queen's University Belfast, United Kingdom
- Lloyd Paul Aiello
- Beetham Eye Institute, Joslin Diabetes Center, Boston, Massachusetts; Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
- Paolo S Silva
- Beetham Eye Institute, Joslin Diabetes Center, Boston, Massachusetts; Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts; Philippine Eye Research Institute, University of the Philippines, Manila, Philippines; Eyes and Vision Institute, the Medical City, Pasig City, Philippines

41
Muntean GA, Marginean A, Groza A, Damian I, Roman SA, Hapca MC, Muntean MV, Nicoară SD. The Predictive Capabilities of Artificial Intelligence-Based OCT Analysis for Age-Related Macular Degeneration Progression-A Systematic Review. Diagnostics (Basel) 2023; 13:2464. [PMID: 37510207] [PMCID: PMC10378064] [DOI: 10.3390/diagnostics13142464]
Abstract
The era of artificial intelligence (AI) has revolutionized our daily lives and AI has become a powerful force that is gradually transforming the field of medicine. Ophthalmology sits at the forefront of this transformation thanks to the effortless acquisition of an abundance of imaging modalities. There has been tremendous work in the field of AI for retinal diseases, with age-related macular degeneration being at the top of the most studied conditions. The purpose of the current systematic review was to identify and evaluate, in terms of strengths and limitations, the articles that apply AI to optical coherence tomography (OCT) images in order to predict the future evolution of age-related macular degeneration (AMD) during its natural history and after treatment in terms of OCT morphological structure and visual function. After a thorough search through seven databases up to 1 January 2022 using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, 1800 records were identified. After screening, 48 articles were selected for full-text retrieval and 19 articles were finally included. From these 19 articles, 4 articles concentrated on predicting the anti-VEGF requirement in neovascular AMD (nAMD), 4 articles focused on predicting anti-VEGF efficacy in nAMD patients, 3 articles predicted the conversion from early or intermediate AMD (iAMD) to nAMD, 1 article predicted the conversion from iAMD to geographic atrophy (GA), 1 article predicted the conversion from iAMD to both nAMD and GA, 3 articles predicted the future growth of GA and 3 articles predicted the future outcome for visual acuity (VA) after anti-VEGF treatment in nAMD patients. Since using AI methods to predict future changes in AMD is only in its initial phase, a systematic review provides the opportunity of setting the context of previous work in this area and can present a starting point for future research.
Affiliation(s)
- George Adrian Muntean
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
- Anca Marginean
- Department of Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Adrian Groza
- Department of Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Ioana Damian
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
- Sara Alexia Roman
- Faculty of Medicine, "Iuliu Hatieganu" University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania
- Mădălina Claudia Hapca
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania
- Maximilian Vlad Muntean
- Plastic Surgery Department, "Prof. Dr. I. Chiricuta" Institute of Oncology, 400015 Cluj-Napoca, Romania
- Simona Delia Nicoară
- Department of Ophthalmology, "Iuliu Hatieganu" University of Medicine and Pharmacy, Emergency County Hospital, 400347 Cluj-Napoca, Romania

42
Rajesh AE, Olvera-Barrios A, Warwick AN, Wu Y, Stuart KV, Biradar M, Ung CY, Khawaja AP, Luben R, Foster PJ, Lee CS, Tufail A, Lee AY, Egan C. Ethnicity is not biology: retinal pigment score to evaluate biological variability from ophthalmic imaging using machine learning. medRxiv 2023:2023.06.28.23291873. [PMID: 37461664] [PMCID: PMC10350142] [DOI: 10.1101/2023.06.28.23291873]
Abstract
Background Few metrics exist to describe phenotypic diversity within ophthalmic imaging datasets, with researchers often using ethnicity as an inappropriate marker for biological variability. Methods We derived a continuous, measured metric, the retinal pigment score (RPS), that quantifies the degree of pigmentation from a colour fundus photograph of the eye. RPS was validated using two large epidemiological studies with demographic and genetic data (UK Biobank and EPIC-Norfolk Study). Findings A genome-wide association study (GWAS) of RPS from UK Biobank identified 20 loci with known associations with skin, iris and hair pigmentation, of which 8 were replicated in the EPIC-Norfolk cohort. There was a strong association between RPS and ethnicity, however, there was substantial overlap between each ethnicity and the respective distributions of RPS scores. Interpretation RPS serves to decouple traditional demographic variables, such as ethnicity, from clinical imaging characteristics. RPS may serve as a useful metric to quantify the diversity of the training, validation, and testing datasets used in the development of AI algorithms to ensure adequate inclusion and explainability of the model performance, critical in evaluating all currently deployed AI models. The code to derive RPS is publicly available at: https://github.com/uw-biomedical-ml/retinal-pigmentation-score. Funding The authors did not receive support from any organisation for the submitted work.
Affiliation(s)
- Anand E Rajesh
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- The Roger and Angie Karalis Johnson Retina Center, Seattle, WA, USA
- Abraham Olvera-Barrios
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- Alasdair N Warwick
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- Yue Wu
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- The Roger and Angie Karalis Johnson Retina Center, Seattle, WA, USA
- Kelsey V Stuart
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- Mahantesh Biradar
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- University of Cambridge, Cambridge, UK
- Anthony P Khawaja
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- MRC Epidemiology Unit, University of Cambridge, Cambridge, UK
- Robert Luben
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- Paul J Foster
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- The Roger and Angie Karalis Johnson Retina Center, Seattle, WA, USA
- Adnan Tufail
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- The Roger and Angie Karalis Johnson Retina Center, Seattle, WA, USA
- Catherine Egan
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust & University College London Institute of Ophthalmology, London, UK

43
Heindl LM, Li S, Ting DSW, Keane PA. Artificial intelligence in ophthalmological practice: when ideal meets reality. BMJ Open Ophthalmol 2023; 8:e001129. [PMID: 37493688] [PMCID: PMC10255244] [DOI: 10.1136/bmjophth-2022-001129]
Affiliation(s)
- Ludwig M Heindl
- Department of Ophthalmology, University of Cologne, Koln, Germany
- Senmao Li
- Department of Ophthalmology, University of Cologne, Koln, Germany
- Department of Ophthalmology, The First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, China
- Daniel S W Ting
- Singapore National Eye Center, Duke-NUS Medical School, Singapore
- Ophthalmology and Visual Sciences Department, Duke-NUS Medical School, Singapore
- Pearse A Keane
- Medical Retina, Moorfields Eye Hospital NHS Foundation Trust, London, UK

44
Chen RJ, Wang JJ, Williamson DFK, Chen TY, Lipkova J, Lu MY, Sahai S, Mahmood F. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng 2023; 7:719-742. [PMID: 37380750] [PMCID: PMC10632090] [DOI: 10.1038/s41551-023-01056-8]
Abstract
In healthcare, the development and deployment of insufficiently fair systems of artificial intelligence (AI) can undermine the delivery of equitable care. Assessments of AI models stratified across subpopulations have revealed inequalities in how patients are diagnosed, treated and billed. In this Perspective, we outline fairness in machine learning through the lens of healthcare, and discuss how algorithmic biases (in data acquisition, genetic variation and intra-observer labelling variability, in particular) arise in clinical workflows and the resulting healthcare disparities. We also review emerging technology for mitigating biases via disentanglement, federated learning and model explainability, and their role in the development of AI-based software as a medical device.
Affiliation(s)
- Richard J Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Judy J Wang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Boston University School of Medicine, Boston, MA, USA
- Drew F K Williamson
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Tiffany Y Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Jana Lipkova
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Ming Y Lu
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Sharifa Sahai
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA

45
Wagner SK, Liefers B, Radia M, Zhang G, Struyven R, Faes L, Than J, Balal S, Hennings C, Kilduff C, Pooprasert P, Glinton S, Arunakirinathan M, Giannakis P, Braimah IZ, Ahmed ISH, Al-Feky M, Khalid H, Ferraz D, Vieira J, Jorge R, Husain S, Ravelo J, Hinds AM, Henderson R, Patel HI, Ostmo S, Campbell JP, Pontikos N, Patel PJ, Keane PA, Adams G, Balaskas K. Development and international validation of custom-engineered and code-free deep-learning models for detection of plus disease in retinopathy of prematurity: a retrospective study. Lancet Digit Health 2023; 5:e340-e349. [PMID: 37088692] [PMCID: PMC10279502] [DOI: 10.1016/s2589-7500(23)00050-x]
Abstract
BACKGROUND Retinopathy of prematurity (ROP), a leading cause of childhood blindness, is diagnosed through interval screening by paediatric ophthalmologists. However, improved survival of premature neonates coupled with a scarcity of available experts has raised concerns about the sustainability of this approach. We aimed to develop bespoke and code-free deep learning-based classifiers for plus disease, a hallmark of ROP, in an ethnically diverse population in London, UK, and externally validate them in ethnically, geographically, and socioeconomically diverse populations in four countries and three continents. Code-free deep learning is not reliant on the availability of expertly trained data scientists, thus being of particular potential benefit for low resource health-care settings. METHODS This retrospective cohort study used retinal images from 1370 neonates admitted to a neonatal unit at Homerton University Hospital NHS Foundation Trust, London, UK, between 2008 and 2018. Images were acquired using a Retcam Version 2 device (Natus Medical, Pleasanton, CA, USA) on all babies who were either born at less than 32 weeks gestational age or had a birthweight of less than 1501 g. Each image was graded by two junior ophthalmologists with disagreements adjudicated by a senior paediatric ophthalmologist. Bespoke and code-free deep learning models (CFDL) were developed for the discrimination of healthy, pre-plus disease, and plus disease. Performance was assessed internally on 200 images with the majority vote of three senior paediatric ophthalmologists as the reference standard. External validation was on 338 retinal images from four separate datasets from the USA, Brazil, and Egypt with images derived from Retcam and the 3nethra neo device (Forus Health, Bengaluru, India). FINDINGS Of the 7414 retinal images in the original dataset, 6141 images were used in the final development dataset. For the discrimination of healthy versus pre-plus or plus disease, the bespoke model had an area under the curve (AUC) of 0·986 (95% CI 0·973-0·996) and the CFDL model had an AUC of 0·989 (0·979-0·997) on the internal test set. Both models generalised well to external validation test sets acquired using the Retcam for discriminating healthy from pre-plus or plus disease (bespoke range was 0·975-1·000 and CFDL range was 0·969-0·995). The CFDL model was inferior to the bespoke model on discriminating pre-plus disease from healthy or plus disease in the USA dataset (CFDL 0·808 [95% CI 0·671-0·909], bespoke 0·942 [0·892-0·982], p=0·0070). Performance also reduced when tested on the 3nethra neo imaging device (CFDL 0·865 [0·742-0·965] and bespoke 0·891 [0·783-0·977]). INTERPRETATION Both bespoke and CFDL models conferred similar performance to senior paediatric ophthalmologists for discriminating healthy retinal images from ones with features of pre-plus or plus disease; however, CFDL models might generalise less well when considering minority classes. Care should be taken when testing on data acquired using alternative imaging devices from that used for the development dataset. Our study justifies further validation of plus disease classifiers in ROP screening and supports a potential role for code-free approaches to help prevent blindness in vulnerable neonates. FUNDING National Institute for Health Research Biomedical Research Centre based at Moorfields Eye Hospital NHS Foundation Trust and the University College London Institute of Ophthalmology. TRANSLATIONS For the Portuguese and Arabic translations of the abstract see Supplementary Materials section.
Affiliation(s)
- Siegfried K Wagner: NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Bart Liefers: NIHR Moorfields Biomedical Research Centre, London, UK
- Meera Radia: Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Gongyu Zhang: NIHR Moorfields Biomedical Research Centre, London, UK
- Robbert Struyven: NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Livia Faes: NIHR Moorfields Biomedical Research Centre, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Jonathan Than: Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Shafi Balal: Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Periklis Giannakis: Institute of Health Sciences Education, Queen Mary University of London, London, UK
- Imoro Zeba Braimah: Lions International Eye Centre, Korle-Bu Teaching Hospital, Accra, Ghana
- Islam S H Ahmed: Faculty of Medicine, Alexandria University, Alexandria, Egypt; Alexandria University Hospital, Alexandria, Egypt
- Mariam Al-Feky: Department of Ophthalmology, Ain Shams University Hospitals, Cairo, Egypt; Watany Eye Hospital, Cairo, Egypt
- Hagar Khalid: Moorfields Eye Hospital NHS Foundation Trust, London, UK; Department of Ophthalmology, Tanta University, Tanta, Egypt
- Daniel Ferraz: Institute of Ophthalmology, University College London, London, UK; D'Or Institute for Research and Education, São Paulo, Brazil
- Juliana Vieira: Department of Ophthalmology, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Rodrigo Jorge: Department of Ophthalmology, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Shahid Husain: The Blizard Institute, Queen Mary University of London, London, UK; Neonatology Department, Homerton University Hospital NHS Foundation Trust, London, UK
- Janette Ravelo: Neonatology Department, Homerton University Hospital NHS Foundation Trust, London, UK
- Robert Henderson: UCL Great Ormond Street Institute of Child Health, University College London, London, UK; Clinical and Academic Department of Ophthalmology, Great Ormond Street Hospital for Children, London, UK
- Himanshu I Patel: Moorfields Eye Hospital NHS Foundation Trust, London, UK; The Royal London Hospital, Barts Health NHS Trust, London, UK
- Susan Ostmo: Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
- J Peter Campbell: Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
- Nikolas Pontikos: NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Praveen J Patel: NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Pearse A Keane: NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Gill Adams: NIHR Moorfields Biomedical Research Centre, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Konstantinos Balaskas: NIHR Moorfields Biomedical Research Centre, London, UK; Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
|
46
|
AlRyalat SA, Singh P, Kalpathy-Cramer J, Kahook MY. Artificial Intelligence and Glaucoma: Going Back to Basics. Clin Ophthalmol 2023; 17:1525-1530. [PMID: 37284059 PMCID: PMC10239633 DOI: 10.2147/opth.s410905] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2023] [Accepted: 05/24/2023] [Indexed: 06/08/2023] Open
Abstract
There has been a recent surge in the number of publications centered on the use of artificial intelligence (AI) to diagnose various systemic diseases. The Food and Drug Administration has approved several algorithms for use in clinical practice. In ophthalmology, most advances in AI relate to diabetic retinopathy, a disease process with agreed-upon diagnostic and classification criteria. However, this is not the case for glaucoma, a relatively complex disease without agreed-upon diagnostic criteria. Moreover, currently available public datasets that focus on glaucoma have inconsistent label quality, further complicating attempts to train AI algorithms efficiently. In this perspective paper, we discuss specific details related to developing AI models for glaucoma and suggest potential steps to overcome current limitations.
Affiliation(s)
- Praveer Singh: Department of Ophthalmology, University of Colorado School of Medicine, Sue Anschutz-Rodgers Eye Center, Aurora, CO, USA
- Jayashree Kalpathy-Cramer: Department of Ophthalmology, University of Colorado School of Medicine, Sue Anschutz-Rodgers Eye Center, Aurora, CO, USA
- Malik Y Kahook: Department of Ophthalmology, University of Colorado School of Medicine, Sue Anschutz-Rodgers Eye Center, Aurora, CO, USA
|
47
|
Krzywicki T, Brona P, Zbrzezny AM, Grzybowski AE. A Global Review of Publicly Available Datasets Containing Fundus Images: Characteristics, Barriers to Access, Usability, and Generalizability. J Clin Med 2023; 12:jcm12103587. [PMID: 37240693 DOI: 10.3390/jcm12103587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Revised: 04/29/2023] [Accepted: 05/17/2023] [Indexed: 05/28/2023] Open
Abstract
This article provides a comprehensive and up-to-date overview of repositories that contain color fundus images. We analyzed the datasets with respect to availability and legal status, presented their characteristics, and identified labeled and unlabeled image sets. The aim of this study was to gather all publicly available color fundus image datasets into a central catalog.
Affiliation(s)
- Tomasz Krzywicki: Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland
- Piotr Brona: Department of Ophthalmology, Poznan City Hospital, 61-285 Poznań, Poland
- Agnieszka M Zbrzezny: Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland; Faculty of Design, SWPS University of Social Sciences and Humanities, Chodakowska 19/31, 03-815 Warsaw, Poland
- Andrzej E Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, 60-836 Poznań, Poland
|
48
|
Fraser AG, Biasin E, Bijnens B, Bruining N, Caiani EG, Cobbaert K, Davies RH, Gilbert SH, Hovestadt L, Kamenjasevic E, Kwade Z, McGauran G, O'Connor G, Vasey B, Rademakers FE. Artificial intelligence in medical device software and high-risk medical devices - a review of definitions, expert recommendations and regulatory initiatives. Expert Rev Med Devices 2023; 20:467-491. [PMID: 37157833 DOI: 10.1080/17434440.2023.2184685] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
INTRODUCTION Artificial intelligence (AI) encompasses a wide range of algorithms that carry risks when used to support decisions about diagnosis or treatment, so professional and regulatory bodies are recommending how they should be managed. AREAS COVERED AI systems may qualify as standalone medical device software (MDSW) or be embedded within a medical device. Within the European Union (EU), AI software must undergo a conformity assessment procedure to be approved as a medical device. The draft EU Regulation on AI proposes rules that will apply across industry sectors, while for devices the Medical Device Regulation also applies. In the CORE-MD project (Coordinating Research and Evidence for Medical Devices), we have surveyed definitions and summarized initiatives made by professional consensus groups, regulators, and standardization bodies. EXPERT OPINION The level of clinical evidence required should be determined according to each application and to the legal and methodological factors that contribute to risk, including accountability, transparency, and interpretability. EU guidance for MDSW based on international recommendations does not yet describe the clinical evidence needed for medical AI software. Regulators, notified bodies, manufacturers, clinicians, and patients would all benefit from common standards for the clinical evaluation of high-risk AI applications and from transparency of their evidence and performance.
Affiliation(s)
- Alan G Fraser: University Hospital of Wales, School of Medicine, Cardiff University, Heath Park, Cardiff, UK; KU Leuven, Leuven, Belgium
- Bart Bijnens: Engineering Sciences, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Nico Bruining: Department of Clinical and Experimental Information Processing (Digital Cardiology), Erasmus Medical Center, Thoraxcenter, Rotterdam, the Netherlands
- Enrico G Caiani: Department of Electronics, Information and Biomedical Engineering, Politecnico di Milano, Milan, Italy
- Rhodri H Davies: Institute of Cardiovascular Science, University College London, London, UK
- Stephen H Gilbert: Technische Universität Dresden, Else Kröner Fresenius Center for Digital Health, Dresden, Germany
- Baptiste Vasey: Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
|
49
|
Ong CJT, Wong MYZ, Cheong KX, Zhao J, Teo KYC, Tan TE. Optical Coherence Tomography Angiography in Retinal Vascular Disorders. Diagnostics (Basel) 2023; 13:diagnostics13091620. [PMID: 37175011 PMCID: PMC10178415 DOI: 10.3390/diagnostics13091620] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Revised: 04/28/2023] [Accepted: 05/01/2023] [Indexed: 05/15/2023] Open
Abstract
Traditionally, abnormalities of the retinal vasculature and perfusion in retinal vascular disorders, such as diabetic retinopathy and retinal vascular occlusions, have been visualized with dye-based fluorescein angiography (FA). Optical coherence tomography angiography (OCTA) is a newer, alternative modality for imaging the retinal vasculature, which has some advantages over FA, such as its dye-free, non-invasive nature, and depth resolution. The depth resolution of OCTA allows for characterization of the retinal microvasculature in distinct anatomic layers, and commercial OCTA platforms also provide automated quantitative vascular and perfusion metrics. Quantitative and qualitative OCTA analysis in various retinal vascular disorders has facilitated the detection of pre-clinical vascular changes, greater understanding of known clinical signs, and the development of imaging biomarkers to prognosticate and guide treatment. With further technological improvements, such as a greater field of view and better image quality processing algorithms, it is likely that OCTA will play an integral role in the study and management of retinal vascular disorders. Artificial intelligence methods, in particular deep learning, show promise in refining the insights to be gained from the use of OCTA in retinal vascular disorders. This review aims to summarize the current literature on this imaging modality in relation to common retinal vascular disorders.
Affiliation(s)
- Charles Jit Teng Ong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Mark Yu Zheng Wong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Kai Xiong Cheong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Jinzhi Zhao: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Kelvin Yi Chong Teo: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (EYE ACP), Duke-NUS Medical School, Singapore 169857, Singapore
- Tien-En Tan: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program (EYE ACP), Duke-NUS Medical School, Singapore 169857, Singapore
|
50
|
Arnould L, Meriaudeau F, Guenancia C, Germanese C, Delcourt C, Kawasaki R, Cheung CY, Creuzot-Garcher C, Grzybowski A. Using Artificial Intelligence to Analyse the Retinal Vascular Network: The Future of Cardiovascular Risk Assessment Based on Oculomics? A Narrative Review. Ophthalmol Ther 2023; 12:657-674. [PMID: 36562928 PMCID: PMC10011267 DOI: 10.1007/s40123-022-00641-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Accepted: 12/09/2022] [Indexed: 12/24/2022] Open
Abstract
The healthcare burden of cardiovascular diseases remains a major issue worldwide. Understanding the underlying mechanisms and improving the identification of people with a higher risk profile for systemic vascular disease through noninvasive examinations is crucial. In ophthalmology, retinal vascular network imaging is simple and noninvasive and can provide in vivo information on microvascular structure and health. For more than 10 years, different research teams have been developing software to automatically analyse the retinal vascular network from different imaging techniques (retinal fundus photographs, OCT angiography, adaptive optics, etc.) and to describe the geometric characteristics of its arterial and venous components. The structure of the retinal vessels can thus be considered a witness to systemic vascular status. A new approach called "oculomics", which applies artificial intelligence algorithms to retinal image datasets, has recently increased interest in retinal microvascular biomarkers. Despite the large volume of associated research, the role of retinal biomarkers in the screening, monitoring, or prediction of systemic vascular disease remains uncertain. A PubMed search conducted up to August 2022 yielded relevant peer-reviewed articles based on a set of inclusion criteria. This literature review is intended to summarize the state of the art in oculomics and cardiovascular disease research.
Affiliation(s)
- Louis Arnould: Ophthalmology Department, Dijon University Hospital, 14 Rue Paul Gaffarel, 21079 Dijon CEDEX, France; University of Bordeaux, Inserm, Bordeaux Population Health Research Center, UMR U1219, 33000 Bordeaux, France
- Fabrice Meriaudeau: Laboratory ImViA, IFTIM, Université Bourgogne Franche-Comté, 21078 Dijon, France
- Charles Guenancia: Pathophysiology and Epidemiology of Cerebro-Cardiovascular Diseases (EA 7460), Faculty of Health Sciences, Université de Bourgogne Franche-Comté, Dijon, France; Cardiology Department, Dijon University Hospital, Dijon, France
- Clément Germanese: Ophthalmology Department, Dijon University Hospital, 14 Rue Paul Gaffarel, 21079 Dijon CEDEX, France
- Cécile Delcourt: University of Bordeaux, Inserm, Bordeaux Population Health Research Center, UMR U1219, 33000 Bordeaux, France
- Ryo Kawasaki: Artificial Intelligence Center for Medical Research and Application, Osaka University Hospital, Osaka, Japan
- Carol Y Cheung: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Catherine Creuzot-Garcher: Ophthalmology Department, Dijon University Hospital, 14 Rue Paul Gaffarel, 21079 Dijon CEDEX, France; Centre des Sciences du Goût et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne Franche-Comté, Dijon, France
- Andrzej Grzybowski: Department of Ophthalmology, University of Warmia and Mazury, Olsztyn, Poland; Institute for Research in Ophthalmology, Poznan, Poland
|