1. Chen JS, Copado IA, Vallejos C, Kalaw FGP, Soe P, Cai CX, Toy BC, Borkar D, Sun CQ, Shantha JG, Baxter SL. Variations in Electronic Health Record-Based Definitions of Diabetic Retinopathy Cohorts: A Literature Review and Quantitative Analysis. Ophthalmol Sci 2024;4:100468. [PMID: 38560278; PMCID: PMC10973665; DOI: 10.1016/j.xops.2024.100468]
Abstract
Purpose: Use of the electronic health record (EHR) has motivated the need for data standardization. A gap in knowledge exists regarding variations in existing terminologies for defining diabetic retinopathy (DR) cohorts. This study aimed to review the literature and analyze variations in codified definitions of DR.

Design: Literature review and quantitative analysis.

Subjects: Published manuscripts.

Methods: Four graders reviewed PubMed and Google Scholar for peer-reviewed studies. Studies were included if they used codified definitions of DR (e.g., billing codes). Data elements such as author names, publication year, purpose, data set type, and DR definitions were manually extracted. Each study was reviewed by ≥ 2 authors to validate inclusion eligibility. Quantitative analyses of the codified definitions were then performed to characterize the variation between DR cohort definitions.

Main Outcome Measures: Number of studies included and numeric counts of billing codes used to define codified cohorts.

Results: In total, 43 studies met the inclusion criteria. Half of the included studies used datasets based on structured EHR data (i.e., data registries, institutional EHR review), and half used claims data. All but 1 of the studies used billing codes such as the International Classification of Diseases, 9th or 10th edition (ICD-9 or ICD-10), either alone or in addition to another terminology for defining disease. Of the 27 included studies that used ICD-9 codes and the 20 that used ICD-10 codes, the most commonly used codes pertained to the full spectrum of DR severity. Diabetic retinopathy complications (e.g., vitreous hemorrhage) were also used to define some DR cohorts.

Conclusions: Substantial variations exist among codified definitions of DR cohorts within retrospective studies. Variable definitions may limit the generalizability and reproducibility of retrospective studies. More work is needed to standardize disease cohorts.
Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
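The quantitative step described above, counting how often billing codes appear across cohort definitions and comparing definitions pairwise, can be sketched in a few lines. This is an illustrative example only: the study names and ICD codes below are invented, not drawn from the 43 reviewed studies.

```python
from collections import Counter

# Hypothetical cohort definitions (codes and studies invented for illustration):
# each study's DR cohort is represented by the set of billing codes it used.
definitions = {
    "study_a": {"E11.319", "E11.329", "H35.81"},   # ICD-10 codes (illustrative)
    "study_b": {"E11.319", "E11.351", "362.01"},   # mixes ICD-10 and ICD-9
    "study_c": {"362.01", "362.02"},               # ICD-9 only
}

# How often each code appears across the definitions.
code_counts = Counter(code for codes in definitions.values() for code in codes)

def jaccard(a: set, b: set) -> float:
    """Overlap between two cohort definitions (1.0 = identical code sets)."""
    return len(a & b) / len(a | b)

overlap_ab = jaccard(definitions["study_a"], definitions["study_b"])
print(round(overlap_ab, 2))  # 1 shared code of 5 distinct codes -> 0.2
```

Low pairwise Jaccard overlap across many definition pairs is one simple way to make the "substantial variation" finding concrete.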
Affiliation(s)
- Jimmy S Chen
  - Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
  - UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Ivan A Copado
  - Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
  - UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Cecilia Vallejos
  - Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
  - UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Fritz Gerald P Kalaw
  - Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
  - UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Priyanka Soe
  - Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
  - UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Cindy X Cai
  - Wilmer Eye Institute, Johns Hopkins School of Medicine, Baltimore, Maryland
- Brian C Toy
  - Department of Ophthalmology, Roski Eye Institute, Keck School of Medicine, University of Southern California, Los Angeles, California
- Durga Borkar
  - Department of Ophthalmology, Duke Eye Center, Duke University, Durham, North Carolina
- Catherine Q Sun
  - F.I. Proctor Foundation, University of California San Francisco, San Francisco, California
  - Department of Ophthalmology, University of California San Francisco, San Francisco, California
- Jessica G Shantha
  - F.I. Proctor Foundation, University of California San Francisco, San Francisco, California
  - Department of Ophthalmology, University of California San Francisco, San Francisco, California
- Sally L Baxter
  - Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
  - UCSD Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
2. Mohammadzadeh V, Wu S, Besharati S, Davis T, Vepa A, Morales E, Edalati K, Rafiee M, Martinyan A, Zhang D, Scalzo F, Caprioli J, Nouri-Mahdavi K. Prediction of Visual Field Progression with Baseline and Longitudinal Structural Measurements Using Deep Learning. Am J Ophthalmol 2024;262:141-152. [PMID: 38354971; DOI: 10.1016/j.ajo.2024.02.007]
Abstract
PURPOSE: Identifying glaucoma patients at high risk of progression based on widely available structural data is an unmet need in clinical practice. We tested the hypothesis that baseline or serial structural measures can predict visual field (VF) progression with deep learning (DL).

DESIGN: Development of a DL algorithm to predict VF progression.

METHODS: 3,079 eyes (1,765 patients) with various types of glaucoma, ≥5 VFs, and ≥3 years of follow-up from a tertiary academic center were included. Serial VF mean deviation (MD) rates of change were estimated with linear regression. VF progression was defined as a negative MD slope with p < 0.05. A Siamese neural network with a ResNet-152 backbone pretrained on ImageNet was designed to predict VF progression using serial optic disc photographs (ODPs) and baseline retinal nerve fiber layer (RNFL) thickness. We tested the model on a separate dataset (427 eyes) with RNFL data from a different OCT device. The main outcome measure was the area under the ROC curve (AUC).

RESULTS: Baseline average (SD) MD was −3.4 (4.9) dB. VF progression was detected in 900 eyes (29%). The AUC (95% CI) for the model incorporating baseline ODPs and RNFL thickness was 0.813 (0.757-0.869). After adding the second and third ODPs, the AUC increased to 0.860 and 0.894, respectively (p < 0.027). This model also had the highest AUC (0.911) for predicting fast progression (MD rate < −1.0 dB/year). The model's performance was similar when applied to the second dataset using RNFL data from another OCT device (AUC = 0.893; 0.837-0.948).

CONCLUSIONS: The DL model predicted VF progression with clinically relevant accuracy using baseline RNFL thickness and serial ODPs, and could be implemented as a clinical tool after further validation.
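The progression label used above comes from an ordinary least-squares fit of MD against follow-up time. A minimal sketch (not the authors' code; the exam dates and MD values are invented, and the study's additional p < 0.05 significance requirement is omitted for brevity, checking only the slope's sign):

```python
def md_slope(years: list[float], md_db: list[float]) -> float:
    """Least-squares slope of mean deviation (dB) against follow-up time (years)."""
    n = len(years)
    mean_t = sum(years) / n
    mean_md = sum(md_db) / n
    num = sum((t - mean_t) * (m - mean_md) for t, m in zip(years, md_db))
    den = sum((t - mean_t) ** 2 for t in years)
    return num / den

# Five visual fields over 4 years, worsening by roughly 0.5 dB/year
# (illustrative values, consistent with the >=5 VFs / >=3 years criteria).
t = [0.0, 1.0, 2.0, 3.0, 4.0]
md = [-3.0, -3.5, -4.1, -4.4, -5.0]
slope = md_slope(t, md)
progressing = slope < 0  # study definition also required p < 0.05
print(round(slope, 2))   # -0.49
```

A slope this far below −0.25 dB/year would typically be read as meaningful worsening; the study's "fast progression" category required a rate worse than −1.0 dB/year.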
Affiliation(s)
- Vahid Mohammadzadeh
  - Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Sean Wu
  - Department of Computer Science, Pepperdine University, Malibu, California, USA
- Sajad Besharati
  - Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Tyler Davis
  - Department of Computer Science, University of California Los Angeles, Los Angeles, California, USA
- Arvind Vepa
  - Department of Computer Science, University of California Los Angeles, Los Angeles, California, USA
- Esteban Morales
  - Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Kiumars Edalati
  - Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Mahshad Rafiee
  - Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Arthur Martinyan
  - Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- David Zhang
  - Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Fabien Scalzo
  - Department of Computer Science, Pepperdine University, Malibu, California, USA
  - Department of Computer Science, University of California Los Angeles, Los Angeles, California, USA
- Joseph Caprioli
  - Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Kouros Nouri-Mahdavi
  - Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
3. Salloch S, Eriksen A. What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids. Am J Bioeth 2024:1-12. [PMID: 38767971; DOI: 10.1080/15265161.2024.2353800]
Abstract
Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as "human in the loop" or "meaningful human control" are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion of how to interpret the role of the "human in the loop" and to overcome the perspective of rivaling ethical principles in the evaluation of AI in health care. We argue that patients should be perceived as "fellow workers" and epistemic partners in the interpretation of ML_CDSS outputs. We further highlight that a meaningful process of integrating (rather than weighing and balancing) ethical principles is most appropriate in the evaluation of medical AI.
4. Serikbaeva A, Li Y, Ma S, Yi D, Kazlauskas A. "Resilience to diabetic retinopathy". Prog Retin Eye Res 2024:101271. [PMID: 38740254; DOI: 10.1016/j.preteyeres.2024.101271]
Abstract
Chronic elevation of blood glucose at first causes relatively minor changes to the neural and vascular components of the retina. As the duration of hyperglycemia persists, the nature and extent of damage increases and becomes readily detectable. While this second, overt manifestation of diabetic retinopathy (DR) has been studied extensively, what prevents maximal damage from the very start of hyperglycemia remains largely unexplored. Recent studies indicate that diabetes (DM) engages mitochondria-based defense during the retinopathy-resistant phase, and thereby enables the retina to remain healthy in the face of hyperglycemia. Such resilience is transient, and its deterioration results in progressive accumulation of retinal damage. The concepts that co-emerge with these discoveries set the stage for novel intellectual and therapeutic opportunities within the DR field. Identification of biomarkers and mediators of protection from DM-mediated damage will enable development of resilience-based therapies that will indefinitely delay the onset of DR.
Affiliation(s)
- Anara Serikbaeva
  - Department of Physiology and Biophysics, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612
- Yanliang Li
  - Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612
- Simon Ma
  - Department of Bioengineering, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612
- Darvin Yi
  - Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612
  - Department of Bioengineering, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612
- Andrius Kazlauskas
  - Department of Physiology and Biophysics, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612
  - Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, 1905 W Taylor St, Chicago, IL 60612
5. Kang D, Wu H, Yuan L, Shi Y, Jin K, Grzybowski A. A Beginner's Guide to Artificial Intelligence for Ophthalmologists. Ophthalmol Ther 2024. [PMID: 38734807; DOI: 10.1007/s40123-024-00958-3]
Abstract
The integration of artificial intelligence (AI) in ophthalmology has promoted the development of the discipline, offering opportunities for enhancing diagnostic accuracy, patient care, and treatment outcomes. This paper aims to provide a foundational understanding of AI applications in ophthalmology, with a focus on interpreting studies related to AI-driven diagnostics. The core of our discussion is to explore various AI methods, including deep learning (DL) frameworks for detecting and quantifying ophthalmic features in imaging data, as well as using transfer learning for effective model training in limited datasets. The paper highlights the importance of high-quality, diverse datasets for training AI models and the need for transparent reporting of methodologies to ensure reproducibility and reliability in AI studies. Furthermore, we address the clinical implications of AI diagnostics, emphasizing the balance between minimizing false negatives to avoid missed diagnoses and reducing false positives to prevent unnecessary interventions. The paper also discusses the ethical considerations and potential biases in AI models, underscoring the importance of continuous monitoring and improvement of AI systems in clinical settings. In conclusion, this paper serves as a primer for ophthalmologists seeking to understand the basics of AI in their field, guiding them through the critical aspects of interpreting AI studies and the practical considerations for integrating AI into clinical practice.
Affiliation(s)
- Daohuan Kang
  - Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Hongkang Wu
  - Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Lu Yuan
  - Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Yu Shi
  - Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
  - Zhejiang University School of Medicine, Hangzhou, China
- Kai Jin
  - Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Andrzej Grzybowski
  - Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
6. Upadhyaya DP, Shaikh AG, Cakir GB, Prantzalos K, Golnari P, Ghasia FF, Sahoo SS. A 360º View for Large Language Models: Early Detection of Amblyopia in Children using Multi-View Eye Movement Recordings. medRxiv 2024:2024.05.03.24306688. [PMID: 38765973; PMCID: PMC11100845; DOI: 10.1101/2024.05.03.24306688]
Abstract
Amblyopia is a neurodevelopmental visual disorder that affects approximately 3-5% of children globally and can lead to vision loss if not diagnosed and treated early. Traditional diagnostic methods, which rely on subjective assessments and expert interpretation of eye movement recordings, present challenges in resource-limited eye care centers. This study introduces a new approach that integrates the Gemini large language model (LLM) with eye-tracking data to develop a classification tool for diagnosing patients with amblyopia. The study demonstrates that: (1) LLMs can be successfully applied to the analysis of fixation eye movement data to diagnose patients with amblyopia; and (2) input of medical subject matter expertise, introduced in this study in the form of medical expert augmented generation (MEAG), is an effective adaptation of the generic retrieval-augmented generation (RAG) approach for medical applications of LLMs. The study introduces a new multi-view prompting framework for ophthalmology applications that incorporates fine-grained feedback from pediatric ophthalmologists together with in-context learning, reporting an accuracy of 80% in diagnosing patients with amblyopia. In addition to the binary classification task, the classification tool generalizes to specific subpopulations of amblyopic patients based on severity of amblyopia, type of amblyopia, and the presence or absence of nystagmus. The model reports an accuracy of: (1) 83% in classifying patients with moderate or severe amblyopia; (2) 81% in classifying patients with mild or treated amblyopia; and (3) 85% in classifying patients with nystagmus. To the best of our knowledge, this is the first study to define a multi-view prompting framework with MEAG for analyzing eye-tracking data in the diagnosis of amblyopia.
7. Elangovan K, Lim G, Ting D. A comparative study of an on premise AutoML solution for medical image classification. Sci Rep 2024;14:10483. [PMID: 38714764; PMCID: PMC11076477; DOI: 10.1038/s41598-024-60429-4]
Abstract
Automated machine learning (AutoML) allows for the simplified application of machine learning to real-world problems, by the implicit handling of necessary steps such as data pre-processing, feature engineering, model selection and hyperparameter optimization. This has encouraged its use in medical applications such as imaging. However, the impact of common parameter choices such as the number of trials allowed, and the resolution of the input images, has not been comprehensively explored in existing literature. We therefore benchmark AutoKeras (AK), an open-source AutoML framework, against several bespoke deep learning architectures, on five public medical datasets representing a wide range of imaging modalities. It was found that AK could outperform the bespoke models in general, although at the cost of increased training time. Moreover, our experiments suggest that a large number of trials and higher resolutions may not be necessary for optimal performance to be achieved.
Affiliation(s)
- Kabilan Elangovan
  - Artificial Intelligence and Digital Health Research Group, Singapore Eye Research Institute, Singapore, Singapore
  - Artificial Intelligence Office, Singapore Health Service, Singapore, Singapore
- Gilbert Lim
  - Artificial Intelligence and Digital Health Research Group, Singapore Eye Research Institute, Singapore, Singapore
  - Artificial Intelligence Office, Singapore Health Service, Singapore, Singapore
  - Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Daniel Ting
  - Artificial Intelligence and Digital Health Research Group, Singapore Eye Research Institute, Singapore, Singapore
  - Artificial Intelligence Office, Singapore Health Service, Singapore, Singapore
  - Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
  - Singapore National Eye Centre, Singapore General Hospital, 11 Third Hospital Avenue, Singapore, 168751, Singapore
  - Byers Eye Institute, Stanford University, Stanford, USA
8. Zago Ribeiro L, Nakayama LF, Malerbi FK, Regatieri CVS. Automated machine learning model for fundus image classification by health-care professionals with no coding experience. Sci Rep 2024;14:10395. [PMID: 38710726; PMCID: PMC11074250; DOI: 10.1038/s41598-024-60807-y]
Abstract
This study assessed the feasibility of code-free deep learning (CFDL) platforms for predicting binary outcomes from fundus images in ophthalmology, evaluating two distinct online platforms (Google Vertex and Amazon Rekognition) on two distinct datasets. Two publicly available datasets, Messidor-2 and BRSET, were utilized for model development; Messidor-2 consists of fundus photographs from diabetic patients, and BRSET is a multi-label dataset. The CFDL platforms were used to create deep learning models, with no preprocessing of the images, by a single ophthalmologist without coding expertise. The performance metrics employed to evaluate the models were F1 score, area under the curve (AUC), precision, and recall. The performance metrics for referable diabetic retinopathy and macular edema were above 0.9 for both tasks on both CFDL platforms. The Google Vertex models demonstrated superior performance compared with the Amazon models, with the BRSET dataset achieving the highest accuracy (AUC of 0.994). Multi-classification tasks using only BRSET achieved similar overall performance between platforms, with Google Vertex achieving an AUC of 0.994 for laterality, 0.942 for age grouping, 0.779 for genetic sex identification, 0.857 for optic, and 0.837 for normality. The study demonstrates the feasibility of using automated machine learning platforms to predict binary outcomes from fundus images in ophthalmology. It highlights the high accuracy achieved by the models in some tasks and the potential of CFDL as an entry-friendly platform for ophthalmologists to familiarize themselves with machine learning concepts.
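The metrics reported above (precision, recall, F1) all derive from a binary confusion matrix. A minimal sketch of that computation; the counts below are invented for illustration, not taken from the study:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall, and F1 from true/false positive and false negative counts."""
    precision = tp / (tp + fp)           # of predicted positives, fraction correct
    recall = tp / (tp + fn)              # of actual positives, fraction found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# e.g. 90 true positives, 10 false positives, 10 false negatives (synthetic)
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=10)
print(p, r, round(f1, 2))  # 0.9 0.9 0.9
```

Unlike accuracy, none of these three metrics uses the true-negative count, which is why they are preferred for imbalanced screening datasets like referable-DR detection.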
Affiliation(s)
- Lucas Zago Ribeiro
  - Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, SP, Brazil
- Luis Filipe Nakayama
  - Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, SP, Brazil
  - Massachusetts Institute of Technology, Institute for Medical Engineering and Science, Cambridge, MA, USA
- Fernando Korn Malerbi
  - Department of Ophthalmology and Visual Sciences, Federal University of São Paulo, São Paulo, SP, Brazil
9. Qutieshat A, Al Rusheidi A, Al Ghammari S, Alarabi A, Salem A, Zelihic M. Comparative analysis of diagnostic accuracy in endodontic assessments: dental students vs. artificial intelligence. Diagnosis (Berl) 2024. [PMID: 38696271; DOI: 10.1515/dx-2024-0034]
Abstract
OBJECTIVES: This study evaluates the comparative diagnostic accuracy of dental students and artificial intelligence (AI), specifically a modified ChatGPT 4, in endodontic assessments of pulpal and apical conditions. The findings are intended to offer insights into the potential role of AI in augmenting dental education.

METHODS: Involving 109 dental students divided into junior (54) and senior (55) groups, the study compared their diagnostic accuracy against ChatGPT's across seven clinical scenarios. Juniors had access to American Association of Endodontists (AAE) terminology, while seniors relied on prior knowledge. Accuracy was measured against a gold standard set by experienced endodontists, using statistical analysis including Kruskal-Wallis and Dwass-Steel-Critchlow-Fligner tests.

RESULTS: ChatGPT achieved significantly higher accuracy (99.0%) than seniors (79.7%) and juniors (77.0%). Median accuracy was 100.0% for ChatGPT, 85.7% for seniors, and 82.1% for juniors. Statistical tests indicated significant differences between ChatGPT and both student groups (p<0.001), with no notable difference between the student cohorts.

CONCLUSIONS: The study reveals AI's capability to outperform dental students in diagnostic accuracy in endodontic assessments. This underscores AI's potential as a reference tool that students could utilize to enhance their understanding and diagnostic skills. Nevertheless, the potential for overreliance on AI, which may affect the development of critical analytical and decision-making abilities, necessitates a balanced integration of AI with human expertise and clinical judgement in dental education. Future research is essential to navigate the ethical and legal frameworks for incorporating AI tools such as ChatGPT into dental education and clinical practice effectively.
Affiliation(s)
- Abubaker Qutieshat
  - Adult Restorative Dentistry, Oman Dental College, Muscat, Oman
  - Restorative Dentistry, Dundee Dental Hospital and School, University of Dundee, Dundee, UK
- Abdurahman Salem
  - Dental Technology, School of Health & Society, University of Bolton, Greater Manchester, UK
- Maja Zelihic
  - Forbes School of Business and Technology, University of Arizona Global Campus, Chandler, AZ, USA
10. Yao H, Wu Z, Gao SS, Guymer RH, Steffen V, Chen H, Hejrati M, Zhang M. Deep Learning Approaches for Detecting of Nascent Geographic Atrophy in Age-Related Macular Degeneration. Ophthalmol Sci 2024;4:100428. [PMID: 38284101; PMCID: PMC10818248; DOI: 10.1016/j.xops.2023.100428]
Abstract
Purpose: Nascent geographic atrophy (nGA) refers to specific features seen on OCT B-scans that are strongly associated with the future development of geographic atrophy (GA). This study sought to develop a deep learning model to screen OCT B-scans for nGA warranting further manual review (an artificial intelligence [AI]-assisted approach), and to determine the extent to which the OCT B-scan load requiring manual review could be reduced while maintaining near-perfect nGA detection performance.

Design: Development and evaluation of a deep learning model.

Participants: One thousand eight hundred and eighty-four OCT volume scans (49 B-scans per volume) without neovascular age-related macular degeneration from 280 eyes of 140 participants with bilateral large drusen at baseline, seen at 6-monthly intervals over up to 36 months (40 eyes developed nGA).

Methods: OCT volume and B-scans were labeled for the presence of nGA. Their presence at the volume scan level provided the ground truth for training a deep learning model to identify OCT B-scans that potentially showed nGA requiring manual review. Using a threshold that provided a sensitivity of 0.99, the B-scans identified were assigned the ground truth label with the AI-assisted approach. The performance of this approach for detecting nGA across all visits, or at the visit of nGA onset, was evaluated using fivefold cross-validation.

Main Outcome Measures: Sensitivity for detecting nGA, and proportion of OCT B-scans requiring manual review.

Results: The AI-assisted approach (utilizing outputs from the deep learning model to guide manual review) had a sensitivity of 0.97 (95% confidence interval [CI] = 0.93-1.00) and 0.95 (95% CI = 0.87-1.00) for detecting nGA across all visits and at the visit of nGA onset, respectively, while requiring manual review of only 2.7% and 1.9% of selected OCT B-scans, respectively.

Conclusions: A deep learning model could be used to enable near-perfect detection of nGA onset while reducing the number of OCT B-scans requiring manual review by over 50-fold. This AI-assisted approach shows promise for substantially reducing the current burden of manual review of OCT B-scans to detect this crucial feature that portends the future development of GA.

Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
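The triage logic described above, choosing a score threshold that preserves a target sensitivity and then measuring the residual manual-review burden, can be sketched as follows. This is a hedged illustration only: the scores and labels are synthetic, whereas the study scored OCT B-scans with a deep network and targeted a sensitivity of 0.99.

```python
def threshold_for_sensitivity(scores, labels, target_sens=0.99):
    """Highest threshold whose sensitivity on the positive class meets the target."""
    pos_scores = sorted(s for s, y in zip(scores, labels) if y == 1)
    # Allow at most floor((1 - target) * n_pos) missed positives.
    allowed_misses = int((1 - target_sens) * len(pos_scores))
    return pos_scores[allowed_misses]  # flag everything scoring >= this value

# Synthetic model scores (0..1) and ground-truth nGA labels for 10 B-scans.
scores = [0.05, 0.10, 0.20, 0.30, 0.65, 0.70, 0.80, 0.90, 0.95, 0.99]
labels = [0,    0,    0,    0,    0,    1,    1,    1,    1,    1   ]
thr = threshold_for_sensitivity(scores, labels, target_sens=0.99)
flagged = sum(s >= thr for s in scores)
print(thr, flagged / len(scores))  # 0.7 0.5 -> half the scans sent for review
```

With a well-separated real model, the flagged fraction can fall far below this toy example's 50%; the study reports needing manual review of under 3% of B-scans at a 0.99-sensitivity operating point.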
Affiliation(s)
- Heming Yao
  - gRED Computational Science, Genentech, Inc., South San Francisco, California
- Zhichao Wu
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
  - Ophthalmology Division, Department of Surgery, The University of Melbourne, Melbourne, Victoria, Australia
- Simon S. Gao
  - gRED Computational Science, Genentech, Inc., South San Francisco, California
- Robyn H. Guymer
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
  - Ophthalmology Division, Department of Surgery, The University of Melbourne, Melbourne, Victoria, Australia
- Verena Steffen
  - gRED Computational Science, Genentech, Inc., South San Francisco, California
- Hao Chen
  - gRED Computational Science, Genentech, Inc., South San Francisco, California
- Mohsen Hejrati
  - gRED Computational Science, Genentech, Inc., South San Francisco, California
- Miao Zhang
  - gRED Computational Science, Genentech, Inc., South San Francisco, California
| |
11
Salongcay RP, Aquino LAC, Alog GP, Locaylocay KB, Saunar AV, Peto T, Silva PS. Accuracy of Integrated Artificial Intelligence Grading Using Handheld Retinal Imaging in a Community Diabetic Eye Screening Program. Ophthalmol Sci 2024; 4:100457. [PMID: 38317871 PMCID: PMC10838904 DOI: 10.1016/j.xops.2023.100457] [Received: 07/11/2023] [Revised: 11/08/2023] [Accepted: 12/11/2023] [Indexed: 02/07/2024]
Abstract
Purpose To evaluate mydriatic handheld retinal imaging performance assessed by point-of-care (POC) artificial intelligence (AI) as compared with retinal image graders at a centralized reading center (RC) in identifying diabetic retinopathy (DR) and diabetic macular edema (DME). Design Prospective, comparative study. Subjects Five thousand five hundred eighty-five eyes from 2793 adult patients with diabetes. Methods Point-of-care AI assessment of disc and macular handheld retinal images was compared with RC evaluation of validated 5-field handheld retinal images (disc, macula, superior, inferior, and temporal) in identifying referable DR (refDR; defined as moderate nonproliferative DR [NPDR], or worse, or any level of DME) and vision-threatening DR (vtDR; defined as severe NPDR or worse, or any level of center-involving DME [ciDME]). Reading center evaluation of the 5-field images followed the international DR/DME classification. Sensitivity (SN) and specificity (SP) for ungradable images, refDR, and vtDR were calculated. Main Outcome Measures Agreement for DR and DME; SN and SP for refDR, vtDR, and ungradable images. Results Diabetic retinopathy severity by RC evaluation: no DR, 67.3%; mild NPDR, 9.7%; moderate NPDR, 8.6%; severe NPDR, 4.8%; proliferative DR, 3.8%; and ungradable, 5.8%. Diabetic macular edema severity by RC evaluation was as follows: no DME (80.4%), non-ciDME (7.7%), ciDME (4.4%), and ungradable (7.5%). Referable DR was present in 25.3% and vtDR was present in 17.5% of eyes. Images were ungradable for DR or DME in 7.5% by RC evaluation and 15.4% by AI. There was substantial agreement between AI and RC for refDR (κ = 0.66) and moderate agreement for vtDR (κ = 0.54). The SN/SP of AI grading compared with RC evaluation was 0.86/0.86 for refDR and 0.92/0.80 for vtDR. 
Conclusions This study demonstrates that POC AI following a defined handheld retinal imaging protocol at the time of imaging has SN and SP for refDR that meets the current United States Food and Drug Administration thresholds of 85% and 82.5%, but not for vtDR. Integrating AI at the POC could substantially reduce centralized RC burden and speed information delivery to the patient, allowing more prompt eye care referral. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
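The accuracy and agreement metrics reported above (sensitivity, specificity, and Cohen's kappa) can all be derived from a 2x2 table of AI grades versus reading-center grades. A minimal sketch, using illustrative counts chosen to land near the reported refDR figures rather than the study's actual data:

```python
def sn_sp_kappa(tp, fp, fn, tn):
    """Sensitivity, specificity, and Cohen's kappa from a 2x2 table."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    observed = (tp + tn) / n                       # raw agreement
    # Chance agreement expected from the row/column marginal totals.
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa

# Hypothetical counts: AI grade (rows) vs. reading-center grade (columns).
sn, sp, kappa = sn_sp_kappa(tp=430, fp=210, fn=70, tn=1290)
print(f"SN={sn:.2f}, SP={sp:.2f}, kappa={kappa:.2f}")  # SN=0.86, SP=0.86, kappa=0.66
```

Note that kappa can sit well below raw agreement when, as here, one class dominates: the correction subtracts the agreement expected by chance from the marginals.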
Affiliation(s)
- Recivall P. Salongcay: Centre for Public Health, Queen’s University Belfast, Belfast, United Kingdom; Philippine Eye Research Institute, University of the Philippines, Manila, Philippines; Eye and Vision Institute, The Medical City, Metro Manila, Philippines
- Lizzie Anne C. Aquino: Philippine Eye Research Institute, University of the Philippines, Manila, Philippines
- Glenn P. Alog: Philippine Eye Research Institute, University of the Philippines, Manila, Philippines; Eye and Vision Institute, The Medical City, Metro Manila, Philippines
- Kaye B. Locaylocay: Philippine Eye Research Institute, University of the Philippines, Manila, Philippines; Eye and Vision Institute, The Medical City, Metro Manila, Philippines
- Aileen V. Saunar: Philippine Eye Research Institute, University of the Philippines, Manila, Philippines; Eye and Vision Institute, The Medical City, Metro Manila, Philippines
- Tunde Peto: Centre for Public Health, Queen’s University Belfast, Belfast, United Kingdom
- Paolo S. Silva: Philippine Eye Research Institute, University of the Philippines, Manila, Philippines; Eye and Vision Institute, The Medical City, Metro Manila, Philippines; Beetham Eye Institute, Joslin Diabetes Center, Boston, Massachusetts; Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
12
Omar A, Williams RG, Whelan J, Noble J, Brent MH, Giunta M, Olivier S, Lhor M. Diabetic Disease of the Eye in Canada: Consensus Statements from a Retina Specialist Working Group. Ophthalmol Ther 2024; 13:1071-1102. [PMID: 38526804 DOI: 10.1007/s40123-024-00923-0] [Received: 01/31/2024] [Accepted: 02/29/2024] [Indexed: 03/27/2024] Open
Abstract
Despite advances in systemic care, diabetic disease of the eye (DDE) remains the leading cause of blindness worldwide. There is a critical gap in up-to-date, evidence-based guidance for ophthalmologists in Canada that includes evidence from recent randomized controlled trials. Previous guidance has not always given special consideration to applying treatments and managing DDE in the context of the healthcare system. This consensus statement aims to assist practitioners by providing a spectrum of acceptable opinions on DDE treatment and management from recognized experts in the field. In compiling evidence and generating consensus, a working group of retinal specialists in Canada addressed clinical questions surrounding the four themes of disease, patient, management, and collaboration. The working group reviewed literature representing the highest level of evidence on DDE and shared their opinions on topics surrounding the epidemiology and pathophysiology of diabetic retinopathy and diabetic macular edema; diagnosis and monitoring; considerations around diabetes medication use; strategic considerations for management given systemic comorbidities, ocular comorbidities, and pregnancy; treatment goals and modalities for diabetic macular edema, non-proliferative and proliferative diabetic retinopathy, and retinal detachment; and interdisciplinary collaboration. Ultimately, this work highlighted that the retinal examination in DDE not only informs the treating ophthalmologist but can serve as a global index for disease progression across many tissues of the body. It highlighted further that DDE can be treated regardless of diabetic control, that a systemic approach to patient care will result in the best health outcomes, and that prevention of visual complications requires a multidisciplinary management approach.
Ophthalmologists must tailor their clinical approach to the needs and circumstances of individual patients and work within the realities of their healthcare setting.
Affiliation(s)
- Amer Omar: Medical Retina Institute of Montreal, 2170 René-Lévesque Blvd Ouest, Bureau 101, Montréal, QC, H3H 2T8, Canada
- R Geoff Williams: Calgary Retina Consultants, University of Calgary, Calgary, AB, Canada
- James Whelan: Faculty of Medicine, Memorial University, St. John's, NF, Canada
- Jason Noble: Department of Ophthalmology and Vision Science, University of Toronto, Toronto, ON, Canada
- Michael H Brent: Department of Ophthalmology and Vision Science, University of Toronto, Toronto, ON, Canada
- Michel Giunta: Department of Ophthalmology, University of Sherbrooke, Sherbrooke, QC, Canada
- Sébastien Olivier: Centre Universitaire d'ophtalmologie, Hôpital Maisonneuve-Rosemont, Université de Montréal, Montréal, QC, Canada
- Mustapha Lhor: Medical and Scientific Affairs Ophthalmology, Bayer Inc., Mississauga, ON, Canada
13
Musetti D, Cutolo CA, Bonetto M, Giacomini M, Maggi D, Viviani GL, Gandin I, Traverso CE, Nicolò M. Autonomous artificial intelligence versus teleophthalmology for diabetic retinopathy. Eur J Ophthalmol 2024:11206721241248856. [PMID: 38656241 DOI: 10.1177/11206721241248856] [Indexed: 04/26/2024]
Abstract
Purpose: To assess the role of artificial intelligence (AI)-based automated software for the detection of diabetic retinopathy (DR), compared with the evaluation of digital retinography by two masked retina specialists. Methods: Two hundred one patients (mean age 65 ± 13 years) with type 1 or type 2 diabetes mellitus were included. All patients underwent retinography and spectral-domain optical coherence tomography (SD-OCT, DRI 3D OCT-2000, Topcon) of the macula. The retinal photographs were graded using two validated AI DR screening software packages (EyeArt and IDx-DR) designed to identify more than mild DR. Results: Retinal images of 201 patients were graded. DR (more than mild DR) was detected by the ophthalmologists in 38 (18.9%) patients and by the AI algorithms in 36 patients (with 30 eyes diagnosed by both algorithms). Thirteen (6.5%) and 16 (8%) patients were ungradable by the EyeArt and IDx-DR software, respectively. Both AI software packages showed high sensitivity and specificity for detecting more than mild DR, with no statistically significant difference between them. Conclusions: The comparison between the diagnoses provided by AI-based automated software and the reference clinical diagnosis showed that the software can work at a level of sensitivity similar to that achieved by experts.
Affiliation(s)
- Donatella Musetti: Clinica Oculistica DiNOGMI, Università di Genova, Ospedale Policlinico San Martino IRCCS, Genova, Italy
- Carlo Alberto Cutolo: Clinica Oculistica DiNOGMI, Università di Genova, Ospedale Policlinico San Martino IRCCS, Genova, Italy
- Davide Maggi: Clinica Diabetologica, Università di Genova, Ospedale Policlinico San Martino IRCCS, Genova, Italy
- Giorgio Luciano Viviani: Clinica Diabetologica, Università di Genova, Ospedale Policlinico San Martino IRCCS, Genova, Italy
- Ilaria Gandin: Sciences, Biostatistic Unit, University of Trieste, Italy
- Carlo Enrico Traverso: Clinica Oculistica DiNOGMI, Università di Genova, Ospedale Policlinico San Martino IRCCS, Genova, Italy
- Massimo Nicolò: Clinica Oculistica DiNOGMI, Università di Genova, Ospedale Policlinico San Martino IRCCS, Genova, Italy; Fondazione per la Macula onlus, Genova, Italy
14
Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous 2024; 10:36. [PMID: 38654344 PMCID: PMC11036694 DOI: 10.1186/s40942-024-00554-4] [Received: 03/04/2024] [Accepted: 04/02/2024] [Indexed: 04/25/2024] Open
Abstract
BACKGROUND Applications for artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay in posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged for fundoscopy to accomplish core tasks including segmentation, classification, and prediction. MAIN BODY In this article we provide a review of AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations. SHORT CONCLUSION As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Audrey Yan: Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Amrish Selvam: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Joshua Ong: Michigan Medicine, University of Michigan, Ann Arbor, USA
- Jay Chhablani: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
15
Chang J, Lin BR, Wang TH, Chen CM. Deep learning model for pleural effusion detection via active learning and pseudo-labeling: a multisite study. BMC Med Imaging 2024; 24:92. [PMID: 38641591 PMCID: PMC11027341 DOI: 10.1186/s12880-024-01260-1] [Received: 01/10/2024] [Accepted: 03/26/2024] [Indexed: 04/21/2024] Open
Abstract
BACKGROUND The study aimed to develop and validate a deep learning-based Computer Aided Triage (CADt) algorithm for detecting pleural effusion in chest radiographs using an active learning (AL) framework. This addresses the critical need for a clinical-grade algorithm that can promptly diagnose pleural effusion, which affects approximately 1.5 million people annually in the United States. METHODS In this multisite study, 10,599 chest radiographs from 2006 to 2018 were retrospectively collected from an institution in Taiwan to train the deep learning algorithm. The AL framework substantially reduced the need for expert annotations. For external validation, the algorithm was tested on a multisite dataset of 600 chest radiographs from 22 clinical sites in the United States and Taiwan, which were annotated by three U.S. board-certified radiologists. RESULTS The CADt algorithm demonstrated high effectiveness in identifying pleural effusion, achieving a sensitivity of 0.95 (95% CI: [0.92, 0.97]) and a specificity of 0.97 (95% CI: [0.95, 0.99]). The area under the receiver operating characteristic curve (AUC) was 0.97 (95% DeLong's CI: [0.95, 0.99]). Subgroup analyses showed that the algorithm maintained robust performance across various demographics and clinical settings. CONCLUSION This study presents a novel approach to developing clinical-grade CADt solutions for the diagnosis of pleural effusion. The AL-based CADt algorithm not only achieved high accuracy in detecting pleural effusion but also significantly reduced the annotation workload required of clinical experts. This method enhances the feasibility of employing advanced technological solutions for prompt and accurate diagnosis in medical settings.
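Confidence intervals on a sensitivity estimate like the one above are typically score intervals on the positive cases. The sketch below uses the Wilson interval, one common choice (the abstract does not state which CI method was used), with illustrative counts:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 gives 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Illustrative: 285 of 300 effusion cases detected gives sensitivity 0.95.
lo, hi = wilson_interval(285, 300)
print(f"sensitivity={285/300:.2f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The interval narrows roughly with the square root of the number of positive cases, which is why validation sets with few positives report wide CIs.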
Affiliation(s)
- Joseph Chang: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei 100, Taiwan; EverFortune.AI Co., Ltd, Taichung, Taiwan
- Bo-Ru Lin: The Data Science Degree Program, College of Electrical Engineering and Computer Science, National Taiwan University and Academia Sinica, Taipei, Taiwan
- Ti-Hao Wang: Department of Radiation Oncology, China Medical University Hospital, Taichung, Taiwan; Department of Medicine, China Medical University, Taichung, Taiwan; EverFortune.AI Co., Ltd, Taichung, Taiwan
- Chung-Ming Chen: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei 100, Taiwan
16
Tadayoni R, Massin P, Bonnin S, Magazzeni S, Lay B, Le Guilcher A, Vicaut E, Couturier A, Quellec G; EviRed Investigators. Artificial intelligence-based prediction of diabetic retinopathy evolution (EviRed): protocol for a prospective cohort. BMJ Open 2024; 14:e084574. [PMID: 38626974 PMCID: PMC11029320 DOI: 10.1136/bmjopen-2024-084574] [Received: 01/22/2024] [Accepted: 02/19/2024] [Indexed: 04/19/2024] Open
Abstract
INTRODUCTION An important obstacle in the fight against diabetic retinopathy (DR) is the use of a classification system based on old imaging techniques and insufficient data to accurately predict its evolution. New imaging techniques generate new valuable data, but we lack an adapted classification based on these data. The main objective of the Evaluation Intelligente de la Rétinopathie Diabétique, Intelligent evaluation of DR (EviRed) project is to develop and validate a system assisting the ophthalmologist in decision-making during DR follow-up by improving the prediction of its evolution. METHODS AND ANALYSIS A cohort of up to 5000 patients with diabetes will be recruited from 18 diabetology departments and 14 ophthalmology departments, in public or private hospitals in France and followed for an average of 2 years. Each year, systemic health data as well as ophthalmological data will be collected. Both eyes will be imaged by using different imaging modalities including widefield photography, optical coherence tomography (OCT) and OCT-angiography. The EviRed cohort will be divided into two groups: one group will be randomly selected in each stratum during the inclusion period to be representative of the general diabetic population. Their data will be used for validating the algorithms (validation cohort). The data for the remaining patients (training cohort) will be used to train the algorithms. ETHICS AND DISSEMINATION The study protocol was approved by the French South-West and Overseas Ethics Committee 4 on 28 August 2020 (CPP2020-07-060b/2020-A01725-34/20.06.16.41433). Prior to the start of the study, each patient will provide a written informed consent documenting his or her agreement to participate in the clinical trial. Results of this research will be disseminated in peer-reviewed publications and conference presentations. The database will also be available for further study or development that could benefit patients. 
TRIAL REGISTRATION NUMBER NCT04624737.
Affiliation(s)
- Ramin Tadayoni: Ophthalmology Department, Université Paris Cité, AP-HP, Lariboisiere Hospital, Paris, France; Ophthalmology Department, Adolphe de Rothschild Ophthalmological Foundation, Paris, France
- Pascale Massin: Ophthalmology Department, Université Paris Cité, AP-HP, Lariboisiere Hospital, Paris, France
- Sophie Bonnin: Ophthalmology, Adolphe de Rothschild Ophthalmological Foundation, Paris, France
- Eric Vicaut: Assistance Publique-Hopitaux de Paris, Paris, France
- Aude Couturier: Ophthalmology Department, Université Paris Cité, AP-HP, Lariboisiere Hospital, Paris, France
17
Mares V, Nehemy MB, Bogunovic H, Frank S, Reiter GS, Schmidt-Erfurth U. AI-based support for optical coherence tomography in age-related macular degeneration. Int J Retina Vitreous 2024; 10:31. [PMID: 38589936 PMCID: PMC11000391 DOI: 10.1186/s40942-024-00549-1] [Received: 02/14/2024] [Accepted: 03/16/2024] [Indexed: 04/10/2024] Open
Abstract
Artificial intelligence (AI) has emerged as a transformative technology across various fields, and its applications in the medical domain, particularly in ophthalmology, have gained significant attention. The vast amount of high-resolution image data, such as optical coherence tomography (OCT) images, has been a driving force behind AI growth in this field. Age-related macular degeneration (AMD) is one of the leading causes of blindness in the world, affecting approximately 196 million people worldwide in 2020. Multimodal imaging has long been the gold standard for diagnosing patients with AMD; however, treatment and follow-up in routine disease management are currently driven mainly by OCT imaging. AI-based algorithms have, by virtue of their precision, reproducibility, and speed, the potential to reliably quantify biomarkers, predict disease progression, and assist treatment decisions in clinical routine as well as academic studies. This review paper aims to provide a summary of the current state of AI in AMD, focusing on its applications, challenges, and prospects.
Affiliation(s)
- Virginia Mares: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria; Department of Ophthalmology, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Marcio B Nehemy: Department of Ophthalmology, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Hrvoje Bogunovic: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Sophie Frank: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Gregor S Reiter: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Ursula Schmidt-Erfurth: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
18
Zhang R, Dong L, Fu X, Hua L, Zhou W, Li H, Wu H, Yu C, Li Y, Shi X, Ou Y, Zhang B, Wang B, Ma Z, Luo Y, Yang M, Chang X, Wang Z, Wei W. Trends in the Prevalence of Common Retinal and Optic Nerve Diseases in China: An Artificial Intelligence Based National Screening. Transl Vis Sci Technol 2024; 13:28. [PMID: 38648051 PMCID: PMC11044835 DOI: 10.1167/tvst.13.4.28] [Received: 12/08/2023] [Accepted: 03/07/2024] [Indexed: 04/25/2024] Open
Abstract
Purpose Retinal and optic nerve diseases have become the primary cause of irreversible vision loss and blindness. However, there is still a lack of thorough evaluation regarding their prevalence in China. Methods This artificial intelligence-based national screening study applied a previously developed deep learning algorithm, named the Retinal Artificial Intelligence Diagnosis System (RAIDS). De-identified personal medical records from January 2019 to December 2021 were extracted from 65 examination centers in 19 provinces of China. Crude prevalence and age-sex-adjusted prevalence were calculated by mapping to the standard population in the seventh national census. Results In 2021, the adjusted prevalences of referral possible glaucoma (63.29, 95% confidence interval [CI] = 57.12-68.90 cases per 1000), epiretinal macular membrane (21.84, 95% CI = 15.64-29.22), age-related macular degeneration (13.93, 95% CI = 11.09-17.17), and diabetic retinopathy (11.33, 95% CI = 8.89-13.77) ranked the highest among the 10 diseases studied. Female participants had a significantly higher adjusted prevalence of pathologic myopia, yet a lower adjusted prevalence of diabetic retinopathy, referral possible glaucoma, and hypertensive retinopathy than male participants. From 2019 to 2021, the adjusted prevalence of retinal vein occlusion (0.99, 95% CI = 0.73-1.26 to 1.88, 95% CI = 1.42-2.44), macular hole (0.59, 95% CI = 0.41-0.82 to 1.12, 95% CI = 0.76-1.51), and hypertensive retinopathy (0.53, 95% CI = 0.40-0.67 to 0.77, 95% CI = 0.60-0.95) significantly increased. The prevalence of diabetic retinopathy in participants under 50 years old also significantly increased. Conclusions Retinal and optic nerve diseases are an important public health concern in China. Further well-conceived epidemiological studies are required to validate the observed increased prevalence of diabetic retinopathy, hypertensive retinopathy, retinal vein occlusion, and macular hole nationwide.
Translational Relevance This artificial intelligence system can be a potential tool to monitor the prevalence of major retinal and optic nerve diseases over a wide geographic area.
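Age-sex-adjusted prevalence of the kind reported above is typically obtained by direct standardization: each stratum's crude rate is weighted by that stratum's share of the standard (census) population. A minimal sketch with hypothetical strata and counts, not the study's data:

```python
def adjusted_prevalence(strata):
    """Direct standardization.

    strata: iterable of (cases, screened_n, standard_population_n) tuples,
    one per age-sex stratum.
    """
    total_std = sum(std for _, _, std in strata)
    return sum((cases / n) * (std / total_std) for cases, n, std in strata)

strata = [  # (cases, screened, census count) per stratum -- all illustrative
    (30, 2000, 400_000),  # e.g. men under 50
    (80, 2500, 350_000),  # men 50 and over
    (20, 2200, 420_000),  # women under 50
    (60, 2300, 330_000),  # women 50 and over
]
crude = 1000 * sum(c for c, _, _ in strata) / sum(n for _, n, _ in strata)
adjusted = 1000 * adjusted_prevalence(strata)
print(f"crude={crude:.1f} vs adjusted={adjusted:.1f} cases per 1000")
```

Here the screened sample over-represents the older, higher-rate strata relative to the census, so the adjusted figure comes out below the crude one; the correction runs the other way when the sample skews young.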
Affiliation(s)
- Ruiheng Zhang, Li Dong, Wenda Zhou, Heyan Li, Haotian Wu, Chuyao Yu, Yitong Li, Xuhan Shi, Wenbin Wei: Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xuefei Fu, Yangjie Ou, Bing Zhang, Bin Wang: Beijing Airdoc Technology Co., Ltd., Beijing, China
- Lin Hua: School of Biomedical Engineering, Capital Medical University, Beijing, China
- Zhiqiang Ma, Yuan Luo, Meng Yang, Zhaohui Wang: iKang Guobin Healthcare Group Co., Ltd, Beijing, China
19
Kuwahara T, Hara K, Mizuno N, Haba S, Okuno N, Fukui T, Urata M, Yamamoto Y. Current status of artificial intelligence analysis for the treatment of pancreaticobiliary diseases using endoscopic ultrasonography and endoscopic retrograde cholangiopancreatography. DEN Open 2024; 4:e267. [PMID: 37397344 PMCID: PMC10312781 DOI: 10.1002/deo2.267] [Received: 06/01/2023] [Accepted: 06/18/2023] [Indexed: 07/04/2023]
Abstract
Pancreatic and biliary diseases encompass a range of conditions requiring accurate diagnosis for appropriate treatment strategies. This diagnosis relies heavily on imaging techniques like endoscopic ultrasonography and endoscopic retrograde cholangiopancreatography. Artificial intelligence (AI), including machine learning and deep learning, is becoming integral to medical imaging and diagnostics, such as the detection of colorectal polyps. AI shows great potential in diagnosing pancreatobiliary diseases. Unlike machine learning, which requires feature extraction and selection, deep learning can utilize images directly as input. Accurate evaluation of AI performance is a complex task due to varied terminologies, evaluation methods, and development stages. Essential aspects of AI evaluation involve defining the AI's purpose, choosing appropriate gold standards, deciding on the validation phase, and selecting reliable validation methods. AI, particularly deep learning, is increasingly employed in endoscopic ultrasonography and endoscopic retrograde cholangiopancreatography diagnostics, achieving high accuracy in detecting and classifying various pancreatobiliary diseases. AI often performs better than doctors, even in tasks like differentiating benign from malignant pancreatic tumors, cysts, and subepithelial lesions, identifying gallbladder lesions, assessing endoscopic retrograde cholangiopancreatography difficulty, and evaluating biliary strictures. The potential for AI in diagnosing pancreatobiliary diseases, especially where other modalities have limitations, is considerable. However, a crucial constraint is the need for extensive, high-quality annotated data for AI training. Future advances in AI, such as large language models, promise further applications in the medical field.
Affiliation(s)
- Kazuo Hara
- Department of Gastroenterology, Aichi Cancer Center Hospital, Aichi, Japan
- Nobumasa Mizuno
- Department of Gastroenterology, Aichi Cancer Center Hospital, Aichi, Japan
- Shin Haba
- Department of Gastroenterology, Aichi Cancer Center Hospital, Aichi, Japan
- Nozomi Okuno
- Department of Gastroenterology, Aichi Cancer Center Hospital, Aichi, Japan
- Toshitaka Fukui
- Department of Gastroenterology, Aichi Cancer Center Hospital, Aichi, Japan
- Minako Urata
- Department of Gastroenterology, Aichi Cancer Center Hospital, Aichi, Japan
20
Alnahedh TA, Taha M. Role of Machine Learning and Artificial Intelligence in the Diagnosis and Treatment of Refractive Errors for Enhanced Eye Care: A Systematic Review. Cureus 2024; 16:e57706. [PMID: 38711688] [PMCID: PMC11071623] [DOI: 10.7759/cureus.57706] [Accepted: 04/04/2024]
Abstract
A significant contributor to blindness and visual impairment globally is uncorrected refractive error. To plan effective interventions, eye care professionals must promptly identify people at high risk of developing myopia and monitor disease progression. Artificial intelligence (AI) and machine learning (ML) have enormous potential to improve diagnosis and treatment. This systematic review explores the current state of ML and AI applications in the diagnosis and treatment of refractive errors in optometry. A systematic review and meta-analysis of studies evaluating the diagnostic performance of AI-based tools was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. To find relevant studies on the use of ML or AI in the diagnosis or treatment of refractive errors in optometry, a thorough search was conducted in electronic databases including PubMed, Google Scholar, and Web of Science. The search was limited to studies published between January 2015 and December 2022. The search terms used were "refractive errors," "myopia," "optometry," "machine learning," "ophthalmology," and "artificial intelligence." A total of nine studies met the inclusion criteria and were included in the final analysis. As AI technology progresses, ML is increasingly being used to automate clinical data processing that was formerly labor-intensive. AI models, primarily based on neural networks, demonstrated exceptional efficiency and performance in the analysis of vast medical datasets, rivaling board-certified healthcare professionals. Several studies showed that ML models could support diagnosis and clinical decision-making. Moreover, an ML algorithm predicted future refraction values in patients with myopia. AI and ML models have great potential to improve the diagnosis and treatment of refractive errors in optometry.
Affiliation(s)
- Taghreed A Alnahedh
- Optometry, King Abdullah International Medical Research Center (KAIMRC), National Guard Health Affairs, Riyadh, SAU
- Academic Affairs, King Saud Bin Abdulaziz University for Health Sciences College of Medicine, Riyadh, SAU
- Mohammed Taha
- Ophthalmology, King Saud Bin Abdulaziz University for Health Sciences College of Medicine, Riyadh, SAU
21
Mihalache A, Huang RS, Popovic MM, Patil NS, Pandya BU, Shor R, Pereira A, Kwok JM, Yan P, Wong DT, Kertes PJ, Muni RH. Accuracy of an Artificial Intelligence Chatbot's Interpretation of Clinical Ophthalmic Images. JAMA Ophthalmol 2024; 142:321-326. [PMID: 38421670] [PMCID: PMC10905373] [DOI: 10.1001/jamaophthalmol.2024.0017] [Received: 10/27/2023] [Accepted: 12/19/2023]
Abstract
Importance Ophthalmology is reliant on effective interpretation of multimodal imaging to ensure diagnostic accuracy. The new ability of ChatGPT-4 (OpenAI) to interpret ophthalmic images has not yet been explored. Objective To evaluate the performance of the novel release of an artificial intelligence chatbot that is capable of processing imaging data. Design, Setting, and Participants This cross-sectional study used a publicly available dataset of ophthalmic cases from OCTCases, a medical education platform based out of the Department of Ophthalmology and Vision Sciences at the University of Toronto, with accompanying clinical multimodal imaging and multiple-choice questions. Across 137 available cases, 136 contained multiple-choice questions (99%). Exposures The chatbot answered questions requiring multimodal input from October 16 to October 23, 2023. Main Outcomes and Measures The primary outcome was the accuracy of the chatbot in answering multiple-choice questions pertaining to image recognition in ophthalmic cases, measured as the proportion of correct responses. Chi-square tests were conducted to compare the proportion of correct responses across different ophthalmic subspecialties. Results A total of 429 multiple-choice questions from 136 ophthalmic cases and 448 images were included in the analysis. The chatbot answered 299 of the 429 multiple-choice questions correctly across all cases (70%). The chatbot's performance was better on retina questions than neuro-ophthalmology questions (77% vs 58%; difference = 18%; 95% CI, 7.5%-29.4%; χ2(1) = 11.4; P < .001). The chatbot achieved a better performance on non-image-based questions compared with image-based questions (82% vs 65%; difference = 17%; 95% CI, 7.8%-25.1%; χ2(1) = 12.2; P < .001). The chatbot performed best on questions in the retina category (77% correct) and poorest in the neuro-ophthalmology category (58% correct). The chatbot demonstrated intermediate performance on questions from the ocular oncology (72% correct), pediatric ophthalmology (68% correct), uveitis (67% correct), and glaucoma (61% correct) categories. Conclusions and Relevance In this study, the recent version of the chatbot accurately responded to approximately two-thirds of multiple-choice questions pertaining to ophthalmic cases based on imaging interpretation. The multimodal chatbot performed better on questions that did not rely on the interpretation of imaging modalities. As the use of multimodal chatbots becomes increasingly widespread, it is imperative to stress their appropriate integration within medical contexts.
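The subspecialty comparisons above are two-proportion chi-square tests. A minimal sketch of the same computation, using hypothetical per-category counts (the abstract reports only percentages, not the per-category denominators, so the exact χ2 value will differ from the published 11.4):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts chosen to match the reported ~77% vs ~58% accuracy;
# the paper does not state how many questions fell in each category.
retina_correct, retina_total = 100, 130
neuro_correct, neuro_total = 46, 79

# 2x2 contingency table: correct vs incorrect per subspecialty
table = [
    [retina_correct, retina_total - retina_correct],
    [neuro_correct, neuro_total - neuro_correct],
]
chi2, p, dof, _ = chi2_contingency(table, correction=False)
diff = retina_correct / retina_total - neuro_correct / neuro_total
print(f"difference = {diff:.1%}, chi2({dof}) = {chi2:.1f}, p = {p:.4f}")
```

With denominators this size, a roughly 19-point accuracy gap is comfortably significant at the 1% level, consistent in direction with the published result.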
Affiliation(s)
- Andrew Mihalache
- Temerty School of Medicine, University of Toronto, Toronto, Ontario, Canada
- Ryan S. Huang
- Temerty School of Medicine, University of Toronto, Toronto, Ontario, Canada
- Marko M. Popovic
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Nikhil S. Patil
- Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Ontario, Canada
- Bhadra U. Pandya
- Temerty School of Medicine, University of Toronto, Toronto, Ontario, Canada
- Reut Shor
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Austin Pereira
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Jason M. Kwok
- Temerty School of Medicine, University of Toronto, Toronto, Ontario, Canada
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Peng Yan
- Temerty School of Medicine, University of Toronto, Toronto, Ontario, Canada
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- David T. Wong
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Department of Ophthalmology, St Michael’s Hospital/Unity Health Toronto, Toronto, Ontario, Canada
- Peter J. Kertes
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- John and Liz Tory Eye Centre, Sunnybrook Health Science Centre, Toronto, Ontario, Canada
- Rajeev H. Muni
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Department of Ophthalmology, St Michael’s Hospital/Unity Health Toronto, Toronto, Ontario, Canada
22
Rao DP, Shroff S, Savoy FM, S S, Hsu CK, Negiloni K, Pradhan ZS, P V J, Sivaraman A, Rao HL. Evaluation of an offline, artificial intelligence system for referable glaucoma screening using a smartphone-based fundus camera: a prospective study. Eye (Lond) 2024; 38:1104-1111. [PMID: 38092938] [PMCID: PMC11009383] [DOI: 10.1038/s41433-023-02826-z] [Received: 10/11/2022] [Revised: 10/27/2023] [Accepted: 11/01/2023]
Abstract
BACKGROUND/OBJECTIVES An affordable and scalable screening model is critical for detecting undiagnosed glaucoma. This study evaluated the performance of an offline, smartphone-based AI system for the detection of referable glaucoma against two benchmarks: specialist diagnosis following a full glaucoma workup and consensus image grading. SUBJECTS/METHODS This prospective study (tertiary glaucoma centre, India) included 243 subjects with varying severity of glaucoma and a control group without glaucoma. Disc-centred images were captured using a validated smartphone-based fundus camera, analysed by the AI system, and graded by specialists. The diagnostic ability of the AI in detecting referable glaucoma (confirmed glaucoma) versus no referable glaucoma (suspects and no glaucoma) was evaluated against both the final diagnosis (comprehensive glaucoma workup) and the majority image grading by glaucoma specialists (pre-defined criteria). RESULTS The AI system demonstrated a sensitivity and specificity of 93.7% (95% CI: 87.6-96.9%) and 85.6% (95% CI: 78.6-90.6%), respectively, in the detection of referable glaucoma when compared against the final diagnosis following full glaucoma workup. The true negative rate in definite non-glaucoma cases was 94.7% (95% CI: 87.2-97.9%). Among the false negatives were 4 early and 3 moderate glaucoma cases. When the same set of images provided to the AI was also provided to the specialists for image grading, specialists detected 60% (67/111) of true glaucoma cases versus a detection rate of 94% (104/111) by the AI. CONCLUSION The AI tool showed robust performance when compared against a stringent benchmark. It had modest over-referral of normal subjects despite being challenged with fundus images alone. The next step involves a population-level assessment.
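The reported sensitivity of 93.7% corresponds to the 104 of 111 true glaucoma cases the AI detected. A Wilson score interval on those counts reproduces the published 87.6-96.9% interval closely (a sketch; the paper does not state which CI method was used):

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# From the abstract: the AI detected 104 of 111 confirmed glaucoma cases.
sens = 104 / 111
lo, hi = wilson_ci(104, 111)
print(f"sensitivity = {sens:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

Running this gives approximately 87.6%-96.9%, matching the interval reported in the abstract.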
Affiliation(s)
- Sujani Shroff
- Narayana Nethralaya Eye Hospital, Glaucoma Services, Bangalore, India
- Shruthi S
- Narayana Nethralaya Eye Hospital, Glaucoma Services, Bangalore, India
- Chao-Kai Hsu
- Medios Technologies Pte Ltd, Singapore, Singapore
- Jayasree P V
- Narayana Nethralaya Eye Hospital, Glaucoma Services, Bangalore, India
- Harsha L Rao
- Narayana Nethralaya Eye Hospital, Glaucoma Services, Bangalore, India
23
Shahamatdar S, Saeed-Vafa D, Linsley D, Khalil F, Lovinger K, Li L, McLeod HT, Ramachandran S, Serre T. Deceptive learning in histopathology. Histopathology 2024. [PMID: 38556922] [DOI: 10.1111/his.15180] [Received: 02/01/2024] [Revised: 03/08/2024] [Accepted: 03/10/2024]
Abstract
AIMS Deep learning holds immense potential for histopathology, automating tasks that are simple for expert pathologists and revealing novel biology for tasks that were previously considered difficult or impossible to solve by eye alone. However, the extent to which the visual strategies learned by deep learning models in histopathological analysis are trustworthy or not has yet to be systematically analysed. Here, we systematically evaluate deep neural networks (DNNs) trained for histopathological analysis in order to understand if their learned strategies are trustworthy or deceptive. METHODS AND RESULTS We trained a variety of DNNs on a novel data set of 221 whole-slide images (WSIs) from lung adenocarcinoma patients, and evaluated their effectiveness at (1) molecular profiling of KRAS versus EGFR mutations, (2) determining the primary tissue of a tumour and (3) tumour detection. While DNNs achieved above-chance performance on molecular profiling, they did so by exploiting correlations between histological subtypes and mutations, and failed to generalise to a challenging test set obtained through laser capture microdissection (LCM). In contrast, DNNs learned robust and trustworthy strategies for determining the primary tissue of a tumour as well as detecting and localising tumours in tissue. CONCLUSIONS Our work demonstrates that DNNs hold immense promise for aiding pathologists in analysing tissue. However, they are also capable of achieving seemingly strong performance by learning deceptive strategies that leverage spurious correlations, and are ultimately unsuitable for research or clinical work. The framework we propose for model evaluation and interpretation is an important step towards developing reliable automated systems for histopathological analysis.
Affiliation(s)
- Sahar Shahamatdar
- Center for Computational Molecular Biology, Brown University, Providence, RI, USA
- The Warren Alpert Medical School, Brown University, Providence, RI, USA
- Daryoush Saeed-Vafa
- Department of Anatomic Pathology, H. Lee Moffitt Cancer and Research Institute, Tampa, FL, USA
- Drew Linsley
- Carney Institute for Brain Science, Brown University, Providence, RI, USA
- Department of Cognitive Linguistic and Psychological Sciences, Brown University, Providence, RI, USA
- Farah Khalil
- Department of Anatomic Pathology, H. Lee Moffitt Cancer and Research Institute, Tampa, FL, USA
- Katherine Lovinger
- Department of Molecular Biology, H. Lee Moffitt Cancer and Research Institute, Tampa, FL, USA
- Lester Li
- University of Rochester, Rochester, NY, USA
- Sohini Ramachandran
- Center for Computational Molecular Biology, Brown University, Providence, RI, USA
- Department of Ecology, Evolution and Organismal Biology, Brown University, Providence, RI, USA
- The Data Science Initiative, Brown University, Providence, RI, USA
- Thomas Serre
- Carney Institute for Brain Science, Brown University, Providence, RI, USA
- Department of Cognitive Linguistic and Psychological Sciences, Brown University, Providence, RI, USA
24
Arai Y, Takahashi H, Takayama T, Yousefi S, Tampo H, Yamashita T, Hasegawa T, Ohgami T, Sonoda S, Tanaka Y, Inoda S, Sakamoto S, Kawashima H, Yanagi Y. Predicting central choroidal thickness from colour fundus photographs using deep learning. PLoS One 2024; 19:e0301467. [PMID: 38551957] [PMCID: PMC10980193] [DOI: 10.1371/journal.pone.0301467] [Received: 11/07/2023] [Accepted: 03/15/2024]
Abstract
The estimation of central choroidal thickness from colour fundus images can improve disease detection. We developed a deep learning method to estimate central choroidal thickness from colour fundus images at a single institution, using independent datasets from other institutions for validation. A total of 2,548 images from patients who underwent same-day optical coherence tomography examination and colour fundus imaging at the outpatient clinic of Jichi Medical University Hospital were retrospectively analysed. For validation, 393 images from three institutions were used. Patients with signs of subretinal haemorrhage, central serous detachment, retinal pigment epithelial detachment, and/or macular oedema were excluded. All other fundus photographs with a visible pigment epithelium were included. The main outcome measure was the standard deviation of 10-fold cross-validation. Validation was performed using the original algorithm and the algorithm after learning based on images from all institutions. The standard deviation of 10-fold cross-validation was 73 μm. The standard deviation for other institutions was reduced by re-learning. We describe the first application and validation of a deep learning approach for the estimation of central choroidal thickness from fundus images. This algorithm is expected to help graders judge choroidal thickening and thinning.
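The headline metric here is the variability of mean absolute error across 10-fold cross-validation. A minimal sketch of that evaluation loop, with a synthetic regression problem and a linear model standing in for the paper's deep network and fundus images (all data and model choices below are illustrative, not the study's):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# Synthetic stand-in: 2,548 "images" reduced to feature vectors, with
# central choroidal thickness (in micrometres) as the regression target.
X = rng.normal(size=(2548, 16))
y = 250 + (X @ rng.normal(size=16)) * 10 + rng.normal(scale=60, size=2548)

maes = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])
    maes.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))

print(f"10-fold MAE: mean = {np.mean(maes):.1f} um, SD = {np.std(maes):.1f} um")
```

Reporting the spread of per-fold MAE, as the paper does, indicates how sensitive the estimate is to the particular train/test split.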
Affiliation(s)
- Yusuke Arai
- Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Hidenori Takahashi
- Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Takuya Takayama
- Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, Tennessee, United States of America
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, Tennessee, United States of America
- Hironobu Tampo
- Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Tetsuya Hasegawa
- Department of Ophthalmology, Saitama Medical Center, Jichi Medical University, Saitama, Japan
- Tomohiro Ohgami
- Department of Ophthalmology, Ibaraki Seinan Medical Center, Ibaraki, Japan
- Shozo Sonoda
- Department of Ophthalmology, Kagoshima University, Kagoshima, Japan
- Yoshiaki Tanaka
- Department of Ophthalmology, Saitama Medical Center, Jichi Medical University, Saitama, Japan
- Satoru Inoda
- Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Shinichi Sakamoto
- Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Hidetoshi Kawashima
- Department of Ophthalmology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Yasuo Yanagi
- Department of Ophthalmology, Yokohama City University, Kanagawa, Japan
- Medical Retina, Singapore Eye Research Institute, Singapore, Singapore
25
Zhao N, Yu L, Fu X, Dai W, Han H, Bai J, Xu J, Hu J, Zhou Q. Application of a Diabetic Foot Smart APP in the measurement of diabetic foot ulcers. Int J Orthop Trauma Nurs 2024; 54:101095. [PMID: 38599150] [DOI: 10.1016/j.ijotn.2024.101095] [Received: 08/01/2023] [Revised: 02/25/2024] [Accepted: 03/05/2024]
Abstract
AIMS We previously developed an intelligent measurement app for diabetic foot ulcers, named the Diabetic Foot Smart APP. This study aimed to validate the APP in the measurement of ulcer area for diabetic foot ulcers (DFU). METHODS We selected 150 DFU images and measured the ulcer areas using three assessment tools: the Smart APP software, the ruler method, and the gold standard Image J software, then compared the measurement results and measurement time of the three tools. Intra-rater and inter-rater reliability were described by the Pearson correlation coefficient, the intra-class correlation coefficient, and the coefficient of variation. RESULTS The Image J software showed a median ulcer area of 4.02 cm2, with a mean measurement time of 66.37 ± 7.95 s. The ruler method showed a median ulcer area of 5.14 cm2, with a mean measurement time of 171.47 ± 46.43 s. The APP software showed a median ulcer area of 3.70 cm2, with a mean measurement time of 38.25 ± 6.81 s. There was a significant difference between the ruler method and the gold standard Image J software (Z = -4.123, p < 0.05), but no significant difference between the APP software and the Image J software (Z = 1.103, p > 0.05). The APP software also showed good inter-rater and intra-rater reliability, with both reaching 0.99. CONCLUSION The Diabetic Foot Smart APP is a fast and reliable measurement tool with high accuracy that can be easily used in clinical practice for measuring ulcer areas of DFU. TRIAL REGISTRATION Chinese clinical trial registration number: ChiCTR2100047210.
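The Z statistics reported for the paired tool-versus-Image J comparisons are consistent with a Wilcoxon signed-rank test. A sketch of that analysis on synthetic paired areas (hypothetical data, not the study's measurements; the ruler arm is given a deliberate systematic overestimate):

```python
import numpy as np
from scipy.stats import wilcoxon, pearsonr

rng = np.random.default_rng(1)
# Synthetic paired areas (cm^2) for 150 ulcer images: Image J as reference,
# a ruler method with ~25% systematic overestimation, and an unbiased app.
imagej = rng.lognormal(mean=1.4, sigma=0.6, size=150)
ruler = imagej * 1.25 + rng.normal(scale=0.3, size=150)
app = imagej + rng.normal(scale=0.3, size=150)

# Paired non-parametric comparisons against the gold standard
stat_ruler, p_ruler = wilcoxon(ruler, imagej)
stat_app, p_app = wilcoxon(app, imagej)
r_app, _ = pearsonr(app, imagej)
print(f"ruler vs Image J p = {p_ruler:.2e}; app vs Image J p = {p_app:.3f}; r = {r_app:.3f}")
```

The biased ruler arm yields a significant paired difference while the unbiased app arm tracks the reference closely, mirroring the pattern the abstract reports.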
Affiliation(s)
- Nan Zhao
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China; Zhengzhou Shuqing Medical College, Henan, 450052, China
- Ling Yu
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China
- Xiaoai Fu
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China
- Weiwei Dai
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China; Department of Stoma Wound Care Center, Xiangya Hospital, Central South University, Changsha, 410008, China
- Huiwu Han
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China; Department of Nursing, Xiangya Hospital, Central South University, Changsha, 410008, China
- Jiaojiao Bai
- Department of Nursing, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Jingcan Xu
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China; Department of Nursing, Xiangya Hospital, Central South University, Changsha, 410008, China
- Jianzhong Hu
- Xiangya Hospital, Central South University, Changsha, 410008, China
- Qiuhong Zhou
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China.
26
Guo M, Higashita R, Lin C, Hu L, Chen W, Li F, Lai GWK, Nguyen A, Sakata R, Okamoto K, Tang B, Xu Y, Fu H, Gao F, Aihara M, Zhang X, Yuan J, Lin S, Leung CKS, Liu J. Crystalline lens nuclear age prediction as a new biomarker of nucleus degeneration. Br J Ophthalmol 2024; 108:513-521. [PMID: 37495263] [DOI: 10.1136/bjo-2023-323176] [Received: 01/03/2023] [Accepted: 05/22/2023]
Abstract
BACKGROUND The crystalline lens is a transparent structure of the eye that focuses light on the retina. With increasing age it becomes cloudy, hard, and dense, causing the lens to gradually lose its function. We aimed to develop a nuclear age predictor to reflect the degeneration of the crystalline lens nucleus. METHODS First, we trained and internally validated the nuclear age predictor with a deep-learning algorithm, using 12 904 anterior segment optical coherence tomography (AS-OCT) images from four diverse Asian and American cohorts: Zhongshan Ophthalmic Center with Machine0 (ZOM0), Tomey Corporation (TOMEY), University of California San Francisco and the Chinese University of Hong Kong. External testing was done on three independent datasets: Tokyo University (TU), ZOM1 and Shenzhen People's Hospital (SPH). We also demonstrate the possibility of detecting nuclear cataracts (NCs) from the nuclear age gap. FINDINGS In the internal validation dataset, nuclear age could be predicted with a mean absolute error (MAE) of 2.570 years (95% CI 1.886 to 2.863). Across the three external testing datasets, the algorithm achieved MAEs of 4.261 years (95% CI 3.391 to 5.094) in TU, 3.920 years (95% CI 3.332 to 4.637) in ZOM1-NonCata and 4.380 years (95% CI 3.730 to 5.061) in SPH-NonCata. The MAEs for NC eyes were 8.490 years (95% CI 7.219 to 9.766) in ZOM1-NC and 9.998 years (95% CI 5.673 to 14.642) in SPH-NC. The nuclear age gap outperformed both ophthalmologists in detecting NCs, with areas under the receiver operating characteristic curves of 0.853 (95% CI 0.787 to 0.917) in ZOM1 and 0.909 (95% CI 0.828 to 0.978) in SPH. INTERPRETATION The nuclear age predictor shows good performance, validating the feasibility of using AS-OCT images as an effective screening tool for nucleus degeneration. Our work also demonstrates the potential use of the nuclear age gap to detect NCs.
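Detecting NCs from the "nuclear age gap" (predicted minus chronological age) is a threshold-free scoring problem evaluated by AUC. A sketch with synthetic gap values (illustrative distributions, not the study's data), using the Mann-Whitney U equivalence to compute the AUC:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
# Synthetic nuclear age gaps: centred near zero for healthy lenses,
# shifted upward for nuclear cataract (NC) eyes, as larger MAEs for
# NC eyes in the abstract suggest.
gap_healthy = rng.normal(loc=0.0, scale=4.0, size=200)
gap_nc = rng.normal(loc=8.0, scale=6.0, size=60)

# AUC equals U / (n1 * n2): the probability a random NC eye scores
# higher than a random healthy eye.
u = mannwhitneyu(gap_nc, gap_healthy).statistic
auc = u / (len(gap_nc) * len(gap_healthy))
print(f"AUC of nuclear age gap for NC detection: {auc:.3f}")
```

The resulting AUC depends entirely on how far the two synthetic distributions are separated; it illustrates the evaluation, not the paper's reported values.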
Affiliation(s)
- Mengjie Guo
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, Guangdong, China
- School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Risa Higashita
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, China
- Tomey Corporation, Nagoya, Aichi, Japan
- Chen Lin
- Shenzhen People's Hospital, Shenzhen, Guangdong, China
- Lingxi Hu
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, Guangdong, China
- Wan Chen
- Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Fei Li
- Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Gilda Wing Ki Lai
- Department of Ophthalmology, The University of Hong Kong, Hong Kong, Hong Kong
- Anwell Nguyen
- Department of Ophthalmology, University of California, San Francisco, California, USA
- Rei Sakata
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Bo Tang
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, Guangdong, China
- Yanwu Xu
- Intelligent Healthcare Unit, Baidu Inc, Beijing, China
- Huazhu Fu
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
- Fei Gao
- School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Makoto Aihara
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Xiulan Zhang
- Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Jin Yuan
- Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Shan Lin
- Department of Ophthalmology, University of California, San Francisco, California, USA
- Glaucoma Center of San Francisco, San Francisco, California, USA
- Christopher Kai-Shun Leung
- Department of Ophthalmology, The University of Hong Kong, Hong Kong, Hong Kong
- Department of Ophthalmology and Visual Sciences, The Chinese University, Hong Kong, Hong Kong
- Jiang Liu
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, Guangdong, China
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, China
- Cixi Institute of Biomedical Engineering, Chinese Academy of Sciences, Cixi, Zhejiang, China
27
Tao BKL, Hua N, Milkovich J, Micieli JA. ChatGPT-3.5 and Bing Chat in ophthalmology: an updated evaluation of performance, readability, and informative sources. Eye (Lond) 2024:10.1038/s41433-024-03037-w. [PMID: 38509182] [DOI: 10.1038/s41433-024-03037-w] [Received: 08/26/2023] [Revised: 03/04/2024] [Accepted: 03/14/2024]
Abstract
BACKGROUND/OBJECTIVES Experimental investigation. Bing Chat's (Microsoft) integration of ChatGPT-4 (OpenAI) has conferred the capability of accessing online data past 2021. We investigated its performance against ChatGPT-3.5 on a multiple-choice ophthalmology exam. SUBJECTS/METHODS In August 2023, ChatGPT-3.5 and Bing Chat were evaluated against 913 questions derived from the American Academy of Ophthalmology's Basic and Clinical Science Course (BCSC) collection. For each response, the sub-topic, performance, Simple Measure of Gobbledygook readability score (measuring the years of education required to understand a given passage), and cited resources were collected. The primary outcomes were the comparative scores between models and, qualitatively, the resources referenced by Bing Chat. Secondary outcomes included performance stratified by response readability, question type (explicit or situational), and BCSC sub-topic. RESULTS Across 913 questions, ChatGPT-3.5 scored 59.69% [95% CI 56.45, 62.94] while Bing Chat scored 73.60% [95% CI 70.69, 76.52]. Both models performed significantly better on explicit than on clinical reasoning questions, and both performed better on general medicine questions than on ophthalmology subsections. Bing Chat referenced 927 online entities and provided at least one citation for 836 of the 913 questions. The use of more reliable (peer-reviewed) sources was associated with a higher likelihood of a correct response. The most-cited resources were eyewiki.aao.org, aao.org, wikipedia.org, and ncbi.nlm.nih.gov. Bing Chat showed significantly better readability than ChatGPT-3.5, averaging a reading level of grade 11.4 [95% CI 7.14, 15.7] versus 12.4 [95% CI 8.77, 16.1], respectively (p-value < 0.0001, ρ = 0.25). CONCLUSIONS The online access, improved readability, and citation feature of Bing Chat confer additional utility for ophthalmology learners. We recommend critical appraisal of cited sources during response interpretation.
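The Simple Measure of Gobbledygook (SMOG) grade used here is computed from the count of polysyllabic words per 30 sentences. A sketch using the standard SMOG formula with a crude vowel-group syllable counter (an approximation; published SMOG tools use dictionary-based syllabification, so exact grades will differ):

```python
import re
from math import sqrt

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count runs of vowel letters (approximation)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    """SMOG index: estimated years of education needed to understand the text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    # Standard SMOG formula, normalised to a 30-sentence sample
    return 1.0430 * sqrt(polysyllables * 30 / len(sentences)) + 3.1291

sample = ("The retina converts light into neural signals. "
          "Ophthalmologists interpret multimodal imaging daily. "
          "Readability influences comprehension of generated answers.")
print(f"SMOG grade: {smog_grade(sample):.1f}")
```

Applied to chatbot responses, a lower SMOG grade means the passage is readable with fewer years of schooling, which is the sense in which Bing Chat's grade 11.4 beats ChatGPT-3.5's 12.4 above.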
Affiliation(s)
- Brendan Ka-Lok Tao
- Faculty of Medicine, The University of British Columbia, 317-2194 Health Sciences Mall, Vancouver, BC, V6T 1Z3, Canada
- Nicholas Hua
- Temerty Faculty of Medicine, University of Toronto, 1 King's College Circle, Toronto, ON, M5S 1A8, Canada
- John Milkovich
- Temerty Faculty of Medicine, University of Toronto, 1 King's College Circle, Toronto, ON, M5S 1A8, Canada
- Jonathan Andrew Micieli
- Temerty Faculty of Medicine, University of Toronto, 1 King's College Circle, Toronto, ON, M5S 1A8, Canada.
- Department of Ophthalmology and Vision Sciences, University of Toronto, 340 College Street, Toronto, ON, M5T 3A9, Canada.
- Division of Neurology, Department of Medicine, University of Toronto, 6 Queen's Park Crescent West, Toronto, ON, M5S 3H2, Canada.
- Kensington Vision and Research Center, 340 College Street, Toronto, ON, M5T 3A9, Canada.
- St. Michael's Hospital, 36 Queen Street East, Toronto, ON, M5B 1W8, Canada.
- Toronto Western Hospital, 399 Bathurst Street, Toronto, ON, M5T 2S8, Canada.
- University Health Network, 190 Elizabeth Street, Toronto, ON, M5G 2C4, Canada.
28
Aravindhan A, Fenwick EK, Chan AWD, Man REK, Tan NC, Wong WT, Soo WF, Lim SW, Wee SYM, Sabanayagam C, Finkelstein E, Tan G, Hamzah H, Chakraborty B, Acharyya S, Shyong TE, Scanlon P, Wong TY, Lamoureux EL. Extending the diabetic retinopathy screening intervals in Singapore: methodology and preliminary findings of a cohort study. BMC Public Health 2024; 24:786. [PMID: 38481239] [PMCID: PMC10935797] [DOI: 10.1186/s12889-024-18287-2] [Received: 09/08/2023] [Accepted: 03/05/2024]
Abstract
BACKGROUND The Diabetic Retinopathy Extended Screening Study (DRESS) aims to develop and validate a new DR/diabetic macular edema (DME) risk stratification model in patients with type 2 diabetes mellitus (T2DM) to identify low-risk groups who can be safely assigned to biennial or triennial screening intervals. We describe the study methodology, participants' baseline characteristics, and preliminary DR progression rates at the first annual follow-up. METHODS DRESS is a 3-year ongoing longitudinal study of patients with T2DM and no or mild non-proliferative DR (NPDR, non-referable) who underwent teleophthalmic screening under the Singapore Integrated Diabetic Retinopathy Programme (SiDRP) at four SingHealth Polyclinics. Patients with referable DR/DME (> mild NPDR) or ungradable fundus images were excluded. Sociodemographic, lifestyle, medical and clinical information was obtained from medical records and interviewer-administered questionnaires at baseline. The same data are extracted from medical records at 12, 24 and 36 months post-enrollment. Baseline descriptive characteristics stratified by DR severity and rates of progression to referable DR at 12-month follow-up were calculated. RESULTS Of 5,840 eligible patients, 78.3% (n = 4,570; median [interquartile range, IQR] age 61.0 [55-67] years; 54.7% male; 68.0% Chinese) completed the baseline assessment. At baseline, 97.4% had no DR and 2.6% had mild NPDR (worse eye). Most participants had hypertension (79.2%) and dyslipidemia (92.8%), and almost half were obese (43.4%, BMI ≥ 27.5 kg/m2). Participants without DR (vs mild DR) reported shorter DM duration and had lower haemoglobin A1c, triglycerides, and urine albumin/creatinine ratios (all p < 0.05). To date, we have extracted 41.8% (n = 1,909) of the 12-month follow-up data. Of these, 99.7% (n = 1,904) did not progress to referable DR. Those who progressed to referable DR (0.3%) had no DR at baseline.
CONCLUSIONS In our prospective study of patients with T2DM and non-referable DR attending polyclinics, we found extremely low annual DR progression rates. These preliminary results suggest that extending screening intervals beyond 12 months may be viable and safe for most participants, although our 3-year follow-up data are needed to substantiate this claim and to develop the risk stratification model identifying low-risk patients with T2DM who can be assigned biennial or triennial screening intervals.
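To get a feel for the precision behind that "extremely low" 0.3% progression figure, one can attach a confidence interval to the implied counts (5 of 1,909 eyes, inferred from the abstract's 99.7% / n = 1,904 figures). The Wilson score interval below is our illustrative choice, not necessarily the method the study used:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# 5 of 1,909 eyes progressed to referable DR at 12 months
# (counts inferred from the abstract; illustrative only)
lo, hi = wilson_ci(5, 1909)
```

Even the upper bound stays well under 1% per year, which is the intuition behind considering biennial or triennial intervals.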
Affiliation(s)
- Amudha Aravindhan
- Singapore Eye Research Institute and Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Eva K Fenwick
- Singapore Eye Research Institute and Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Aurora Wing Dan Chan
- Singapore Eye Research Institute and Singapore National Eye Centre, Singapore, Singapore
- Ryan Eyn Kidd Man
- Singapore Eye Research Institute and Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Charumathi Sabanayagam
- Singapore Eye Research Institute and Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Gavin Tan
- Singapore Eye Research Institute and Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Haslina Hamzah
- Singapore Eye Research Institute and Singapore National Eye Centre, Singapore, Singapore
- Tai E Shyong
- Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, Singapore, Singapore
- Peter Scanlon
- Gloucestershire Hospitals NHS Foundation Trust, Gloucester, UK
- Ecosse L Lamoureux
- Singapore Eye Research Institute and Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- The University of Melbourne, Melbourne, Australia
29
Doğan ME, Bilgin AB, Sari R, Bulut M, Akar Y, Aydemir M. Head to head comparison of diagnostic performance of three non-mydriatic cameras for diabetic retinopathy screening with artificial intelligence. Eye (Lond) 2024:10.1038/s41433-024-03000-9. [PMID: 38467864 DOI: 10.1038/s41433-024-03000-9] [Received: 08/06/2023] [Revised: 01/24/2024] [Accepted: 02/15/2024] [Indexed: 03/13/2024]
Abstract
BACKGROUND Diabetic retinopathy (DR) is a leading cause of blindness worldwide, affecting people with diabetes. Timely diagnosis and treatment of DR are essential to preventing vision loss. Non-mydriatic fundus cameras and artificial intelligence (AI) software have been shown to improve DR screening efficiency. However, few studies have compared the diagnostic performance of different non-mydriatic cameras and AI software. METHODS This clinical study was conducted at the endocrinology clinic of Akdeniz University with 900 volunteer patients who had previously been diagnosed with diabetes but not with diabetic retinopathy. Fundus images of each patient were taken using three non-mydriatic fundus cameras, and EyeCheckup AI software was used to diagnose more than mild diabetic retinopathy, vision-threatening diabetic retinopathy, and clinically significant diabetic macular oedema from the images of all three cameras. Patients then underwent dilation and four wide-field fundus photographs. Three retina specialists graded the four wide-field fundus images according to the diabetic retinopathy treatment preferred practice patterns of the American Academy of Ophthalmology. The study was pre-registered on ClinicalTrials.gov (Identifier: NCT04805541). RESULTS The Canon CR2 AF camera had a sensitivity and specificity of 95.65% / 95.92% for diagnosing more than mild DR; the Topcon TRC-NW400 had 95.19% / 96.46%, and the Optomed Aurora had 90.48% / 97.21%. For vision-threatening diabetic retinopathy, the Canon CR2 AF had a sensitivity and specificity of 96.00% / 96.34%, the Topcon TRC-NW400 had 98.52% / 95.93%, and the Optomed Aurora had 95.12% / 98.82%. For clinically significant diabetic macular oedema, the Canon CR2 AF had a sensitivity and specificity of 95.83% / 96.83%, the Topcon TRC-NW400 had 98.50% / 96.52%, and the Optomed Aurora had 94.93% / 98.95%.
CONCLUSION The study demonstrates the potential of using non-mydriatic fundus cameras combined with artificial intelligence software to detect diabetic retinopathy. Each of the cameras tested exhibited varying but adequate levels of sensitivity and specificity. The Canon CR2 AF emerged with the highest accuracy in identifying both more-than-mild diabetic retinopathy and vision-threatening cases, while the Topcon TRC-NW400 excelled in detecting clinically significant diabetic macular oedema. These findings emphasize that a non-mydriatic camera paired with artificial intelligence software merits consideration for diabetic retinopathy screening. However, further research is needed to explore additional factors influencing the efficiency of AI-based screening with non-mydriatic cameras, such as the costs involved and performance in ethnically diverse populations.
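Sensitivity and specificity pairs like those reported per camera come from a 2×2 table against the reference grading. A minimal sketch follows; the counts are hypothetical, chosen only so the output reproduces the Canon CR2 AF figures for more-than-mild DR (the paper's raw confusion table is not given in this abstract):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 counts (not from the paper) that yield 95.65% / 95.92%
sens, spec = sensitivity_specificity(tp=88, fn=4, tn=753, fp=32)
```

Note that with disease prevalence around 10%, even a specificity near 96% still produces a meaningful number of false referrals per thousand screened, which is why head-to-head camera comparisons matter.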
Affiliation(s)
- Mehmet Erkan Doğan
- Department of Ophthalmology, Akdeniz University Faculty of Medicine, Antalya, Turkey.
- Ahmet Burak Bilgin
- Department of Ophthalmology, Akdeniz University Faculty of Medicine, Antalya, Turkey
- Ramazan Sari
- Endocrinology and Metabolic Department, Akdeniz University Faculty of Medicine, Antalya, Turkey
- Mehmet Bulut
- Department of Ophthalmology, Antalya Training and Research Hospital, Antalya, Turkey
- Yusuf Akar
- Endocrinology and Metabolic Department, Akdeniz University Faculty of Medicine, Antalya, Turkey
- Mustafa Aydemir
- Department of Ophthalmology, Akdeniz University Faculty of Medicine, Antalya, Turkey
30
Roubelat FP, Soler V, Varenne F, Gualino V. Real-world artificial intelligence-based interpretation of fundus imaging as part of an eyewear prescription renewal protocol. J Fr Ophtalmol 2024:104130. [PMID: 38461084 DOI: 10.1016/j.jfo.2024.104130] [Received: 07/09/2023] [Revised: 11/17/2023] [Accepted: 11/23/2023] [Indexed: 03/11/2024]
Abstract
OBJECTIVE A real-world evaluation of the diagnostic accuracy of the Opthai® software for artificial intelligence-based detection of fundus image abnormalities in the context of the French eyewear prescription renewal protocol (RNO). METHODS A single-center, retrospective review of the sensitivity and specificity of the software in detecting fundus abnormalities among consecutive patients seen in our ophthalmology center under the RNO protocol from July 28 through October 22, 2021. We compared abnormalities detected by the software operated by ophthalmic technicians (index test) against diagnoses confirmed by the ophthalmologist following additional examinations and/or consultation (reference test). RESULTS The study included 2056 eyes/fundus images of 1028 patients aged 6-50 years. The software detected fundus abnormalities in 149 (7.2%) eyes or 107 (10.4%) patients. After examining the same fundus images, the ophthalmologist detected abnormalities in 35 (1.7%) eyes or 20 (1.9%) patients. The ophthalmologist did not detect abnormalities in fundus images deemed normal by the software. The most frequent diagnoses made by the ophthalmologist were glaucoma suspect (0.5% of eyes), peripapillary atrophy (0.44% of eyes), and drusen (0.39% of eyes). The software showed an overall sensitivity of 100% (95% CI 0.879-1.00) and an overall specificity of 94.4% (95% CI 0.933-0.953). The majority of false-positive software detections (5.6%) were glaucoma suspects, with the differential diagnosis of large physiological optic cups. Immediate OCT imaging by the technician allowed diagnosis by the ophthalmologist without a separate consultation for 43/53 (81%) patients. CONCLUSION Ophthalmic technicians can use this software for highly sensitive screening for fundus abnormalities that require evaluation by an ophthalmologist.
Affiliation(s)
- F-P Roubelat
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- V Soler
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- F Varenne
- Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- V Gualino
- Ophthalmology Department, Clinique Honoré-Cave, Montauban, France
31
Joseph S, Selvaraj J, Mani I, Kumaragurupari T, Shang X, Mudgil P, Ravilla T, He M. Diagnostic Accuracy of Artificial Intelligence-Based Automated Diabetic Retinopathy Screening in Real-World Settings: A Systematic Review and Meta-Analysis. Am J Ophthalmol 2024; 263:214-230. [PMID: 38438095 DOI: 10.1016/j.ajo.2024.02.012] [Received: 07/22/2023] [Revised: 02/03/2024] [Accepted: 02/12/2024] [Indexed: 03/06/2024]
Abstract
PURPOSE To evaluate the diagnostic accuracy of artificial intelligence (AI)-based automated diabetic retinopathy (DR) screening in real-world settings. DESIGN Systematic review and meta-analysis. METHODS We conducted a systematic review of relevant literature from January 2012 to August 2022 using databases including PubMed, Scopus, and Web of Science. The quality of studies was evaluated using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) checklist. We calculated pooled accuracy, sensitivity, specificity, and diagnostic odds ratio (DOR) as summary measures. The study protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO CRD42022367034). RESULTS We included 34 studies that utilized AI algorithms for diagnosing DR based on real-world fundus images. Quality assessment of these studies indicated a low risk of bias and low applicability concern. Among gradable images, the overall pooled accuracy, sensitivity, specificity, and DOR were 81%, 94% (95% CI: 92.0-96.0), 89% (95% CI: 85.0-92.0), and 128 (95% CI: 80-204), respectively. Sub-group analysis showed that, when acceptable-quality imaging could be obtained, non-mydriatic fundus images had a better DOR of 143 (95% CI: 82-251), and studies using two-field images had a better DOR of 161 (95% CI: 74-347). Our meta-regression analysis revealed a statistically significant association between DOR and variables such as income status and the type of fundus camera. CONCLUSION Our findings indicate that AI algorithms have acceptable performance in screening for DR from fundus images compared with human graders. Implementing a fundus camera with AI-based software has the potential to assist ophthalmologists in reducing their workload and improving the accuracy of DR diagnosis.
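The diagnostic odds ratio used as a summary measure above folds sensitivity and specificity into one figure: DOR = LR+ / LR−. A quick sanity check against the pooled values follows; note the result lands near, but not exactly at, the reported pooled DOR of 128, since meta-analytic pooling is not a simple ratio of the separately pooled estimates.

```python
def diagnostic_odds_ratio(sensitivity: float, specificity: float) -> float:
    """DOR = LR+ / LR- = (sens / (1 - spec)) / ((1 - sens) / spec)."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos / lr_neg

# Pooled sensitivity 94% and specificity 89% from the abstract
dor = diagnostic_odds_ratio(0.94, 0.89)  # ~127, close to the pooled DOR of 128
```

A DOR above 100 indicates a strongly discriminating test; the sub-group DORs of 143 and 161 correspond to modest gains in either sensitivity or specificity.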
Affiliation(s)
- Sanil Joseph
- From the Centre for Eye Research Australia (S.J, X.S, M.H), Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology) (S.J, X.S, M.H), The University of Melbourne, Melbourne, Australia; Lions Aravind Institute of Community Ophthalmology (S.J, J.S, T.R), Aravind Eye Care System, Madurai, India
- Jerrome Selvaraj
- Lions Aravind Institute of Community Ophthalmology (S.J, J.S, T.R), Aravind Eye Care System, Madurai, India
- Iswarya Mani
- Aravind Eye Hospital and Postgraduate Institute of Ophthalmology (I.M, T.K), Madurai, India
- Xianwen Shang
- From the Centre for Eye Research Australia (S.J, X.S, M.H), Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology) (S.J, X.S, M.H), The University of Melbourne, Melbourne, Australia
- Poonam Mudgil
- School of Medicine (P.M), Western Sydney University, Campbelltown, Australia; School of Rural Medicine (P.M), Charles Sturt University, Orange, NSW, Australia
- Thulasiraj Ravilla
- Lions Aravind Institute of Community Ophthalmology (S.J, J.S, T.R), Aravind Eye Care System, Madurai, India
- Mingguang He
- From the Centre for Eye Research Australia (S.J, X.S, M.H), Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology) (S.J, X.S, M.H), The University of Melbourne, Melbourne, Australia
32
Bragança CP, Torres JM, Macedo LO, Soares CPDA. Advancements in Glaucoma Diagnosis: The Role of AI in Medical Imaging. Diagnostics (Basel) 2024; 14:530. [PMID: 38473002 DOI: 10.3390/diagnostics14050530] [Received: 11/30/2023] [Revised: 02/17/2024] [Accepted: 02/23/2024] [Indexed: 03/14/2024]
Abstract
Artificial intelligence algorithms for digital image processing and automatic diagnosis of the eye disease glaucoma have advanced steadily, promising better clinical care for the population. In this context, this article describes the main types of glaucoma and the traditional forms of diagnosis, and presents the global epidemiology of the disease. It then explores how artificial intelligence algorithms have been investigated as possible tools to aid early diagnosis of this pathology through population screening. The related-work section presents the main studies and methodologies used in the automatic classification of glaucoma from digital fundus images, as well as the main publicly available databases of glaucoma-labeled images for training machine learning algorithms.
Affiliation(s)
- Clerimar Paulo Bragança
- ISUS Unit, Faculty of Science and Technology, University Fernando Pessoa, 4249-004 Porto, Portugal
- Department of Ophthalmology, Eye Hospital of Southern Minas Gerais State, Rua Joaquim Rosa 14, Itanhandu 37464-000, MG, Brazil
- José Manuel Torres
- ISUS Unit, Faculty of Science and Technology, University Fernando Pessoa, 4249-004 Porto, Portugal
- Artificial Intelligence and Computer Science Laboratory, LIACC, University of Porto, 4100-000 Porto, Portugal
- Luciano Oliveira Macedo
- Department of Ophthalmology, Eye Hospital of Southern Minas Gerais State, Rua Joaquim Rosa 14, Itanhandu 37464-000, MG, Brazil
- Christophe Pinto de Almeida Soares
- ISUS Unit, Faculty of Science and Technology, University Fernando Pessoa, 4249-004 Porto, Portugal
- Artificial Intelligence and Computer Science Laboratory, LIACC, University of Porto, 4100-000 Porto, Portugal
33
Xu J, Kuai Y, Chen Q, Wang X, Zhao Y, Sun B. Spatio-Temporal Feature Transformation Based Polyp Recognition for Automatic Detection: Higher Accuracy than Novice Endoscopists in Colorectal Polyp Detection and Diagnosis. Dig Dis Sci 2024; 69:911-921. [PMID: 38244123 PMCID: PMC10960915 DOI: 10.1007/s10620-024-08277-0] [Received: 09/04/2023] [Accepted: 01/03/2024] [Indexed: 01/22/2024]
Abstract
BACKGROUND Artificial intelligence represents an emerging area with promising potential for improving colonoscopy quality. AIMS To develop a colon polyp detection model using spatio-temporal feature transformation (STFT) and evaluate its performance through a randomized sample experiment. METHODS Colonoscopy videos from the Digestive Endoscopy Center of the First Affiliated Hospital of Anhui Medical University, recorded between January 2018 and November 2022, were selected and divided into two datasets. To verify the model's practical application in clinical settings, 1500 colonoscopy images and 1200 polyp images of various sizes were randomly selected from the test set and compared with the recognition results of the STFT model and of endoscopists with different years of experience. RESULTS In the randomized sample trial involving 1500 colonoscopy images, the STFT model demonstrated significantly higher accuracy and specificity than endoscopists with few years of experience (0.902 vs. 0.809 and 0.898 vs. 0.826, respectively). Moreover, the model's sensitivity was 0.904, higher than that of endoscopists with low, medium, or high years of experience (0.80, 0.896, and 0.895, respectively), with statistical significance (P < 0.05). In the randomized sample experiment of 1200 polyp images of different sizes, the accuracy of the STFT model was significantly higher than that of endoscopists with few years of experience when the polyp size was ≤ 0.5 cm and 0.6-1.0 cm (0.902 vs. 0.70 and 0.953 vs. 0.865, respectively). CONCLUSIONS The STFT-based colon polyp detection model exhibits high accuracy in detecting polyps in colonoscopy videos, and is particularly efficient at detecting small polyps (≤ 0.5 cm) (0.902 vs. 0.70, P < 0.001).
Affiliation(s)
- Jianhua Xu
- Anhui Medical University, Hefei, Anhui, 230032, China
- The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Yaxian Kuai
- Anhui Medical University, Hefei, Anhui, 230032, China
- The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Qianqian Chen
- Anhui Medical University, Hefei, Anhui, 230032, China
- The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Xu Wang
- The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Anhui Provincial Key Laboratory of Digestive Disease, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Yihang Zhao
- Anhui Medical University, Hefei, Anhui, 230032, China
- The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Bin Sun
- The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Anhui Provincial Key Laboratory of Digestive Disease, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, 230022, China
- Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Jixi Road 218, Hefei, Anhui, 230022, China
34
Zhang X, Ma L, Sun D, Yi M, Wang Z. Artificial Intelligence in Telemedicine: A Global Perspective Visualization Analysis. Telemed J E Health 2024. [PMID: 38436235 DOI: 10.1089/tmj.2023.0704] [Indexed: 03/05/2024]
Abstract
Background: The use of artificial intelligence (AI) in telemedicine has been a popular topic in academic research in recent years, resulting in a surge of literature publications. This study provides a scientific overview of AI in telemedicine through bibliometric and visualization analysis. Methods: The Web of Science Core Collection was used as the data source, and the search was conducted on June 1, 2023. A total of 2,860 articles and review studies published in English between 2010 and 2023 were included. This study analyzed general information on the field, trends in publication output, countries/regions, authors, journals, influential articles, keyword usage, and knowledge flows between disciplines. The Bibliometrix R package, VOSviewer, and CiteSpace were used for the analysis. Results: The number of articles published on AI in telemedicine is increasing by ∼42.1% annually. The United States and China are the top two countries by number of articles published, together accounting for 37.1% of the overall publication volume. Besides AI and telemedicine themselves, machine learning, digital health, and deep learning are the three most frequently occurring keywords. The keyword time-trend graph shows that ChatGPT became one of the important keywords in 2023. Burst detection analysis suggests that mobile health, based on mobile phones, may be a promising area for future research. Conclusions: This study systematically reviewed the development of AI in telemedicine and identified current research hotspots and future research directions. The results will provide impetus for the innovative development of this field.
Affiliation(s)
- Xu Zhang
- School of Nursing, Peking University, Beijing, China
- Li Ma
- Department of Emergency Medicine, Peking University Third Hospital, Beijing, China
- Di Sun
- School of Nursing, Liaoning University of Traditional Chinese Medicine, Shenyang, Liaoning, China
- Mo Yi
- School of Nursing, Peking University, Beijing, China
- Zhiwen Wang
- School of Nursing, Peking University, Beijing, China
35
Wang X, Gao Y, Cai F, Zhang M. A commentary on 'Intelligent cataract surgery supervision and evaluation via deep learning'. Int J Surg 2024; 110:1855-1856. [PMID: 38126410 PMCID: PMC10942206 DOI: 10.1097/js9.0000000000001030] [Received: 12/03/2023] [Accepted: 12/10/2023] [Indexed: 12/23/2023]
Affiliation(s)
- Xiaoli Wang
- Department of Ophthalmology, West China Hospital, Sichuan University
- Department of Ophthalmology, The People’s Hospital of Jianyang City, Sichuan, People’s Republic of China
- Yunxia Gao
- Department of Ophthalmology, West China Hospital, Sichuan University
- Fangrong Cai
- Department of Ophthalmology, The People’s Hospital of Jianyang City, Sichuan, People’s Republic of China
- Ming Zhang
- Department of Ophthalmology, West China Hospital, Sichuan University
36
Göndöcs D, Dörfler V. AI in medical diagnosis: AI prediction & human judgment. Artif Intell Med 2024; 149:102769. [PMID: 38462271 DOI: 10.1016/j.artmed.2024.102769] [Received: 06/20/2023] [Revised: 12/02/2023] [Accepted: 01/14/2024] [Indexed: 03/12/2024]
Abstract
AI has long been regarded as a panacea for decision-making and many other aspects of knowledge work, as something that will help humans overcome their shortcomings. We believe that AI can be a useful asset to support decision-makers, but not that it should replace them. Decision-making uses algorithmic analysis, but it is not solely algorithmic analysis; it also involves other factors, many of which are very human, such as creativity, intuition, emotions, feelings, and value judgments. We conducted semi-structured, open-ended research interviews with 17 dermatologists to understand what they expect an AI application to deliver in medical diagnosis. We found four aggregate dimensions along which the thinking of dermatologists can be described: the ways in which our participants chose to interact with AI, responsibility, 'explainability', and the new way of thinking (mindset) needed for working with AI. We believe that our findings will help physicians who are considering using AI in their diagnoses understand how to use AI beneficially. They will also be useful to AI vendors in improving their understanding of how medics want to use AI in diagnosis. Further research is needed to examine whether our findings have relevance in the wider medical field and beyond.
Affiliation(s)
- Viktor Dörfler
- University of Strathclyde Business School, United Kingdom
37
Bhati A, Gour N, Khanna P, Ojha A, Werghi N. An interpretable dual attention network for diabetic retinopathy grading: IDANet. Artif Intell Med 2024; 149:102782. [PMID: 38462283 DOI: 10.1016/j.artmed.2024.102782] [Received: 07/05/2023] [Revised: 01/05/2024] [Accepted: 01/15/2024] [Indexed: 03/12/2024]
Abstract
Diabetic retinopathy (DR) is the most prevalent cause of visual impairment in adults worldwide. Typically, patients with DR do not show symptoms until later stages, by which time it may be too late to receive effective treatment. DR grading is challenging because of the small size and variation of lesion patterns. The key to fine-grained DR grading is to discover discriminating elements such as cotton wool spots, hard exudates, hemorrhages, and microaneurysms. Although deep learning models like convolutional neural networks (CNNs) seem ideal for the automated detection of abnormalities in advanced clinical imaging, small lesions are very hard to distinguish with traditional networks. This work proposes a bi-directional spatial and channel-wise parallel attention-based network to learn discriminative features for diabetic retinopathy grading. The proposed attention block, plugged into a backbone network, helps extract features specific to fine-grained DR grading. This scheme boosts classification performance along with the detection of small lesions. Extensive experiments are performed on four widely used benchmark datasets for DR grading, and performance is evaluated on different quality metrics. For model interpretability, activation maps are generated using the LIME method to visualize the predicted lesion parts. In comparison with state-of-the-art methods, the proposed IDANet exhibits better performance for DR grading and lesion detection.
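The general idea of parallel spatial and channel-wise attention can be sketched generically. The NumPy toy below is a CBAM-style approximation under our own simplifying assumptions (average-pooling gates, branch fusion by averaging); it is not the paper's IDANet block, whose exact design is not reproduced in this abstract.

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x: np.ndarray) -> np.ndarray:
    """Per-channel gate from global average pooling: (C, H, W) -> (C, 1, 1)."""
    return sigmoid(x.mean(axis=(1, 2)))[:, None, None]

def spatial_attention(x: np.ndarray) -> np.ndarray:
    """Per-pixel gate from channel-wise average pooling: (C, H, W) -> (1, H, W)."""
    return sigmoid(x.mean(axis=0, keepdims=True))

def parallel_dual_attention(x: np.ndarray) -> np.ndarray:
    """Apply both gates in parallel and fuse the two branches by averaging."""
    return 0.5 * (x * channel_attention(x) + x * spatial_attention(x))

feat = np.random.default_rng(0).standard_normal((8, 16, 16))  # toy feature map
out = parallel_dual_attention(feat)  # same shape, element-wise re-weighted
```

The channel gate emphasizes lesion-relevant feature maps while the spatial gate emphasizes lesion-relevant pixels; running them in parallel (rather than sequentially) lets each branch see the unmodified input.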
Affiliation(s)
- Amit Bhati
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
- Neha Gour
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, United Arab Emirates
- Pritee Khanna
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
- Aparajita Ojha
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
- Naoufel Werghi
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, United Arab Emirates
38
Li W, Tu Y, Zhou L, Ma R, Li Y, Hu D, Zhang C, Lu Y. Study of myopia progression and risk factors in Hubei children aged 7-10 years using machine learning: a longitudinal cohort. BMC Ophthalmol 2024; 24:93. [PMID: 38429630 PMCID: PMC10905806 DOI: 10.1186/s12886-024-03331-x] [Received: 07/17/2023] [Accepted: 01/29/2024] [Indexed: 03/03/2024]
Abstract
BACKGROUND To investigate the trend of refractive error among elementary school students in grades 1 to 3 in Hubei Province, analyze the relevant factors affecting myopia progression, and develop a model to predict myopia progression and the risk of developing high myopia in children. METHODS Longitudinal study. Using a cluster-stratified sampling method, elementary school students in grades 1 to 3 (15,512 in total) from 17 cities in Hubei Province were included as study subjects. Visual acuity, cycloplegic autorefraction, and height and weight measurements were performed for three consecutive years from 2019 to 2021. Basic information about the students, parental myopia and education level, and the students' behavioral habits of using the eyes were collected through questionnaires. RESULTS The baseline refractive errors of children in grades 1 ~ 3 in Hubei Province in 2019 were 0.20 (0.11, 0.27)D, -0.14 (-0.21, 0.06)D, and - 0.29 (-0.37, -0.22)D, respectively, and the annual myopia progression was - 0.65 (-0.74, -0.63)D, -0.61 (-0.73, -0.59)D and - 0.59 (-0.64, -0.51)D, with the prevalence of myopia increasing from 17.56%, 20.9%, and 34.08% in 2019 to 24.16%, 32.24%, and 40.37% in 2021 (Χ2 = 63.29, P < 0.001). With growth, children's refractive error moved toward myopia, and the quantity of myopic progression gradually diminished. (F = 291.04, P = 0.027). The myopia progression in boys was less than that in girls in the same grade (P < 0.001). The change in spherical equivalent refraction in myopic children was smaller than that in hyperopic and emmetropic children (F = 59.28, P < 0.001), in which the refractive change in mild myopia, moderate myopia, and high myopia children gradually increased (F = 73.12, P < 0.001). Large baseline refractive error, large body mass index, and high frequency of eating sweets were risk factors for myopia progression, while parental intervention and strong eye-care awareness were protective factors for delaying myopia progression. 
The nomogram predicted the probability of developing high myopia in children and showed that baseline refraction had the greatest predictive value. CONCLUSION Myopia progression varies by age, sex, and myopia severity. Baseline refraction is the most important factor in predicting high myopia in childhood. Clinical myopia prevention and control should therefore focus on children with a large baseline refractive error or a young age of myopia onset.
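The nomogram described above scores high-myopia risk from baseline refraction plus covariates, which is essentially a logistic-regression risk model. A minimal sketch of that idea follows; every coefficient, variable name, and input value here is invented for illustration and is not the authors' fitted model.

```python
# Toy logistic-style risk score: predict high-myopia risk from baseline
# spherical equivalent (SE), BMI, and a protective parental-intervention flag.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def high_myopia_risk(baseline_se, bmi, parental_intervention, coef=None):
    """Return an illustrative probability of progressing to high myopia.

    baseline_se: spherical equivalent in diopters (more negative = more myopic).
    Coefficients are placeholders, not fitted values from the study.
    """
    if coef is None:
        # intercept, baseline SE (negative SE raises risk), BMI, intervention
        coef = np.array([-3.0, -1.2, 0.05, -0.6])
    x = np.array([1.0, baseline_se, bmi, parental_intervention])
    return float(sigmoid(coef @ x))

# A more myopic baseline should yield a higher predicted risk.
risk_myopic = high_myopia_risk(baseline_se=-3.0, bmi=18.0, parental_intervention=0)
risk_emmetropic = high_myopia_risk(baseline_se=0.2, bmi=18.0, parental_intervention=0)
```

With these placeholder coefficients the myopic child scores a substantially higher probability, mirroring the abstract's finding that baseline refraction dominates the prediction.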
Affiliation(s)
- Wenping Li
- Department of Ophthalmology, Renmin Hospital of Wuhan University, 238 Jiefang Road, 430060, Wuhan, China
- Yuyang Tu
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Lianhong Zhou
- Department of Ophthalmology, Renmin Hospital of Wuhan University, 238 Jiefang Road, 430060, Wuhan, China
- Runting Ma
- Department of Ophthalmology, Renmin Hospital of Wuhan University, 238 Jiefang Road, 430060, Wuhan, China
- Yuanjin Li
- Department of Ophthalmology, Renmin Hospital of Wuhan University, 238 Jiefang Road, 430060, Wuhan, China
- Diewenjie Hu
- Department of Ophthalmology, Renmin Hospital of Wuhan University, 238 Jiefang Road, 430060, Wuhan, China
- Cancan Zhang
- Department of Ophthalmology, Renmin Hospital of Wuhan University, 238 Jiefang Road, 430060, Wuhan, China
- Yi Lu
- Department of Ophthalmology, Renmin Hospital of Wuhan University, 238 Jiefang Road, 430060, Wuhan, China
39
Pandey PU, Ballios BG, Christakis PG, Kaplan AJ, Mathew DJ, Ong Tone S, Wan MJ, Micieli JA, Wong JCY. Ensemble of deep convolutional neural networks is more accurate and reliable than board-certified ophthalmologists at detecting multiple diseases in retinal fundus photographs. Br J Ophthalmol 2024; 108:417-423. [PMID: 36720585 PMCID: PMC10894841 DOI: 10.1136/bjo-2022-322183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Accepted: 01/11/2023] [Indexed: 02/02/2023]
Abstract
AIMS To develop an algorithm to classify multiple retinal pathologies accurately and reliably from fundus photographs and to validate its performance against human experts. METHODS We trained a deep convolutional ensemble (DCE), an ensemble of five convolutional neural networks (CNNs), to classify retinal fundus photographs into diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD) and normal eyes. The CNN architecture was based on the InceptionV3 model, and initial weights were pretrained on the ImageNet dataset. We used 43 055 fundus images from 12 public datasets. Five trained ensembles were then tested on an 'unseen' set of 100 images. Seven board-certified ophthalmologists were asked to classify these test images. RESULTS Board-certified ophthalmologists achieved a mean accuracy of 72.7% over all classes, while the DCE achieved a mean accuracy of 79.2% (p=0.03). The DCE had a statistically significant higher mean F1-score for DR classification compared with the ophthalmologists (76.8% vs 57.5%; p=0.01) and greater but statistically non-significant mean F1-scores for glaucoma (83.9% vs 75.7%; p=0.10), AMD (85.9% vs 85.2%; p=0.69) and normal eyes (73.0% vs 70.5%; p=0.39). The DCE had greater mean agreement between accuracy and confidence (81.6% vs 70.3%; p<0.001). DISCUSSION We developed a deep learning model and found that it could more accurately and reliably classify four categories of fundus images compared with board-certified ophthalmologists. This work provides proof-of-principle that an algorithm is capable of accurate and reliable recognition of multiple retinal diseases using only fundus photographs.
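The ensemble described above combines five CNNs; a common way to do this is soft voting, i.e., averaging the per-class softmax outputs and taking the argmax. The sketch below shows only that aggregation step, with the InceptionV3-based members stubbed out as precomputed probability vectors (the numbers are invented, not the paper's outputs).

```python
# Soft-voting aggregation for a deep convolutional ensemble (DCE):
# average member softmax outputs, then pick the highest-probability class.
import numpy as np

CLASSES = ["DR", "glaucoma", "AMD", "normal"]

def ensemble_predict(member_probs):
    """member_probs: array-like of shape (n_members, n_classes) of softmax outputs."""
    mean_probs = np.asarray(member_probs).mean(axis=0)  # soft vote
    return CLASSES[int(mean_probs.argmax())], mean_probs

# Five hypothetical members, mostly agreeing on DR.
probs = [
    [0.70, 0.10, 0.10, 0.10],
    [0.55, 0.20, 0.15, 0.10],
    [0.40, 0.35, 0.15, 0.10],
    [0.80, 0.05, 0.05, 0.10],
    [0.60, 0.20, 0.10, 0.10],
]
label, mean_probs = ensemble_predict(probs)  # label == "DR"
```

Averaging probabilities (rather than hard votes) also yields a calibrated-looking confidence for the ensemble, which is relevant to the accuracy-confidence agreement the abstract reports.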
Affiliation(s)
- Prashant U Pandey
- School of Biomedical Engineering, The University of British Columbia, Vancouver, British Columbia, Canada
- Brian G Ballios
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Krembil Research Institute, University Health Network, Toronto, Ontario, Canada
- Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Panos G Christakis
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Alexander J Kaplan
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- David J Mathew
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Krembil Research Institute, University Health Network, Toronto, Ontario, Canada
- Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Stephan Ong Tone
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Sunnybrook Research Institute, Toronto, Ontario, Canada
- Michael J Wan
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Jonathan A Micieli
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Department of Ophthalmology, St. Michael's Hospital, Unity Health, Toronto, Ontario, Canada
- Jovi C Y Wong
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
40
Wang Y, Liu C, Hu W, Luo L, Shi D, Zhang J, Yin Q, Zhang L, Han X, He M. Economic evaluation for medical artificial intelligence: accuracy vs. cost-effectiveness in a diabetic retinopathy screening case. NPJ Digit Med 2024; 7:43. [PMID: 38383738 PMCID: PMC10881978 DOI: 10.1038/s41746-024-01032-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Accepted: 02/05/2024] [Indexed: 02/23/2024] Open
Abstract
Artificial intelligence (AI) models have shown great accuracy in health screening. However, for real-world implementation, high accuracy may not guarantee cost-effectiveness. Improving AI's sensitivity finds more high-risk patients but may raise medical costs, while increasing specificity reduces unnecessary referrals but may weaken detection capability. To evaluate the trade-off between AI model performance and long-run cost-effectiveness, we conducted a cost-effectiveness analysis in a nationwide diabetic retinopathy (DR) screening program in China, comprising 251,535 participants with diabetes over 30 years. We tested a validated AI model at 1100 different diagnostic performance levels (presented as sensitivity/specificity pairs) and modeled annual screening scenarios. The status quo was defined as the scenario with the most accurate AI performance. The incremental cost-effectiveness ratio (ICER) was calculated for other scenarios against the status quo as the cost-effectiveness metric. Compared to the status quo (sensitivity/specificity: 93.3%/87.7%), six scenarios were cost-saving and seven were cost-effective. To be cost-saving or cost-effective, the AI model had to reach a minimum sensitivity of 88.2% and specificity of 80.4%. The most cost-effective AI model exhibited higher sensitivity (96.3%) and lower specificity (80.4%) than the status quo. In settings with higher DR prevalence and willingness-to-pay levels, the AI needed higher sensitivity for optimal cost-effectiveness. Urban regions and younger patient groups also required higher sensitivity in AI-based screening. In real-world DR screening, the most accurate AI model may not be the most cost-effective. Cost-effectiveness should be independently evaluated, and it is most likely to be affected by the AI's sensitivity.
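The ICER used to compare each screening scenario against the status quo is simply the cost difference divided by the effect difference (effects typically measured in QALYs). A minimal sketch, with made-up numbers rather than the study's results:

```python
# Incremental cost-effectiveness ratio: ICER = (C_new - C_ref) / (E_new - E_ref).
# A scenario is "cost-saving" if cheaper and at least as effective, and
# "cost-effective" if its ICER falls below a willingness-to-pay threshold.
def icer(cost_new, effect_new, cost_ref, effect_ref):
    d_cost = cost_new - cost_ref
    d_effect = effect_new - effect_ref
    if d_effect == 0:
        raise ValueError("ICER undefined when effects are equal")
    return d_cost / d_effect

# Hypothetical scenario: 50,000 extra cost buys 5 extra QALYs.
value = icer(cost_new=1_050_000, effect_new=520.0,
             cost_ref=1_000_000, effect_ref=515.0)  # 10,000 per QALY gained
```

Whether 10,000 per QALY counts as cost-effective then depends on the willingness-to-pay threshold, which is why the abstract notes that the optimal operating point shifts with prevalence and willingness-to-pay.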
Affiliation(s)
- Yueye Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Chi Liu
- Faculty of Data Science, City University of Macau, Macao SAR, China
- Wenyi Hu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Lixia Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Danli Shi
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Jian Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Qiuxia Yin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lei Zhang
- Clinical Medical Research Center, Children's Hospital of Nanjing Medical University, Nanjing, Jiangsu, 210008, China
- Melbourne Sexual Health Centre, Alfred Health, Melbourne, VIC, Australia
- Central Clinical School, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, VIC, Australia
- Xiaotong Han
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Shatin, Hong Kong
41
Lakshmi KS, Sargunam B. Exploration of AI-powered DenseNet121 for effective diabetic retinopathy detection. Int Ophthalmol 2024; 44:90. [PMID: 38367098 DOI: 10.1007/s10792-024-03027-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2023] [Accepted: 01/11/2024] [Indexed: 02/19/2024]
Abstract
OBJECTIVE Diabetic Retinopathy (DR) is a severe complication of diabetes that damages the retina and affects approximately 80% of patients who have had diabetes for 10 years or more. This condition primarily impacts young and productive individuals, resulting in significant long-term medical complications for patients and society. The early stages of diabetic retinopathy often advance without noticeable symptoms, resulting in delayed identification and intervention. We therefore developed approaches employing transfer learning methodologies to enhance early detection, facilitating timely diagnosis and intervention to mitigate the progression of diabetic retinopathy. METHODS This study introduces a transfer learning approach for detecting four stages of DR: No DR, Mild, Moderate, and Severe. The models AlexNet, VGG16, ResNet50, Inception v3, and DenseNet121 are trained using the Kaggle DR dataset. RESULTS To assess the efficiency of the proposed networks, the Kaggle dataset is used to analyze four performance metrics: Sensitivity, Precision, Accuracy, and F1 score. DenseNet121 demonstrated superior accuracy, outperforming the other models, making it a suitable option for automatic DR sign detection. CONCLUSION The integration of the DenseNet121 model shows great promise in transforming the timely identification and treatment of DR, resulting in enhanced long-term patient outcomes and alleviating the burden on society.
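The RESULTS above compare models on Sensitivity, Precision, Accuracy, and F1 score. For a multi-class problem like the four DR stages, these are typically computed one-vs-rest per class; a minimal sketch (the toy labels below are invented, not Kaggle data) is:

```python
# One-vs-rest classification metrics for a single class from label lists.
def binary_metrics(y_true, y_pred, positive):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall
    precision = tp / (tp + fp) if tp + fp else 0.0
    accuracy = (tp + tn) / len(y_true)
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return sensitivity, precision, accuracy, f1

# Toy four-stage labels: No DR, Mild, Moderate, Severe.
y_true = ["No DR", "Mild", "Moderate", "Severe", "Mild", "No DR"]
y_pred = ["No DR", "Mild", "Mild", "Severe", "Moderate", "No DR"]
sens, prec, acc, f1 = binary_metrics(y_true, y_pred, positive="Mild")
```

Macro-averaging these per-class scores over the four stages gives the single summary numbers usually reported for model comparisons like the one in this abstract.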
Affiliation(s)
- K Santhiya Lakshmi
- Department of Electronics and Communication Engineering, Avinashilingam Institute for Home Science and Higher Education for Women, Coimbatore, Tamil Nadu, India
- B Sargunam
- Department of Electronics and Communication Engineering, School of Engineering, Avinashilingam Institute for Home Science and Higher Education for Women, Coimbatore, Tamil Nadu, India
42
Atcı ŞY, Güneş A, Zontul M, Arslan Z. Identifying Diabetic Retinopathy in the Human Eye: A Hybrid Approach Based on a Computer-Aided Diagnosis System Combined with Deep Learning. Tomography 2024; 10:215-230. [PMID: 38393285 PMCID: PMC10892594 DOI: 10.3390/tomography10020017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2023] [Revised: 01/16/2024] [Accepted: 02/01/2024] [Indexed: 02/25/2024] Open
Abstract
Diagnosing and screening for diabetic retinopathy is a well-known issue in the biomedical field. A component of computer-aided diagnosis that has advanced significantly over the past few years, owing to the development and effectiveness of deep learning, is the use of medical imagery from a patient's eye to identify damage to the blood vessels. Issues with unbalanced datasets, incorrect annotations, a lack of sample images, and improper performance evaluation measures have negatively impacted the performance of deep learning models. Using three benchmark datasets of diabetic retinopathy, we conducted a detailed study comparing various state-of-the-art approaches to address the effects of class imbalance, with precision scores of 93%, 89%, 81%, 76%, and 96%, respectively, for normal, mild, moderate, severe, and DR phases. The analyses of the hybrid modeling, including CNN analysis and SHAP model derivation results, are compared at the end of the paper, and ideal hybrid modeling strategies for deep learning classification models for automated DR detection are identified.
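One standard remedy for the class imbalance this paper studies is to weight each class inversely to its frequency, so that rare DR grades contribute more to the training loss. A minimal sketch using the widely used "balanced" heuristic (the class counts below are illustrative, not from the benchmark datasets):

```python
# Inverse-frequency ("balanced") class weights: n_samples / (n_classes * count).
from collections import Counter

def inverse_frequency_weights(labels):
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# Hypothetical skewed DR-grade distribution.
labels = ["normal"] * 80 + ["mild"] * 10 + ["moderate"] * 6 + ["severe"] * 4
weights = inverse_frequency_weights(labels)
# the rarest class ("severe") receives the largest weight
```

These weights can be passed to most deep learning loss functions (e.g., as per-class weights in a cross-entropy loss) so the optimizer is not dominated by the majority "normal" class.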
Affiliation(s)
- Şükran Yaman Atcı
- Department of Computer Engineering, İstanbul Aydın University, Istanbul 34295, Turkey
- Ali Güneş
- Department of Computer Engineering, İstanbul Aydın University, Istanbul 34295, Turkey
- Metin Zontul
- Department of Computer Engineering, Sivas University of Science and Technology, Sivas 58140, Turkey
- Zafer Arslan
- Department of Computer Engineering, İstanbul Aydın University, Istanbul 34295, Turkey
43
Abstract
Background: The current medical scenario is closely linked to recent progress in telecommunications, photodocumentation, and artificial intelligence (AI). Smartphone eye examination may represent a promising tool in the technological spectrum, with special interest for primary health care services. Obtaining fundus imaging with this technique has improved and democratized the teaching of fundoscopy, and in particular it contributes greatly to screening diseases with high rates of blindness. Eye examination using smartphones is essentially a cheap and safe method, thus contributing to public policies on population screening. This review aims to provide an update on the use of this resource and its future prospects, especially as a screening and ophthalmic diagnostic tool. Methods: In this review, we surveyed major published advances in retinal and anterior segment analysis using AI. We performed an electronic search of the Medical Literature Analysis and Retrieval System Online (MEDLINE), EMBASE, and Cochrane Library for published literature without a date restriction. We included studies that compared the diagnostic accuracy of smartphone ophthalmoscopy for detecting prevalent diseases against an accurate or commonly employed reference standard. Results: There are few databases with complete metadata providing demographic data, and few with sufficient images involving current or new therapies. These databases contain images captured using different systems and formats, and information is often excluded without essential detailing of the reasons for exclusion, which further distances them from real-life conditions. The safety, portability, low cost, and reproducibility of smartphone eye images are discussed in several studies, with encouraging results.
Conclusions: The high level of agreement between conventional and smartphone methods makes this a powerful tool for screening and early diagnosis of the main causes of blindness, such as cataract, glaucoma, diabetic retinopathy, and age-related macular degeneration. In addition to streamlining the medical workflow and bringing benefits for public health policies, smartphone eye examination can make safe, quality assessment available to the population.
Affiliation(s)
- Alessandro Arrigo
- Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
- University Vita-Salute, Milan, Italy
- Maurizio Battaglia Parodi
- Department of Ophthalmology, Scientific Institute San Raffaele, Milan, Italy
- University Vita-Salute, Milan, Italy
- Carolina da Silva Mengue
- Post-Graduation Ophthalmological School, Ivo Corrêa-Meyer/Cardiology Institute, Porto Alegre, Brazil
44
Gonçalves MB, Nakayama LF, Ferraz D, Faber H, Korot E, Malerbi FK, Regatieri CV, Maia M, Celi LA, Keane PA, Belfort R. Image quality assessment of retinal fundus photographs for diabetic retinopathy in the machine learning era: a review. Eye (Lond) 2024; 38:426-433. [PMID: 37667028 PMCID: PMC10858054 DOI: 10.1038/s41433-023-02717-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Revised: 06/26/2023] [Accepted: 08/25/2023] [Indexed: 09/06/2023] Open
Abstract
This study aimed to evaluate the image quality assessment (IQA) methods and quality criteria employed in publicly available datasets for diabetic retinopathy (DR). A literature search strategy was used to identify relevant datasets, and 20 datasets were included in the analysis. Of these, 12 datasets mentioned performing IQA, but only eight specified the quality criteria used. The reported quality criteria varied widely across datasets, and accessing the information was often challenging. The findings highlight the importance of IQA for AI model development while emphasizing the need for clear and accessible reporting of IQA information. In conclusion, image quality assessment is important for AI model development; however, strict data quality standards must not limit data sharing. Given the importance of IQA for developing, validating, and implementing deep learning (DL) algorithms, it is recommended that this information be reported in a clear, specific, and accessible way whenever possible. Automated quality assessments are a valid alternative to the traditional manual labeling process, and quality standards should be determined according to population characteristics, clinical use, and research purpose.
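The review argues that automated quality assessments can stand in for manual IQA labeling. A toy illustration of the simplest form such a check can take: flag fundus images that are too dark, too bright, or too low-contrast. The thresholds and arrays below are illustrative assumptions, not criteria from any of the reviewed datasets.

```python
# Toy automated image-quality gate based on global intensity statistics.
import numpy as np

def passes_quality_check(img, lo=30, hi=225, min_std=20):
    """img: 2D grayscale array with values in [0, 255].

    Rejects under/over-exposed images (mean outside [lo, hi]) and
    near-uniform, low-contrast images (std below min_std).
    """
    mean, std = float(img.mean()), float(img.std())
    return lo <= mean <= hi and std >= min_std

rng = np.random.default_rng(0)
good = rng.integers(40, 220, size=(64, 64))   # varied, mid-range intensities
dark = np.full((64, 64), 5)                   # underexposed, zero contrast
ok_good = passes_quality_check(good)
ok_dark = passes_quality_check(dark)
```

Real IQA pipelines add gradability criteria (field definition, focus, artifact detection), but even a gate like this makes the acceptance rule explicit and reportable, which is the transparency the review calls for.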
Affiliation(s)
- Mariana Batista Gonçalves
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Luis Filipe Nakayama
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Massachusetts Institute of Technology, Laboratory for Computational Physiology, Cambridge, MA, USA
- Daniel Ferraz
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Hanna Faber
- Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Department of Ophthalmology, University of Tuebingen, Tuebingen, Germany
- Edward Korot
- Retina Specialists of Michigan, Grand Rapids, MI, USA
- Stanford University Byers Eye Institute Palo Alto, Palo Alto, CA, USA
- Mauricio Maia
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Leo Anthony Celi
- Massachusetts Institute of Technology, Laboratory for Computational Physiology, Cambridge, MA, USA
- Harvard TH Chan School of Public Health, Department of Biostatistics, Boston, MA, USA
- Beth Israel Deaconess Medical Center, Department of Medicine, Boston, MA, USA
- Pearse A Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Rubens Belfort
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
45
Huang Y, Cheung CY, Li D, Tham YC, Sheng B, Cheng CY, Wang YX, Wong TY. AI-integrated ocular imaging for predicting cardiovascular disease: advancements and future outlook. Eye (Lond) 2024; 38:464-472. [PMID: 37709926 PMCID: PMC10858189 DOI: 10.1038/s41433-023-02724-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2023] [Revised: 07/26/2023] [Accepted: 08/25/2023] [Indexed: 09/16/2023] Open
Abstract
Cardiovascular disease (CVD) remains the leading cause of death worldwide. Assessing CVD risk plays an essential role in identifying individuals at higher risk and enables targeted intervention strategies, leading to reduced CVD prevalence and improved patient survival. The ocular vasculature, particularly the retinal vasculature, has emerged as a potential means of CVD risk stratification due to the anatomical similarities and physiological characteristics it shares with other vital organs, such as the brain and heart. The integration of artificial intelligence (AI) into ocular imaging has the potential to overcome limitations associated with traditional semi-automated image analysis, including inefficiency and manual measurement errors. Furthermore, AI techniques may uncover novel and subtle features that contribute to the identification of ocular biomarkers associated with CVD. This review provides a comprehensive overview of advancements in AI-based ocular image analysis for predicting CVD, including the prediction of CVD risk factors, the replacement of traditional CVD biomarkers (e.g., the CT-measured coronary artery calcium score), and the prediction of symptomatic CVD events. The review covers a range of ocular imaging modalities, including colour fundus photography, optical coherence tomography, and optical coherence tomography angiography, as well as other image types such as external eye images. Additionally, the review addresses the current limitations of AI research in this field and discusses the challenges associated with translating AI algorithms into clinical practice.
Affiliation(s)
- Yu Huang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Dawei Li
- College of Future Technology, Peking University, Beijing, China
- Yih Chung Tham
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ching Yu Cheng
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing, China
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China
46
Dai L, Sheng B, Chen T, Wu Q, Liu R, Cai C, Wu L, Yang D, Hamzah H, Liu Y, Wang X, Guan Z, Yu S, Li T, Tang Z, Ran A, Che H, Chen H, Zheng Y, Shu J, Huang S, Wu C, Lin S, Liu D, Li J, Wang Z, Meng Z, Shen J, Hou X, Deng C, Ruan L, Lu F, Chee M, Quek TC, Srinivasan R, Raman R, Sun X, Wang YX, Wu J, Jin H, Dai R, Shen D, Yang X, Guo M, Zhang C, Cheung CY, Tan GSW, Tham YC, Cheng CY, Li H, Wong TY, Jia W. A deep learning system for predicting time to progression of diabetic retinopathy. Nat Med 2024; 30:584-594. [PMID: 38177850 PMCID: PMC10878973 DOI: 10.1038/s41591-023-02702-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Accepted: 11/10/2023] [Indexed: 01/06/2024]
Abstract
Diabetic retinopathy (DR) is the leading cause of preventable blindness worldwide. The risk of DR progression is highly variable among different individuals, making it difficult to predict risk and personalize screening intervals. We developed and validated a deep learning system (DeepDR Plus) to predict time to DR progression within 5 years solely from fundus images. First, we used 717,308 fundus images from 179,327 participants with diabetes to pretrain the system. Subsequently, we trained and validated the system with a multiethnic dataset comprising 118,868 images from 29,868 participants with diabetes. For predicting time to DR progression, the system achieved concordance indexes of 0.754-0.846 and integrated Brier scores of 0.153-0.241 for all times up to 5 years. Furthermore, we validated the system in real-world cohorts of participants with diabetes. The integration with clinical workflow could potentially extend the mean screening interval from 12 months to 31.97 months, and the percentage of participants recommended to be screened at 1-5 years was 30.62%, 20.00%, 19.63%, 11.85% and 17.89%, respectively, while delayed detection of progression to vision-threatening DR was 0.18%. Altogether, the DeepDR Plus system could predict individualized risk and time to DR progression over 5 years, potentially allowing personalized screening intervals.
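The concordance indexes reported above measure how well the model ranks participants by time to progression: over comparable pairs, the fraction where the higher-risk participant progresses earlier. A minimal sketch of that computation (toy data, not the study's cohorts; a simplified form that ignores tied times):

```python
# Harrell-style concordance index for right-censored time-to-event data.
def concordance_index(times, events, risks):
    """times: observed times; events: 1 if progression observed, 0 if censored;
    risks: model risk scores (higher = expected earlier progression)."""
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair is comparable if i progressed before j's observed time
            if events[i] == 1 and times[i] < times[j]:
                permissible += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / permissible

times = [1.0, 2.0, 3.0, 4.0]
events = [1, 1, 0, 1]          # third participant is censored
risks = [0.9, 0.7, 0.3, 0.4]   # perfectly rank-ordered with progression times
cindex = concordance_index(times, events, risks)  # 1.0
```

A C-index of 0.5 is chance-level ranking and 1.0 is perfect, so the paper's 0.754-0.846 range indicates substantially better-than-chance ordering of progression times.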
Grants
- the National Key Research and Development Program of China (2022YFA1004804), the Shanghai Municipal Key Clinical Specialty, Shanghai Research Center for Endocrine and Metabolic Diseases (2022ZZ01002), and the Chinese Academy of Engineering (2022-XY-08)
- the General Program of NSFC (62272298), the National Key Research and Development Program of China (2022YFC2407000), the Interdisciplinary Program of Shanghai Jiao Tong University (YG2023LC11 and YG2022ZD007), National Natural Science Foundation of China (62272298 and 62077037), the College-level Project Fund of Shanghai Jiao Tong University Affiliated Sixth People’s Hospital (ynlc201909), and the Medical-industrial Cross-fund of Shanghai Jiao Tong University (YG2022QN089)
- the Clinical Special Program of Shanghai Municipal Health Commission (20224044) and Three-year action plan to strengthen the construction of public health system in Shanghai (GWVI-11.1-28)
- the National Natural Science Foundation of China (82100879)
- the National Key Research and Development Program of China (2022YFA1004804), Excellent Young Scientists Fund of NSFC (82022012), General Fund of NSFC (81870598), Innovative research team of high-level local universities in Shanghai (SHSMU-ZDCX20212700)
- the National Key R & D Program of China (2022YFC2502800) and National Natural Science Fund of China (8238810007)
Affiliation(s)
- Ling Dai
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Bin Sheng
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Tingli Chen
- Department of Ophthalmology, Huadong Sanatorium, Wuxi, China
- Qiang Wu
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ruhan Liu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Chun Cai
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Liang Wu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Haslina Hamzah
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Yuexing Liu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Xiangning Wang
- Department of Ophthalmology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhouyu Guan
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
| | - Shujie Yu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
| | - Tingyao Li
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Ziqi Tang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| | - Anran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| | - Haoxuan Che
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| | - Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| | - Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Jia Shu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Shan Huang
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Chan Wu
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
| | - Shiqun Lin
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
| | - Dan Liu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
| | - Jiajia Li
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Zheyuan Wang
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Ziyao Meng
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Jie Shen
- Medical Records and Statistics Office, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Xuhong Hou
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
| | - Chenxin Deng
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Lei Ruan
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Feng Lu
- National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
| | - Miaoli Chee
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Ten Cheer Quek
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
| | - Ramyaa Srinivasan
- Shri Bhagwan Mahavir Vitreoretinal Services, Medical Research Foundation, Sankara Nethralaya, Chennai, India
| | - Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Medical Research Foundation, Sankara Nethralaya, Chennai, India
| | - Xiaodong Sun
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Science Key Laboratory, Beijing, China
| | - Jiarui Wu
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Center for Excellence in Molecular Science, Chinese Academy of Sciences, Shanghai, China
| | - Hai Jin
- National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
| | - Rongping Dai
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
| | - Dinggang Shen
- School of Biomedical Engineering, Shanghai Tech University, Shanghai, China
- Shanghai United Imaging Intelligence, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
| | - Xiaokang Yang
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Minyi Guo
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
| | - Cuntai Zhang
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| | - Gavin Siew Wei Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
| | - Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Centre for Innovation and Precision Eye Health; and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Centre for Innovation and Precision Eye Health; and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Huating Li
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China.
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore.
- Tsinghua Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua University, Beijing, China.
| | - Weiping Jia
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China.
47
Skevas C, de Olaguer NP, Lleó A, Thiwa D, Schroeter U, Lopes IV, Mautone L, Linke SJ, Spitzer MS, Yap D, Xiao D. Implementing and evaluating a fully functional AI-enabled model for chronic eye disease screening in a real clinical environment. BMC Ophthalmol 2024; 24:51. [PMID: 38302908 PMCID: PMC10832120 DOI: 10.1186/s12886-024-03306-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2023] [Accepted: 01/16/2024] [Indexed: 02/03/2024] Open
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to increase the affordability and accessibility of eye disease screening, especially with the recent approval of AI-based diabetic retinopathy (DR) screening programs in several countries. METHODS This study investigated the performance, feasibility, and user experience of a seamless hardware and software solution for screening chronic eye diseases in a real-world clinical environment in Germany. The solution integrated AI grading for DR, age-related macular degeneration (AMD), and glaucoma, along with specialist auditing and patient referral decisions. The study comprised several components: (1) evaluating the entire system solution from recruitment to eye image capture and AI grading for DR, AMD, and glaucoma; (2) comparing specialists' grading results with AI grading results; and (3) gathering user feedback on the solution. RESULTS A total of 231 patients were recruited, and their consent forms were obtained. The sensitivity, specificity, and area under the curve for DR grading were 100.00%, 80.10%, and 90.00%, respectively; for AMD grading, 90.91%, 78.79%, and 85.00%; and for glaucoma grading, 93.26%, 76.76%, and 85.00%. Analysis of all false positive cases across the three diseases, compared against the final referral decisions, revealed that only 17 of the 231 patients were falsely referred. The efficacy analysis demonstrated the effectiveness of the AI grading process in the study's testing environment. Clinical staff involved in using the system provided positive feedback on the disease screening process, particularly praising the seamless workflow from patient registration to image transmission and obtaining the final result. Results from a questionnaire completed by 12 participants indicated that most found the system easy, quick, and highly satisfactory. The study also revealed room for improvement in the AMD model, suggesting the need to enhance its training data. Furthermore, the performance of the glaucoma model grading could be improved by incorporating additional measures such as intraocular pressure. CONCLUSIONS The implementation of the AI-based approach for screening three chronic eye diseases proved effective in real-world settings, earning positive feedback on the usability of the integrated platform from both the screening staff and auditors. The auditing function has proven valuable for obtaining efficient second opinions from experts, pointing to its potential for enhancing remote screening capabilities. TRIAL REGISTRATION Institutional Review Board of the Hamburg Medical Chamber (Ethik-Kommission der Ärztekammer Hamburg): 2021-10574-BO-ff.
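The sensitivity and specificity figures reported in this abstract are derived directly from confusion-matrix counts. As a minimal sketch of that calculation (using made-up counts for illustration, not the study's data):

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of diseased eyes correctly flagged
    specificity = tn / (tn + fp)  # fraction of healthy eyes correctly cleared
    return sensitivity, specificity

# Hypothetical counts for illustration only:
# 45 true positives, 5 false negatives, 80 true negatives, 20 false positives
sens, spec = screening_metrics(tp=45, fp=20, tn=80, fn=5)
print(f"sensitivity={sens:.2%}, specificity={spec:.2%}")  # 90.00%, 80.00%
```

Note that the false referrals discussed above correspond to the `fp` count: eyes flagged by the AI (or grader) that the final referral decision deemed not to need referral.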
Affiliation(s)
- Christos Skevas
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Albert Lleó
- TeleMedC GmbH, Raboisen 32, 20095, Hamburg, Germany
- David Thiwa
- Department of Otorhinolaryngology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Ulrike Schroeter
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Inês Valente Lopes
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Luca Mautone
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Stephan J Linke
- Zentrum Sehestaerke, Martinistraße 64, 20251, Hamburg, Germany
- Martin Stephan Spitzer
- Department of Ophthalmology, University Medical Center Hamburg - Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Daniel Yap
- TeleMedC Pty Ltd, 61 Ubi Avenue 1, #06-11 UBPoint, Singapore, 40894, Singapore
- Di Xiao
- TeleMedC Pty Ltd, Brisbane Technology Park, Level 2, 1 Westlink Court, Darra, QLD 4076, Australia
48
Chia MA, Hersch F, Sayres R, Bavishi P, Tiwari R, Keane PA, Turner AW. Validation of a deep learning system for the detection of diabetic retinopathy in Indigenous Australians. Br J Ophthalmol 2024; 108:268-273. [PMID: 36746615 PMCID: PMC10850716 DOI: 10.1136/bjo-2022-322237] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Accepted: 12/31/2022] [Indexed: 02/08/2023]
Abstract
BACKGROUND/AIMS Deep learning systems (DLSs) for diabetic retinopathy (DR) detection show promising results but can underperform in racial and ethnic minority groups; external validation within these populations is therefore critical for health equity. This study evaluates the performance of a DLS for DR detection among Indigenous Australians, an understudied ethnic group who suffer disproportionately from DR-related blindness. METHODS We performed a retrospective external validation study comparing the performance of a DLS against a retina specialist for the detection of more-than-mild DR (mtmDR), vision-threatening DR (vtDR), and all-cause referable DR. The validation set consisted of 1682 consecutive, single-field, macula-centred retinal photographs from 864 patients with diabetes (mean age 54.9 years, 52.4% women) at an Indigenous primary care service in Perth, Australia. Three-person adjudication by a panel of specialists served as the reference standard. RESULTS For mtmDR detection, sensitivity of the DLS was superior to the retina specialist (98.0% (95% CI, 96.5 to 99.4) vs 87.1% (95% CI, 83.6 to 90.6), McNemar's test p<0.001) with a small reduction in specificity (95.1% (95% CI, 93.6 to 96.4) vs 97.0% (95% CI, 95.9 to 98.0), p=0.006). For vtDR, the DLS's sensitivity was again superior to the human grader (96.2% (95% CI, 93.4 to 98.6) vs 84.4% (95% CI, 79.7 to 89.2), p<0.001) with a slight drop in specificity (95.8% (95% CI, 94.6 to 96.9) vs 97.8% (95% CI, 96.9 to 98.6), p=0.002). For all-cause referable DR, there was a substantial increase in sensitivity (93.7% (95% CI, 91.8 to 95.5) vs 74.4% (95% CI, 71.1 to 77.5), p<0.001) and a smaller reduction in specificity (91.7% (95% CI, 90.0 to 93.3) vs 96.3% (95% CI, 95.2 to 97.4), p<0.001). CONCLUSION The DLS showed improved sensitivity and similar specificity compared with a retina specialist for DR detection. This demonstrates its potential to support DR screening among Indigenous Australians, an underserved population with a high burden of diabetic eye disease.
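The paired comparisons in this abstract use McNemar's test, which considers only the discordant pairs: eyes graded correctly by one grader but not the other. A minimal sketch of the exact (binomial) form of the test, with hypothetical discordant counts rather than the study's data:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar test on discordant pairs.

    b: pairs where grader 1 is correct and grader 2 is wrong;
    c: the reverse. Under H0 (equal accuracy), b ~ Binomial(b + c, 0.5).
    """
    n = b + c
    k = min(b, c)
    # Two-sided exact p-value: double the smaller binomial tail, capped at 1
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical counts: DLS correct where the specialist erred in 25 eyes,
# the reverse in 5 eyes -> strong evidence the DLS is more sensitive
p = mcnemar_exact(25, 5)
print(f"p = {p:.6f}")  # p < 0.001
```

(The published comparison may have used the chi-square approximation rather than this exact form; the two agree closely for large discordant counts.)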
Affiliation(s)
- Mark A Chia
- Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia
- Pearse A Keane
- Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Angus W Turner
- Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Nedlands, Western Australia, Australia
49
Zhu Y, Salowe R, Chow C, Li S, Bastani O, O'Brien JM. Advancing Glaucoma Care: Integrating Artificial Intelligence in Diagnosis, Management, and Progression Detection. Bioengineering (Basel) 2024; 11:122. [PMID: 38391608 PMCID: PMC10886285 DOI: 10.3390/bioengineering11020122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2023] [Revised: 01/23/2024] [Accepted: 01/24/2024] [Indexed: 02/24/2024] Open
Abstract
Glaucoma, the leading cause of irreversible blindness worldwide, comprises a group of progressive optic neuropathies requiring early detection and lifelong treatment to preserve vision. Artificial intelligence (AI) technologies are now demonstrating transformative potential across the spectrum of clinical glaucoma care. This review summarizes current capabilities, future outlooks, and practical translation considerations. For enhanced screening, algorithms analyzing retinal photographs and machine learning models synthesizing risk factors can identify high-risk patients needing diagnostic workup and close follow-up. To augment definitive diagnosis, deep learning techniques detect characteristic glaucomatous patterns by interpreting results from optical coherence tomography, visual field testing, fundus photography, and other ocular imaging. AI-powered platforms also enable continuous monitoring, with algorithms that analyze longitudinal data alerting physicians about rapid disease progression. By integrating predictive analytics with patient-specific parameters, AI can also guide precision medicine for individualized glaucoma treatment selections. Advances in robotic surgery and computer-based guidance demonstrate AI's potential to improve surgical outcomes and surgical training. Beyond the clinic, AI chatbots and reminder systems could provide patient education and counseling to promote medication adherence. However, thoughtful approaches to clinical integration, usability, diversity, and ethical implications remain critical to successfully implementing these emerging technologies. This review highlights AI's vast capabilities to transform glaucoma care while summarizing key achievements, future prospects, and practical considerations to progress from bench to bedside.
Affiliation(s)
- Yan Zhu
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Rebecca Salowe
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Caven Chow
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Shuo Li
- Department of Computer & Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Osbert Bastani
- Department of Computer & Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Joan M O'Brien
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
50
Poh SSJ, Sia JT, Yip MYT, Tsai ASH, Lee SY, Tan GSW, Weng CY, Kadonosono K, Kim M, Yonekawa Y, Ho AC, Toth CA, Ting DSW. Artificial Intelligence, Digital Imaging, and Robotics Technologies for Surgical Vitreoretinal Diseases. Ophthalmol Retina 2024:S2468-6530(24)00044-7. [PMID: 38280425 DOI: 10.1016/j.oret.2024.01.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2023] [Revised: 01/14/2024] [Accepted: 01/19/2024] [Indexed: 01/29/2024]
Abstract
OBJECTIVE To review recent technological advancements in imaging, surgical visualization, robotics technology, and the use of artificial intelligence in surgical vitreoretinal (VR) diseases. BACKGROUND Technological advancements in imaging enhance both preoperative and intraoperative management of surgical VR diseases. Widefield imaging in fundus photography and OCT can improve assessment of peripheral retinal disorders such as retinal detachments, degeneration, and tumors. OCT angiography provides rapid and noninvasive imaging of the retinal and choroidal vasculature. Surgical visualization has also improved, with intraoperative OCT providing detailed real-time assessment of retinal layers to guide surgical decisions. Heads-up displays and head-mounted displays utilize 3-dimensional technology to provide surgeons with enhanced visual guidance and improved ergonomics during surgery. Intraocular robotics technology allows for greater surgical precision and has been shown to be useful in retinal vein cannulation and subretinal drug delivery. In addition, deep learning techniques leverage diverse data, including widefield retinal photography and OCT, for better predictive accuracy in classification, segmentation, and prognostication of many surgical VR diseases. CONCLUSION This review article summarizes the latest updates in these areas and highlights the importance of continuous innovation and improvement in technology within the field. These advancements have the potential to reshape management of surgical VR diseases in the very near future and to ultimately improve patient care. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Stanley S J Poh
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Josh T Sia
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Michelle Y T Yip
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Andrew S H Tsai
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Shu Yen Lee
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Gavin S W Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Christina Y Weng
- Department of Ophthalmology, Baylor College of Medicine, Houston, Texas
- Min Kim
- Department of Ophthalmology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Yoshihiro Yonekawa
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Allen C Ho
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Cynthia A Toth
- Departments of Ophthalmology and Biomedical Engineering, Duke University, Durham, North Carolina
- Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore; Byers Eye Institute, Stanford University, Palo Alto, California