1
Świerczyński H, Pukacki J, Szczęsny S, Mazurek C, Wasilewicz R. Application of machine learning techniques in GlaucomAI system for glaucoma diagnosis and collaborative research support. Sci Rep 2025; 15:7940. [PMID: 40050329] [PMCID: PMC11885539] [DOI: 10.1038/s41598-025-89893-2]
Abstract
This paper proposes a system architecture that supports collaborative research focused on the analysis of data acquired with the Triggerfish contact lens sensor and devices for continuous monitoring of cardiovascular properties. The system enables the application of machine learning (ML) models for glaucoma diagnosis without direct intraocular pressure measurement and independently of the complex imaging techniques used in clinical practice. We describe the development of ML models based on sensor data and measurements of corneal biomechanical properties. Application scenarios involve the collection, sharing and analysis of multi-sensor data. We discuss issues concerning the interpretability and evaluation of ML model predictions, as well as problems related to personalized medicine and transdisciplinary research. The system can serve as the basis for a community-wide initiative including ophthalmologists, data scientists and machine learning experts, with the potential to leverage data acquired by these devices to understand glaucoma risk factors and the processes underlying disease progression.
Affiliation(s)
- Hubert Świerczyński
- Poznan Supercomputing and Networking Center, Poznań, Poland.
- Faculty of Computing and Telecommunications, Poznan University of Technology, Poznań, Poland.
- Szymon Szczęsny
- Faculty of Computing and Telecommunications, Poznan University of Technology, Poznań, Poland.
- Cezary Mazurek
- Poznan Supercomputing and Networking Center, Poznań, Poland.
2
Zhang Y, Zhang X, Zhang Q, Lv B, Hu M, Lv C, Ni Y, Xie G, Li S, Zebardast N, Shweikh Y, Wang N. Automated classification of angle-closure mechanisms based on anterior segment optical coherence tomography images via deep learning. Heliyon 2024; 10:e35236. [PMID: 39166052] [PMCID: PMC11334645] [DOI: 10.1016/j.heliyon.2024.e35236]
Abstract
Purpose: To develop and validate deep learning algorithms that can identify and classify angle-closure (AC) mechanisms using anterior segment optical coherence tomography (AS-OCT) images. Methods: This cross-sectional study included participants of the Handan Eye Study aged ≥35 years with AC detected via gonioscopy or on AS-OCT images. These images were classified by human experts into the following categories to indicate the predominant AC mechanism (ground truth): pupillary block, plateau iris configuration, or thick peripheral iris roll. A deep learning architecture, known as comprehensive mechanism decision net (CMD-Net), was developed to simulate the identification of image-level AC mechanisms by human experts. Cross-validation was performed to optimize and evaluate the model. Human-machine comparisons were conducted using held-out and separate test sets to establish generalizability. Results: In total, 11,035 AS-OCT images of 1455 participants (2833 eyes) were included. Among these, 8828 and 2207 images were included in the cross-validation and held-out test sets, respectively. A separate test set was formed comprising 228 images of 35 consecutive patients with AC detected via gonioscopy at our eye center. In the classification of AC mechanisms, CMD-Net achieved a mean area under the receiver operating characteristic curve (AUC) of 0.980, 0.977, and 0.988 in the cross-validation, held-out, and separate test sets, respectively. The best-performing ophthalmologist achieved an AUC of 0.903 and 0.891 in the held-out and separate test sets, respectively. CMD-Net also outperformed glaucoma specialists, achieving an accuracy of 89.9% and 93.0% compared with 87.0% and 86.8% for the best-performing ophthalmologist in the held-out and separate test sets, respectively. Conclusions: Our study suggests that CMD-Net has the potential to classify AC mechanisms using AS-OCT images, though further validation is needed.
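The AUC values reported in this abstract can be read via the Mann-Whitney interpretation: the probability that a randomly chosen positive image scores higher than a randomly chosen negative one. A minimal sketch (not the authors' code; the scores and labels below are hypothetical) for one class evaluated one-vs-rest:

```python
# Toy one-vs-rest AUC via pairwise rank comparison (Mann-Whitney formulation).
# Not the CMD-Net evaluation code; scores/labels are invented for illustration.

def binary_auc(scores, labels):
    """AUC for one class: fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores for the "pupillary block" class on six images:
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(round(binary_auc(scores, labels), 3))  # prints 0.889
```

A multiclass mean AUC, as reported for the three AC mechanisms, would average this quantity over the one-vs-rest problems for each class.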
Affiliation(s)
- Ye Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Ophthalmology & Visual Science, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Qing Zhang
- Beijing Institute of Ophthalmology, Beijing, China
- Bin Lv
- Ping An Healthcare Technology, Beijing, China
- Man Hu
- National Key Discipline of Pediatrics, Ministry of Education, Department of Ophthalmology, Beijing Children's Hospital, Capital Medical University, China
- Yuan Ni
- Ping An Healthcare Technology, Beijing, China
- Guotong Xie
- Ping An Healthcare Technology, Beijing, China
- Ping An Health Cloud Company Limited, Shenzhen, China
- Ping An International Smart City Technology Company Limited, Shenzhen, China
- Shuning Li
- Beijing Tongren Eye Center, Beijing Key Laboratory of Ophthalmology & Visual Science, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Nazlee Zebardast
- Massachusetts Eye and Ear Infirmary, Harvard Medical School Department of Ophthalmology, Boston, MA, USA
- Yusrah Shweikh
- Massachusetts Eye and Ear Infirmary, Harvard Medical School Department of Ophthalmology, Boston, MA, USA
- Sussex Eye Hospital, University Hospitals Sussex NHS Foundation Trust, Sussex, UK
- Ningli Wang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Ophthalmology & Visual Science, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Institute of Ophthalmology, Beijing, China
3
Mercer R, Alaghband P. The value of virtual glaucoma clinics: a review. Eye (Lond) 2024; 38:1840-1844. [PMID: 38589461] [PMCID: PMC11226713] [DOI: 10.1038/s41433-024-03056-7]
Abstract
Virtual clinics are being utilised to tackle the growing demand for glaucoma healthcare. We conducted a literature search on 28 February 2023 using MEDLINE (PubMed), EMBASE and Web of Science databases. We searched for studies on virtual glaucoma clinics, published in the English language between 2000 and 2023. Studies suggest that virtual glaucoma clinics are a safe and effective alternative to traditional face-to-face clinics for patients with stable and early-to-moderate glaucoma. Patient satisfaction is high across all clinics surveyed. Satisfaction appears to be linked to good communication, trust and improved waiting times. The majority of healthcare professionals are also content with virtual glaucoma clinics. There are no dedicated cost-benefit analyses for virtual glaucoma clinics in the UK. However, virtual clinics in other specialties have reported significant cost savings.
Affiliation(s)
- Rachel Mercer
- Ophthalmology Department, York Hospital, Wigginton Road, York, YO31 8HE, UK
- Pouya Alaghband
- Ophthalmology Department, York Hospital, Wigginton Road, York, YO31 8HE, UK.
4
Gu B, Sidhu S, Weinreb RN, Christopher M, Zangwill LM, Baxter SL. Review of Visualization Approaches in Deep Learning Models of Glaucoma. Asia Pac J Ophthalmol (Phila) 2023; 12:392-401. [PMID: 37523431] [DOI: 10.1097/apo.0000000000000619]
Abstract
Glaucoma is a major cause of irreversible blindness worldwide. As glaucoma often presents without symptoms, early detection and intervention are important in delaying progression. Deep learning (DL) has emerged as a rapidly advancing tool to help achieve these objectives. In this narrative review, data types and visualization approaches for presenting model predictions, including models based on tabular data, functional data, and/or structural data, are summarized, and the importance of data source diversity for improving the utility and generalizability of DL models is explored. Examples of innovative approaches to understanding predictions of artificial intelligence (AI) models and alignment with clinicians are provided. In addition, methods to enhance the interpretability of clinical features from tabular data used to train AI models are investigated. Examples of published DL models that include interfaces to facilitate end-user engagement and minimize cognitive and time burdens are highlighted. The stages of integrating AI models into existing clinical workflows are reviewed, and challenges are discussed. Reviewing these approaches may help inform the generation of user-friendly interfaces that are successfully integrated into clinical information systems. This review details key principles regarding visualization approaches in DL models of glaucoma. The articles reviewed here focused on usability, explainability, and promotion of clinician trust to encourage wider adoption for clinical use. These studies demonstrate important progress in addressing visualization and explainability issues required for successful real-world implementation of DL models in glaucoma.
Affiliation(s)
- Byoungyoung Gu
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
- Sophia Sidhu
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
- Robert N Weinreb
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Mark Christopher
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Linda M Zangwill
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Sally L Baxter
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
5
Coppola F, Faggioni L, Gabelloni M, De Vietro F, Mendola V, Cattabriga A, Cocozza MA, Vara G, Piccinino A, Lo Monaco S, Pastore LV, Mottola M, Malavasi S, Bevilacqua A, Neri E, Golfieri R. Human, All Too Human? An All-Around Appraisal of the "Artificial Intelligence Revolution" in Medical Imaging. Front Psychol 2021; 12:710982. [PMID: 34650476] [PMCID: PMC8505993] [DOI: 10.3389/fpsyg.2021.710982]
Abstract
Artificial intelligence (AI) has seen dramatic growth over the past decade, evolving from a niche, highly specialized computer application into a powerful tool that has revolutionized many areas of our professional and daily lives, and whose potential seems still largely untapped. Medicine, and medical imaging in particular, has gained considerable benefit from AI, including improved diagnostic accuracy and the possibility of predicting individual patient outcomes and options for more personalized treatment. Notably, this process can actively support the ongoing development of advanced, highly specific treatment strategies (e.g., targeted therapies for cancer patients) while enabling faster workflows and more efficient use of healthcare resources. The potential advantages of AI over conventional methods have made it attractive to physicians and other healthcare stakeholders, raising much interest in both the research and industry communities. However, the fast development of AI has unveiled its potential for disrupting the work of healthcare professionals, spawning concerns among radiologists that, in the future, AI may outperform them, damaging their reputations or putting their jobs at risk. Furthermore, this development has raised relevant psychological, ethical, and medico-legal issues that need to be addressed before AI can be considered fully capable of patient management. The aim of this review is to provide a brief but broad overview of the state of the art of AI systems in medical imaging, with a special focus on how AI and the entire healthcare environment should prepare to accomplish the goal of a more advanced, human-centered world.
Affiliation(s)
- Francesca Coppola
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- SIRM Foundation, Italian Society of Medical and Interventional Radiology, Milan, Italy
- Lorenzo Faggioni
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
- Michela Gabelloni
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
- Fabrizio De Vietro
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
- Vincenzo Mendola
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
- Arrigo Cattabriga
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Maria Adriana Cocozza
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Giulio Vara
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Alberto Piccinino
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Silvia Lo Monaco
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Luigi Vincenzo Pastore
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Margherita Mottola
- Department of Computer Science and Engineering, University of Bologna, Bologna, Italy
- Silvia Malavasi
- Department of Computer Science and Engineering, University of Bologna, Bologna, Italy
- Alessandro Bevilacqua
- Department of Computer Science and Engineering, University of Bologna, Bologna, Italy
- Emanuele Neri
- SIRM Foundation, Italian Society of Medical and Interventional Radiology, Milan, Italy
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
- Rita Golfieri
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
6
Lee EB, Wang SY, Chang RT. Interpreting Deep Learning Studies in Glaucoma: Unresolved Challenges. Asia Pac J Ophthalmol (Phila) 2021; 10:261-267. [PMID: 34383718] [DOI: 10.1097/apo.0000000000000395]
Abstract
Deep learning algorithms as tools for automated image classification have recently experienced rapid growth in imaging-dependent medical specialties, including ophthalmology. However, only a few algorithms tailored to specific health conditions have achieved regulatory approval for autonomous diagnosis. There is now an international effort to establish optimized thresholds for algorithm performance benchmarking in a rapidly evolving artificial intelligence field. This review examines the largest deep learning studies in glaucoma, with a special focus on identifying recurrent challenges and limitations that preclude widespread clinical deployment. We focus on the 3 most common input modalities used when diagnosing glaucoma, namely, fundus photographs, spectral domain optical coherence tomography scans, and standard automated perimetry data. We then analyze 3 major challenges present in all studies: defining the algorithm output of glaucoma, determining reliable ground truth datasets, and compiling representative training datasets.
Affiliation(s)
- Eric Boya Lee
- Byers Eye Institute, Department of Ophthalmology, Stanford University, CA
7
Masin L, Claes M, Bergmans S, Cools L, Andries L, Davis BM, Moons L, De Groef L. A novel retinal ganglion cell quantification tool based on deep learning. Sci Rep 2021; 11:702. [PMID: 33436866] [PMCID: PMC7804414] [DOI: 10.1038/s41598-020-80308-y]
Abstract
Glaucoma is a disease associated with the loss of retinal ganglion cells (RGCs), and remains one of the primary causes of blindness worldwide. Major research efforts are presently directed towards the understanding of disease pathogenesis and the development of new therapies, with the help of rodent models as an important preclinical research tool. The ultimate goal is reaching neuroprotection of the RGCs, which requires a tool to reliably quantify RGC survival. Hence, we demonstrate a novel deep learning pipeline that enables fully automated RGC quantification in the entire murine retina. This software, called RGCode (Retinal Ganglion Cell quantification based On DEep learning), provides a user-friendly interface that requires the input of RBPMS-immunostained flatmounts and returns the total RGC count, retinal area and density, together with output images showing the computed counts and isodensity maps. The counting model was trained on RBPMS-stained healthy and glaucomatous retinas, obtained from mice subjected to microbead-induced ocular hypertension and optic nerve crush injury paradigms. RGCode demonstrates excellent performance in RGC quantification as compared to manual counts. Furthermore, we convincingly show that RGCode has potential for wider application, by retraining the model with a minimal set of training data to count FluoroGold-traced RGCs.
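The outputs RGCode returns (total RGC count, retinal area, density) can be illustrated with a toy counter. This sketch is not the authors' deep learning pipeline: it merely counts 4-connected blobs in a tiny hypothetical binarized image (1 = stained pixel) and derives a density from an assumed imaged area.

```python
# Illustrative sketch only -- RGCode is a deep learning pipeline; this toy
# flood-fill counter just shows the kind of summary outputs it produces.

def count_cells(mask):
    """Count 4-connected blobs of 1s in a binary grid (list of lists)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new blob found
                stack = [(r, c)]
                while stack:                    # flood-fill the whole blob
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [1, 0, 0, 1]]
n = count_cells(mask)
area_mm2 = 0.04            # hypothetical imaged area
print(n, n / area_mm2)     # prints: 3 75.0
```

A real pipeline would replace the binarization step with the trained segmentation model and compute area from the flatmount outline rather than a fixed constant.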
Affiliation(s)
- Luca Masin
- Department of Biology, Neural Circuit Development and Regeneration Research Group, KU Leuven, Leuven, Belgium
- Marie Claes
- Department of Biology, Neural Circuit Development and Regeneration Research Group, KU Leuven, Leuven, Belgium
- Steven Bergmans
- Department of Biology, Neural Circuit Development and Regeneration Research Group, KU Leuven, Leuven, Belgium
- Lien Cools
- Department of Biology, Neural Circuit Development and Regeneration Research Group, KU Leuven, Leuven, Belgium
- Lien Andries
- Department of Biology, Neural Circuit Development and Regeneration Research Group, KU Leuven, Leuven, Belgium
- Benjamin M. Davis
- Glaucoma and Retinal Neurodegenerative Disease Research Group, Institute of Ophthalmology, University College London, London, UK
- Central Laser Facility, Science and Technologies Facilities Council, UK Research and Innovation, Didcot, Oxfordshire, UK
- Lieve Moons
- Department of Biology, Neural Circuit Development and Regeneration Research Group, KU Leuven, Leuven, Belgium
- Lies De Groef
- Department of Biology, Neural Circuit Development and Regeneration Research Group, KU Leuven, Leuven, Belgium