1
He YJ, Liu PL, Wei T, Liu T, Li YF, Yang J, Fan WX. Artificial intelligence in kidney transplantation: a 30-year bibliometric analysis of research trends, innovations, and future directions. Ren Fail 2025; 47:2458754. PMID: 39910843; PMCID: PMC11803763; DOI: 10.1080/0886022x.2025.2458754.
Abstract
Kidney transplantation is the definitive treatment for end-stage renal disease (ESRD), yet challenges persist in optimizing donor-recipient matching, postoperative care, and immunosuppressive strategies. This study employs bibliometric analysis to evaluate 890 publications from 1993 to 2023, using tools such as CiteSpace and VOSviewer, to identify global trends, research hotspots, and future opportunities in applying artificial intelligence (AI) to kidney transplantation. Our analysis highlights the United States as the leading contributor to the field, with significant outputs from Mayo Clinic and leading authors like Cheungpasitporn W. Key research themes include AI-driven advancements in donor matching, deep learning for post-transplant monitoring, and machine learning algorithms for personalized immunosuppressive therapies. The findings underscore a rapid expansion in AI applications since 2017, with emerging trends in personalized medicine, multimodal data fusion, and telehealth. This bibliometric review provides a comprehensive resource for researchers and clinicians, offering insights into the evolution of AI in kidney transplantation and guiding future studies toward transformative applications in transplantation science.
Affiliation(s)
- Ying Jia He: Department of Nephrology, First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan Province, China
- Pin Lin Liu: Department of Nephrology, First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan Province, China
- Tao Wei: Department of Library, Kunming Medical University, Kunming, Yunnan Province, China
- Tao Liu: Organ Transplantation Center, First Affiliated Hospital, Kunming Medical University, Kunming, Yunnan Province, China
- Yi Fei Li: Organ Transplantation Center, First Affiliated Hospital, Kunming Medical University, Kunming, Yunnan Province, China
- Jing Yang: Department of Nephrology, First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan Province, China
- Wen Xing Fan: Department of Nephrology, First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan Province, China
2
Gou R, Ma X, Su N, Yuan S, Chen Q. Bilateral deformable attention transformer for screening of high myopia using optical coherence tomography. Comput Biol Med 2025; 191:110236. PMID: 40253920; DOI: 10.1016/j.compbiomed.2025.110236.
Abstract
Myopia is a visual impairment caused by excessive refractive power of the cornea or lens or by elongation of the eyeball. Because high myopia is defined by several classification criteria, such as spherical equivalent (SE) and axial length (AL), existing methods primarily rely on a single criterion for model design. In this paper, to comprehensively utilize multiple indicators, we design a multi-label classification model for high myopia. Image data play a pivotal role in studying high myopia and pathological myopia: notable features of high myopia, including increased retinal curvature, choroidal thinning, and scleral shadowing, are observable in Optical Coherence Tomography (OCT) images of the retina. We propose a model named Bilateral Deformable Attention Transformer (BDA-Tran) for multi-label screening of high myopia in OCT data. Building on the vision transformer, we introduce a bilateral deformable attention (BDA) mechanism in which the queries in self-attention comprise both global queries and data-dependent queries from the left and right sides. This flexible approach allows attention to focus on relevant regions and capture more myopia-related features, concentrating primarily on the choroid, sclera, and other areas associated with high myopia. BDA-Tran is trained and tested on OCT images of 243 patients, achieving accuracies of 83.1% and 87.7% for SE and AL, respectively. Furthermore, we visualize attention maps to provide transparent and interpretable judgments. Experimental results demonstrate that BDA-Tran outperforms existing methods in terms of effectiveness and reliability under the same experimental conditions.
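For readers unfamiliar with the query design sketched above, the following toy PyTorch snippet illustrates the general idea of augmenting global learned queries with data-dependent queries pooled from the left and right halves of a token sequence. It is a minimal sketch of the concept, not the authors' BDA-Tran; all dimensions, names, and the pooling scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BilateralQueryAttention(nn.Module):
    """Toy sketch: global learned queries plus data-dependent queries
    pooled from the left and right halves of the token grid, loosely
    mirroring the bilateral query design described in the abstract."""
    def __init__(self, dim=64, n_global=4, n_heads=4):
        super().__init__()
        self.global_q = nn.Parameter(torch.randn(n_global, dim))
        self.to_q = nn.Linear(dim, dim)  # projects pooled halves to queries
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, tokens):  # tokens: (B, N, dim), N even
        b, n, _ = tokens.shape
        left, right = tokens[:, : n // 2], tokens[:, n // 2 :]
        data_q = self.to_q(torch.stack([left.mean(1), right.mean(1)], dim=1))
        glob_q = self.global_q.unsqueeze(0).expand(b, -1, -1)
        q = torch.cat([glob_q, data_q], dim=1)       # (B, n_global + 2, dim)
        out, weights = self.attn(q, tokens, tokens)  # cross-attend to tokens
        return out, weights                          # weights: attention map

x = torch.randn(2, 16, 64)  # e.g. 16 OCT patch embeddings per scan
out, w = BilateralQueryAttention()(x)
print(out.shape, w.shape)   # (2, 6, 64) and (2, 6, 16)
```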
Affiliation(s)
- Ruoxuan Gou: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Xiao Ma: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Na Su: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Songtao Yuan: Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Qiang Chen: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
3
Chen S, Bai W. Artificial intelligence technology in ophthalmology public health: current applications and future directions. Front Cell Dev Biol 2025; 13:1576465. PMID: 40313720; PMCID: PMC12044197; DOI: 10.3389/fcell.2025.1576465.
Abstract
Global eye health has become a critical public health challenge, with the prevalence of blindness and visual impairment expected to rise significantly in the coming decades. Traditional ophthalmic public health systems face numerous obstacles, including the uneven distribution of medical resources, insufficient training for primary healthcare workers, and limited public awareness of eye health. Addressing these challenges requires urgent, innovative solutions. Artificial intelligence (AI) has demonstrated substantial potential in enhancing ophthalmic public health across various domains. AI offers significant improvements in ophthalmic data management, disease screening and monitoring, risk prediction and early warning systems, medical resource allocation, and health education and patient management. These advancements substantially improve the quality and efficiency of healthcare, particularly in preventing and treating prevalent eye conditions such as cataracts, diabetic retinopathy, glaucoma, and myopia. Additionally, telemedicine and mobile applications have expanded access to healthcare services and enhanced the capabilities of primary healthcare providers. However, there are challenges in integrating AI into ophthalmic public health. Key issues include interoperability with electronic health records (EHR), data security and privacy, data quality and bias, algorithm transparency, and ethical and regulatory frameworks. Heterogeneous data formats and the lack of standardized metadata hinder seamless integration, while privacy risks necessitate advanced techniques such as anonymization. Data biases, stemming from racial or geographic disparities, and the "black box" nature of AI models, limit reliability and clinical trust. Ethical issues, such as ensuring accountability for AI-driven decisions and balancing innovation with patient safety, further complicate implementation. The future of ophthalmic public health lies in overcoming these barriers to fully harness the potential of AI, ensuring that advancements in technology translate into tangible benefits for patients worldwide.
Affiliation(s)
- Wen Bai: The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China; The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
4
An S, Teo K, McConnell MV, Marshall J, Galloway C, Squirrell D. AI explainability in oculomics: How it works, its role in establishing trust, and what still needs to be addressed. Prog Retin Eye Res 2025; 106:101352. PMID: 40086660; DOI: 10.1016/j.preteyeres.2025.101352.
Abstract
Recent developments in artificial intelligence (AI) have seen a proliferation of algorithms that are now capable of predicting a range of systemic diseases from retinal images. Unlike traditional retinal disease detection AI models, which are trained on well-recognised retinal biomarkers, systemic disease detection or "oculomics" models use a range of often poorly characterised retinal biomarkers to arrive at their predictions. As the retinal phenotype that oculomics models use may not be intuitive, clinicians have to rely on the developers' explanations of how these algorithms work in order to understand them. The discipline of understanding how AI algorithms work employs two similar but distinct terms: Explainable AI and Interpretable AI (iAI). Explainable AI describes the holistic functioning of an AI system, including its impact and potential biases. Interpretable AI concentrates solely on examining and understanding the workings of the AI algorithm itself. iAI tools are therefore what clinicians must rely on if they are to understand how an algorithm works and whether its predictions are reliable. The iAI tools that developers use fall into two broad categories: intrinsic methods, which improve transparency through architectural changes, and post-hoc methods, which explain trained models via external algorithms. Currently, post-hoc methods, class activation maps in particular, are far more widely used than other techniques, but they have limitations, especially when applied to oculomics AI models. Aimed at clinicians, we examine how the key iAI methods work, what they are designed to do, and what their limitations are when applied to oculomics AI. We conclude by discussing how combining existing iAI techniques with novel approaches could allow AI developers to better explain how their oculomics models work and reassure clinicians that the results issued are reliable.
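The class activation maps mentioned above are typically produced with Grad-CAM-style logic: pool the gradient of the target logit over the last convolutional feature maps, use it to weight those maps, then ReLU and upsample. A minimal sketch on a stock ResNet-18, purely illustrative and not tied to any particular oculomics model:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}
layer = model.layer4  # last convolutional block

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in retinal image
model(x)[0].max().backward()  # explain the top-scoring class

w = grads["a"].mean(dim=(2, 3), keepdim=True)             # pooled gradients
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))   # weighted feature maps
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
print(cam.shape)  # torch.Size([1, 1, 224, 224])
```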
Affiliation(s)
- Songyang An: School of Optometry and Vision Science, University of Auckland, Auckland, New Zealand; Toku Eyes Limited NZ, 110 Carlton Gore Road, Newmarket, Auckland, 1023, New Zealand
- Kelvin Teo: Singapore Eye Research Institute, The Academia, 20 College Road Discovery Tower Level 6, 169856, Singapore; Singapore National University, Singapore
- Michael V McConnell: Division of Cardiovascular Medicine, Stanford University School of Medicine, Stanford, CA, USA; Toku Eyes Limited NZ, 110 Carlton Gore Road, Newmarket, Auckland, 1023, New Zealand
- John Marshall: Institute of Ophthalmology, University College London, 11-43 Bath Street, London, EC1V 9EL, UK
- Christopher Galloway: Department of Business and Communication, Massey University, East Precinct Albany Expressway, SH17, Albany, Auckland, 0632, New Zealand
- David Squirrell: Department of Ophthalmology, University of the Sunshine Coast, Queensland, Australia; Toku Eyes Limited NZ, 110 Carlton Gore Road, Newmarket, Auckland, 1023, New Zealand
5
Goel I, Bhaskar Y, Kumar N, Singh S, Amanullah M, Dhar R, Karmakar S. Role of AI in empowering and redefining the oncology care landscape: perspective from a developing nation. Front Digit Health 2025; 7:1550407. PMID: 40103737; PMCID: PMC11913822; DOI: 10.3389/fdgth.2025.1550407.
Abstract
Early diagnosis and accurate prognosis play a pivotal role in the clinical management of cancer and in preventing cancer-related mortalities. The burgeoning population of Asia in general, and of South Asian countries like India in particular, poses significant challenges to the healthcare system. Regrettably, the demand for healthcare services in India far exceeds the available resources, resulting in overcrowded hospitals, prolonged wait times, and inadequate facilities. The scarcity of trained manpower in rural settings, lack of awareness, and low penetrance of screening programs further compound the problem. Artificial Intelligence (AI), driven by advancements in machine learning, deep learning, and natural language processing, can profoundly transform these underlying shortcomings in the healthcare industry, especially for populous nations like India. With about 1.4 million cancer cases and 0.9 million deaths reported annually, India carries a cancer burden that surpasses that of most nations. Further, India's large and ethnically diverse population is a data goldmine for healthcare research. Under these circumstances, AI-assisted technology, coupled with digital health solutions, could support effective oncology care and reduce the economic burden of GDP loss in terms of years of potential productive life lost (YPPLL) due to India's stupendous cancer burden. This review explores different aspects of cancer management, such as prevention, diagnosis, precision treatment, prognosis, and drug discovery, where AI has demonstrated promising clinical results. By harnessing the capabilities of AI in oncology research, healthcare professionals can enhance their ability to diagnose cancers at earlier stages, leading to more effective treatments and improved patient outcomes. With continued research and development, AI and digital health can play a transformative role in mitigating the challenges posed by the growing population and advancing the fight against cancer in India. Moreover, AI-driven technologies can assist in tailoring personalized treatment plans, optimizing therapeutic strategies, and supporting oncologists in making well-informed decisions. However, it is essential to ensure responsible implementation and address potential ethical and privacy concerns associated with using AI in healthcare.
Affiliation(s)
- Isha Goel: Department of Biochemistry, All India Institute of Medical Sciences (AIIMS), New Delhi, India; Department of Psychiatry, All India Institute of Medical Sciences (AIIMS), New Delhi, India
- Yogendra Bhaskar: ICMR Computational Genomics Centre, Indian Council of Medical Research (ICMR), New Delhi, India
- Nand Kumar: Department of Psychiatry, All India Institute of Medical Sciences (AIIMS), New Delhi, India
- Sunil Singh: Department of Biochemistry, All India Institute of Medical Sciences (AIIMS), New Delhi, India
- Mohammed Amanullah: Department of Clinical Biochemistry, College of Medicine, King Khalid University, Abha, Saudi Arabia
- Ruby Dhar: Department of Biochemistry, All India Institute of Medical Sciences (AIIMS), New Delhi, India
- Subhradip Karmakar: Department of Biochemistry, All India Institute of Medical Sciences (AIIMS), New Delhi, India
6
Sharma P, Takahashi N, Ninomiya T, Sato M, Miya T, Tsuda S, Nakazawa T. A hybrid multi model artificial intelligence approach for glaucoma screening using fundus images. NPJ Digit Med 2025; 8:130. PMID: 40016437; PMCID: PMC11868628; DOI: 10.1038/s41746-025-01473-w.
Abstract
Glaucoma, a leading cause of blindness, requires accurate early detection. We present an AI-based Glaucoma Screening (AI-GS) network comprising six lightweight deep learning models (total size: 110 MB) that analyze fundus images to identify early structural signs such as optic disc cupping, hemorrhages, and nerve fiber layer defects. The segmentation of the optic cup and disc closely matches that of expert ophthalmologists. AI-GS achieved a sensitivity of 0.9352 (95% CI 0.9277-0.9435) at 95% specificity. In real-world testing, sensitivity dropped to 0.5652 (95% CI 0.5218-0.6058) at ~0.9376 specificity (95% CI 0.9174-0.9562) for the standalone binary glaucoma classification model, whereas the full AI-GS network maintained higher sensitivity (0.8053, 95% CI 0.7704-0.8382) with good specificity (0.9112, 95% CI 0.8887-0.9356). The sub-models in AI-GS, with enhanced capabilities in detecting early glaucoma-related structural changes, drive these improvements. With low computational demands and tunable detection parameters, AI-GS promises widespread glaucoma screening, portable device integration, and improved understanding of disease progression.
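Figures such as "sensitivity at 95% specificity" are read off the ROC curve by choosing the operating point whose specificity meets the target. A minimal sketch of that computation on synthetic scores (not the AI-GS outputs):

```python
import numpy as np
from sklearn.metrics import roc_curve

def sensitivity_at_specificity(y_true, y_score, target_spec=0.95):
    """Best sensitivity among ROC points with specificity >= target."""
    fpr, tpr, thr = roc_curve(y_true, y_score)
    ok = (1 - fpr) >= target_spec  # specificity = 1 - FPR
    i = np.argmax(tpr[ok])         # highest sensitivity that qualifies
    return tpr[ok][i], thr[ok][i]

rng = np.random.default_rng(0)     # synthetic labels and scores
y = rng.integers(0, 2, 1000)
s = rng.normal(y * 1.5, 1.0)       # positives score higher on average
sens, cutoff = sensitivity_at_specificity(y, s)
print(f"sensitivity {sens:.3f} at >=95% specificity (cutoff {cutoff:.2f})")
```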
Affiliation(s)
- Parmanand Sharma: Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Naoki Takahashi: Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Takahiro Ninomiya: Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Masataka Sato: Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Takehiro Miya: Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Satoru Tsuda: Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Toru Nakazawa: Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
7
Schmidt CC, Stahl R, Mueller F, Fischer TD, Forbrig R, Brem C, Isik H, Seelos K, Thon N, Stoecklein S, Liebig T, Rueckel J. Evaluation of AI-Powered Routine Screening of Clinically Acquired cMRIs for Incidental Intracranial Aneurysms. Diagnostics (Basel) 2025; 15:254. PMID: 39941184; PMCID: PMC11816387; DOI: 10.3390/diagnostics15030254.
Abstract
Objectives: To quantify the clinical value of integrating a commercially available artificial intelligence (AI) algorithm for intracranial aneurysm detection in a screening setting that utilizes cranial magnetic resonance imaging (cMRI) scans acquired primarily for other clinical purposes. Methods: A total of 907 consecutive cMRI datasets, including time-of-flight angiography (TOF-MRA), were retrospectively identified from patients unaware of intracranial aneurysms. cMRIs were analyzed by a commercial AI algorithm and reassessed by consultant-level neuroradiologists, who provided confidence scores and workup recommendations for suspicious findings. Patients with newly identified findings (relative to the initial cMRI reports) were contacted for on-site consultations, including cMRI follow-up or catheter angiography. The number needed to screen (NNS) was defined as the number of cMRIs that must undergo AI screening to achieve a given clinical endpoint. Results: The algorithm demonstrated high sensitivity (100% for findings >4 mm in diameter), a 17.8% MRA alert rate, and positive predictive values of 11.5-43.8% (depending on whether inconclusive findings are included). Initial cMRI reports missed 50 out of 59 suspicious findings, including 13 certain intradural aneurysms. The NNS for additionally identifying highly suspicious and therapeutically relevant findings (unruptured intracranial aneurysm treatment scores balanced or in favor of treatment) was 152. The NNS for recommending additional follow-up or workup imaging (cMRI or catheter angiography) was 26, suggesting up to a 4% increase in imaging procedures resulting from a preceding AI screening. Conclusions: AI-powered routine screening of cMRIs clearly lowers the high risk of incidental aneurysm non-reporting but results in a substantial burden of additional imaging follow-up for minor or inconclusive findings.
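The NNS values above are simple ratios of the number screened to the number of events of interest. An illustrative check, with the event counts back-solved from the reported NNS figures rather than taken from the paper:

```python
# Number needed to screen: cMRIs screened per additional event.
def nns(n_screened: int, n_events: int) -> float:
    return n_screened / n_events

n_cmri = 907                   # consecutive cMRI datasets screened
print(round(nns(n_cmri, 6)))   # ~151: per therapeutically relevant finding
print(round(nns(n_cmri, 35)))  # ~26: per additional follow-up/workup exam
```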
Affiliation(s)
- Robert Stahl: Institute of Neuroradiology, University Hospital, LMU Munich, 81377 Munich, Germany
- Franziska Mueller: Department of Radiology, University Hospital, LMU Munich, 81377 Munich, Germany
- Thomas David Fischer: Institute of Neuroradiology, University Hospital, LMU Munich, 81377 Munich, Germany
- Robert Forbrig: Institute of Neuroradiology, University Hospital, LMU Munich, 81377 Munich, Germany
- Christian Brem: Institute of Neuroradiology, University Hospital, LMU Munich, 81377 Munich, Germany
- Hakan Isik: Institute of Neuroradiology, University Hospital, LMU Munich, 81377 Munich, Germany
- Klaus Seelos: Institute of Neuroradiology, University Hospital, LMU Munich, 81377 Munich, Germany
- Niklas Thon: Department of Neurosurgery, University Hospital, LMU Munich, 81377 Munich, Germany
- Sophia Stoecklein: Department of Radiology, University Hospital, LMU Munich, 81377 Munich, Germany
- Thomas Liebig: Institute of Neuroradiology, University Hospital, LMU Munich, 81377 Munich, Germany
- Johannes Rueckel: Institute of Neuroradiology, University Hospital, LMU Munich, 81377 Munich, Germany
8
Mikhail D, Milad D, Antaki F, Hammamji K, Qian CX, Rezende FA, Duval R. The role of artificial intelligence in macular hole management: A scoping review. Surv Ophthalmol 2025; 70:12-27. PMID: 39357748; DOI: 10.1016/j.survophthal.2024.09.003.
Abstract
We focus on the utility of artificial intelligence (AI) in the management of macular hole (MH). We synthesize 25 studies, comprehensively reporting on each AI model's development strategy, validation, tasks, performance, strengths, and limitations. All models analyzed ophthalmic images, and 5 (20%) also analyzed clinical features. Study objectives were categorized based on 3 stages of MH care: diagnosis, identification of MH characteristics, and postoperative predictions of hole closure and vision recovery. Twenty-two (88%) AI models underwent supervised learning, and the models were most often deployed to determine an MH diagnosis. None of the articles applied AI to guiding treatment plans. AI model performance was compared to other algorithms and to human graders. Of the 10 studies comparing AI to human graders (i.e., retinal specialists, general ophthalmologists, and ophthalmology trainees), 5 (50%) reported equivalent or higher performance. Overall, AI analysis of images and clinical characteristics in MH demonstrated high diagnostic and predictive accuracy. Convolutional neural networks comprised the majority of included AI models, including those that were high performing. Future research may consider validating algorithms to propose personalized treatment plans and explore clinical use of the aforementioned algorithms.
Affiliation(s)
- David Mikhail: Temerty Faculty of Medicine, University of Toronto, Toronto, Canada; Department of Ophthalmology, University of Montreal, Montreal, Canada
- Daniel Milad: Department of Ophthalmology, University of Montreal, Montreal, Canada; Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada; Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Canada
- Fares Antaki: Department of Ophthalmology, University of Montreal, Montreal, Canada; Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Canada
- Karim Hammamji: Department of Ophthalmology, University of Montreal, Montreal, Canada; Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Canada
- Cynthia X Qian: Department of Ophthalmology, University of Montreal, Montreal, Canada; Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
- Flavio A Rezende: Department of Ophthalmology, University of Montreal, Montreal, Canada; Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
- Renaud Duval: Department of Ophthalmology, University of Montreal, Montreal, Canada; Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
9
Huang T, Huang X, Yin H. Deep learning methods for improving the accuracy and efficiency of pathological image analysis. Sci Prog 2025; 108:368504241306830. PMID: 39814425; PMCID: PMC11736776; DOI: 10.1177/00368504241306830.
Abstract
This study presents a novel integration of two advanced deep learning models, U-Net and EfficientNetV2, to achieve high-precision segmentation and rapid classification of pathological images. A key innovation is a new heatmap generation algorithm, which leverages meticulous image preprocessing, data enhancement strategies, ensemble learning, attention mechanisms, and deep feature fusion. This algorithm not only produces accurate, interpretable heatmaps but also significantly improves the accuracy and efficiency of pathological image analysis. Unlike existing methods, our approach integrates these advanced techniques into a cohesive framework, enhancing its ability to reveal critical features in pathological images. Rigorous experimental validation demonstrated that our algorithm excels in key performance indicators such as accuracy, recall, and processing speed, underscoring its potential for broader applications in pathological image analysis and beyond.
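As a rough illustration of fusing a segmentation output with classifier attention into a single heatmap, the sketch below min-max normalises two maps and blends them. The inputs and weighting are arbitrary stand-ins, not the authors' fusion algorithm:

```python
import numpy as np

def fuse_heatmaps(seg_prob, cls_attn, w_seg=0.6):
    """Blend a segmentation probability map with a classifier attention
    map after min-max normalisation (the weights here are arbitrary)."""
    def norm(m):
        m = m.astype(float)
        return (m - m.min()) / (m.max() - m.min() + 1e-8)
    return w_seg * norm(seg_prob) + (1 - w_seg) * norm(cls_attn)

rng = np.random.default_rng(1)
seg = rng.random((256, 256))   # stand-in U-Net probability map
attn = rng.random((256, 256))  # stand-in EfficientNetV2 attention map
heat = fuse_heatmaps(seg, attn)
print(heat.shape, float(heat.min()), float(heat.max()))
```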
Affiliation(s)
- Tangsen Huang: School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, China; School of Mathematics and Computer Science, Lishui University, Lishui, China; School of Information Engineering, Hunan University of Science and Engineering, Yongzhou, China
- Xingru Huang: School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, China
- Haibing Yin: School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, China; School of Mathematics and Computer Science, Lishui University, Lishui, China
10
Pachade S, Porwal P, Kokare M, Deshmukh G, Sahasrabuddhe V, Luo Z, Han F, Sun Z, Qihan L, Kamata SI, Ho E, Wang E, Sivajohan A, Youn S, Lane K, Chun J, Wang X, Gu Y, Lu S, Oh YT, Park H, Lee CY, Yeh H, Cheng KW, Wang H, Ye J, He J, Gu L, Müller D, Soto-Rey I, Kramer F, Arai H, Ochi Y, Okada T, Giancardo L, Quellec G, Mériaudeau F. RFMiD: Retinal Image Analysis for multi-Disease Detection challenge. Med Image Anal 2025; 99:103365. PMID: 39395210; DOI: 10.1016/j.media.2024.103365.
Abstract
In the last decades, many large fundus image datasets have been made publicly available for diabetic retinopathy, glaucoma, age-related macular degeneration, and a few other frequent pathologies. These datasets have been used to develop computer-aided disease diagnosis systems by training deep learning models to detect these frequent pathologies. One challenge limiting the adoption of such systems by ophthalmologists is that they ignore sight-threatening rare pathologies, such as central retinal artery occlusion or anterior ischemic optic neuropathy, that ophthalmologists currently detect. Aiming to advance the state of the art in automatic ocular disease classification of frequent diseases along with rare pathologies, a grand challenge on "Retinal Image Analysis for multi-Disease Detection" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2021). This paper reports the challenge organization, dataset, top-performing participants' solutions, evaluation measures, and results based on a new "Retinal Fundus Multi-disease Image Dataset" (RFMiD). There were two principal sub-challenges: disease screening (presence versus absence of pathology, a binary classification problem) and disease/pathology classification (a 28-class multi-label classification problem). The challenge received a positive response from the scientific community, with 74 submissions from individuals and teams. The top-performing methodologies utilized a blend of data preprocessing, data augmentation, pre-trained models, and model ensembling. This multi-disease (frequent and rare pathologies) detection will enable the development of generalizable models for screening the retina, unlike previous efforts that focused on the detection of specific diseases.
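The 28-class multi-label sub-challenge implies one sigmoid output per pathology trained with binary cross-entropy, rather than a softmax over mutually exclusive classes. A minimal PyTorch sketch of that setup; the backbone and sizes are placeholders, not any participant's model:

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(           # stand-in for a pretrained CNN
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(16, 28)            # 28 pathology logits
criterion = nn.BCEWithLogitsLoss()  # multi-label loss, not cross-entropy

x = torch.randn(4, 3, 224, 224)           # batch of fundus images
y = torch.randint(0, 2, (4, 28)).float()  # multi-hot disease labels
loss = criterion(head(backbone(x)), y)
probs = torch.sigmoid(head(backbone(x)))  # per-disease probabilities
print(loss.item(), probs.shape)           # scalar, torch.Size([4, 28])
```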
Affiliation(s)
- Samiksha Pachade: Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded 431606, India
- Prasanna Porwal: Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded 431606, India
- Manesh Kokare: Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded 431606, India
- Vivek Sahasrabuddhe: Department of Ophthalmology, Shankarrao Chavan Government Medical College, Nanded 431606, India
- Zhengbo Luo: Graduate School of Information Production and Systems, Waseda University, Japan
- Feng Han: University of Shanghai for Science and Technology, Shanghai, China
- Zitang Sun: Graduate School of Information Production and Systems, Waseda University, Japan
- Li Qihan: Graduate School of Information Production and Systems, Waseda University, Japan
- Sei-Ichiro Kamata: Graduate School of Information Production and Systems, Waseda University, Japan
- Edward Ho: Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Edward Wang: Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Asaanth Sivajohan: Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Saerom Youn: Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Kevin Lane: Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Jin Chun: Schulich Applied Computing in Medicine, University of Western Ontario, Schulich School of Medicine and Dentistry, Canada
- Xinliang Wang: Beihang University School of Computer Science, China
- Yunchao Gu: Beihang University School of Computer Science, China
- Sixu Lu: Beijing Normal University School of Artificial Intelligence, China
- Young-Tack Oh: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Hyunjin Park: Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea; School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Chia-Yen Lee: Department of Electrical Engineering, National United University, Miaoli 360001, Taiwan, ROC
- Hung Yeh: Department of Electrical Engineering, National United University, Miaoli 360001, Taiwan, ROC; Institute of Biomedical Engineering, National Yang Ming Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, Taiwan, ROC
- Kai-Wen Cheng: Department of Electrical Engineering, National United University, Miaoli 360001, Taiwan, ROC
- Haoyu Wang: School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Jin Ye: ShenZhen Key Lab of Computer Vision and Pattern Recognition, Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Junjun He: School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China; ShenZhen Key Lab of Computer Vision and Pattern Recognition, Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lixu Gu: School of Biomedical Engineering, the Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Dominik Müller: IT-Infrastructure for Translational Medical Research, University of Augsburg, Germany; Medical Data Integration Center, University Hospital Augsburg, Germany
- Iñaki Soto-Rey: IT-Infrastructure for Translational Medical Research, University of Augsburg, Germany; Medical Data Integration Center, University Hospital Augsburg, Germany
- Frank Kramer: IT-Infrastructure for Translational Medical Research, University of Augsburg, Germany
- Yuma Ochi: National Institute of Technology, Kisarazu College, Japan
- Takami Okada: Institute of Industrial Ecological Sciences, University of Occupational and Environmental Health, Japan
- Luca Giancardo: Center for Precision Health, School of Biomedical Informatics, University of Texas Health Science Center at Houston (UTHealth), Houston, TX 77030, USA
11
Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. PMID: 39186968; DOI: 10.1016/j.preteyeres.2024.101291.
Abstract
Recent advancements in artificial intelligence (AI) herald transformative potential for reshaping glaucoma clinical management, improving screening efficacy, sharpening diagnostic precision, and refining the detection of disease progression. However, incorporating AI into healthcare faces significant hurdles in both algorithm development and deployment. When creating algorithms, issues arise from the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough testing, which often limits the algorithms' widespread applicability. Additionally, the "black box" nature of AI algorithms may make doctors wary or skeptical. When it comes to deploying these tools, challenges include dealing with lower-quality images in real-world situations and the systems' limited ability to generalize across diverse ethnic groups and diagnostic equipment. Looking ahead, new developments aim to protect data privacy through federated learning paradigms, improve algorithm generalizability by diversifying input data modalities, and augment datasets with synthetic imagery. The integration of smartphones appears promising for deploying AI algorithms in both clinical and non-clinical settings. Furthermore, introducing large language models (LLMs) as interactive tools in medicine may signal a significant change in how healthcare will be delivered in the future. By navigating these challenges and treating them as opportunities, the field of glaucoma AI will achieve not only improved algorithmic accuracy and optimized data integration but also a paradigmatic shift toward enhanced clinical acceptance and a transformative improvement in glaucoma care.
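The federated learning paradigm mentioned above usually follows the federated averaging pattern: each site trains locally and a server aggregates the weights in proportion to sample counts, so raw images never leave the clinic. A toy sketch with synthetic one-dimensional "weights" (not any published glaucoma system):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Server-side federated averaging, weighted by each site's
    sample count; only model weights, never images, are shared."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hospitals with different dataset sizes (toy parameter vectors):
weights = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.3, 1.2])]
sizes = [1000, 250, 4000]
print(fed_avg(weights, sizes))  # global update, dominated by the largest site
```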
Affiliation(s)
- Fei Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Deming Wang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Zefeng Yang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Yinhang Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Jiaxuan Jiang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Xiaoyi Liu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Kangjie Kong: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Fengqi Zhou: Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA
- Clement C Tham: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Felipe Medeiros: Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Ying Han: University of California, San Francisco, Department of Ophthalmology, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA
- Andrzej Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Linda M Zangwill: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA
- Dennis S C Lam: The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China
- Xiulan Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
12
Shin JY, Son J, Kong ST, Park J, Park B, Park KH, Jung KH, Park SJ. Clinical Utility of Deep Learning Assistance for Detecting Various Abnormal Findings in Color Retinal Fundus Images: A Reader Study. Transl Vis Sci Technol 2024; 13:34. PMID: 39441571; PMCID: PMC11512572; DOI: 10.1167/tvst.13.10.34.
Abstract
Purpose To evaluate the clinical usefulness of a deep learning-based detection device for multiple abnormal findings on retinal fundus photographs for readers with varying expertise. Methods Fourteen ophthalmologists (six residents, eight specialists) assessed 399 fundus images with respect to 12 major ophthalmologic findings, with or without the assistance of a deep learning algorithm, in two separate reading sessions. Sensitivity, specificity, and reading time per image were compared. Results With algorithmic assistance, readers significantly improved in sensitivity for all 12 findings (P < 0.05) but tended to be less specific (P < 0.05) for hemorrhage, drusen, membrane, and vascular abnormality, more profoundly so in residents. Sensitivity without algorithmic assistance was significantly lower in residents (23.1%∼75.8%) compared to specialists (55.1%∼97.1%) in nine findings, but it improved to similar levels with algorithmic assistance (67.8%∼99.4% in residents, 83.2%∼99.5% in specialists) with only hemorrhage remaining statistically significantly lower. Variances in sensitivity were significantly reduced for all findings. Reading time per image decreased in images with fewer than three findings per image, more profoundly in residents. When simulated based on images acquired from a health screening center, average reading time was estimated to be reduced by 25.9% (from 16.4 seconds to 12.1 seconds per image) for residents, and by 2.0% (from 9.6 seconds to 9.4 seconds) for specialists. Conclusions Deep learning-based computer-assisted detection devices increase sensitivity, reduce inter-reader variance in sensitivity, and reduce reading time in less complicated images. Translational Relevance This study evaluated the influence that algorithmic assistance in detecting abnormal findings on retinal fundus photographs has on clinicians, possibly predicting its influence on clinical application.
Affiliation(s)
- Joo Young Shin: Department of Ophthalmology, Seoul Metropolitan Government Seoul National University Boramae Medical Centre, Seoul, Republic of Korea
- Kyu Hyung Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Kyu-Hwan Jung: VUNO Inc., Seoul, Republic of Korea; Department of Medical Device Research and Management, Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University, Seoul, Republic of Korea
- Sang Jun Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
13
Martin E, Cook AG, Frost SM, Turner AW, Chen FK, McAllister IL, Nolde JM, Schlaich MP. Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs. Eye (Lond) 2024; 38:2581-2588. PMID: 38734746; PMCID: PMC11385472; DOI: 10.1038/s41433-024-03085-2.
Abstract
BACKGROUND/OBJECTIVES Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may have unrealized screening potential arising from signals persisting despite training and/or ambiguous signals such as from biomarker overlap or high comorbidity. The study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using diabetic deep learning algorithms. SUBJECTS/METHODS Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. The same 45° colour fundus photograph selected for each of the 433 participants imaged was processed by three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. RESULTS Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between severity of hypertensive retinopathy and misclassified diabetic retinopathy. CONCLUSIONS The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. Observing that models trained for fewer diseases captured more incidental pathology increases confidence in signalling hypotheses aligned with using self-supervised learning to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
Affiliation(s)
- Eve Martin: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia; School of Population and Global Health, The University of Western Australia, Crawley, Australia; Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia; Australian e-Health Research Centre, Floreat, WA, Australia
- Angus G Cook: School of Population and Global Health, The University of Western Australia, Crawley, Australia
- Shaun M Frost: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia; Australian e-Health Research Centre, Floreat, WA, Australia
- Angus W Turner: Lions Eye Institute, Nedlands, WA, Australia; Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Fred K Chen: Lions Eye Institute, Nedlands, WA, Australia; Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia; Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia; Ophthalmology, Department of Surgery, The University of Melbourne, East Melbourne, VIC, Australia; Ophthalmology Department, Royal Perth Hospital, Perth, Australia
- Ian L McAllister: Lions Eye Institute, Nedlands, WA, Australia; Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Janis M Nolde: Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia; Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
- Markus P Schlaich: Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia; Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
14
Grzybowski A, Jin K, Zhou J, Pan X, Wang M, Ye J, Wong TY. Retina Fundus Photograph-Based Artificial Intelligence Algorithms in Medicine: A Systematic Review. Ophthalmol Ther 2024; 13:2125-2149. PMID: 38913289; PMCID: PMC11246322; DOI: 10.1007/s40123-024-00981-4.
Abstract
We conducted a systematic review of research in artificial intelligence (AI) for retinal fundus photographic images. We highlighted the use of various AI algorithms, including deep learning (DL) models, for application in ophthalmic and non-ophthalmic (i.e., systemic) disorders. We found that the use of AI algorithms for the interpretation of retinal images, compared to clinical data and physician experts, represents an innovative solution with demonstrated superior accuracy in identifying many ophthalmic (e.g., diabetic retinopathy (DR), age-related macular degeneration (AMD), optic nerve disorders), and non-ophthalmic disorders (e.g., dementia, cardiovascular disease). There has been a significant amount of clinical and imaging data for this research, leading to the potential incorporation of AI and DL for automated analysis. AI has the potential to transform healthcare by improving accuracy, speed, and workflow, lowering cost, increasing access, reducing mistakes, and transforming healthcare worker education and training.
Affiliation(s)
- Andrzej Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznań, Poland
- Kai Jin: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Jingxin Zhou: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Xiangji Pan: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Meizhu Wang: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Juan Ye: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Tien Y Wong: School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China; Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
15
Hoffmann L, Runkel CB, Künzel S, Kabiri P, Rübsam A, Bonaventura T, Marquardt P, Haas V, Biniaminov N, Biniaminov S, Joussen AM, Zeitz O. Using Deep Learning to Distinguish Highly Malignant Uveal Melanoma from Benign Choroidal Nevi. J Clin Med 2024; 13:4141. PMID: 39064181; PMCID: PMC11277885; DOI: 10.3390/jcm13144141.
Abstract
Background: This study aimed to evaluate the potential of human-machine interaction (HMI) in a deep learning software for discerning the malignancy of choroidal melanocytic lesions based on fundus photographs. Methods: The study enrolled individuals diagnosed with a choroidal melanocytic lesion at a tertiary clinic between 2011 and 2023, resulting in a cohort of 762 eligible cases. A deep learning-based assistant integrated into the software underwent training using a dataset comprising 762 color fundus photographs (CFPs) of choroidal lesions captured by various fundus cameras. The dataset was categorized into benign nevi, untreated choroidal melanomas, and irradiated choroidal melanomas. The reference standard for evaluation was established by retinal specialists using multimodal imaging. Trinary and binary models were trained, and their classification performance was evaluated on a test set consisting of 100 independent images. The discriminative performance of deep learning models was evaluated based on accuracy, recall, and specificity. Results: The final accuracy rates on the independent test set for multi-class and binary (benign vs. malignant) classification were 84.8% and 90.9%, respectively. Recall and specificity ranged from 0.85 to 0.90 and 0.91 to 0.92, respectively. The mean area under the curve (AUC) values were 0.96 and 0.99, respectively. Optimal discriminative performance was observed in binary classification with the incorporation of a single imaging modality, achieving an accuracy of 95.8%. Conclusions: The deep learning models demonstrated commendable performance in distinguishing the malignancy of choroidal lesions. The software exhibits promise for resource-efficient and cost-effective pre-stratification.
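Accuracy, recall (sensitivity), specificity, and AUC for a binary nevus-versus-melanoma classifier can all be derived from predicted probabilities. A small sketch on synthetic data, not the study's results:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def screening_metrics(y_true, y_prob, threshold=0.5):
    """Thresholded accuracy/recall/specificity plus threshold-free AUC."""
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "recall": tp / (tp + fn),       # sensitivity to melanoma
        "specificity": tn / (tn + fp),  # correct benign-nevus calls
        "auc": roc_auc_score(y_true, y_prob),
    }

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 500)  # 0 = benign nevus, 1 = melanoma
p = np.clip(y * 0.6 + rng.normal(0.2, 0.25, 500), 0, 1)
print(screening_metrics(y, p))
```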
Affiliation(s)
- Laura Hoffmann: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Constance B. Runkel: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Steffen Künzel: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Payam Kabiri: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Anne Rübsam: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Theresa Bonaventura: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Antonia M. Joussen: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Oliver Zeitz: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
16
Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous 2024; 10:36. PMID: 38654344; PMCID: PMC11036694; DOI: 10.1186/s40942-024-00554-4.
Abstract
BACKGROUND Applications for artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay in posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged for fundoscopy to accomplish core tasks including segmentation, classification, and prediction. MAIN BODY In this article we provide a review of AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations. SHORT CONCLUSION As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Audrey Yan: Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Amrish Selvam: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Joshua Ong: Michigan Medicine, University of Michigan, Ann Arbor, USA
- Jay Chhablani: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
17
Ayhan MS, Neubauer J, Uzel MM, Gelisken F, Berens P. Interpretable detection of epiretinal membrane from optical coherence tomography with deep neural networks. Sci Rep 2024; 14:8484. PMID: 38605115; PMCID: PMC11009346; DOI: 10.1038/s41598-024-57798-1.
Abstract
This study aimed to automatically detect epiretinal membranes (ERM) in various OCT scans of the central and paracentral macula region and classify them by size using deep neural networks (DNNs). To this end, 11,061 OCT images were included and graded according to the presence of an ERM and its size (small: 100-1000 µm; large: >1000 µm). The dataset was divided into training, validation, and test sets (75%, 10%, and 15% of the data, respectively). An ensemble of DNNs was trained, and saliency maps were generated using Guided Backprop. OCT scans were also projected onto a one-dimensional value using t-SNE analysis. The DNNs' receiver operating characteristics on the test set showed high performance for no-ERM, small-ERM, and large-ERM cases (AUC: 0.99, 0.92, and 0.99, respectively; 3-way accuracy: 89%), with small ERMs being the most difficult to detect. The t-SNE analysis sorted cases by size and, in particular, revealed increased classification uncertainty at the transitions between groups. Saliency maps reliably highlighted ERM, regardless of the presence of other OCT features (i.e., retinal thickening, intraretinal pseudocysts, epiretinal proliferation) and entities such as ERM retinoschisis, macular pseudohole, and lamellar macular hole. This study therefore showed that DNNs can reliably detect and grade ERMs according to their size, not only in the fovea but also in the paracentral region, including hard-to-detect small ERMs. In addition, the generated saliency maps can be used to highlight small ERMs that might otherwise be missed. The proposed model could be used for screening programs or decision support systems in the future.
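Collapsing each scan's learned representation to a single t-SNE coordinate, as done above, amounts to embedding the DNN feature vectors with one output component. A sketch using synthetic stand-ins for the OCT embeddings:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(3)
embeddings = np.vstack([              # stand-in DNN features per B-scan
    rng.normal(0.0, 1.0, (50, 128)),  # "no ERM" cluster
    rng.normal(2.0, 1.0, (50, 128)),  # "small ERM" cluster
    rng.normal(4.0, 1.0, (50, 128)),  # "large ERM" cluster
])
one_d = TSNE(n_components=1, perplexity=30,
             random_state=0).fit_transform(embeddings)
print(one_d.shape)  # (150, 1): each scan mapped to a point on a line
```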
Collapse
Affiliation(s)
- Murat Seçkin Ayhan
- Institute for Ophthalmic Research, University of Tübingen, Elfriede Aulhorn Str. 7, 72076, Tübingen, Germany
| | - Jonas Neubauer
- University Eye Clinic, University of Tübingen, Tübingen, Germany
| | - Mehmet Murat Uzel
- University Eye Clinic, University of Tübingen, Tübingen, Germany
- Department of Ophthalmology, Balıkesir University School of Medicine, Balıkesir, Turkey
| | - Faik Gelisken
- University Eye Clinic, University of Tübingen, Tübingen, Germany.
| | - Philipp Berens
- Institute for Ophthalmic Research, University of Tübingen, Elfriede Aulhorn Str. 7, 72076, Tübingen, Germany.
- Tübingen AI Center, Tübingen, Germany.
| |
Collapse
|
18
|
Bae SH, Go S, Kim J, Park KH, Lee S, Park SJ. A novel vector field analysis for quantitative structure changes after macular epiretinal membrane surgery. Sci Rep 2024; 14:8242. [PMID: 38589440 PMCID: PMC11002028 DOI: 10.1038/s41598-024-58089-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Accepted: 03/25/2024] [Indexed: 04/10/2024] Open
Abstract
The aim of this study was to introduce a novel vector field analysis for the quantitative measurement of retinal displacement after epiretinal membrane (ERM) removal. We developed a novel framework to measure retinal displacement from retinal fundus images as follows: (1) rigid registration of preoperative retinal fundus images in reference to postoperative retinal fundus images, (2) extraction of retinal vessel segmentation masks from these images, (3) non-rigid registration of preoperative vessel masks in reference to postoperative vessel masks, and (4) calculation of the transformation required for non-rigid registration at each pixel. These pixel-wise vector field results were summarized according to 24 predefined sectors after standardization. We applied this framework to 20 patients who underwent ERM removal to obtain their retinal displacement vector fields between retinal fundus images taken preoperatively and at 1, 4, 10, and 22 months postoperatively. The mean direction of the displacement vectors was nasal. The mean standardized magnitudes of retinal displacement between the preoperative visit and postoperative month 1, between months 1 and 4, months 4 and 10, and months 10 and 22 were 38.6, 14.9, 7.6, and 5.4, respectively. In conclusion, the proposed method provides a computerized, reproducible, and scalable way to analyze structural changes in the retina with a powerful visualization tool. Retinal structural changes were mostly concentrated in the early postoperative period and tended to move nasally.
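The displacement-field step can be approximated with dense optical flow, a common stand-in for non-rigid registration. The sketch below applies it to a synthetic vessel mask and a horizontally shifted copy; it is not the authors' actual framework.

```python
# Minimal sketch: dense optical flow as a stand-in for the paper's
# non-rigid registration step, run on synthetic vessel masks.
import numpy as np
import cv2

rng = np.random.default_rng(0)
pre = (rng.random((512, 512)) > 0.97).astype(np.uint8) * 255  # fake vessel mask
pre = cv2.GaussianBlur(pre, (5, 5), 0)
post = np.roll(pre, shift=4, axis=1)          # simulate a 4-px nasal displacement

# Per-pixel displacement field between the pre- and post-op masks.
flow = cv2.calcOpticalFlowFarneback(pre, post, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
dx, dy = flow[..., 0], flow[..., 1]
print(f"mean displacement: {np.hypot(dx, dy).mean():.2f} px")
print(f"mean direction: {np.degrees(np.arctan2(dy.mean(), dx.mean())):.1f} deg")
```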
Collapse
Affiliation(s)
- Seok Hyun Bae
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, South Korea
- Department of Ophthalmology, HanGil Eye Hospital, Incheon, South Korea
| | - Sojung Go
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, South Korea
| | - Jooyoung Kim
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, South Korea
| | - Kyu Hyung Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, South Korea
| | - Soochahn Lee
- School of Electrical Engineering, Kookmin University, Seoul, South Korea
| | - Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, South Korea.
| |
Collapse
|
19
|
Liu Y, Xie H, Zhao X, Tang J, Yu Z, Wu Z, Tian R, Chen Y, Chen M, Ntentakis DP, Du Y, Chen T, Hu Y, Zhang S, Lei B, Zhang G. Automated detection of nine infantile fundus diseases and conditions in retinal images using a deep learning system. EPMA J 2024; 15:39-51. [PMID: 38463622 PMCID: PMC10923762 DOI: 10.1007/s13167-024-00350-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Accepted: 01/21/2024] [Indexed: 03/12/2024]
Abstract
Purpose We developed the Infant Retinal Intelligent Diagnosis System (IRIDS), an automated system to aid early diagnosis and monitoring of infantile fundus diseases and health conditions and to meet the urgent needs of ophthalmologists. Methods We developed IRIDS by combining convolutional neural networks and transformer structures, using a dataset of 7697 retinal images (1089 infants) from four hospitals. It identifies nine fundus diseases and conditions, namely, retinopathy of prematurity (ROP) (mild ROP, moderate ROP, and severe ROP), retinoblastoma (RB), retinitis pigmentosa (RP), Coats disease, coloboma of the choroid, congenital retinal fold (CRF), and normal. IRIDS incorporates depth attention modules, ResNet-18 (Res-18), and Multi-Axis Vision Transformer (MaxViT). Performance was compared to that of ophthalmologists using 450 retinal images. IRIDS employed a five-fold cross-validation approach to generate the classification results. Results Several baseline models achieved the following metrics: accuracy, precision, recall, F1-score (F1), kappa, and area under the receiver operating characteristic curve (AUC), with best values of 94.62% (95% CI, 94.34%-94.90%), 94.07% (95% CI, 93.32%-94.82%), 90.56% (95% CI, 88.64%-92.48%), 92.34% (95% CI, 91.87%-92.81%), 91.15% (95% CI, 90.37%-91.93%), and 99.08% (95% CI, 99.07%-99.09%), respectively. IRIDS showed promising results relative to ophthalmologists, demonstrating an average accuracy, precision, recall, F1, kappa, and AUC of 96.45% (95% CI, 96.37%-96.53%), 95.86% (95% CI, 94.56%-97.16%), 94.37% (95% CI, 93.95%-94.79%), 95.03% (95% CI, 94.45%-95.61%), 94.43% (95% CI, 93.96%-94.90%), and 99.51% (95% CI, 99.51%-99.51%), respectively, in multi-label classification on the test dataset, utilizing the Res-18 and MaxViT models. These results suggest that, particularly in terms of AUC, IRIDS achieved performance that warrants further investigation for the detection of retinal abnormalities. Conclusions IRIDS identifies nine infantile fundus diseases and conditions accurately. It may aid non-ophthalmologist personnel in underserved areas in infantile fundus disease screening, thereby helping to prevent severe complications. IRIDS serves as an example of artificial intelligence integration into ophthalmology to achieve better outcomes in predictive, preventive, and personalized medicine (PPPM/3PM) in the treatment of infantile fundus diseases. Supplementary Information The online version contains supplementary material available at 10.1007/s13167-024-00350-y.
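The metric set reported here (accuracy, precision, recall, F1, kappa, AUC) can be computed for any multi-class classifier with scikit-learn. In the sketch below, simulated labels and fused probabilities stand in for the Res-18 + MaxViT outputs.

```python
# Minimal sketch: the six metrics from the abstract for a 9-class
# classifier, on simulated data (not the IRIDS outputs).
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score, roc_auc_score)

rng = np.random.default_rng(1)
n, k = 450, 9                                  # 450 test images, 9 categories
y_true = rng.integers(0, k, n)
scores = rng.dirichlet(np.ones(k), n)          # hypothetical fused probabilities
y_pred = scores.argmax(axis=1)

print("accuracy ", accuracy_score(y_true, y_pred))
print("precision", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("recall   ", recall_score(y_true, y_pred, average="macro", zero_division=0))
print("F1       ", f1_score(y_true, y_pred, average="macro", zero_division=0))
print("kappa    ", cohen_kappa_score(y_true, y_pred))
print("AUC      ", roc_auc_score(y_true, scores, multi_class="ovr"))
```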
Collapse
Affiliation(s)
- Yaling Liu
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Hai Xie
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Xinyu Zhao
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Jiannan Tang
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Zhen Yu
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Zhenquan Wu
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Ruyin Tian
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Yi Chen
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
- Guizhou Medical University, Guiyang, Guizhou China
| | - Miaohong Chen
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
- Guizhou Medical University, Guiyang, Guizhou China
| | - Dimitrios P. Ntentakis
- Retina Service, Ines and Fred Yeatts Retina Research Laboratory, Angiogenesis Laboratory, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA USA
| | - Yueshanyi Du
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Tingyi Chen
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
- Guizhou Medical University, Guiyang, Guizhou China
| | - Yarou Hu
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Sifan Zhang
- Guizhou Medical University, Guiyang, Guizhou China
- Southern University of Science and Technology School of Medicine, Shenzhen, China
| | - Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Guoming Zhang
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
- Guizhou Medical University, Guiyang, Guizhou China
| |
Collapse
|
20
|
Pandey PU, Ballios BG, Christakis PG, Kaplan AJ, Mathew DJ, Ong Tone S, Wan MJ, Micieli JA, Wong JCY. Ensemble of deep convolutional neural networks is more accurate and reliable than board-certified ophthalmologists at detecting multiple diseases in retinal fundus photographs. Br J Ophthalmol 2024; 108:417-423. [PMID: 36720585 PMCID: PMC10894841 DOI: 10.1136/bjo-2022-322183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Accepted: 01/11/2023] [Indexed: 02/02/2023]
Abstract
AIMS To develop an algorithm to classify multiple retinal pathologies accurately and reliably from fundus photographs and to validate its performance against human experts. METHODS We trained a deep convolutional ensemble (DCE), an ensemble of five convolutional neural networks (CNNs), to classify retinal fundus photographs into diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD) and normal eyes. The CNN architecture was based on the InceptionV3 model, and initial weights were pretrained on the ImageNet dataset. We used 43,055 fundus images from 12 public datasets. Five trained ensembles were then tested on an 'unseen' set of 100 images. Seven board-certified ophthalmologists were asked to classify these test images. RESULTS Board-certified ophthalmologists achieved a mean accuracy of 72.7% over all classes, while the DCE achieved a mean accuracy of 79.2% (p=0.03). The DCE had a significantly higher mean F1-score for DR classification compared with the ophthalmologists (76.8% vs 57.5%; p=0.01) and greater but statistically non-significant mean F1-scores for glaucoma (83.9% vs 75.7%; p=0.10), AMD (85.9% vs 85.2%; p=0.69) and normal eyes (73.0% vs 70.5%; p=0.39). The DCE also had greater mean agreement between accuracy and confidence (81.6% vs 70.3%; p<0.001). DISCUSSION We developed a deep learning model and found that it could classify four categories of fundus images more accurately and reliably than board-certified ophthalmologists. This work provides proof of principle that an algorithm is capable of accurate and reliable recognition of multiple retinal diseases using only fundus photographs.
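A hedged sketch of the ensemble design, assuming a Keras implementation (the paper does not specify the framework): five InceptionV3 members initialized with ImageNet weights, with softmax outputs averaged over the four classes. Each member would be fine-tuned on fundus photographs in practice.

```python
# Minimal sketch: a five-member InceptionV3 ensemble with averaged
# softmax outputs (DR, glaucoma, AMD, normal). Inputs are random.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3

def build_member(num_classes=4):
    base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(299, 299, 3))
    head = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    return tf.keras.Model(base.input, head)

members = [build_member() for _ in range(5)]   # each trained separately in practice

batch = np.random.rand(2, 299, 299, 3).astype("float32")  # stand-in fundus photos
probs = np.mean([m.predict(batch, verbose=0) for m in members], axis=0)
print(probs.argmax(axis=1))                    # ensemble class predictions
```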
Collapse
Affiliation(s)
- Prashant U Pandey
- School of Biomedical Engineering, The University of British Columbia, Vancouver, British Columbia, Canada
| | - Brian G Ballios
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Krembil Research Institute, University Health Network, Toronto, Ontario, Canada
- Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
| | - Panos G Christakis
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
| | - Alexander J Kaplan
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
| | - David J Mathew
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Krembil Research Institute, University Health Network, Toronto, Ontario, Canada
- Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
| | - Stephan Ong Tone
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Sunnybrook Research Institute, Toronto, Ontario, Canada
| | - Michael J Wan
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
| | - Jonathan A Micieli
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Department of Ophthalmology, St. Michael's Hospital, Unity Health, Toronto, Ontario, Canada
| | - Jovi C Y Wong
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
21
|
Choi JY, Ryu IH, Kim JK, Lee IS, Yoo TK. Development of a generative deep learning model to improve epiretinal membrane detection in fundus photography. BMC Med Inform Decis Mak 2024; 24:25. [PMID: 38273286 PMCID: PMC10811871 DOI: 10.1186/s12911-024-02431-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2023] [Accepted: 01/17/2024] [Indexed: 01/27/2024] Open
Abstract
BACKGROUND The epiretinal membrane (ERM) is a common retinal disorder characterized by abnormal fibrocellular tissue at the vitreomacular interface. Most patients with ERM are asymptomatic at early stages, so screening for ERM will become increasingly important. Despite the high prevalence of ERM, few deep learning studies have investigated ERM detection in the color fundus photography (CFP) domain. In this study, we built a generative model to enhance ERM detection performance in CFP. METHODS This deep learning study retrospectively collected 302 ERM and 1,250 healthy CFP data points from a healthcare center. The generative model, based on StyleGAN2, was trained using single-center data. EfficientNetB0 with StyleGAN2-based augmentation was validated using independent internal single-center data and external datasets. We randomly assigned the healthcare center data to development (80%) and internal validation (20%) datasets. Data from two publicly accessible sources were used as external validation datasets. RESULTS StyleGAN2 facilitated realistic CFP synthesis with the characteristic cellophane reflex features of ERM. The proposed method with StyleGAN2-based augmentation outperformed typical transfer learning without a generative adversarial network. The proposed model achieved an area under the receiver operating characteristic curve (AUC) of 0.926 for internal validation. AUCs of 0.951 and 0.914 were obtained for the two external validation datasets. Compared with the deep learning model without augmentation, StyleGAN2-based augmentation improved detection performance and helped the model focus on the location of the ERM. CONCLUSIONS We proposed an ERM detection model that synthesizes realistic CFP images with the pathological features of ERM through generative deep learning. We believe that our deep learning framework will help achieve more accurate detection of ERM in a limited data setting.
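A minimal sketch of the augmentation strategy, with hypothetical names: images drawn from a trained StyleGAN2 generator (stubbed out below) are pooled with real ERM photographs before fitting an EfficientNetB0 classifier. Image counts and sizes are placeholders.

```python
# Minimal sketch: GAN-based augmentation for a minority class before
# training EfficientNetB0. The generator call is a stub, not StyleGAN2.
import numpy as np
from tensorflow.keras.applications import EfficientNetB0

def sample_synthetic(n):
    """Stand-in for drawing n images from a trained StyleGAN2 generator."""
    return np.random.rand(n, 224, 224, 3).astype("float32")

x_real = np.random.rand(30, 224, 224, 3).astype("float32")  # placeholder ERM photos
x_syn = sample_synthetic(50)                  # rebalance the minority (ERM) class
x = np.concatenate([x_real, x_syn])
y = np.ones(len(x), dtype="float32")          # all ERM-positive; healthy images
                                              # would be appended analogously
model = EfficientNetB0(weights=None, classes=1,
                       classifier_activation="sigmoid",
                       input_shape=(224, 224, 3))
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(x, y, epochs=..., batch_size=...)  # training proceeds as usual
```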
Collapse
Affiliation(s)
- Joon Yul Choi
- Department of Biomedical Engineering, Yonsei University, Wonju, South Korea
| | - Ik Hee Ryu
- Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- Research and development department, VISUWORKS, Seoul, South Korea
| | - Jin Kuk Kim
- Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- Research and development department, VISUWORKS, Seoul, South Korea
| | - In Sik Lee
- Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
| | - Tae Keun Yoo
- Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea.
- Research and development department, VISUWORKS, Seoul, South Korea.
| |
Collapse
|
22
|
Valentim CCS, Wu AK, Yu S, Manivannan N, Zhang Q, Cao J, Song W, Wang V, Kang H, Kalur A, Iyer AI, Conti T, Singh RP, Talcott KE. Deep learning-based algorithm for the detection of idiopathic full thickness macular holes in spectral domain optical coherence tomography. Int J Retina Vitreous 2024; 10:9. [PMID: 38263402 PMCID: PMC10804727 DOI: 10.1186/s40942-024-00526-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2023] [Accepted: 01/04/2024] [Indexed: 01/25/2024] Open
Abstract
BACKGROUND Automated identification of spectral domain optical coherence tomography (SD-OCT) features can improve retina clinic workflow efficiency by detecting pathologic findings. The purpose of this study was to test a deep learning (DL)-based algorithm for the identification of Idiopathic Full Thickness Macular Hole (IFTMH) features and stages of severity in SD-OCT B-scans. METHODS In this cross-sectional study, subjects diagnosed solely with either IFTMH or Posterior Vitreous Detachment (PVD) were identified, excluding secondary causes of macular holes, concurrent maculopathies, and incomplete records. SD-OCT scans (512 × 128) from all subjects were acquired with CIRRUS™ HD-OCT (ZEISS, Dublin, CA) and reviewed for quality. To establish a ground truth classification, each SD-OCT B-scan was labeled by two trained graders and adjudicated by a retina specialist when applicable. Two test sets were built based on different gold-standard classification methods. The sensitivity, specificity and accuracy of the algorithm in identifying IFTMH features in SD-OCT B-scans were determined. Spearman's correlation was computed to examine whether the algorithm's probability score was associated with the severity stages of IFTMH. RESULTS Six hundred and one SD-OCT cube scans from 601 subjects (299 with IFTMH and 302 with PVD) were used. A total of 76,928 individual SD-OCT B-scans were labeled gradable by the algorithm and yielded an accuracy of 88.5% (test set 1, 33,024 B-scans) and 91.4% (test set 2, 43,904 B-scans) in identifying SD-OCT features of IFTMHs. A Spearman's correlation coefficient of 0.15 was achieved between the algorithm's probability score and the stages of the 299 IFTMH cubes studied (47 [15.7%] stage 2, 56 [18.7%] stage 3 and 196 [65.6%] stage 4). CONCLUSIONS The DL-based algorithm accurately detected IFTMH features on individual SD-OCT B-scans in both test sets. However, there was a low correlation between the algorithm's probability score and IFTMH severity stages. The algorithm may serve as a clinical decision support tool that assists with the identification of IFTMHs. Further training is necessary for the algorithm to identify stages of IFTMHs.
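The stage-correlation analysis maps directly onto SciPy's Spearman test. The sketch below reuses the stage counts from the abstract but simulates the probability scores.

```python
# Minimal sketch: Spearman correlation between an algorithm's probability
# score and ordinal IFTMH stages. Scores are simulated, not the study's.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
stages = np.repeat([2, 3, 4], [47, 56, 196])            # counts from the abstract
scores = np.clip(0.6 + 0.05 * (stages - 3)              # weak stage effect
                 + rng.normal(0, 0.15, stages.size), 0, 1)

rho, p = spearmanr(scores, stages)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")        # paper reports rho ~ 0.15
```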
Collapse
Affiliation(s)
- Carolina C S Valentim
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
| | - Anna K Wu
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
- Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | - Sophia Yu
- Carl Zeiss Meditec, Inc, Dublin, CA, USA
| | | | | | - Jessica Cao
- Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
| | - Weilin Song
- Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA
| | - Victoria Wang
- Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | - Hannah Kang
- Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | - Aneesha Kalur
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
| | - Amogh I Iyer
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
| | - Thais Conti
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
| | - Rishi P Singh
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
| | - Katherine E Talcott
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA.
| |
Collapse
|
23
|
Li B, Chen H, Yu W, Zhang M, Lu F, Ma J, Hao Y, Li X, Hu B, Shen L, Mao J, He X, Wang H, Ding D, Li X, Chen Y. The performance of a deep learning system in assisting junior ophthalmologists in diagnosing 13 major fundus diseases: a prospective multi-center clinical trial. NPJ Digit Med 2024; 7:8. [PMID: 38212607 PMCID: PMC10784504 DOI: 10.1038/s41746-023-00991-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 12/11/2023] [Indexed: 01/13/2024] Open
Abstract
Artificial intelligence (AI)-based diagnostic systems have been reported to improve fundus disease screening in previous studies. This multicenter prospective self-controlled clinical trial aims to evaluate the diagnostic performance of a deep learning system (DLS) in assisting junior ophthalmologists in detecting 13 major fundus diseases. A total of 1493 fundus images from 748 patients were prospectively collected from five tertiary hospitals in China. Nine junior ophthalmologists were trained and annotated the images with or without the suggestions proposed by the DLS. The diagnostic performance was evaluated among three groups: the DLS-assisted junior ophthalmologist group (test group), the junior ophthalmologist group (control group) and the DLS group. The diagnostic consistency was 84.9% (95% CI, 83.0%-86.9%), 72.9% (95% CI, 70.3%-75.6%) and 85.5% (95% CI, 83.5%-87.4%) in the test group, control group and DLS group, respectively. With the help of the proposed DLS, the diagnostic consistency of junior ophthalmologists improved by approximately 12% (95% CI, 9.1%-14.9%; P < 0.001). For the detection of the 13 diseases, the test group achieved significantly higher sensitivities (72.2%-100.0%) and comparable specificities (90.8%-98.7%) compared with the control group (sensitivities, 50%-100%; specificities, 96.7%-99.8%). The DLS group performed similarly to the test group in detecting any fundus abnormality (sensitivity, 95.7%; specificity, 87.2%) and each of the 13 diseases (sensitivity, 83.3%-100.0%; specificity, 89.0%-98.0%). The proposed DLS provides a novel approach for the automatic detection of 13 major fundus diseases with high diagnostic consistency and helped improve the performance of junior ophthalmologists, particularly by reducing the risk of missed diagnoses. ClinicalTrials.gov NCT04723160.
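Diagnostic consistency here is an agreement proportion with a 95% CI. Below is a minimal sketch using a Wilson interval (the trial's exact CI method is not stated), with illustrative counts.

```python
# Minimal sketch: agreement proportion with a Wilson 95% CI; the counts
# below are illustrative, not the trial data.
from statsmodels.stats.proportion import proportion_confint

n_images = 1493
n_consistent = 1268                   # hypothetical DLS-assisted agreements
rate = n_consistent / n_images
low, high = proportion_confint(n_consistent, n_images,
                               alpha=0.05, method="wilson")
print(f"consistency {rate:.1%} (95% CI {low:.1%}-{high:.1%})")
```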
Collapse
Affiliation(s)
- Bing Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Huan Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Ming Zhang
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
| | - Fang Lu
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
| | - Jingxue Ma
- Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
| | - Yuhua Hao
- Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
| | - Xiaorong Li
- Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
| | - Bojie Hu
- Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
| | - Lijun Shen
- Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
| | - Jianbo Mao
- Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
| | - Xixi He
- School of Information Science and Technology, North China University of Technology, Beijing, China
- Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data, Beijing, China
| | - Hao Wang
- Visionary Intelligence Ltd., Beijing, China
| | | | - Xirong Li
- MoE Key Lab of DEKE, Renmin University of China, Beijing, China
| | - Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China.
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China.
| |
Collapse
|
24
|
Fujinami-Yokokawa Y, Joo K, Liu X, Tsunoda K, Kondo M, Ahn SJ, Robson AG, Naka I, Ohashi J, Li H, Yang L, Arno G, Pontikos N, Park KH, Michaelides M, Tachimori H, Miyata H, Sui R, Woo SJ, Fujinami K. Distinct Clinical Effects of Two RP1L1 Hotspots in East Asian Patients With Occult Macular Dystrophy (Miyake Disease): EAOMD Report 4. Invest Ophthalmol Vis Sci 2024; 65:41. [PMID: 38265784 PMCID: PMC10810149 DOI: 10.1167/iovs.65.1.41] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2023] [Accepted: 12/20/2023] [Indexed: 01/25/2024] Open
Abstract
Purpose To characterize the clinical effects of two RP1L1 hotspots in patients with East Asian occult macular dystrophy (OMD). Methods Fifty-one patients diagnosed with OMD harboring monoallelic pathogenic RP1L1 variants (Miyake disease) from Japan, South Korea, and China were enrolled. Patients were classified into two genotype groups: group A, p.R45W, and group B, missense variants located between amino acids (aa) 1196 and 1201. The clinical parameters of the two genotypes were compared, and deep learning based on spectral-domain optical coherence tomography (SD-OCT) images was used to distinguish the morphologic differences. Results Groups A and B included 29 and 22 patients, respectively. The median ages of onset in groups A and B were 14.0 and 40.0 years, respectively. The median logMAR visual acuities in groups A and B were 0.70 and 0.51, respectively, and the survival curve analysis revealed a 15-year difference in time to vision loss (logMAR 0.22). A statistically significant difference was observed in the visual field classification, but no significant difference was found in the multifocal electroretinographic classification. High accuracy (75.4%) was achieved in classifying the genotype groups from SD-OCT images using machine learning. Conclusions Distinct clinical severities and morphologic phenotypes, supported by artificial intelligence-based classification, were derived from the two investigated RP1L1 hotspots: a more severe phenotype (p.R45W) and a milder phenotype (aa 1196-1201). This newly identified genotype-phenotype association will be valuable for medical care and the design of therapeutic trials.
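The survival comparison could be sketched with a Kaplan-Meier fit per genotype group; the snippet below assumes the third-party lifelines package and simulates ages at vision loss, which are not the study data.

```python
# Minimal sketch: Kaplan-Meier estimates of age at vision loss
# (logMAR 0.22) for the two genotype groups, on simulated ages.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
groups = [("group A (p.R45W)", rng.normal(35, 10, 29)),
          ("group B (aa 1196-1201)", rng.normal(50, 10, 22))]

km = KaplanMeierFitter()
for label, ages in groups:
    km.fit(durations=ages.clip(min=1), event_observed=np.ones(ages.size))
    print(label, "median age at vision loss:", round(km.median_survival_time_, 1))
```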
Collapse
Affiliation(s)
- Yu Fujinami-Yokokawa
- Department of Health Policy and Management, Keio University School of Medicine, Tokyo, Japan
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- UCL Institute of Ophthalmology, London, United Kingdom
- Division of Public Health, Yokokawa Clinic, Suita, Japan
| | - Kwangsic Joo
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
| | - Xiao Liu
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- Southwest Hospital, Army Medical University, Chongqing, China
- Key Lab of Visual Damage and Regeneration & Restoration of Chongqing, Chongqing, China
| | - Kazushige Tsunoda
- Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
| | - Mineo Kondo
- Department of Ophthalmology, Mie University Graduate School of Medicine, Mie, Japan
| | - Seong Joon Ahn
- Department of Ophthalmology, Hanyang University Hospital, Hanyang University College of Medicine, Seoul, Republic of Korea
| | - Anthony G. Robson
- UCL Institute of Ophthalmology, London, United Kingdom
- Moorfields Eye Hospital, London, United Kingdom
| | - Izumi Naka
- Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Tokyo, Japan
| | - Jun Ohashi
- Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Tokyo, Japan
| | - Hui Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
| | - Lizhu Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
| | - Gavin Arno
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- UCL Institute of Ophthalmology, London, United Kingdom
- Moorfields Eye Hospital, London, United Kingdom
| | - Nikolas Pontikos
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- UCL Institute of Ophthalmology, London, United Kingdom
- Moorfields Eye Hospital, London, United Kingdom
| | - Kyu Hyung Park
- Department of Ophthalmology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Michel Michaelides
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- UCL Institute of Ophthalmology, London, United Kingdom
- Moorfields Eye Hospital, London, United Kingdom
| | - Hisateru Tachimori
- Endowed Course for Health System Innovation, Keio University School of Medicine, Tokyo, Japan
| | - Hiroaki Miyata
- Department of Health Policy and Management, Keio University School of Medicine, Tokyo, Japan
| | - Ruifang Sui
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
| | - Se Joon Woo
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
| | - Kaoru Fujinami
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- UCL Institute of Ophthalmology, London, United Kingdom
- Moorfields Eye Hospital, London, United Kingdom
| | - for the East Asia Inherited Retinal Disease Society Study Group*
- Department of Health Policy and Management, Keio University School of Medicine, Tokyo, Japan
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- UCL Institute of Ophthalmology, London, United Kingdom
- Division of Public Health, Yokokawa Clinic, Suita, Japan
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Southwest Hospital, Army Medical University, Chongqing, China
- Key Lab of Visual Damage and Regeneration & Restoration of Chongqing, Chongqing, China
- Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- Department of Ophthalmology, Mie University Graduate School of Medicine, Mie, Japan
- Department of Ophthalmology, Hanyang University Hospital, Hanyang University College of Medicine, Seoul, Republic of Korea
- Moorfields Eye Hospital, London, United Kingdom
- Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Tokyo, Japan
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Department of Ophthalmology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Endowed Course for Health System Innovation, Keio University School of Medicine, Tokyo, Japan
| |
Collapse
|
25
|
Peng Z, Ma R, Zhang Y, Yan M, Lu J, Cheng Q, Liao J, Zhang Y, Wang J, Zhao Y, Zhu J, Qin B, Jiang Q, Shi F, Qian J, Chen X, Zhao C. Development and evaluation of multimodal AI for diagnosis and triage of ophthalmic diseases using ChatGPT and anterior segment images: protocol for a two-stage cross-sectional study. Front Artif Intell 2023; 6:1323924. [PMID: 38145231 PMCID: PMC10748413 DOI: 10.3389/frai.2023.1323924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2023] [Accepted: 11/22/2023] [Indexed: 12/26/2023] Open
Abstract
Introduction Artificial intelligence (AI) technology has made rapid progress in disease diagnosis and triage. In the field of ophthalmic diseases, image-based diagnosis has achieved high accuracy but still encounters limitations due to the lack of medical history. The emergence of ChatGPT enables human-computer interaction, allowing for the development of a multimodal AI system that integrates interactive text and image information. Objective To develop a multimodal AI system using ChatGPT and anterior segment images for diagnosing and triaging ophthalmic diseases, and to assess the AI system's performance through a two-stage cross-sectional study, starting with a silent evaluation followed by an early clinical evaluation in outpatient clinics. Methods and analysis Our study will be conducted across three distinct centers in Shanghai, Nanjing, and Suqian. The development of the smartphone-based multimodal AI system will take place in Shanghai, with the goal of achieving ≥90% sensitivity and ≥95% specificity for diagnosing and triaging ophthalmic diseases. The first stage of the cross-sectional study will explore the system's performance in Shanghai's outpatient clinics. Medical histories will be collected without patient interaction, and anterior segment images will be captured using slit lamp equipment. This stage aims for ≥85% sensitivity and ≥95% specificity with a sample size of 100 patients. The second stage will take place at all three locations, with Shanghai serving as the internal validation dataset, and Nanjing and Suqian as external validation datasets. Medical history will be collected through patient interviews, and anterior segment images will be captured via smartphone devices. An expert panel will establish reference standards and assess AI accuracy for diagnosis and triage throughout all stages. A one-vs.-rest strategy will be used for data analysis, and a post-hoc power calculation will be performed to evaluate the impact of disease types on AI performance. Discussion Our study may provide a user-friendly, smartphone-based multimodal AI system for the diagnosis and triage of ophthalmic diseases. This innovative system may support early detection of ocular abnormalities, facilitate the establishment of a tiered healthcare system, and reduce the burden on tertiary facilities. Trial registration The study was registered on ClinicalTrials.gov on June 25, 2023 (NCT05930444).
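A minimal sketch of the planned one-vs.-rest analysis: per-class sensitivity and specificity from a confusion matrix. The class names and labels below are hypothetical.

```python
# Minimal sketch: one-vs.-rest sensitivity and specificity per class,
# on simulated triage labels (class names are hypothetical).
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(4)
classes = ["cataract", "keratitis", "pterygium", "normal"]
y_true = rng.integers(0, len(classes), 100)                 # 100-patient stage 1
y_pred = np.where(rng.random(100) < 0.9, y_true,
                  rng.integers(0, len(classes), 100))       # ~90% correct

for i, name in enumerate(classes):
    tn, fp, fn, tp = confusion_matrix(y_true == i, y_pred == i).ravel()
    print(f"{name}: sensitivity {tp / (tp + fn):.2f}, "
          f"specificity {tn / (tn + fp):.2f}")
```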
Collapse
Affiliation(s)
- Zhiyu Peng
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Department of Ophthalmology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
| | - Ruiqi Ma
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
| | - Yihan Zhang
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
| | - Mingxu Yan
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- School of Basic Medical Sciences, Fudan University, Shanghai, China
| | - Jie Lu
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- School of Public Health, Fudan University, Shanghai, China
| | - Qian Cheng
- Medical Image Processing, Analysis, and Visualization (MIVAP) Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, China
| | - Jingjing Liao
- Medical Image Processing, Analysis, and Visualization (MIVAP) Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, China
| | - Yunqiu Zhang
- School of Public Health, Fudan University, Shanghai, China
| | - Jinghan Wang
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
| | - Yue Zhao
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
| | - Jiang Zhu
- Department of Ophthalmology, Suqian First Hospital, Suqian, China
| | - Bing Qin
- Department of Ophthalmology, Suqian First Hospital, Suqian, China
| | - Qin Jiang
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
| | - Fei Shi
- Medical Image Processing, Analysis, and Visualization (MIVAP) Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, China
| | - Jiang Qian
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
| | - Xinjian Chen
- Medical Image Processing, Analysis, and Visualization (MIVAP) Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, China
- State Key Laboratory of Radiation Medicine and Protection, Soochow University, Suzhou, China
| | - Chen Zhao
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
| |
Collapse
|
26
|
Yamashita T, Asaoka R, Terasaki H, Yoshihara N, Kakiuchi N, Sakamoto T. Three-year changes in sex judgment using color fundus parameters in elementary school students. PLoS One 2023; 18:e0295123. [PMID: 38033010 PMCID: PMC10688721 DOI: 10.1371/journal.pone.0295123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2023] [Accepted: 11/14/2023] [Indexed: 12/02/2023] Open
Abstract
PURPOSE In a previous cross-sectional study, we reported that the sexes can be distinguished using known factors obtained from color fundus photography (CFP). However, it is not clear how sex differences in fundus parameters appear across the human lifespan. Therefore, we conducted a cohort study to investigate sex determination based on fundus parameters in elementary school students. METHODS This prospective observational longitudinal study investigated 109 right eyes of elementary school students over 4 years (age, 8.5 to 11.5 years). From each CFP, the tessellation fundus index was calculated as red/(red + green + blue) (R/[R+G+B]) using the mean red-green-blue intensities at eight locations around the optic disc and macular region. Optic disc area, ovality ratio, papillomacular angle, and retinal vessel angles and distances were quantified according to our previous report. Using 54 fundus parameters, sex was predicted by L2-regularized binomial logistic regression for each grade. RESULTS The right eyes of 53 boys and 56 girls were analyzed. The discrimination accuracy significantly increased with age: 56.3% at 8.5 years, 46.1% at 9.5 years, 65.5% at 10.5 years and 73.1% at 11.5 years. CONCLUSIONS The accuracy of sex discrimination by fundus photography improved during this 3-year cohort study of elementary school students.
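The tessellation index and classifier translate directly into code. The sketch below computes R/(R+G+B) from simulated mean RGB intensities at eight regions and fits an L2-regularized logistic regression on 54 parameters, mirroring the abstract; all values are random placeholders.

```python
# Minimal sketch: tessellation fundus index R/(R+G+B) per region, then
# L2-regularized logistic regression over 54 parameters. Data simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
rgb_means = rng.uniform(40, 200, size=(109, 8, 3))        # 8 regions x (R, G, B)
tessellation = rgb_means[..., 0] / rgb_means.sum(axis=2)  # R / (R + G + B)

other_params = rng.normal(size=(109, 46))   # disc/vessel metrics (placeholders)
X = np.hstack([tessellation, other_params]) # 8 + 46 = 54 parameters
y = rng.integers(0, 2, 109)                 # 0 = girl, 1 = boy

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```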
Collapse
Affiliation(s)
- Takehiro Yamashita
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
| | - Ryo Asaoka
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
- School of Nursing, Seirei Christopher University, Hamamatsu, Shizuoka, Japan
- Nanovision Research Division, Research Institute of Electronics, Shizuoka University, Hamamatsu, Shizuoka, Japan
- The Graduate School for the Creation of New Photonics Industries, Hamamatsu, Shizuoka, Japan
| | - Hiroto Terasaki
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
| | - Naoya Yoshihara
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
| | - Naoko Kakiuchi
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
| | - Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
| |
Collapse
|
27
|
Li L, Lin D, Lin Z, Li M, Lian Z, Zhao L, Wu X, Liu L, Liu J, Wei X, Luo M, Zeng D, Yan A, Iao WC, Shang Y, Xu F, Xiang W, He M, Fu Z, Wang X, Deng Y, Fan X, Ye Z, Wei M, Zhang J, Liu B, Li J, Ding X, Lin H. DeepQuality improves infant retinopathy screening. NPJ Digit Med 2023; 6:192. [PMID: 37845275 PMCID: PMC10579317 DOI: 10.1038/s41746-023-00943-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2023] [Accepted: 10/05/2023] [Indexed: 10/18/2023] Open
Abstract
Image quality variation is a prominent cause of performance degradation for intelligent disease diagnostic models in clinical applications. Image quality issues are especially pronounced in infantile fundus photography due to poor patient cooperation, which poses a high risk of misdiagnosis. Here, we developed a deep learning-based image quality assessment and enhancement system (DeepQuality) for infantile fundus images to improve infant retinopathy screening. DeepQuality can accurately detect various quality defects concerning integrity, illumination, and clarity, with area under the curve (AUC) values ranging from 0.933 to 0.995. It can also comprehensively score the overall quality of each fundus photograph. By analyzing 2,015,758 infantile fundus photographs from real-world settings using DeepQuality, we found that 58.3% had varying degrees of quality defects, with large variations among different regions and categories of hospitals. Additionally, DeepQuality provides quality enhancement based on the results of quality assessment. After quality enhancement, clinicians' diagnostic performance for retinopathy of prematurity (ROP) improved significantly. Moreover, integrating DeepQuality with AI diagnostic models can effectively improve model performance for detecting ROP. This study may serve as an important reference for the future development of other image-based intelligent disease screening systems.
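A hedged sketch of the quality-scoring idea: per-defect AUCs for integrity, illumination, and clarity, plus a combined overall-quality score. The aggregation rule is an assumption, since the published DeepQuality scoring formula is not given in this abstract.

```python
# Minimal sketch: per-defect AUCs and an assumed overall-quality score,
# on simulated labels and predictions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
defects = ["integrity", "illumination", "clarity"]
labels = rng.integers(0, 2, size=(1000, 3))               # 1 = defect present
scores = np.clip(labels * 0.7 + rng.random((1000, 3)) * 0.5, 0, 1)

for i, name in enumerate(defects):
    print(name, "AUC:", round(roc_auc_score(labels[:, i], scores[:, i]), 3))

overall = 1.0 - scores.mean(axis=1)                       # assumed aggregation
print("example overall-quality scores:", overall[:5].round(2))
```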
Collapse
Affiliation(s)
- Longhui Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
| | - Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Mingyuan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Zhangkai Lian
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Jiali Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Xiaoyue Wei
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Mingjie Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Danqi Zeng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Anqi Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Wai Cheng Iao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Yuanjun Shang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Fabao Xu
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
| | - Wei Xiang
- Department of Clinical Laboratory Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
| | - Muchen He
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zhe Fu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xueyu Wang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yaru Deng
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xinyan Fan
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zhijun Ye
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Meirong Wei
- Department of Ophthalmology, Maternal and Children's Hospital, Liuzhou, Guangxi, China
| | - Jianping Zhang
- Department of Ophthalmology, Maternal and Children's Hospital, Liuzhou, Guangxi, China
| | - Baohai Liu
- Department of Ophthalmology, Maternal and Children's Hospital, Linyi, Shandong, China
| | - Jianqiao Li
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
| | - Xiaoyan Ding
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China.
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China.
| |
Collapse
|
28
|
An L, Qin J, Jiang W, Luo P, Luo X, Lai Y, Jin M. Non-invasive and accurate risk evaluation of cerebrovascular disease using retinal fundus photo based on deep learning. Front Neurol 2023; 14:1257388. [PMID: 37745652 PMCID: PMC10513168 DOI: 10.3389/fneur.2023.1257388] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Accepted: 08/25/2023] [Indexed: 09/26/2023] Open
Abstract
Background Cerebrovascular disease (CeVD) is a prominent contributor to global mortality and profound disability. Extensive research has unveiled a connection between CeVD and retinal microvascular abnormalities. Nonetheless, manual analysis of fundus images remains a laborious and time-consuming task. Consequently, our objective was to develop a risk prediction model that uses retinal fundus photos to noninvasively and accurately assess cerebrovascular risk. Materials and methods To leverage retinal fundus photos for CeVD risk evaluation, we proposed a novel model called Efficient Attention, which combines a convolutional neural network with an attention mechanism. This combination aims to reinforce the salient features present in fundus photos, thereby improving the accuracy and effectiveness of cerebrovascular risk assessment. Results Our proposed model demonstrates notable improvements over the conventional ResNet and EfficientNet architectures. The accuracy (ACC) of our model is 0.834 ± 0.03, surpassing EfficientNet by a margin of 3.6%. Additionally, our model achieves an improved area under the receiver operating characteristic curve (AUC) of 0.904 ± 0.02, surpassing other methods by a margin of 2.2%. Conclusion This paper provides compelling evidence that Efficient Attention can serve as an effective and accurate tool for cerebrovascular risk assessment. The results strongly support the notion that retinal fundus photos hold great potential as reliable predictors of CeVD, offering a noninvasive, convenient, and low-cost solution for large-scale screening of CeVD.
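The Efficient Attention design itself is not spelled out in this abstract, so the sketch below shows a generic channel-attention (squeeze-and-excitation style) block on a small CNN as an illustrative analogue, not the authors' architecture.

```python
# Minimal sketch: a channel-attention block reweighting CNN feature maps
# before a CeVD risk head. Illustrative analogue, not the paper's model.
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(x, reduction=8):
    """Reweight feature channels using globally pooled descriptors."""
    c = x.shape[-1]
    w = layers.GlobalAveragePooling2D()(x)        # squeeze
    w = layers.Dense(c // reduction, activation="relu")(w)
    w = layers.Dense(c, activation="sigmoid")(w)  # excite
    return layers.Multiply()([x, layers.Reshape((1, 1, c))(w)])

inputs = tf.keras.Input((224, 224, 3))
x = layers.Conv2D(32, 3, strides=2, activation="relu")(inputs)
x = channel_attention(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # CeVD risk score
model = tf.keras.Model(inputs, outputs)
model.summary()
```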
Collapse
Affiliation(s)
- Lin An
- Guangdong Weiren Meditech Co., Ltd, Foshan, Guangdong, China
| | - Jia Qin
- Guangdong Weiren Meditech Co., Ltd, Foshan, Guangdong, China
| | - Weili Jiang
- Foshan Weizhi Meditech Co., Ltd, Foshan, Guangdong, China
| | - Penghao Luo
- Foshan Weizhi Meditech Co., Ltd, Foshan, Guangdong, China
| | - Xiaoyan Luo
- Department of Ophthalmology, Guangdong Provincial Hospital of Integrated Traditional Chinese and Western Medicine, Foshan, Guangdong, China
| | - Yuzheng Lai
- Department of Neurology, Guangdong Provincial Hospital of Integrated Traditional Chinese and Western Medicine, Foshan, Guangdong, China
| | - Mei Jin
- Department of Ophthalmology, Guangdong Provincial Hospital of Integrated Traditional Chinese and Western Medicine, Foshan, Guangdong, China
| |
Collapse
|
29
|
Danese C, Kale AU, Aslam T, Lanzetta P, Barratt J, Chou YB, Eldem B, Eter N, Gale R, Korobelnik JF, Kozak I, Li X, Li X, Loewenstein A, Ruamviboonsuk P, Sakamoto T, Ting DS, van Wijngaarden P, Waldstein SM, Wong D, Wu L, Zapata MA, Zarranz-Ventura J. The impact of artificial intelligence on retinal disease management: Vision Academy retinal expert consensus. Curr Opin Ophthalmol 2023; 34:396-402. [PMID: 37326216 PMCID: PMC10399953 DOI: 10.1097/icu.0000000000000980] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
PURPOSE OF REVIEW The aim of this review is to define the "state of the art" in artificial intelligence (AI)-enabled devices that support the management of retinal conditions and to provide Vision Academy recommendations on the topic. RECENT FINDINGS Most of the AI models described in the literature have not been approved for disease management purposes by regulatory authorities. These new technologies are promising, as they may be able to provide personalized treatments as well as a personalized risk score for various retinal diseases. However, several issues still need to be addressed, such as the lack of a common regulatory pathway and a lack of clarity regarding the applicability of AI-enabled medical devices in different populations. SUMMARY Current clinical practice is likely to change as AI-enabled medical devices are adopted, and these devices are expected to have an impact on the management of retinal disease. However, a consensus needs to be reached to ensure they are safe and effective for the overall population.
Collapse
Affiliation(s)
- Carla Danese
- Department of Medicine – Ophthalmology, University of Udine, Udine, Italy
- Department of Ophthalmology, AP-HP Hôpital Lariboisière, Université Paris Cité, Paris, France
| | - Aditya U. Kale
- Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham
| | - Tariq Aslam
- Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester School of Health Sciences, Manchester, UK
| | - Paolo Lanzetta
- Department of Medicine – Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare, Udine, Italy
| | - Jane Barratt
- International Federation on Ageing, Toronto, Canada
| | - Yu-Bai Chou
- Department of Ophthalmology, Taipei Veterans General Hospital
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Bora Eldem
- Department of Ophthalmology, Hacettepe University, Ankara, Turkey
| | - Nicole Eter
- Department of Ophthalmology, University of Münster Medical Center, Münster, Germany
| | - Richard Gale
- Department of Ophthalmology, York Teaching Hospital NHS Foundation Trust, York, UK
| | - Jean-François Korobelnik
- Service d’ophtalmologie, CHU Bordeaux
- University of Bordeaux, INSERM, BPH, UMR1219, F-33000 Bordeaux, France
| | - Igor Kozak
- Moorfields Eye Hospital Centre, Abu Dhabi, UAE
| | - Xiaorong Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin
| | - Xiaoxin Li
- Xiamen Eye Center, Xiamen University, Xiamen, China
| | - Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Paisan Ruamviboonsuk
- Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
| | - Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University, Kagoshima, Japan
| | - Daniel S.W. Ting
- Singapore National Eye Center, Duke-NUS Medical School, Singapore
| | - Peter van Wijngaarden
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
| | | | - David Wong
- Unity Health Toronto – St. Michael's Hospital, University of Toronto, Toronto, Canada
| | - Lihteh Wu
- Macula, Vitreous and Retina Associates of Costa Rica, San José, Costa Rica
| | | | | |
Collapse
|
30
|
Chou YB, Kale AU, Lanzetta P, Aslam T, Barratt J, Danese C, Eldem B, Eter N, Gale R, Korobelnik JF, Kozak I, Li X, Li X, Loewenstein A, Ruamviboonsuk P, Sakamoto T, Ting DS, van Wijngaarden P, Waldstein SM, Wong D, Wu L, Zapata MA, Zarranz-Ventura J. Current status and practical considerations of artificial intelligence use in screening and diagnosing retinal diseases: Vision Academy retinal expert consensus. Curr Opin Ophthalmol 2023; 34:403-413. [PMID: 37326222 PMCID: PMC10399944 DOI: 10.1097/icu.0000000000000979] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
PURPOSE OF REVIEW The application of artificial intelligence (AI) technologies in screening and diagnosing retinal diseases may play an important role in telemedicine and has the potential to shape modern healthcare ecosystems, including within ophthalmology. RECENT FINDINGS In this article, we examine the latest publications relevant to AI in retinal disease and discuss the currently available algorithms. We summarize four key requirements underpinning the successful application of AI algorithms in real-world practice: processing massive data; practicability of an AI model in ophthalmology; policy compliance and the regulatory environment; and balancing profit and cost when developing and maintaining AI models. SUMMARY The Vision Academy recognizes the advantages and disadvantages of AI-based technologies and gives insightful recommendations for future directions.
Collapse
Affiliation(s)
- Yu-Bai Chou
- Department of Ophthalmology, Taipei Veterans General Hospital
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Aditya U. Kale
- Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
| | - Paolo Lanzetta
- Department of Medicine – Ophthalmology, University of Udine
- Istituto Europeo di Microchirurgia Oculare, Udine, Italy
| | - Tariq Aslam
- Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester School of Health Sciences, Manchester, UK
| | - Jane Barratt
- International Federation on Ageing, Toronto, Canada
| | - Carla Danese
- Department of Medicine – Ophthalmology, University of Udine
- Department of Ophthalmology, AP-HP Hôpital Lariboisière, Université Paris Cité, Paris, France
| | - Bora Eldem
- Department of Ophthalmology, Hacettepe University, Ankara, Turkey
| | - Nicole Eter
- Department of Ophthalmology, University of Münster Medical Center, Münster, Germany
| | - Richard Gale
- Department of Ophthalmology, York Teaching Hospital NHS Foundation Trust, York, UK
| | - Jean-François Korobelnik
- Service d’ophtalmologie, CHU Bordeaux
- University of Bordeaux, INSERM, BPH, UMR1219, F-33000 Bordeaux, France
| | - Igor Kozak
- Moorfields Eye Hospital Centre, Abu Dhabi, UAE
| | - Xiaorong Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin
| | - Xiaoxin Li
- Xiamen Eye Center, Xiamen University, Xiamen, China
| | - Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Paisan Ruamviboonsuk
- Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
| | - Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University, Kagoshima, Japan
| | - Daniel S.W. Ting
- Singapore National Eye Center, Duke-NUS Medical School, Singapore
| | - Peter van Wijngaarden
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
| | | | - David Wong
- Unity Health Toronto – St. Michael's Hospital, University of Toronto, Toronto, Canada
| | - Lihteh Wu
- Macula, Vitreous and Retina Associates of Costa Rica, San José, Costa Rica
| | | | | |
Collapse
|
31
|
Hadi MU, Qureshi R, Ahmed A, Iftikhar N. A lightweight CORONA-NET for COVID-19 detection in X-ray images. EXPERT SYSTEMS WITH APPLICATIONS 2023; 225:120023. [PMID: 37063778 PMCID: PMC10088342 DOI: 10.1016/j.eswa.2023.120023] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/08/2022] [Revised: 03/28/2023] [Accepted: 03/31/2023] [Indexed: 06/19/2023]
Abstract
Since December 2019, COVID-19 has posed one of the most serious threats to human life. With the advancement of vaccination programs around the globe, the need to diagnose COVID-19 quickly and with minimal logistics has become all the more important. Consequently, an automated detection system offers the fastest diagnostic option for stopping COVID-19 from spreading, especially among senior patients. This study aims to provide a lightweight deep learning method, called CORONA-NET, that incorporates a convolutional neural network (CNN), discrete wavelet transform (DWT), and long short-term memory (LSTM) for diagnosing COVID-19 from chest X-ray images. In this system, deep feature extraction is performed by the CNN, the feature vector is reduced yet strengthened by the DWT, and the extracted features are classified by the LSTM for prediction. The dataset included 3000 X-rays, 1000 of which were COVID-19 cases obtained locally. Within minutes of the test, the proposed test platform's prototype can accurately detect COVID-19 patients. The proposed method achieves state-of-the-art performance in comparison with existing deep learning methods. We hope that the suggested method will hasten clinical diagnosis and may be used for patients in remote areas where clinical labs are not easily accessible due to a lack of resources, location, or other factors.
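As a rough illustration of the CNN → DWT → LSTM flow just described, here is a minimal sketch. It is not the authors' configuration: the layer sizes, the wavelet family ('db4'), and the sequence reshaping are assumptions.

```python
import numpy as np
import pywt                       # PyWavelets, for the discrete wavelet transform
import torch
import torch.nn as nn

cnn = nn.Sequential(              # toy CNN feature extractor for 224x224 X-rays
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),   # -> 32*8*8 = 2048 features
)
lstm = nn.LSTM(input_size=64, hidden_size=32, batch_first=True)
classifier = nn.Linear(32, 2)     # COVID-19 vs. normal

x = torch.randn(4, 1, 224, 224)                   # a batch of chest X-rays
feats = cnn(x).detach().numpy()                   # 1) deep feature extraction
approx, _detail = pywt.dwt(feats, "db4", axis=1)  # 2) DWT roughly halves the vector
seq = torch.tensor(approx, dtype=torch.float32)
seq = seq[:, : (seq.shape[1] // 64) * 64].reshape(len(seq), -1, 64)  # as a sequence
_, (h, _) = lstm(seq)                             # 3) LSTM summarizes the sequence
logits = classifier(h[-1])                        # final prediction per image
```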
Collapse
Affiliation(s)
- Muhammad Usman Hadi
- Nanotechnology and Integrated Bio-Engineering Centre (NIBEC), School of Engineering, Ulster University, BT15 1AP Belfast, UK
| | - Rizwan Qureshi
- Department of Imaging Physics, MD Anderson Cancer Center, The University of Texas, Houston, TX 77030, USA
| | - Ayesha Ahmed
- Department of Radiology, Aalborg University Hospital, Aalborg 9000, Denmark
| | - Nadeem Iftikhar
- University College of Northern Denmark, Aalborg 9200, Denmark
| |
Collapse
|
32
|
Matta S, Lamard M, Conze PH, Le Guilcher A, Lecat C, Carette R, Basset F, Massin P, Rottier JB, Cochener B, Quellec G. Towards population-independent, multi-disease detection in fundus photographs. Sci Rep 2023; 13:11493. [PMID: 37460629 DOI: 10.1038/s41598-023-38610-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Accepted: 07/11/2023] [Indexed: 07/20/2023] Open
Abstract
Independent validation studies of automatic diabetic retinopathy screening systems have recently shown a drop in screening performance on external data. Beyond diabetic retinopathy, this study investigates the generalizability of deep learning (DL) algorithms for screening various ocular anomalies in fundus photographs, across heterogeneous populations and imaging protocols. The following datasets are considered: OPHDIAT (France, diabetic population), OphtaMaine (France, general population), RIADD (India, general population) and ODIR (China, general population). Two multi-disease DL algorithms were developed: a Single-Dataset (SD) network, trained on the largest dataset (OPHDIAT), and a Multiple-Dataset (MD) network, trained on multiple datasets simultaneously. To assess their generalizability, both algorithms were evaluated in settings where the training and test data originated from overlapping datasets and in settings where they were disjoint. The SD network achieved a mean per-disease area under the receiver operating characteristic curve (mAUC) of 0.9571 on OPHDIAT. However, it generalized poorly to the other three datasets (mAUC < 0.9). When all four datasets were involved in training, the MD network significantly outperformed the SD network (p = 0.0058), indicating improved generalizability. However, in leave-one-dataset-out experiments, the performance of the MD network was significantly lower on populations unseen during training than on populations involved in training (p < 0.0001), indicating imperfect generalizability.
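The MD training setup and the leave-one-dataset-out protocol can be pictured with standard PyTorch dataset utilities. The toy cohorts below are placeholders for the real datasets named above, which are not reproduced here.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def toy_cohort(n: int) -> TensorDataset:
    """Stand-in for a real fundus dataset (images, multi-disease labels)."""
    return TensorDataset(torch.randn(n, 3, 64, 64),
                         torch.randint(0, 2, (n, 5)).float())

datasets = {"OPHDIAT": toy_cohort(100), "OphtaMaine": toy_cohort(40),
            "RIADD": toy_cohort(40), "ODIR": toy_cohort(40)}

def leave_one_dataset_out(held_out: str):
    """MD-style training on all cohorts but one; the held-out cohort probes
    generalizability to a population unseen during training."""
    train = ConcatDataset([d for name, d in datasets.items() if name != held_out])
    return (DataLoader(train, batch_size=32, shuffle=True),
            DataLoader(datasets[held_out], batch_size=32))

train_loader, test_loader = leave_one_dataset_out("ODIR")
```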
Collapse
Affiliation(s)
- Sarah Matta
- Université de Bretagne Occidentale, Brest, Bretagne, France.
- INSERM, UMR 1101, Brest, F-29 200, France.
| | - Mathieu Lamard
- Université de Bretagne Occidentale, Brest, Bretagne, France
- INSERM, UMR 1101, Brest, F-29 200, France
| | - Pierre-Henri Conze
- INSERM, UMR 1101, Brest, F-29 200, France
- IMT Atlantique, Brest, F-29200, France
| | | | - Clément Lecat
- Evolucare Technologies, Villers-Bretonneux, F-80800, France
| | | | - Fabien Basset
- Evolucare Technologies, Villers-Bretonneux, F-80800, France
| | - Pascale Massin
- Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France
| | - Jean-Bernard Rottier
- Bâtiment de consultation porte 14 Pôle Santé Sud CMCM, 28 Rue de Guetteloup, Le Mans, F-72100, France
| | - Béatrice Cochener
- Université de Bretagne Occidentale, Brest, Bretagne, France
- INSERM, UMR 1101, Brest, F-29 200, France
- Service d'Ophtalmologie, CHRU Brest, Brest, F-29200, France
| | | |
Collapse
|
33
|
Gomes RFT, Schuch LF, Martins MD, Honório EF, de Figueiredo RM, Schmith J, Machado GN, Carrard VC. Use of Deep Neural Networks in the Detection and Automated Classification of Lesions Using Clinical Images in Ophthalmology, Dermatology, and Oral Medicine-A Systematic Review. J Digit Imaging 2023; 36:1060-1070. [PMID: 36650299 PMCID: PMC10287602 DOI: 10.1007/s10278-023-00775-3] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 01/03/2023] [Accepted: 01/04/2023] [Indexed: 01/19/2023] Open
Abstract
Artificial neural networks (ANN) are artificial intelligence (AI) techniques used in the automated recognition and classification of pathological changes from clinical images in areas such as ophthalmology, dermatology, and oral medicine. The combination of enterprise imaging and AI is gaining notoriety for its potential benefits in healthcare areas such as cardiology, dermatology, ophthalmology, pathology, physiatry, radiation oncology, radiology, and endoscopy. The present study aimed to analyze, through a systematic literature review, the performance of ANN and deep learning in the recognition and automated classification of lesions from clinical images, compared with human performance. The PRISMA 2020 approach (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) was used, searching four databases for studies that reference the use of AI to define the diagnosis of lesions in the ophthalmology, dermatology, and oral medicine areas. Quantitative and qualitative analyses of the articles that met the inclusion criteria were performed. The search yielded the inclusion of 60 studies. It was found that interest in the topic has increased, especially in the last 3 years. We observed that the performance of AI models is promising, with high accuracy, sensitivity, and specificity; most had outcomes equivalent to human comparators. The reproducibility of model performance in real-life practice has been reported as a critical point. Study designs and results have been progressively improved. AI resources have the potential to contribute to several areas of health. In the coming years, they are likely to be incorporated into everyday life, contributing to diagnostic precision and reducing the time required by the diagnostic process.
Collapse
Affiliation(s)
- Rita Fabiane Teixeira Gomes
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil.
| | - Lauren Frenzel Schuch
- Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
| | - Manoela Domingues Martins
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
- Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
| | | | - Rodrigo Marques de Figueiredo
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
| | - Jean Schmith
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
| | - Giovanna Nunes Machado
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
| | - Vinicius Coelho Carrard
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
- Department of Epidemiology, School of Medicine, TelessaúdeRS-UFRGS, Federal University of Rio Grande Do Sul, Porto Alegre, RS, Brazil
- Department of Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, RS, Brazil
| |
Collapse
|
34
|
Chłopowiec AR, Karanowski K, Skrzypczak T, Grzesiuk M, Chłopowiec AB, Tabakov M. Counteracting Data Bias and Class Imbalance-Towards a Useful and Reliable Retinal Disease Recognition System. Diagnostics (Basel) 2023; 13:diagnostics13111904. [PMID: 37296756 DOI: 10.3390/diagnostics13111904] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Revised: 05/22/2023] [Accepted: 05/25/2023] [Indexed: 06/12/2023] Open
Abstract
Multiple studies have presented satisfactory performance in the detection of various ocular diseases. To date, no study has described a medically accurate multiclass model trained on a large, diverse dataset, and none has addressed the class imbalance problem in one giant dataset originating from multiple large, diverse eye fundus image collections. To ensure a real-life clinical environment and mitigate the problem of biased medical image data, 22 publicly available datasets were merged. To secure medical validity, only Diabetic Retinopathy (DR), Age-Related Macular Degeneration (AMD) and Glaucoma (GL) were included. The state-of-the-art models ConvNext, RegNet and ResNet were utilized. The resulting dataset contained 86,415 normal, 3787 GL, 632 AMD and 34,379 DR fundus images. ConvNextTiny achieved the best results, recognizing most of the examined eye diseases on most metrics. The overall accuracy was 80.46 ± 1.48. Specific accuracy values were: 80.01 ± 1.10 for normal eye fundus, 97.20 ± 0.66 for GL, 98.14 ± 0.31 for AMD, and 80.66 ± 1.27 for DR. A suitable screening model for the most prevalent retinal diseases in ageing societies was designed. The model was developed on a diverse, combined large dataset, which made the obtained results less biased and more generalizable.
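With 86,415 normal images against only 632 AMD images, some counterweight to class imbalance is required. One standard option, whether or not it matches the authors' exact scheme, is inverse-frequency sampling; the sketch below uses the class counts from the abstract and a placeholder image tensor.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Class counts from the abstract: normal, GL, AMD, DR.
labels = torch.tensor([0] * 86415 + [1] * 3787 + [2] * 632 + [3] * 34379)
images = torch.zeros(len(labels), 1)              # placeholder for fundus images

class_counts = torch.bincount(labels).float()
sample_weights = (1.0 / class_counts)[labels]     # rare classes drawn more often
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                replacement=True)
loader = DataLoader(TensorDataset(images, labels), batch_size=64, sampler=sampler)
```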
Collapse
Affiliation(s)
- Adam R Chłopowiec
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| | - Konrad Karanowski
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| | - Tomasz Skrzypczak
- Faculty of Medicine, Wroclaw Medical University, Wybrzeże Ludwika Pasteura 1, 50-367 Wroclaw, Poland
| | - Mateusz Grzesiuk
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| | - Adrian B Chłopowiec
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| | - Martin Tabakov
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| |
Collapse
|
35
|
Wang Y, Jia X, Wei S, Li X. A deep learning model established for evaluating lid margin signs with colour anterior segment photography. Eye (Lond) 2023; 37:1377-1382. [PMID: 35739245 PMCID: PMC10170093 DOI: 10.1038/s41433-022-02088-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2021] [Revised: 03/30/2022] [Accepted: 05/04/2022] [Indexed: 11/09/2022] Open
Abstract
OBJECTIVES To evaluate the feasibility of applying a deep learning model to identify lid margin signs from colour anterior segment photography. METHODS We collected a total of 832 colour anterior segment photographs from 428 dry eye patients. Eight lid margin signs were labelled by human ophthalmologists. Eight deep learning models were constructed based on VGGNet-13 and trained to identify lid margin signs. Sensitivity, specificity, receiver operating characteristic (ROC) curves and area under the curve (AUC) were applied to evaluate the models. RESULTS The AUC for rounding of the posterior lid margin was 0.979, and the AUCs were 0.977 and 0.980 for lid margin irregularity and vascularization, respectively. For hyperkeratinization, the AUC was 0.964. The AUCs for meibomian gland orifice (MGO) retroplacement and plugging were 0.963 and 0.968. For the mucocutaneous junction (MCJ) anteroplacement and retroplacement models, the AUCs were 0.950 and 0.978. The sensitivity and specificity for rounding of the posterior lid margin were 0.974 and 0.921. For irregularity, the sensitivity and specificity were 0.930 and 0.938, and those for vascularization were 0.923 and 0.961. The hyperkeratinization model achieved a sensitivity and specificity of 0.889 and 0.948. The models identifying MGO plugging and retroplacement achieved sensitivities of 0.979 and 0.909 with specificities of 0.867 and 0.967. The sensitivities for MCJ anteroplacement and retroplacement were 0.875 and 0.969, with specificities of 0.966 and 0.888. CONCLUSIONS The deep learning models could identify lid margin signs with high sensitivity and specificity. The study demonstrates the potential of applying artificial intelligence to lid margin evaluation to assist dry eye decision-making.
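The abstract describes eight separate VGGNet-13-based binary classifiers, one per lid margin sign. A minimal sketch of that setup follows; the replacement of the classification head and the sign naming are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg13

SIGNS = ["rounding", "irregularity", "vascularization", "hyperkeratinization",
         "mgo_retroplacement", "mgo_plugging", "mcj_anteroplacement",
         "mcj_retroplacement"]

def make_sign_model() -> nn.Module:
    """One binary (present/absent) VGG-13 classifier for a single sign."""
    m = vgg13(weights=None)
    m.classifier[-1] = nn.Linear(m.classifier[-1].in_features, 2)
    return m

models = {sign: make_sign_model() for sign in SIGNS}
photo = torch.randn(1, 3, 224, 224)               # one anterior segment photograph
probs = {s: torch.softmax(m(photo), dim=1)[0, 1].item() for s, m in models.items()}
```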
Collapse
Affiliation(s)
- Yuexin Wang
- Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Xingheng Jia
- School of Vehicle and Mobility, Tsinghua University, Beijing, China
| | - Shanshan Wei
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing, China
| | - Xuemin Li
- Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China.
| |
Collapse
|
36
|
Cao S, Zhang R, Jiang A, Kuerban M, Wumaier A, Wu J, Xie K, Aizezi M, Tuersun A, Liang X, Chen R. Application effect of an artificial intelligence-based fundus screening system: evaluation in a clinical setting and population screening. Biomed Eng Online 2023; 22:38. [PMID: 37095516 PMCID: PMC10127070 DOI: 10.1186/s12938-023-01097-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Accepted: 03/24/2023] [Indexed: 04/26/2023] Open
Abstract
BACKGROUND To investigate the application effect of an artificial intelligence (AI)-based fundus screening system in a real-world clinical environment. METHODS A total of 637 color fundus images were included in the analysis of the application of the AI-based fundus screening system in the clinical environment, and 20,355 images were analyzed in the population screening. RESULTS The AI-based fundus screening system demonstrated superior diagnostic effectiveness for diabetic retinopathy (DR), retinal vein occlusion (RVO) and pathological myopia (PM) according to the gold-standard referral. The sensitivity, specificity, accuracy, positive predictive value (PPV) and negative predictive value (NPV) for these three fundus abnormalities were greater (all > 80%) than those for age-related macular degeneration (ARMD), referable glaucoma and other abnormalities. The percentages of different diagnostic conditions were similar in the clinical environment and in the population screening. CONCLUSIONS In a real-world setting, our AI-based fundus screening system could detect 7 conditions, with better performance for DR, RVO and PM. Testing in the clinical environment and through population screening demonstrated the clinical utility of our AI-based fundus screening system for the early detection of ocular fundus abnormalities and the prevention of blindness.
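All of the reported screening metrics derive from the same four confusion-matrix counts. For reference, a small helper; the example numbers are invented for illustration, not taken from the study.

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Per-condition screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # abnormal eyes correctly flagged
        "specificity": tn / (tn + fp),   # normal eyes correctly passed
        "ppv": tp / (tp + fp),           # flagged eyes that are truly abnormal
        "npv": tn / (tn + fn),           # passed eyes that are truly normal
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

print(screening_metrics(tp=90, fp=12, tn=500, fn=10))
```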
Collapse
Affiliation(s)
- Shujuan Cao
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Rongpei Zhang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Aixin Jiang
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Mayila Kuerban
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Aizezi Wumaier
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Jianhua Wu
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Kaihua Xie
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Mireayi Aizezi
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Abudurexiti Tuersun
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Xuanwei Liang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China.
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China.
| | - Rongxin Chen
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China.
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China.
| |
Collapse
|
37
|
Alam MN, Yamashita R, Ramesh V, Prabhune T, Lim JI, Chan RVP, Hallak J, Leng T, Rubin D. Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models. Sci Rep 2023; 13:6047. [PMID: 37055475 PMCID: PMC10102012 DOI: 10.1038/s41598-023-33365-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 04/12/2023] [Indexed: 04/15/2023] Open
Abstract
Diabetic retinopathy (DR) is a major cause of vision impairment in diabetic patients worldwide. Due to its prevalence, early clinical diagnosis is essential to improve the treatment management of DR patients. Despite recent demonstrations of successful machine learning (ML) models for automated DR detection, there is a significant clinical need for robust models that can be trained with smaller cohorts of data and still perform with high diagnostic accuracy in independent clinical datasets (i.e., high model generalizability). To address this need, we have developed a self-supervised contrastive learning (CL) based pipeline for classification of referable vs non-referable DR. Self-supervised CL based pretraining allows enhanced data representation and, therefore, the development of robust and generalized deep learning (DL) models, even with small, labeled datasets. We have integrated a neural style transfer (NST) augmentation into the CL pipeline to produce models with better representations and initializations for the detection of DR in color fundus images. We compare our CL pretrained model performance with two state-of-the-art baseline models pretrained with ImageNet weights. We further investigate the model performance with reduced labeled training data (down to 10 percent) to test the robustness of the model when trained with small, labeled datasets. The model was trained and validated on the EyePACS dataset and tested independently on clinical datasets from the University of Illinois, Chicago (UIC). Compared to the baseline models, our CL pretrained FundusNet model had higher area under the receiver operating characteristic (ROC) curve (AUC) values, with confidence intervals (CI), of 0.91 (0.898 to 0.930) vs 0.80 (0.783 to 0.820) and 0.83 (0.801 to 0.853) on UIC data. At 10 percent labeled training data, the FundusNet AUC was 0.81 (0.78 to 0.84) vs 0.58 (0.56 to 0.64) and 0.63 (0.60 to 0.66) for the baseline models when tested on the UIC dataset. CL based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferable from EyePACS to UIC data), and allows training with small, annotated datasets, thereby reducing the ground truth annotation burden on clinicians.
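The self-supervised CL pretraining described here belongs to the SimCLR family of objectives. A minimal NT-Xent loss is sketched below; the FundusNet encoder and NST augmentation are not reproduced, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """NT-Xent contrastive loss. z1, z2: (N, d) projections of two augmented
    views of the same N images; matching rows are positive pairs."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)            # (2N, d)
    sim = z @ z.t() / tau                                  # cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool),
                          float("-inf"))                   # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)                   # pull views together

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
```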
Collapse
Affiliation(s)
- Minhaj Nur Alam
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA.
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, 9201 University City Boulevard, Charlotte, NC, 28223, USA.
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA.
| | - Rikiya Yamashita
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
| | - Vignav Ramesh
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
| | - Tejas Prabhune
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
| | - Jennifer I Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
| | - R V P Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
| | - Joelle Hallak
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
| | - Theodore Leng
- Department of Ophthalmology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Daniel Rubin
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| |
Collapse
|
38
|
Son J, Shin JY, Kong ST, Park J, Kwon G, Kim HD, Park KH, Jung KH, Park SJ. An interpretable and interactive deep learning algorithm for a clinically applicable retinal fundus diagnosis system by modelling finding-disease relationship. Sci Rep 2023; 13:5934. [PMID: 37045856 PMCID: PMC10097752 DOI: 10.1038/s41598-023-32518-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 03/28/2023] [Indexed: 04/14/2023] Open
Abstract
The identification of abnormal findings manifested in retinal fundus images and the diagnosis of ophthalmic diseases are essential to the management of potentially vision-threatening eye conditions. Recently, deep learning-based computer-aided diagnosis (CAD) systems have demonstrated their potential to reduce reading time and discrepancy amongst readers. However, the obscure reasoning of deep neural networks (DNNs) has been the leading cause of reluctance toward their clinical use in CAD systems. Here, we present a novel architectural and algorithmic design of DNNs to comprehensively identify 15 abnormal retinal findings and diagnose 8 major ophthalmic diseases from macula-centered fundus images with accuracy comparable to experts. We then define a notion of counterfactual attribution ratio (CAR) which illuminates the system's diagnostic reasoning, representing how each abnormal finding contributed to its diagnostic prediction. By using CAR, we show that both quantitative and qualitative interpretation and interactive adjustment of the CAD result can be achieved. A comparison of the model's CAR with experts' finding-disease diagnosis correlation confirms that the proposed model identifies the relationship between findings and diseases much as ophthalmologists do.
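The paper's exact CAR formula is not given in the abstract. Purely to illustrate the counterfactual idea — compare the disease probability with a finding present against the probability when that finding is toggled off — one might write the following; the toggling interface and the ratio form are assumptions, not the authors' definition.

```python
import torch

def counterfactual_attribution_ratio(model, findings: torch.Tensor,
                                     finding_idx: int, disease_idx: int) -> float:
    """findings: (1, F) vector of finding scores feeding a disease head."""
    with torch.no_grad():
        p_factual = torch.sigmoid(model(findings))[0, disease_idx]
        counterfactual = findings.clone()
        counterfactual[0, finding_idx] = 0.0      # counterfactually remove finding
        p_counter = torch.sigmoid(model(counterfactual))[0, disease_idx]
    return (p_factual / p_counter).item()         # >1: finding supports diagnosis

model = torch.nn.Linear(15, 8)                    # 15 findings -> 8 diseases (toy)
ratio = counterfactual_attribution_ratio(model, torch.rand(1, 15), 3, 0)
```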
Collapse
Affiliation(s)
| | - Joo Young Shin
- Department of Ophthalmology, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Republic of Korea
| | | | | | | | - Hoon Dong Kim
- Department of Ophthalmology, College of Medicine, Soonchunhyang University, Cheonan, Republic of Korea
| | - Kyu Hyung Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, Republic of Korea
| | - Kyu-Hwan Jung
- Department of Medical Device Research and Management, Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University, 81 Irwon-ro, Gangnam-gu, Seoul, Republic of Korea.
| | - Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, Republic of Korea.
| |
Collapse
|
39
|
Qu JH, Qin XR, Li CD, Peng RM, Xiao GG, Cheng J, Gu SF, Wang HK, Hong J. Fully automated grading system for the evaluation of punctate epithelial erosions using deep neural networks. Br J Ophthalmol 2023; 107:453-460. [PMID: 34670751 PMCID: PMC10086304 DOI: 10.1136/bjophthalmol-2021-319755] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Accepted: 10/08/2021] [Indexed: 11/04/2022]
Abstract
PURPOSE The goal was to develop a fully automated grading system for the evaluation of punctate epithelial erosions (PEEs) using deep neural networks. METHODS A fully automated system was developed to detect the corneal position and grade staining severity given a corneal fluorescein staining image. The fully automated pipeline consists of the following three steps: a corneal segmentation model extracts the corneal area; five image patches are cropped from the staining image based on the five subregions of the extracted cornea; a staining grading model predicts a score from 0 to 3 for each image patch, and an automated grading score from 0 to 15 is obtained for the whole cornea. Finally, the clinical grading scores annotated by three ophthalmologists were compared with the automated grading scores. RESULTS For corneal segmentation, the segmentation model achieved an intersection over union of 0.937. For punctate staining grading, the grading model achieved a classification accuracy of 76.5% and an area under the receiver operating characteristic curve of 0.940 (95% CI 0.932 to 0.949). For the fully automated pipeline, Pearson's correlation coefficient between the clinical and automated grading scores was 0.908 (p<0.01). Bland-Altman analysis revealed 95% limits of agreement between the clinical and automated grading scores of -4.125 to 3.720 (concordance correlation coefficient=0.904). The average time required to process a single stained image in the pipeline was 0.58 s. CONCLUSION A fully automated grading system was developed to evaluate PEEs. The grading results may serve as a reference for ophthalmologists in clinical trials and residency training procedures.
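A skeleton of the three-step pipeline (segmentation, five subregion patches, per-patch grading summed to a 0-15 corneal score), with both networks stubbed by placeholders since the trained models are not public and the subregion cropping is only indicated schematically.

```python
import torch
import torch.nn as nn

segmenter = nn.Identity()          # stand-in for the corneal segmentation model
grader = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 4))  # scores 0-3

def grade_staining(image: torch.Tensor) -> int:
    mask = segmenter(image)        # 1) corneal area (stubbed as identity here)
    # 2) in the real pipeline, five patches are cropped from the masked
    #    cornea's subregions; a single repeated crop stands in for that.
    patches = [image[:, :, :64, :64]] * 5
    total = 0
    for patch in patches:          # 3) grade each patch 0-3 and accumulate
        total += grader(patch).argmax(dim=1).item()
    return total                   # whole-cornea score, 0-15

score = grade_staining(torch.randn(1, 3, 64, 64))
```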
Collapse
Affiliation(s)
- Jing-Hao Qu
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Xiao-Ran Qin
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Chen-Di Li
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Rong-Mei Peng
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Ge-Ge Xiao
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Jian Cheng
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Shao-Feng Gu
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Hai-Kun Wang
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Jing Hong
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| |
Collapse
|
40
|
Chan YK, Cheng CY, Sabanayagam C. Eyes as the windows into cardiovascular disease in the era of big data. Taiwan J Ophthalmol 2023; 13:151-167. [PMID: 37484607 PMCID: PMC10361436 DOI: 10.4103/tjo.tjo-d-23-00018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Accepted: 04/11/2023] [Indexed: 07/25/2023] Open
Abstract
Cardiovascular disease (CVD) is a major cause of mortality and morbidity worldwide and imposes significant socioeconomic burdens, especially with late diagnoses. There is growing evidence of strong correlations between ocular images, which are information-dense, and CVD progression. The accelerating development of deep learning algorithms (DLAs) is a promising avenue for research into CVD biomarker discovery, early CVD diagnosis, and CVD prognostication. We review a selection of 17 recent DLAs in the less-explored realm of DL applied to ocular images to predict CVD outcomes, potential challenges in their clinical deployment, and the path forward. The evidence for CVD manifestations in ocular images is well documented. Most of the reviewed DLAs analyze retinal fundus photographs to predict CV risk factors, in particular hypertension. DLAs can predict age, sex, smoking status, alcohol status, body mass index, mortality, myocardial infarction, stroke, chronic kidney disease, and hematological disease with significant accuracy. While the cardio-oculomics intersection is now burgeoning, much remains to be explored. The increasing availability of big data, computational power, technological literacy, and acceptance all prime this subfield for rapid growth. We pinpoint specific areas of improvement toward ubiquitous clinical deployment: increased generalizability, external validation, and universal benchmarking. DLAs capable of predicting CVD outcomes from ocular inputs are of great interest and hold promise for individualized precision medicine and efficiency in the provision of health care, although their real-world efficacy remains undetermined despite impactful initial results.
Collapse
Affiliation(s)
- Yarn Kit Chan
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| | - Ching-Yu Cheng
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Center for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Charumathi Sabanayagam
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| |
Collapse
|
41
|
Field EL, Tam W, Moore N, McEntee M. Efficacy of Artificial Intelligence in the Categorisation of Paediatric Pneumonia on Chest Radiographs: A Systematic Review. CHILDREN 2023; 10:children10030576. [PMID: 36980134 PMCID: PMC10047666 DOI: 10.3390/children10030576] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 03/04/2023] [Accepted: 03/15/2023] [Indexed: 03/19/2023]
Abstract
This study aimed to systematically review the literature to synthesise and summarise the evidence surrounding the efficacy of artificial intelligence (AI) in classifying paediatric pneumonia on chest radiographs (CXRs). Following the initial search, studies that matched the pre-set criteria had their data extracted using a data extraction tool, and the included studies were assessed via critical appraisal tools and risk-of-bias analysis. Results were accumulated, and the outcome measures analysed included sensitivity, specificity, accuracy, and area under the curve (AUC). Five studies met the inclusion criteria. The highest sensitivity was achieved by an ensemble AI algorithm (96.3%). DenseNet201 obtained the highest specificity and accuracy (94%, 95%). The most outstanding AUC value was achieved by the VGG16 algorithm (96.2%). Some of the AI models achieved close to 100% diagnostic accuracy. To assess the efficacy of AI in a clinical setting, these AI models should be compared with the performance of radiologists. The included and evaluated AI algorithms showed promising results. These algorithms can potentially ease and speed up diagnosis once the studies are replicated and their performance is assessed in clinical settings, potentially saving millions of lives.
Collapse
Affiliation(s)
- Erica Louise Field
- Discipline of Medical Imaging and Radiation Therapy, University College Cork, College Road, T12 K8AF Cork, Ireland
| | - Winnie Tam
- Department of Midwifery and Radiography, University of London, Northampton Square, London EC1V 0HB, UK
- Correspondence:
| | - Niamh Moore
- Discipline of Medical Imaging and Radiation Therapy, University College Cork, College Road, T12 K8AF Cork, Ireland
| | - Mark McEntee
- Discipline of Medical Imaging and Radiation Therapy, University College Cork, College Road, T12 K8AF Cork, Ireland
| |
Collapse
|
42
|
Nespolo RG, Yi D, Cole E, Wang D, Warren A, Leiderman YI. Feature Tracking and Segmentation in Real Time via Deep Learning in Vitreoretinal Surgery: A Platform for Artificial Intelligence-Mediated Surgical Guidance. Ophthalmol Retina 2023; 7:236-242. [PMID: 36241132 DOI: 10.1016/j.oret.2022.10.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 09/28/2022] [Accepted: 10/03/2022] [Indexed: 11/15/2022]
Abstract
PURPOSE This study investigated whether a deep learning neural network can detect and segment surgical instrumentation and relevant tissue boundaries and landmarks within the retina using imaging acquired from a surgical microscope in real time, with the goal of providing image-guided vitreoretinal (VR) microsurgery. DESIGN Retrospective analysis via a prospective, single-center study. PARTICIPANTS One hundred and one patients undergoing VR surgery, inclusive of core vitrectomy, membrane peeling, and endolaser application, in a university-based ophthalmology department between July 1, 2020, and September 1, 2021. METHODS A dataset composed of 606 surgical image frames was annotated by 3 VR surgeons. Annotation consisted of identifying the location and area of the following features, when present in-frame: vitrector, forceps, and endolaser tooltips; optic disc; fovea; retinal tears; retinal detachment; fibrovascular proliferation; endolaser spots; the area where endolaser was applied; and macular hole. An instance segmentation fully convolutional neural network (YOLACT++) was adapted and trained, and fivefold cross-validation was employed to generate accuracy metrics. MAIN OUTCOME MEASURES Area under the precision-recall curve (AUPR) for the detection of elements tracked and segmented in the final test dataset; frames per second (FPS) to assess the model's suitability for real-time performance. RESULTS The platform detected and classified the vitrector tooltip with a mean AUPR of 0.972 ± 0.009. The segmentation of target tissues, such as the optic disc, fovea, and macular hole, reached mean AUPR values of 0.928 ± 0.013, 0.844 ± 0.039, and 0.916 ± 0.021, respectively. The postprocessed image was rendered at a full high-definition resolution of 1920 × 1080 pixels at 38.77 ± 1.52 FPS when attached to a surgical visualization system, reaching up to 87.44 ± 3.8 FPS. CONCLUSIONS Neural networks can localize, classify, and segment tissues and instruments during VR procedures in real time. We propose a framework for developing a surgical guidance and assessment platform that may guide surgical decision-making and help formulate tools for systematic analyses of VR surgery. Potential applications include collision avoidance to prevent unintended instrument-tissue interactions and the extraction of the spatial localization and movement of surgical instruments for surgical data science research. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found after the references.
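YOLACT++ is not bundled with common libraries, so as an analogous illustration of per-frame instance segmentation and FPS measurement, this sketch substitutes torchvision's Mask R-CNN; it is not the authors' model. The class count (11 annotated feature types plus background) follows the annotation list above, and the frame size is an assumption.

```python
import time
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# 11 annotated classes + background; untrained weights, inference mode.
model = maskrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                              num_classes=12).eval()
frames = [torch.rand(3, 540, 960) for _ in range(4)]  # stand-in video frames

start = time.perf_counter()
with torch.no_grad():
    for frame in frames:
        outputs = model([frame])[0]   # boxes, labels, scores, per-pixel masks
fps = len(frames) / (time.perf_counter() - start)
print(f"{fps:.1f} FPS")               # real-time guidance needs roughly 30+ FPS
```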
Collapse
Affiliation(s)
- Rogerio Garcia Nespolo
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois; Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois
| | - Darvin Yi
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois; Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois
| | - Emily Cole
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Daniel Wang
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Alexis Warren
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Yannek I Leiderman
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois; Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois.
| |
Collapse
|
43
|
Li Z, Chen W. Solving data quality issues of fundus images in real-world settings by ophthalmic AI. Cell Rep Med 2023; 4:100951. [PMID: 36812885 PMCID: PMC9975325 DOI: 10.1016/j.xcrm.2023.100951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/23/2023]
Abstract
Liu et al.1 develop a deep-learning-based flow cytometry-like image quality classifier, DeepFundus, for the automated, high-throughput, and multidimensional classification of fundus image quality. DeepFundus significantly improves the real-world performance of established artificial intelligence diagnostics in detecting multiple retinopathies.
Collapse
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China.
| |
Collapse
|
44
|
Cavichini M, Bartsch DUG, Warter A, Singh S, An C, Wang Y, Zhang J, Nguyen T, Freeman WR. Accuracy and Time Comparison Between Side-by-Side and Artificial Intelligence Overlayed Images. Ophthalmic Surg Lasers Imaging Retina 2023; 54:108-113. [PMID: 36780638 DOI: 10.3928/23258160-20230130-03] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/15/2023]
Abstract
BACKGROUND AND OBJECTIVE The purpose of this study was to evaluate the accuracy of, and the time required for, locating a lesion across different imaging platforms (color fundus photographs and infrared scanning laser ophthalmoscope images), comparing the traditional side-by-side (SBS) colocalization technique with an artificial intelligence (AI)-assisted technique. PATIENTS AND METHODS Fifty-three pathological lesions were studied in 11 eyes. Images were aligned using the SBS and AI-overlaid methods. The location of each color fundus lesion on the corresponding infrared scanning laser ophthalmoscope image was analyzed twice, once for each method, on different days, by two specialists, in random order. The outcomes for each method were measured and recorded by an independent observer. RESULTS The AI colocalization method was superior to the conventional method in accuracy and time (P < .001), with a mean colocalization time 37% faster. The error rate using AI was 0%, compared with 18% for SBS measurements. CONCLUSIONS AI permitted more accurate and faster colocalization of pathologic lesions than the conventional method. [Ophthalmic Surg Lasers Imaging Retina 2023;54:108-113.].
Collapse
|
45
|
A Deep Learning Model for Evaluating Meibomian Glands Morphology from Meibography. J Clin Med 2023; 12:jcm12031053. [PMID: 36769701 PMCID: PMC9918190 DOI: 10.3390/jcm12031053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2022] [Revised: 01/03/2023] [Accepted: 01/20/2023] [Indexed: 02/03/2023] Open
Abstract
To develop a deep learning model for automatically segmenting the tarsus and meibomian gland areas on meibography, we included 1087 meibography images from dry eye patients. The contours of the tarsus and each meibomian gland were labeled manually by human experts. The dataset was divided into training, validation, and test sets. We built a convolutional neural network-based U-Net and trained the model to segment the tarsus and meibomian gland area. Accuracy, sensitivity, specificity, and the receiver operating characteristic curve (ROC) were calculated to evaluate the model. The area under the curve (AUC) values for the models segmenting the tarsus and meibomian gland area were 0.985 and 0.938, respectively. The deep learning model achieved a sensitivity and specificity of 0.975 and 0.99, respectively, with an accuracy of 0.985 for segmenting the tarsus area. For meibomian gland area segmentation, the model obtained a high specificity of 0.96, with a high accuracy of 0.937 and a moderate sensitivity of 0.751. The present research trained a deep learning model to automatically segment the tarsus and meibomian gland area from infrared meibography, and the model demonstrated outstanding accuracy in segmentation. With further improvement, the model could potentially be applied to assess the meibomian glands, facilitating dry eye evaluation in various clinical and research scenarios.
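A scaled-down U-Net of the kind described: one-channel infrared meibography in, three-class masks (background / tarsus / meibomian gland) out. The depth and channel widths are assumptions chosen for brevity, not the authors' configuration.

```python
import torch
import torch.nn as nn

def block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, classes: int = 3):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                   # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                          # encoder stage, kept as skip
        e2 = self.enc2(self.pool(e1))              # downsampled bottleneck
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # decode with skip
        return self.head(d)                        # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 128, 128))   # -> (1, 3, 128, 128)
```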
Collapse
|
46
|
Deep learning-based hemorrhage detection for diabetic retinopathy screening. Sci Rep 2023; 13:1479. [PMID: 36707608 PMCID: PMC9883230 DOI: 10.1038/s41598-023-28680-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 01/23/2023] [Indexed: 01/29/2023] Open
Abstract
Diabetic retinopathy is a retinal complication that causes visual impairment. Hemorrhage is one of the pathological symptoms of diabetic retinopathy that emerges during disease development; therefore, hemorrhage detection reveals the presence of diabetic retinopathy in the early phase. Diagnosing the disease in its initial stage is crucial for adopting proper treatment so that its repercussions can be prevented. An automatic deep learning-based hemorrhage detection method is proposed that can be used as a second interpreter for ophthalmologists, reducing the time and complexity of conventional screening methods. The quality of the images was enhanced, and prospective hemorrhage locations were estimated in the preprocessing stage. Modified gamma correction adaptively illuminates fundus images by using gradient information to address the nonuniform brightness levels of the images. The algorithm estimated the locations of potential candidates using a Gaussian matched filter, entropy thresholding, and mathematical morphology. The required objects were segmented using the regional diversity at the estimated locations. A novel hemorrhage network is proposed for hemorrhage classification and compared with renowned deep models. Two datasets were used to benchmark the model's performance using sensitivity, specificity, precision, and accuracy metrics. Despite being the shallowest network, the proposed network achieved results competitive with LeNet-5, AlexNet, ResNet50, and VGG-16. The hemorrhage network was assessed by training time and classification accuracy through synthetic experimentation. Results showed promising accuracy in the classification stage while significantly reducing training time. The research concluded that increasing deep network layers does not guarantee good results but rather increases training time. The suitable architecture of a deep model and its appropriate parameters are critical for obtaining excellent outcomes.
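Two of the classical preprocessing steps named above can be sketched compactly. The parameter values are illustrative assumptions, the gamma step is shown in its plain (non-adaptive) form, and the matched filter is reduced to its simplest background-subtraction reading.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gamma_correct(img: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """img in [0, 1]; gamma < 1 brightens dark, unevenly lit fundus regions."""
    return np.power(img, gamma)

def hemorrhage_candidates(green: np.ndarray, sigma: float = 3.0,
                          thresh: float = 0.05) -> np.ndarray:
    """Dark blobs respond strongly when compared to a smoothed background."""
    smoothed = gaussian_filter(green, sigma)       # local background estimate
    response = smoothed - green                    # hemorrhages are darker
    return response > thresh                       # boolean candidate mask

img = np.random.rand(256, 256)                     # stand-in green channel
mask = hemorrhage_candidates(gamma_correct(img))
```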
Collapse
|
47
|
Lu Z, Miao J, Dong J, Zhu S, Wu P, Wang X, Feng J. Automatic Multilabel Classification of Multiple Fundus Diseases Based on Convolutional Neural Network With Squeeze-and-Excitation Attention. Transl Vis Sci Technol 2023; 12:22. [PMID: 36662513 PMCID: PMC9872849 DOI: 10.1167/tvst.12.1.22] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 11/06/2022] [Indexed: 01/21/2023] Open
Abstract
Purpose Automatic multilabel classification of multiple fundus diseases is important for ophthalmologists. This study aims to design an effective multilabel classification model that can automatically classify multiple fundus diseases based on color fundus images. Methods We proposed a multilabel fundus disease classification model based on a convolutional neural network to classify normal images and seven categories of common fundus diseases. Specifically, an attention mechanism was introduced into the network to further extract informative features from color fundus images. Fundus images with eight categories of labels were used to train, validate, and test our model. We employed validation accuracy, area under the receiver operating characteristic curve (AUC), and F1-score as performance metrics to evaluate our model. Results Our proposed model achieved better performance, with a validation accuracy of 94.27%, an AUC of 85.80%, and an F1-score of 86.08%, than two state-of-the-art models. Most importantly, the number of training parameters is three and eight times smaller than that of the two state-of-the-art models, respectively. Conclusions This model can automatically classify multiple fundus diseases with excellent accuracy, AUC, and F1-score as well as significantly fewer training parameters and lower computational cost, providing a reliable assistant for clinical screening. Translational Relevance The proposed model can be widely applied in large-scale multiple fundus disease screening, helping to create more efficient diagnostics in primary care settings.
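The squeeze-and-excitation (SE) attention referenced in the title has a standard generic form (Hu et al.). Where exactly it sits inside the authors' network is not specified, so only the unit itself is shown.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global pooling 'squeezes' spatial context, a
    small MLP 'excites' per-channel weights that recalibrate the features."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c))        # channel weights
        return x * w.view(b, c, 1, 1)                      # recalibrate features

out = SEBlock(64)(torch.randn(2, 64, 32, 32))
```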
Collapse
Affiliation(s)
- Zhenzhen Lu
- Department of Biomedical Engineering, Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing University of Technology, Beijing, China
| | - Jingpeng Miao
- Beijing Tongren Eye Center, Beijing Ophthalmology & Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Jingran Dong
- Department of Biomedical Engineering, Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing University of Technology, Beijing, China
| | - Shuyuan Zhu
- Department of Biomedical Engineering, Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing University of Technology, Beijing, China
| | - Penghan Wu
- Fan Gongxiu Honors College, Beijing University of Technology, Beijing, China
| | - Xiaobing Wang
- Sports and Medicine Integrative Innovation Center, Capital University of Physical Education and Sports, Beijing, China
- Department of Ophthalmology, Beijing Boai Hospital, China Rehabilitation Research Center, School of Rehabilitation Medicine, Capital Medical University, Beijing, China
| | - Jihong Feng
- Department of Biomedical Engineering, Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing University of Technology, Beijing, China
| |
Collapse
|
48
|
An empirical study of preprocessing techniques with convolutional neural networks for accurate detection of chronic ocular diseases using fundus images. APPL INTELL 2023; 53:1548-1566. [PMID: 35528131 PMCID: PMC9059700 DOI: 10.1007/s10489-022-03490-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/08/2022] [Indexed: 01/07/2023]
Abstract
Chronic Ocular Diseases (COD) such as myopia, diabetic retinopathy, age-related macular degeneration, glaucoma, and cataract can affect the eye and may even lead to severe vision impairment or blindness. According to a recent World Health Organization (WHO) report on vision, at least 2.2 billion individuals worldwide suffer from vision impairment. Often, overt signs indicative of COD do not manifest until the disease has progressed to an advanced stage. If COD is detected early, however, vision impairment can be avoided through early intervention and cost-effective treatment. Ophthalmologists are trained to detect COD by examining minute changes in the retina, such as microaneurysms, macular edema, hemorrhages, and alterations in the blood vessels. The range of eye conditions is diverse, and each condition requires a unique, patient-specific treatment. Convolutional neural networks (CNNs) have demonstrated significant potential across disciplines, including the detection of a variety of eye diseases. In this study, we combined several preprocessing approaches with CNNs to accurately detect COD in eye fundus images. To the best of our knowledge, this is the first work to provide a qualitative analysis of preprocessing approaches for COD classification using CNN models. Experimental results demonstrate that CNNs trained on region-of-interest segmented images outperform models trained on the original input images by a substantial margin. Additionally, an ensemble of three preprocessing techniques outperformed other state-of-the-art approaches by 30% and 3% in terms of Kappa and F1 scores, respectively. The developed prototype has been extensively tested and can be evaluated on more comprehensive COD datasets for deployment in clinical settings.
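The kind of region-of-interest preprocessing this study found beneficial typically masks out the black border around the fundus disc, crops to the disc, and enhances local contrast. A minimal sketch follows; the threshold and CLAHE settings are illustrative assumptions, not the study's exact pipeline.

```python
# Sketch of ROI-based fundus preprocessing: mask the black background,
# crop to the fundus disc, and apply local contrast enhancement.
import cv2
import numpy as np

def extract_fundus_roi(bgr, threshold=10):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    mask = (gray > threshold).astype(np.uint8)   # fundus disc vs. black border
    x, y, w, h = cv2.boundingRect(mask)          # tight box around the disc
    roi = bgr[y:y + h, x:x + w]
    # CLAHE on the lightness channel evens out illumination across the ROI.
    lab = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[..., 0] = clahe.apply(lab[..., 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# Hypothetical usage: preprocess one image before resizing it to the
# CNN's input resolution.
roi = extract_fundus_roi(cv2.imread("fundus.jpg"))   # filename is a placeholder
model_input = cv2.resize(roi, (224, 224))
```

Cropping away the uninformative border means the CNN spends its capacity on retinal tissue rather than background, which is one plausible reason ROI-segmented inputs outperform raw images in the reported experiments.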
Collapse
|
49
|
Selvachandran G, Quek SG, Paramesran R, Ding W, Son LH. Developments in the detection of diabetic retinopathy: a state-of-the-art review of computer-aided diagnosis and machine learning methods. Artif Intell Rev 2023; 56:915-964. [PMID: 35498558 PMCID: PMC9038999 DOI: 10.1007/s10462-022-10185-6] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/04/2022] [Indexed: 02/02/2023]
Abstract
The exponential increase in the number of diabetics around the world has led to an equally large increase in cases of diabetic retinopathy (DR), one of the major complications of diabetes. Left unattended, DR worsens vision and can lead to partial or complete blindness. As the number of diabetics continues to grow in the coming years, the number of qualified ophthalmologists needs to increase in tandem to meet the demand for screening. This makes it pertinent to automate the DR detection process; a computer-aided diagnosis system has the potential to significantly reduce the burden currently placed on ophthalmologists. Hence, this review paper summarizes, classifies, and analyzes recent developments in automated DR detection using fundus images from 2015 to date. It offers a thorough review of recent work on DR, intended to deepen understanding of recent studies on automated DR detection, particularly those that deploy machine learning algorithms. First, a comprehensive state-of-the-art review of methods introduced for DR detection is presented, with a focus on machine learning models such as convolutional neural networks (CNN), artificial neural networks (ANN), and various hybrid models. Each model is then classified according to its type (e.g., CNN, ANN, SVM) and its specific task(s) in DR detection. In particular, models that deploy CNNs are further analyzed and classified according to important properties of their respective architectures. A total of 150 research articles published in the last five years were used in this review to provide a comprehensive overview of the latest developments in DR detection. Supplementary Information The online version contains supplementary material available at 10.1007/s10462-022-10185-6.
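One of the "hybrid" designs this review catalogues pairs a CNN feature extractor with a classical classifier such as an SVM. The sketch below illustrates that pattern under stated assumptions: the backbone choice, data shapes, and random placeholder tensors are ours, not any surveyed paper's setup.

```python
# Illustrative hybrid DR detector: frozen pretrained CNN features + SVM.
import torch
import torchvision.models as models
from sklearn.svm import SVC

# ImageNet-pretrained backbone with the classifier head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224) normalized fundus crops -> (N, 512) features."""
    return backbone(images)

# Placeholder tensors standing in for a real, preprocessed DR dataset.
train_images = torch.randn(32, 3, 224, 224)
train_labels = torch.randint(0, 2, (32,))        # 0 = no DR, 1 = referable DR

svm = SVC(kernel="rbf", C=1.0)
svm.fit(extract_features(train_images).numpy(), train_labels.numpy())
```

Hybrids of this kind trade end-to-end optimization for small training sets and fast retraining, which helps explain their recurring appearance alongside pure CNN approaches in the surveyed literature.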
Collapse
Affiliation(s)
- Ganeshsree Selvachandran
- Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
| | - Shio Gai Quek
- Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
| | - Raveendran Paramesran
- Institute of Computer Science and Digital Innovation, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
| | - Weiping Ding
- School of Information Science and Technology, Nantong University, Nantong, 226019 People’s Republic of China
| | - Le Hoang Son
- VNU Information Technology Institute, Vietnam National University, Hanoi, Vietnam
| |
Collapse
|
50
|
Kurup AR, Wigdahl J, Benson J, Martínez-Ramón M, Solíz P, Joshi V. Automated malarial retinopathy detection using transfer learning and multi-camera retinal images. Biocybern Biomed Eng 2023; 43:109-123. [PMID: 36685736 PMCID: PMC9851283 DOI: 10.1016/j.bbe.2022.12.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Cerebral malaria (CM) is a fatal syndrome found commonly in children under 5 years old in Sub-Saharan Africa and Asia. The retinal signs associated with CM are known as malarial retinopathy (MR) and include highly specific retinal lesions such as whitening and hemorrhages; detecting these lesions allows CM to be detected with high specificity. Up to 23% of CM patients are over-diagnosed because their clinical symptoms also occur in pneumonia, meningitis, and other conditions; as a result, these patients go untreated for the actual underlying pathology, leading to death or neurological disability. A low-cost, high-specificity diagnostic technique for CM detection is therefore essential, and we developed one based on transfer learning (TL). Models pre-trained with TL first select good-quality retinal images, which are then fed into another TL model to detect CM. This approach achieves 96% specificity with low-cost retinal cameras.
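The abstract outlines a two-stage transfer-learning cascade: one fine-tuned model filters out ungradable images, and a second classifies the remainder for malarial retinopathy. A minimal sketch follows; the backbone choice, two-class heads, and class conventions are illustrative assumptions, not the authors' models.

```python
# Sketch of a two-stage TL cascade: image-quality gate, then MR detection.
import torch
import torch.nn as nn
import torchvision.models as models

def make_classifier(num_classes: int = 2) -> nn.Module:
    """ImageNet-pretrained backbone with a new head, ready for fine-tuning."""
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    model.classifier[1] = nn.Linear(model.last_channel, num_classes)
    return model

quality_model = make_classifier()   # stage 1: gradable vs. ungradable
mr_model = make_classifier()        # stage 2: MR present vs. absent

@torch.no_grad()
def screen(batch: torch.Tensor) -> torch.Tensor:
    """Return MR predictions only for images the quality gate accepts."""
    quality_model.eval(); mr_model.eval()
    gradable = quality_model(batch).argmax(dim=1) == 1   # class 1 = good quality
    preds = torch.full((batch.shape[0],), -1)            # -1 marks rejected images
    if gradable.any():
        preds[gradable] = mr_model(batch[gradable]).argmax(dim=1)
    return preds
```

Gating on image quality before diagnosis suits low-cost cameras, whose frequent capture artifacts would otherwise degrade the specificity the second-stage model is meant to deliver.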
Collapse
Affiliation(s)
| | - Jeff Wigdahl
- VisionQuest Biomedical Inc., Albuquerque, NM, USA
| | | | | | - Peter Solíz
- VisionQuest Biomedical Inc., Albuquerque, NM, USA
| | | |
Collapse
|