1
Hayati A, Abdol Homayuni MR, Sadeghi R, Asadigandomani H, Dashtkoohi M, Eslami S, Soleimani M. Advancing Diabetic Retinopathy Screening: A Systematic Review of Artificial Intelligence and Optical Coherence Tomography Angiography Innovations. Diagnostics (Basel) 2025; 15:737. [PMID: 40150080 PMCID: PMC11941001 DOI: 10.3390/diagnostics15060737]
Abstract
Background/Objectives: Diabetic retinopathy (DR) remains a leading cause of preventable blindness, with its global prevalence projected to rise sharply as diabetes incidence increases. Early detection and timely management are critical to reducing DR-related vision loss. Optical Coherence Tomography Angiography (OCTA) now enables non-invasive, layer-specific visualization of the retinal vasculature, facilitating more precise identification of early microvascular changes. Concurrently, advancements in artificial intelligence (AI), particularly deep learning (DL) architectures such as convolutional neural networks (CNNs), attention-based models, and Vision Transformers (ViTs), have revolutionized image analysis. These AI-driven tools substantially enhance the sensitivity, specificity, and interpretability of DR screening. Methods: A systematic review of the PubMed, Scopus, Web of Science, and Embase databases was conducted, including quality assessment of the included studies, to investigate the performance of different AI algorithms applied to OCTA parameters in DR patients. The variables of interest comprised training databases, type of image, imaging modality, number of images, outcomes, algorithm/model used, and performance metrics. Results: A total of 32 studies were included in this systematic review. In comparison to conventional ML techniques, our results indicated that DL algorithms significantly improve the accuracy, sensitivity, and specificity of DR screening. Multi-branch CNNs, ensemble architectures, and ViTs were among the sophisticated models with remarkable performance metrics. Several studies reported accuracy and area under the curve (AUC) values higher than 99%. Conclusions: This systematic review underscores the transformative potential of integrating advanced DL and machine learning (ML) algorithms with OCTA imaging for DR screening.
By synthesizing evidence from 32 studies, we highlight the unique capabilities of AI-OCTA systems in improving diagnostic accuracy, enabling early detection, and streamlining clinical workflows. These advancements promise to enhance patient management by facilitating timely interventions and reducing the burden of DR-related vision loss. Furthermore, this review provides critical recommendations for clinical practice, emphasizing the need for robust validation, ethical considerations, and equitable implementation to ensure the widespread adoption of AI-OCTA technologies. Future research should focus on multicenter studies, multimodal integration, and real-world validation to maximize the clinical impact of these innovative tools.
Affiliation(s)
- Alireza Hayati
- Students' Research Committee (SRC), Qazvin University of Medical Sciences, Qazvin 34197-59811, Iran
- Mohammad Reza Abdol Homayuni
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran 13399-73111, Iran
- School of Medicine, Tehran University of Medical Sciences, Tehran 13399-73111, Iran
- Reza Sadeghi
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran 13399-73111, Iran
- School of Medicine, Tehran University of Medical Sciences, Tehran 13399-73111, Iran
- Hassan Asadigandomani
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran 13399-73111, Iran
- School of Medicine, Tehran University of Medical Sciences, Tehran 13399-73111, Iran
- Mohammad Dashtkoohi
- Students Scientific Research Center (SSRC), Tehran University of Medical Sciences, Tehran 13399-73111, Iran
- Sajad Eslami
- School of Business, Stevens Institute of Technology, Hoboken, NJ 07030, USA
- Mohammad Soleimani
- Department of Ophthalmology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- AI.Health4All Center for Health Equity using ML/AI, College of Medicine, University of Illinois at Chicago, Chicago, IL 60607, USA
2
Moannaei M, Jadidian F, Doustmohammadi T, Kiapasha AM, Bayani R, Rahmani M, Jahanbazy MR, Sohrabivafa F, Asadi Anar M, Magsudy A, Sadat Rafiei SK, Khakpour Y. Performance and limitation of machine learning algorithms for diabetic retinopathy screening and its application in health management: a meta-analysis. Biomed Eng Online 2025; 24:34. [PMID: 40087776 PMCID: PMC11909973 DOI: 10.1186/s12938-025-01336-1]
Abstract
BACKGROUND In recent years, artificial intelligence and machine learning algorithms have been used more extensively to diagnose diabetic retinopathy and other diseases. Still, the effectiveness of these methods has not been thoroughly investigated. This study aimed to evaluate the performance and limitations of machine learning and deep learning algorithms in detecting diabetic retinopathy. METHODS This study was conducted based on the PRISMA checklist. We searched online databases, including PubMed, Scopus, and Google Scholar, for relevant articles up to September 30, 2023. After title, abstract, and full-text screening, data extraction and quality assessment were done for the included studies. Finally, a meta-analysis was performed. RESULTS We included 76 studies with a total of 1,371,517 retinal images, of which 51 were used for meta-analysis. Our meta-analysis showed a significant pooled sensitivity of 90.54% (95% CI [90.42, 90.66], P < 0.001) and specificity of 78.33% (95% CI [78.21, 78.45], P < 0.001). However, the AUC (area under the curve) did not statistically differ across studies, with a pooled figure of 0.94 (95% CI [-46.71, 48.60], P = 1). CONCLUSIONS Although machine learning and deep learning algorithms can properly diagnose diabetic retinopathy, their discriminating capacity is limited. However, they could simplify the diagnosing process. Further studies are required to improve algorithms.
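The pooled sensitivity and specificity above come from weighting each study by its precision. A minimal fixed-effect inverse-variance sketch of that idea is shown below; the per-study values are made up for illustration and are not the 51 studies pooled in the meta-analysis.

```python
import math

# Hypothetical (proportion, sample size) pairs for a handful of studies --
# illustrative stand-ins, not data from the meta-analysis above.
studies = [(0.92, 500), (0.88, 1200), (0.91, 800), (0.89, 2000)]

def pool_proportions(studies):
    """Fixed-effect inverse-variance pooling of proportions.

    Each study is weighted by 1/variance, using the binomial
    variance p*(1-p)/n of its observed proportion.
    """
    total_w, weighted_sum = 0.0, 0.0
    for p, n in studies:
        var = p * (1 - p) / n
        w = 1.0 / var
        total_w += w
        weighted_sum += w * p
    pooled = weighted_sum / total_w
    se = math.sqrt(1.0 / total_w)                   # SE of pooled estimate
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)   # 95% confidence interval
    return pooled, ci

pooled, (lo, hi) = pool_proportions(studies)
print(f"pooled sensitivity = {pooled:.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```

Larger, more precise studies dominate the pooled figure, which is why the CI around a pooled proportion is far narrower than any single study's.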
Affiliation(s)
- Mehrsa Moannaei
- School of Medicine, Hormozgan University of Medical Sciences, Bandar Abbas, Iran
- Faezeh Jadidian
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Tahereh Doustmohammadi
- Department and Faculty of Health Education and Health Promotion, Student Research Committee, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Amir Mohammad Kiapasha
- Student Research Committee, School of Medicine, Shahid Beheshti University of Medical Science, Tehran, Iran
- Romina Bayani
- Student Research Committee, School of Medicine, Shahid Beheshti University of Medical Science, Tehran, Iran
- Fereshteh Sohrabivafa
- Health Education and Promotion, Department of Community Medicine, School of Medicine, Dezful University of Medical Sciences, Dezful, Iran
- Mahsa Asadi Anar
- Student Research Committee, Shahid Beheshti University of Medical Science, Arabi Ave, Daneshjoo Blvd, Velenjak, Tehran, 19839-63113, Iran
- Amin Magsudy
- Faculty of Medicine, Islamic Azad University Tabriz Branch, Tabriz, Iran
- Seyyed Kiarash Sadat Rafiei
- Student Research Committee, Shahid Beheshti University of Medical Science, Arabi Ave, Daneshjoo Blvd, Velenjak, Tehran, 19839-63113, Iran
- Yaser Khakpour
- Faculty of Medicine, Guilan University of Medical Sciences, Rasht, Iran
3
Movassagh AA, Jajroudi M, Homayoun Jafari A, Khalili Pour E, Farrokhpour H, Faghihi H, Riazi H, ArabAlibeik H. Quantifying the Characteristics of Diabetic Retinopathy in Macular Optical Coherence Tomography Angiography Images: A Few-Shot Learning and Explainable Artificial Intelligence Approach. Cureus 2025; 17:e76746. [PMID: 39897224 PMCID: PMC11785394 DOI: 10.7759/cureus.76746]
Abstract
BACKGROUND Early detection and accurate staging of diabetic retinopathy (DR) are important to prevent vision loss. Optical coherence tomography angiography (OCTA) images provide detailed insights into the retinal vasculature, revealing intricate changes that occur as DR progresses. However, interpreting these complex images requires significant expertise and is often time-intensive. Deep learning techniques have the potential to automate DR analysis. However, they typically require large datasets for effective training. To address the challenge of limited data in this emerging imaging field, a combined approach using few-shot learning (FSL) and self-attention mechanisms within explainable AI (XAI) was explored. OBJECTIVE To investigate and evaluate the potential of an FSL-self-attention XAI approach to improve the accuracy of DR staging classification using OCTA images. METHODS A total of 206 OCTA images, comprising 104 non-proliferative diabetic retinopathy (NPDR) and 102 proliferative diabetic retinopathy (PDR) cases, were analyzed using the FSL method. Three pre-trained networks (ResNet-50, DenseNet-161, and MobileNet-v2) were employed, with the top-performing model subsequently integrated with the Match-Them-Up Network (MTUNet) to provide explainable interpretations using a self-attention mechanism. The performance of the models was evaluated by applying standard metrics, including accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC). The performance of the MTUNet model was assessed by calculating pattern-matching scores for the PDR and NPDR classes. RESULTS The ResNet-50 pre-trained model in FSL demonstrated the best overall performance, achieving an accuracy of 76.17%, a sensitivity of 81.83%, a specificity of 70.5%, and an AUC of 0.82 in classifying DR stages. MTUNet provided pattern-matching scores of 0.77 and 0.75 for the PDR and NPDR classes, respectively.
CONCLUSIONS FSL and self-attention mechanisms in XAI offer promising approaches for accurate DR stage classification, especially in data-limited scenarios. This could potentially facilitate early DR detection and inform clinical decision-making.
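A common form of the few-shot idea described above is prototype-based classification: a query is assigned to the class whose prototype (mean support embedding) is nearest. The sketch below uses random vectors as stand-ins for the pre-trained-CNN embeddings the study would produce; the class names, dimensions, and cluster geometry are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

def fake_embeddings(center, n, spread=0.3):
    """Random cluster standing in for backbone (e.g. ResNet-50) features."""
    return center + spread * rng.standard_normal((n, dim))

# Two well-separated class centers in embedding space (illustrative).
c_npdr = rng.standard_normal(dim)
c_pdr = rng.standard_normal(dim)

# "Support set": a handful of labelled examples per class -- the few shots.
support = {"NPDR": fake_embeddings(c_npdr, 5), "PDR": fake_embeddings(c_pdr, 5)}

# Class prototypes = mean embedding of each class's support examples.
prototypes = {k: v.mean(axis=0) for k, v in support.items()}

def classify(x):
    """Assign a query embedding to the nearest class prototype."""
    return min(prototypes, key=lambda k: np.linalg.norm(x - prototypes[k]))

query = fake_embeddings(c_pdr, 1)[0]
print(classify(query))  # query drawn near the PDR center
```

Only the feature extractor needs large-scale pre-training; the per-class decision rule is built from just a few labelled OCTA images, which is what makes the approach attractive for small datasets.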
Affiliation(s)
- Ali Akbar Movassagh
- Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Mahdie Jajroudi
- Medical Informatics, Mashhad University of Medical Sciences, Mashhad, Iran
- Amir Homayoun Jafari
- Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Elias Khalili Pour
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hossein Farrokhpour
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hooshang Faghihi
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hamid Riazi
- Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hossein ArabAlibeik
- Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
4
Dadzie AK, Iddir SP, Abtahi M, Ebrahimi B, Le D, Ganesh S, Son T, Heiferman MJ, Yao X. Colour fusion effect on deep learning classification of uveal melanoma. Eye (Lond) 2024; 38:2781-2787. [PMID: 38773261 PMCID: PMC11427558 DOI: 10.1038/s41433-024-03148-4]
Abstract
BACKGROUND Reliable differentiation of uveal melanoma and choroidal nevi is crucial to guide appropriate treatment, preventing unnecessary procedures for benign lesions and ensuring timely treatment for potentially malignant cases. The purpose of this study is to validate deep learning classification of uveal melanoma and choroidal nevi, and to evaluate the effect of colour fusion options on the classification performance. METHODS A total of 798 ultra-widefield retinal images of 438 patients were included in this retrospective study, comprising 157 patients diagnosed with uveal melanoma (UM) and 281 patients diagnosed with choroidal naevus. Colour fusion options, including early fusion, intermediate fusion and late fusion, were tested for deep learning image classification with a convolutional neural network (CNN). F1-score, accuracy and the area under the curve (AUC) of a receiver operating characteristic (ROC) were used to evaluate the classification performance. RESULTS Colour fusion options were observed to affect the deep learning performance significantly. For single-colour learning, the red colour image was observed to have superior performance compared to the green and blue channels. For multi-colour learning, intermediate fusion was better than the early and late fusion options. CONCLUSION Deep learning is a promising approach for automated classification of uveal melanoma and choroidal nevi. Colour fusion options can significantly affect the classification performance.
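The three fusion options differ only in where the colour channels are combined in the pipeline: at the input (early), after per-channel feature extraction (intermediate), or at the prediction stage (late). The toy sketch below makes those three merge points concrete; the "feature extractor" and "classifier" are trivial stand-ins for the CNN branches, not the study's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))  # toy RGB retinal image, channels last

def feat(channel):
    """Toy per-channel 'feature extractor' standing in for a CNN branch."""
    return np.array([channel.mean(), channel.std()])

def toy_score(channel):
    """Toy per-channel 'classifier' emitting a score in [0, 1]."""
    return float(channel.mean())

# Early fusion: channels are merged at the input and one shared model
# sees all of them jointly.
early_input = img.reshape(-1)

# Intermediate fusion: one branch per channel, features concatenated
# before the classifier head (the option the study found best).
intermediate_features = np.concatenate([feat(img[..., c]) for c in range(3)])

# Late fusion: fully independent per-channel predictions, averaged at the end.
late_score = np.mean([toy_score(img[..., c]) for c in range(3)])

print(early_input.shape, intermediate_features.shape, late_score)
```

The later the fusion point, the less opportunity the model has to learn cross-channel interactions; intermediate fusion keeps per-channel specialization while still letting the head combine channels, which is one plausible reading of why it came out ahead here.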
Affiliation(s)
- Albert K Dadzie
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Sabrina P Iddir
- Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, 60612, USA
- Mansour Abtahi
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Behrouz Ebrahimi
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- David Le
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Sanjay Ganesh
- Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, 60612, USA
- Taeyoon Son
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Michael J Heiferman
- Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, 60612, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, 60607, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, 60612, USA
5
Li X, Wen X, Shang X, Liu J, Zhang L, Cui Y, Luo X, Zhang G, Xie J, Huang T, Chen Z, Lyu Z, Wu X, Lan Y, Meng Q. Identification of diabetic retinopathy classification using machine learning algorithms on clinical data and optical coherence tomography angiography. Eye (Lond) 2024; 38:2813-2821. [PMID: 38871934 PMCID: PMC11427469 DOI: 10.1038/s41433-024-03173-3]
Abstract
BACKGROUND To apply machine learning (ML) algorithms to perform multiclass diabetic retinopathy (DR) classification using both clinical data and optical coherence tomography angiography (OCTA). METHODS In this cross-sectional observational study, clinical data and OCTA parameters from 203 diabetic patients (203 eyes) were used to establish the ML models, and those from 169 diabetic patients (169 eyes) were used for independent external validation. The random forest, gradient boosting machine (GBM), deep learning and logistic regression algorithms were used to identify the presence of DR, referable DR (RDR) and vision-threatening DR (VTDR). Four different variable patterns based on clinical data and OCTA variables were examined. The algorithms' performance was evaluated using receiver operating characteristic curves, and the area under the curve (AUC) was used to assess predictive accuracy. RESULTS The random forest algorithm on OCTA+clinical data-based variables and OCTA+non-laboratory factor-based variables provided the highest AUC values for DR, RDR and VTDR. The GBM algorithm produced similar results, albeit with slightly lower AUC values. Leading predictors of DR status included vessel density, retinal thickness and GCC thickness, as well as the body mass index, waist-to-hip ratio and glucose-lowering treatment. CONCLUSIONS ML-based multiclass DR classification using OCTA and clinical data can provide reliable assistance for the screening, referral, and management of DR populations.
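The random forest used above is an ensemble of decision trees trained on bootstrap resamples of tabular OCTA+clinical features. The self-contained sketch below illustrates that bagging idea with single-split "stumps" instead of full trees, on synthetic stand-ins for features such as vessel density or BMI; it is a toy illustration, not the study's model (which would normally use a library implementation such as scikit-learn's RandomForestClassifier).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for OCTA + clinical features (vessel density,
# retinal thickness, BMI, ...) and a binary DR label -- illustrative only.
n, d = 400, 6
X = rng.standard_normal((n, d))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n) > 0).astype(int)

def fit_stump(X, y, idx):
    """Best single-feature threshold split on a bootstrap sample."""
    best = (0, 0.0, 0.0)  # (feature, threshold, accuracy)
    for f in range(X.shape[1]):
        for t in np.quantile(X[idx, f], [0.25, 0.5, 0.75]):
            acc = ((X[idx, f] > t).astype(int) == y[idx]).mean()
            if acc > best[2]:
                best = (f, t, acc)
    return best[:2]

# Bagged ensemble: each stump sees its own bootstrap resample.
stumps = [fit_stump(X, y, rng.integers(0, n, n)) for _ in range(25)]

def predict_proba(X):
    """Average the stump votes into a score in [0, 1]."""
    votes = np.stack([(X[:, f] > t).astype(float) for f, t in stumps])
    return votes.mean(axis=0)

acc = ((predict_proba(X) > 0.5).astype(int) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Averaging many weak, decorrelated learners is what stabilizes the ensemble; full random forests add deeper trees and per-split feature subsampling, and also expose the feature-importance rankings behind "leading predictors" statements like the one above.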
Affiliation(s)
- Xiaoli Li
- Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Xin Wen
- Department of Ophthalmology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Xianwen Shang
- Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Junbin Liu
- Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Liang Zhang
- Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Ying Cui
- Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Xiaoyang Luo
- Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Guanrong Zhang
- Statistics Section, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong, China
- Jie Xie
- Department of Ophthalmology, Heyuan People's Hospital, Heyuan, China
- Tian Huang
- Department of Ophthalmology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Zhifan Chen
- Department of Ophthalmology, The Fourth Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Zheng Lyu
- Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Xiyu Wu
- Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Yuqing Lan
- Department of Ophthalmology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Qianli Meng
- Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
6
El Habib Daho M, Li Y, Zeghlache R, Boité HL, Deman P, Borderie L, Ren H, Mannivanan N, Lepicard C, Cochener B, Couturier A, Tadayoni R, Conze PH, Lamard M, Quellec G. DISCOVER: 2-D multiview summarization of Optical Coherence Tomography Angiography for automatic diabetic retinopathy diagnosis. Artif Intell Med 2024; 149:102803. [PMID: 38462293 DOI: 10.1016/j.artmed.2024.102803]
Abstract
Diabetic Retinopathy (DR), an ocular complication of diabetes, is a leading cause of blindness worldwide. Traditionally, DR is monitored using Color Fundus Photography (CFP), a widespread 2-D imaging modality. However, DR classifications based on CFP have poor predictive power, resulting in suboptimal DR management. Optical Coherence Tomography Angiography (OCTA) is a recent 3-D imaging modality offering enhanced structural and functional information (blood flow) with a wider field of view. This paper investigates automatic DR severity assessment using 3-D OCTA. A straightforward solution to this task is a 3-D neural network classifier. However, 3-D architectures have numerous parameters and typically require many training samples. A lighter solution consists in using 2-D neural network classifiers processing 2-D en-face (or frontal) projections and/or 2-D cross-sectional slices. Such an approach mimics the way ophthalmologists analyze OCTA acquisitions: (1) en-face flow maps are often used to detect avascular zones and neovascularization, and (2) cross-sectional slices are commonly analyzed to detect macular edemas, for instance. However, arbitrary data reduction or selection might result in information loss. Two complementary strategies are thus proposed to optimally summarize OCTA volumes with 2-D images: (1) a parametric en-face projection optimized through deep learning and (2) a cross-sectional slice selection process controlled through gradient-based attribution. The full summarization and DR classification pipeline is trained from end to end. The automatic 2-D summary can be displayed in a viewer or printed in a report to support the decision. We show that the proposed 2-D summarization and classification pipeline outperforms direct 3-D classification with the advantage of improved interpretability.
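The paper's first summarization strategy, a parametric en-face projection, amounts to replacing the usual fixed reduction along depth with a learned weighting of depth slices. The sketch below contrasts a uniform mean projection with a weighted one on a toy volume; the Gaussian weights here are a fixed stand-in for weights that the paper's pipeline would learn end to end, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy OCTA volume: (depth, height, width). Real volumes are much larger.
vol = rng.random((64, 128, 128))

# Plain en-face projection: uniform mean along the depth axis.
mean_proj = vol.mean(axis=0)

# Parametric en-face projection: one weight per depth slice. Here a fixed
# Gaussian centred mid-depth stands in for learned, trainable weights.
z = np.arange(vol.shape[0])
w = np.exp(-0.5 * ((z - 32) / 10.0) ** 2)
w /= w.sum()                                   # normalize to sum to 1
param_proj = np.tensordot(w, vol, axes=(0, 0))  # weighted sum over depth

print(mean_proj.shape, param_proj.shape)
```

Because the weighted sum is differentiable in `w`, the projection can be trained jointly with the downstream 2-D classifier, which is what lets the summary emphasize the depth layers most informative for DR grading instead of averaging them away.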
Affiliation(s)
- Mostafa El Habib Daho
- Univ Bretagne Occidentale, Brest, F-29200, France; Inserm, UMR 1101, Brest, F-29200, France
- Yihao Li
- Univ Bretagne Occidentale, Brest, F-29200, France; Inserm, UMR 1101, Brest, F-29200, France
- Rachid Zeghlache
- Univ Bretagne Occidentale, Brest, F-29200, France; Inserm, UMR 1101, Brest, F-29200, France
- Hugo Le Boité
- Sorbonne University, Paris, F-75006, France; Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France
- Pierre Deman
- ADCIS, Saint-Contest, F-14280, France; Evolucare Technologies, Le Pecq, F-78230, France
- Hugang Ren
- Carl Zeiss Meditec, Dublin, CA 94568, USA
- Capucine Lepicard
- Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France
- Béatrice Cochener
- Univ Bretagne Occidentale, Brest, F-29200, France; Inserm, UMR 1101, Brest, F-29200, France; Service d'Ophtalmologie, CHRU Brest, Brest, F-29200, France
- Aude Couturier
- Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France
- Ramin Tadayoni
- Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France; Paris Cité University, Paris, F-75006, France
- Pierre-Henri Conze
- Inserm, UMR 1101, Brest, F-29200, France; IMT Atlantique, Brest, F-29200, France
- Mathieu Lamard
- Univ Bretagne Occidentale, Brest, F-29200, France; Inserm, UMR 1101, Brest, F-29200, France
7
Drira I, Noor M, Stone A, D'Souza Y, John B, McGrath O, Patel PJ, Aslam T. Comparison of Widefield OCT Angiography Features Between Severe Non-Proliferative and Proliferative Diabetic Retinopathy. Ophthalmol Ther 2024; 13:831-849. [PMID: 38273048 PMCID: PMC10853160 DOI: 10.1007/s40123-024-00886-2]
Abstract
INTRODUCTION There is a high and ever-increasing global prevalence of diabetic retinopathy (DR), and invasive imaging techniques are often required to confirm the presence of proliferative disease. The aim of this study was to explore the images of a rapid and non-invasive technique, widefield optical coherence tomography angiography (OCT-A), to study differences between patients with severe non-proliferative and proliferative DR (PDR). METHODS We conducted an observational longitudinal study from November 2022 to March 2023. We recruited 75 patients who were classified into a proliferative group (28 patients) and a severe non-proliferative group (47 patients). Classification was done by specialist clinicians who had full access to any multimodal imaging they required to be confident of their diagnosis, including fluorescein angiography. For all patients, we performed single-shot 4 × 4 and 10 × 10 mm (widefield) OCT-A imaging and, when possible, the multiple images required for mosaic 17.5 × 17.5 mm (ultra-widefield) OCT-A imaging. We assessed the frequency with which proliferative disease was identifiable solely from these OCT-A images and used custom-built MATLAB software to analyze the images and determine computerized metrics such as density and intensity of vessels, foveal avascular zone, and ischemic areas. RESULTS On clinically assessing the OCT-A 10 × 10 fields, we were only able to detect new vessels in 25% of known proliferative images. Using ultra-widefield mosaic images, however, we were able to detect new vessels in 100% of PDR patients. The image analysis metrics of 4 × 4 and 10 × 10 mm images did not show any significant differences between the two clinical groups. For mosaics, however, there were significant differences in capillary density in patients with PDR compared to severe non-PDR (9.1% ± 1.9 in the PDR group versus 11.0% ± 1.9 for the severe group). We also found with mosaics a significant difference in the metrics of ischemic areas: the average area of ischemic zones was 253,930.1 ± 108,636 for the proliferative group versus 149,104.2 ± 55,101.8 for the severe group. CONCLUSIONS Our study showed a high sensitivity for detecting PDR using only ultra-widefield mosaic OCT-A imaging, compared to multimodal imaging including fluorescein angiography. It also suggests that image analysis of aspects such as ischemia levels may be useful in identifying higher-risk groups as a warning sign for future conversion to neovascularization.
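Capillary density of the kind reported above is typically computed from a binarized en-face vessel map as the fraction of pixels flagged as vessel. The sketch below computes that metric, plus a crude grid-based proxy for avascular regions, on a synthetic random mask; the mask, grid size, and thresholds are illustrative assumptions, not the study's MATLAB pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy binary vessel mask from a widefield OCT-A en-face image:
# 1 = vessel pixel, 0 = background. Synthetic, illustrative only.
mask = (rng.random((256, 256)) < 0.10).astype(np.uint8)

# Capillary density: percentage of pixels flagged as vessel
# (the study reports roughly 9-11% on mosaics).
density = 100.0 * mask.mean()

# Crude ischemia proxy: count coarse grid cells containing no vessel pixels.
cell = 16
cells = mask.reshape(256 // cell, cell, 256 // cell, cell)
empty_cells = int((cells.sum(axis=(1, 3)) == 0).sum())

print(f"density = {density:.1f}%, avascular cells = {empty_cells}")
```

Real pipelines binarize with adaptive thresholding and measure connected avascular components rather than fixed grid cells, but the density metric itself is exactly this pixel fraction.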
Affiliation(s)
- Ines Drira
- Manchester University, Manchester Royal Eye Hospital, Oxford Rd, Manchester, M13 9WL, UK
- Hospital of Toulouse, Pl. du Dr Joseph Baylac, 31300, Toulouse, France
- Maha Noor
- Manchester University, Manchester Royal Eye Hospital, Oxford Rd, Manchester, M13 9WL, UK
- Amy Stone
- Manchester University, Manchester Royal Eye Hospital, Oxford Rd, Manchester, M13 9WL, UK
- Yvonne D'Souza
- Manchester University, Manchester Royal Eye Hospital, Oxford Rd, Manchester, M13 9WL, UK
- Binu John
- Manchester University, Manchester Royal Eye Hospital, Oxford Rd, Manchester, M13 9WL, UK
- Orlaith McGrath
- Manchester University, Manchester Royal Eye Hospital, Oxford Rd, Manchester, M13 9WL, UK
- Praveen J Patel
- National Institute for Health and Care Research Biomedical Research Centre, Moorfields Eye Hospital National Health Service Foundation Trust and University College London Institute of Ophthalmology, London, UK
- Tariq Aslam
- Manchester University, Manchester Royal Eye Hospital, Oxford Rd, Manchester, M13 9WL, UK
8
Pradeep K, Jeyakumar V, Bhende M, Shakeel A, Mahadevan S. Artificial intelligence and hemodynamic studies in optical coherence tomography angiography for diabetic retinopathy evaluation: A review. Proc Inst Mech Eng H 2024; 238:3-21. [PMID: 38044619 DOI: 10.1177/09544119231213443]
Abstract
Diabetic retinopathy (DR) is a rapidly emerging retinal abnormality worldwide, which can cause significant vision loss by disrupting the vascular structure in the retina. Recently, optical coherence tomography angiography (OCTA) has emerged as an effective imaging tool for diagnosing and monitoring DR. OCTA produces high-quality 3-dimensional images and provides deeper visualization of retinal vessel capillaries and plexuses. The clinical relevance of OCTA in detecting, classifying, and planning therapeutic procedures for DR patients has been highlighted in various studies. Quantitative indicators obtained from OCTA, such as blood vessel segmentation of the retina, foveal avascular zone (FAZ) extraction, retinal blood vessel density, blood velocity, flow rate, capillary vessel pressure, and retinal oxygen extraction, have been identified as crucial hemodynamic features for screening DR using computer-aided systems in artificial intelligence (AI). AI has the potential to assist physicians and ophthalmologists in developing new treatment options. In this review, we explore how OCTA has impacted the future of DR screening and early diagnosis. It also focuses on how analysis methods have evolved over time in clinical trials. The future of OCTA imaging and its continued use in AI-assisted analysis is promising and will undoubtedly enhance the clinical management of DR.
Affiliation(s)
- K Pradeep
- Department of Biomedical Engineering, Chennai Institute of Technology, Chennai, Tamil Nadu, India
- Vijay Jeyakumar
- Department of Biomedical Engineering, Sri Sivasubramaniya Nadar College of Engineering, Chennai, Tamil Nadu, India
- Muna Bhende
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya Medical Research Foundation, Chennai, Tamil Nadu, India
- Areeba Shakeel
- Vitreoretina Department, Sankara Nethralaya Medical Research Foundation, Chennai, Tamil Nadu, India
- Shriraam Mahadevan
- Department of Endocrinology, Sri Ramachandra Institute of Higher Education and Research, Chennai, Tamil Nadu, India
9
Yao X, Dadzie A, Iddir S, Abtahi M, Ebrahimi B, Le D, Ganesh S, Son T, Heiferman M. Color Fusion Effect on Deep Learning Classification of Uveal Melanoma. Research Square 2023; rs.3.rs-3399214 (preprint). [PMID: 37986860 PMCID: PMC10659548 DOI: 10.21203/rs.3.rs-3399214/v1]
Abstract
Background Reliable differentiation of uveal melanoma and choroidal nevi is crucial to guide appropriate treatment, preventing unnecessary procedures for benign lesions and ensuring timely treatment for potentially malignant cases. The purpose of this study is to validate deep learning classification of uveal melanoma and choroidal nevi, and to evaluate the effect of color fusion options on the classification performance. Methods A total of 798 ultra-widefield retinal images of 438 patients were included in this retrospective study, comprising 157 patients diagnosed with uveal melanoma (UM) and 281 patients diagnosed with choroidal nevus. Color fusion options, including early fusion, intermediate fusion and late fusion, were tested for deep learning image classification with a convolutional neural network (CNN). Specificity, sensitivity, F1-score, accuracy, and the area under the curve (AUC) of a receiver operating characteristic (ROC) were used to evaluate the classification performance. The saliency map visualization technique was used to understand the areas in the image that had the most influence on the classification decisions of the CNN. Results Color fusion options were observed to affect the deep learning performance significantly. For single-color learning, the red color image was observed to have superior performance compared to the green and blue channels. For multi-color learning, intermediate fusion was better than the early and late fusion options. Conclusion Deep learning is a promising approach for automated classification of uveal melanoma and choroidal nevi, and color fusion options can significantly affect the classification performance.
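The three color fusion options compared in the abstract above differ only in where the color channels are combined in the pipeline. The following toy NumPy sketch contrasts them; the per-channel feature extractor and classifier head are hypothetical stand-ins for illustration, not the study's CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the red, green, and blue channels of one retinal image
# (hypothetical 8x8 resolution; the study used ultra-widefield images).
red, green, blue = (rng.random((8, 8)) for _ in range(3))

def extract_features(img):
    """Toy per-channel 'CNN branch': fixed pooling yielding a feature vector."""
    return np.array([img.mean(), img.std(), img.max()])

def classify(features):
    """Toy 'classifier head': a fixed linear score squashed to (0, 1)."""
    w = np.linspace(0.5, 1.5, features.size)
    return 1.0 / (1.0 + np.exp(-(features @ w - features.size)))

# Early fusion: channels stacked into one input, then a single shared pipeline.
early_input = np.stack([red, green, blue])          # shape (3, 8, 8)
early_score = classify(extract_features(early_input))

# Intermediate fusion: separate feature branches, concatenated before the head.
intermediate_features = np.concatenate(
    [extract_features(c) for c in (red, green, blue)]
)
intermediate_score = classify(intermediate_features)

# Late fusion: fully separate pipelines, with decisions averaged at the end.
late_score = np.mean([classify(extract_features(c)) for c in (red, green, blue)])

print(early_score, intermediate_score, late_score)
```

The structural point is that intermediate fusion lets each channel keep its own feature branch while still allowing the classifier to learn cross-channel interactions, which is consistent with the abstract's finding that it outperformed the other two options.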
|
10
|
Verejan V. Advancing Diabetic Retinopathy Diagnosis: Leveraging Optical Coherence Tomography Imaging with Convolutional Neural Networks. Rom J Ophthalmol 2023; 67:398-402. [PMID: 38239418 PMCID: PMC10793374 DOI: 10.22336/rjo.2023.63] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/14/2023] [Indexed: 01/22/2024] Open
Abstract
Diabetic retinopathy (DR) is a vision-threatening complication of diabetes, necessitating early and accurate diagnosis. The combination of optical coherence tomography (OCT) imaging with convolutional neural networks (CNNs) has emerged as a promising approach for enhancing DR diagnosis. OCT provides detailed retinal morphology information, while CNNs analyze OCT images for automated detection and classification of DR. This paper reviews the current research on OCT imaging and CNNs for DR diagnosis, discussing their technical aspects and suitability. It explores CNN applications in detecting lesions, segmenting microaneurysms, and assessing disease severity, showing high sensitivity and accuracy. CNN models outperform traditional methods and rival expert ophthalmologists' results. However, challenges such as dataset availability and model interpretability remain. Future directions include multimodal imaging integration and real-time, point-of-care CNN systems for DR screening. The integration of OCT imaging with CNNs has transformative potential in DR diagnosis, facilitating early intervention, personalized treatments, and improved patient outcomes. Abbreviations: DR = Diabetic Retinopathy, OCT = Optical Coherence Tomography, CNN = Convolutional Neural Network, CMV = Cytomegalovirus, PDR = Proliferative Diabetic Retinopathy, AMD = Age-Related Macular Degeneration, VEGF = vascular endothelial growth factor, RAP = Retinal Angiomatous Proliferation, OCTA = OCT Angiography, AI = Artificial Intelligence.
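The review above describes CNNs analyzing OCT images for DR detection. A CNN's core building blocks (convolution, ReLU, pooling, and a softmax classification head) can be sketched in a few lines of NumPy; the image size, kernel, weights, and three-grade label set below are hypothetical placeholders, not any published model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy grayscale "OCT B-scan" (hypothetical 16x16 resolution for illustration).
image = rng.random((16, 16))

def conv2d(img, kernel):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# One convolutional layer (Sobel-style 3x3 kernel), ReLU, then 2x2 max pooling.
kernel = np.array([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]])
activation = np.maximum(0.0, conv2d(image, kernel))        # ReLU nonlinearity
h, w = activation.shape
pooled = activation[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Linear head over flattened features, softmax over hypothetical DR grades.
n_classes = 3                                  # e.g. no DR / NPDR / PDR
W = rng.normal(scale=0.1, size=(pooled.size, n_classes))
logits = pooled.ravel() @ W
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)
```

Real DR classifiers stack many such layers with learned kernels and train the weights end-to-end; this sketch only shows the forward-pass mechanics the review takes for granted.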
Affiliation(s)
- Victoria Verejan
- Department of Ophthalmology, “N. Testemițanu” State University of Medicine and Pharmacy, Chişinău, Republic of Moldova
|
11
|
Ebrahimi B, Le D, Abtahi M, Dadzie AK, Lim JI, Chan RVP, Yao X. Optimizing the OCTA layer fusion option for deep learning classification of diabetic retinopathy. BIOMEDICAL OPTICS EXPRESS 2023; 14:4713-4724. [PMID: 37791267 PMCID: PMC10545199 DOI: 10.1364/boe.495999] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/22/2023] [Revised: 07/29/2023] [Accepted: 07/31/2023] [Indexed: 10/05/2023]
Abstract
The purpose of this study was to evaluate layer fusion options for deep learning classification of optical coherence tomography (OCT) angiography (OCTA) images. A convolutional neural network (CNN) end-to-end classifier was used to classify OCTA images from healthy control subjects and diabetic patients with no retinopathy (NoDR) or non-proliferative diabetic retinopathy (NPDR). For each eye, three en-face OCTA images were acquired from the superficial capillary plexus (SCP), deep capillary plexus (DCP), and choriocapillaris (CC) layers. The performance of the CNN classifier with individual layer inputs and with multi-layer fusion architectures, including early fusion, intermediate fusion, and late fusion, was quantitatively compared. Among individual layer inputs, the superficial (SCP) OCTA performed best, with 87.25% accuracy, 78.26% sensitivity, and 90.10% specificity in differentiating control, NoDR, and NPDR. Among the multi-layer fusion options, the intermediate-fusion architecture performed best, achieving 92.65% accuracy, 87.01% sensitivity, and 94.37% specificity. To interpret the deep learning performance, Gradient-weighted Class Activation Mapping (Grad-CAM) was used to identify the spatial characteristics driving OCTA classification. Comparative analysis indicates that the layer fusion option can affect deep learning classification performance, and that the intermediate-fusion approach is optimal for OCTA classification of DR.
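The abstract above uses Grad-CAM to localize which retinal regions drive the CNN's decision. For the special case of global average pooling followed by a linear head, the Grad-CAM channel weights reduce exactly to the classifier weights, so the map can be computed without autograd. The feature-map sizes and class weights below are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy convolutional feature maps for one en-face OCTA image:
# 4 channels of 6x6 activations (hypothetical sizes for illustration).
feature_maps = rng.random((4, 6, 6))

# Toy classifier: global average pooling (GAP) followed by a linear score.
w = np.array([0.8, -0.3, 1.2, 0.1])   # hypothetical class weights
pooled = feature_maps.mean(axis=(1, 2))
score = float(pooled @ w)

# Grad-CAM: with a GAP + linear head, the gradient of the score w.r.t. each
# channel's activations is w_k / (H*W), so channel importance is simply w_k.
# The class activation map is the ReLU of the weighted sum of feature maps.
cam = np.maximum(0.0, np.tensordot(w, feature_maps, axes=1))

# Normalize to [0, 1] for visualization as a heatmap overlay on the OCTA image.
cam_norm = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam_norm.shape)
```

In a deep network the channel weights come from backpropagated gradients rather than the head weights directly, but the final weighted-sum-plus-ReLU step is the same.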
Affiliation(s)
- Behrouz Ebrahimi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- David Le
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Mansour Abtahi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Albert K. Dadzie
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Jennifer I. Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- R. V. Paul Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
|