1
Chatzara A, Maliagkani E, Mitsopoulou D, Katsimpris A, Apostolopoulos ID, Papageorgiou E, Georgalas I. Artificial Intelligence Approaches for Geographic Atrophy Segmentation: A Systematic Review and Meta-Analysis. Bioengineering (Basel) 2025; 12:475. [PMID: 40428094] [PMCID: PMC12108927] [DOI: 10.3390/bioengineering12050475]
Abstract
Geographic atrophy (GA) is a progressive retinal disease associated with late-stage age-related macular degeneration (AMD), a significant cause of visual impairment in senior adults. GA lesion segmentation is important for disease monitoring in clinical trials and routine ophthalmic practice; however, its manual delineation is time-consuming, laborious, and subject to inter-grader variability. The use of artificial intelligence (AI) is rapidly expanding within the medical field and could potentially improve accuracy while reducing the workload by facilitating this task. This systematic review evaluates the performance of AI algorithms for GA segmentation and highlights their key limitations from the literature. Five databases and two registries were searched from inception until 23 March 2024, following the PRISMA methodology. Twenty-four studies met the prespecified eligibility criteria, and fifteen were included in this meta-analysis. The pooled Dice similarity coefficient (DSC) was 0.91 (95% CI 0.88-0.95), signifying a high agreement between the reference standards and model predictions. The risk of bias and reporting quality were assessed using QUADAS-2 and CLAIM tools. This review provides a comprehensive evaluation of AI applications for GA segmentation and identifies areas for improvement. The findings support the potential of AI to enhance clinical workflows and highlight pathways for improved future models that could bridge the gap between research settings and real-world clinical practice.
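The pooled Dice similarity coefficient (DSC) reported above is computed per image as twice the overlap between the model's predicted lesion mask and the reference-standard mask, divided by the sum of their areas. A minimal sketch of that per-image computation (the array names are illustrative, not from any included study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary lesion masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# A pooled DSC of 0.91 means predicted and reference GA masks overlap closely on average.
```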
Affiliation(s)
- Aikaterini Chatzara: 1st Department of Ophthalmology, G. Gennimatas General Hospital, National and Kapodistrian University of Athens, 11527 Athens, Greece
- Eirini Maliagkani: 1st Department of Ophthalmology, G. Gennimatas General Hospital, National and Kapodistrian University of Athens, 11527 Athens, Greece
- Andreas Katsimpris: Princess Alexandra Eye Pavilion, University of Edinburgh, Edinburgh EH3 9HA, UK
- Ioannis D. Apostolopoulos: ACTA Lab, Department of Energy Systems, University of Thessaly, Gaiopolis Campus, 41500 Larisa, Greece
- Elpiniki Papageorgiou: ACTA Lab, Department of Energy Systems, University of Thessaly, Gaiopolis Campus, 41500 Larisa, Greece
- Ilias Georgalas: 1st Department of Ophthalmology, G. Gennimatas General Hospital, National and Kapodistrian University of Athens, 11527 Athens, Greece
2
Gao Y, Xiong F, Xiong J, Chen Z, Lin Y, Xia X, Yang Y, Li G, Hu Y. Recent advances in the application of artificial intelligence in age-related macular degeneration. BMJ Open Ophthalmol 2024; 9:e001903. [PMID: 39537399] [PMCID: PMC11580293] [DOI: 10.1136/bmjophth-2024-001903]
Abstract
Recent advancements in ophthalmology have been driven by the incorporation of artificial intelligence (AI), especially in diagnosing, monitoring treatment and predicting outcomes for age-related macular degeneration (AMD). AMD is a leading cause of irreversible vision loss worldwide, and its increasing prevalence among the ageing population presents a significant challenge for managing the disease. AI holds considerable promise in tackling this issue. This paper provides an overview of the latest developments in AI applications for AMD. However, current limitations include insufficient and unbalanced data, lack of interpretability in models, dependence on data quality and limited generality.
Affiliation(s)
- Yundi Gao: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China; Beijing Bright Eye Hospital, Beijing, China
- Fen Xiong: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Jian Xiong: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Zidan Chen: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Yucai Lin: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Xinjing Xia: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Yulan Yang: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Guodong Li: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Yunwei Hu: Ophthalmic Center, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
3
Shmueli O, Szeskin A, Benhamou I, Joskowicz L, Shwartz Y, Levy J. Measuring Geographic Atrophy Area Using Column-Based Machine Learning Software on Spectral-Domain Optical Coherence Tomography versus Fundus Auto Fluorescence. Bioengineering (Basel) 2024; 11:849. [PMID: 39199806] [PMCID: PMC11351153] [DOI: 10.3390/bioengineering11080849]
Abstract
BACKGROUND The purpose of this study was to compare geographic atrophy (GA) area semi-automatic measurement using fundus autofluorescence (FAF) versus optical coherence tomography (OCT) annotation with the cRORA (complete retinal pigment epithelium and outer retinal atrophy) criteria. METHODS GA findings on FAF and OCT were semi-automatically annotated at a single time point in 36 pairs of FAF and OCT scans obtained from 36 eyes in 24 patients with dry age-related macular degeneration (AMD). The GA area, focality, perimeter, circularity, minimum and maximum Feret diameter, and minimum distance from the center were compared between FAF and OCT annotations. RESULTS The total GA area measured on OCT was 4.74 ± 3.80 mm2. In contrast, the total GA measured on FAF was 13.47 ± 8.64 mm2 (p < 0.0001), with a mean difference of 8.72 ± 6.35 mm2. Multivariate regression analysis revealed a significant correlation between the difference in area between OCT and FAF and the total baseline lesion perimeter and maximal lesion diameter measured on OCT (adjusted r2: 0.52; p < 0.0001) and the total baseline lesion area measured on FAF (adjusted r2: 0.83; p < 0.0001). CONCLUSIONS We report that the GA area measured on FAF differs significantly from the GA area measured on OCT. Further research is warranted in order to determine the clinical relevance of these findings.
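Several of the lesion descriptors compared in this study (area, focality, perimeter, circularity, maximum Feret diameter) can be derived from a binary GA mask; a minimal sketch using scikit-image, with an assumed pixel size and not the study's actual measurement pipeline:

```python
import numpy as np
from skimage import measure  # feret_diameter_max requires scikit-image >= 0.18

def ga_lesion_metrics(mask: np.ndarray, mm_per_px: float) -> dict:
    """Basic GA lesion geometry from a binary en face mask (illustrative only)."""
    regions = measure.regionprops(measure.label(mask.astype(bool)))
    area_mm2 = sum(r.area for r in regions) * mm_per_px ** 2
    perimeter_mm = sum(r.perimeter for r in regions) * mm_per_px
    # Circularity: 4*pi*A / P^2, equal to 1.0 for a perfect circle
    circularity = 4 * np.pi * area_mm2 / perimeter_mm ** 2 if perimeter_mm > 0 else 0.0
    max_feret_mm = max((r.feret_diameter_max for r in regions), default=0.0) * mm_per_px
    return {
        "focality": len(regions),  # number of separate lesion components
        "area_mm2": area_mm2,
        "perimeter_mm": perimeter_mm,
        "circularity": circularity,
        "max_feret_mm": max_feret_mm,
    }
```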
Affiliation(s)
- Or Shmueli: Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Ein-Karem, Jerusalem 91120, Israel
- Adi Szeskin: School of Computer Science and Engineering, The Hebrew University of Jerusalem, Givat-Ram, Jerusalem 9190401, Israel
- Ilan Benhamou: School of Computer Science and Engineering, The Hebrew University of Jerusalem, Givat-Ram, Jerusalem 9190401, Israel
- Leo Joskowicz: School of Computer Science and Engineering, The Hebrew University of Jerusalem, Givat-Ram, Jerusalem 9190401, Israel
- Yahel Shwartz: Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Ein-Karem, Jerusalem 91120, Israel
- Jaime Levy: Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Ein-Karem, Jerusalem 91120, Israel
4
Mishra Z, Wang Z, Xu E, Xu S, Majid I, Sadda SR, Hu ZJ. Recurrent and Concurrent Prediction of Longitudinal Progression of Stargardt Atrophy and Geographic Atrophy. medRxiv 2024:2024.02.11.24302670. [PMID: 38405807] [PMCID: PMC10888984] [DOI: 10.1101/2024.02.11.24302670]
Abstract
Stargardt disease and age-related macular degeneration are the leading causes of blindness in the juvenile and geriatric populations, respectively. The formation of atrophic regions of the macula is a hallmark of the end stages of both diseases. The progression of these diseases is tracked using various imaging modalities, two of the most common being fundus autofluorescence (FAF) imaging and spectral-domain optical coherence tomography (SD-OCT). This study investigates the use of longitudinal FAF and SD-OCT imaging data (month 0, month 6, month 12, and month 18) for the predictive modelling of future atrophy in Stargardt disease and geographic atrophy. To achieve this objective, we develop a set of novel deep convolutional neural networks enhanced with recurrent network units for longitudinal prediction and concurrent learning of ensemble network units (termed ReConNet), which take advantage of improved retinal layer features beyond mean intensity features. Using FAF images, the neural network presented in this paper achieved mean (± standard deviation, SD) and median Dice coefficients of 0.895 (± 0.086) and 0.922 for Stargardt atrophy, and 0.864 (± 0.113) and 0.893 for geographic atrophy. Using SD-OCT images for Stargardt atrophy, the neural network achieved mean and median Dice coefficients of 0.882 (± 0.101) and 0.906, respectively. When predicting only the interval growth of the atrophic lesions with FAF images, mean (± SD) and median Dice coefficients of 0.557 (± 0.094) and 0.559 were achieved for Stargardt atrophy, and 0.612 (± 0.089) and 0.601 for geographic atrophy. The prediction performance with OCT images is comparable to that with FAF, opening a more efficient and practical avenue for assessing atrophy progression in clinical trials and retina clinics beyond the widely used FAF. These results are highly encouraging for high-performance interval growth prediction once more frequent or longer-term longitudinal data become available, which is the pressing next step of this ongoing research.
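The much lower interval-growth scores reflect that the Dice coefficient is restricted to atrophy appearing after baseline rather than to the whole lesion; a minimal sketch of that restriction, assuming co-registered binary masks (names are illustrative):

```python
import numpy as np

def growth_dice(baseline: np.ndarray, true_followup: np.ndarray,
                pred_followup: np.ndarray) -> float:
    """Dice computed only on atrophy that appears after the baseline visit."""
    baseline = baseline.astype(bool)
    true_growth = np.logical_and(true_followup.astype(bool), ~baseline)
    pred_growth = np.logical_and(pred_followup.astype(bool), ~baseline)
    overlap = np.logical_and(true_growth, pred_growth).sum()
    denom = true_growth.sum() + pred_growth.sum()
    return 2.0 * overlap / denom if denom else 1.0
```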
Affiliation(s)
- Zubin Mishra: Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA, 91103, USA; Case Western Reserve University School of Medicine, Cleveland, OH, 44106, USA
- Ziyuan Wang: Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA, 91103, USA; The University of California, Los Angeles, CA, 90095, USA
- Emily Xu: Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA, 91103, USA
- Sophia Xu: Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA, 91103, USA
- Iyad Majid: Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA, 91103, USA
- SriniVas R. Sadda: Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA, 91103, USA; The University of California, Los Angeles, CA, 90095, USA
- Zhihong Jewel Hu: Doheny Image Analysis Laboratory, Doheny Eye Institute, Pasadena, CA, 91103, USA
5
Yang C, Li B, Xiao Q, Bai Y, Li Y, Li Z, Li H, Li H. LA-Net: layer attention network for 3D-to-2D retinal vessel segmentation in OCTA images. Phys Med Biol 2024; 69:045019. [PMID: 38237179] [DOI: 10.1088/1361-6560/ad2011]
Abstract
Objective. Retinal vessel segmentation from optical coherence tomography angiography (OCTA) volumes is significant for analyzing blood supply structures and diagnosing ophthalmic diseases. However, accurate retinal vessel segmentation in 3D OCTA remains challenging due to the interference of choroidal blood flow signals and the variations in retinal vessel structure. Approach. This paper proposes a layer attention network (LA-Net) for 3D-to-2D retinal vessel segmentation. The network comprises a 3D projection path and a 2D segmentation path. The key component in the 3D path is the proposed multi-scale layer attention module, which effectively learns the layer features of OCT and OCTA to attend to the retinal vessel layer while suppressing the choroidal vessel layer. This module also efficiently captures 3D multi-scale information for improved semantic understanding during projection. In the 2D path, a reverse boundary attention module is introduced to explore and preserve boundary and shape features of retinal vessels by focusing on non-salient regions in deep features. Main results. Experiments on two subsets of the OCTA-500 dataset showed that the method achieves advanced segmentation performance, with Dice similarity coefficients of 93.04% and 89.74%, respectively. Significance. The proposed network provides reliable 3D-to-2D segmentation of retinal vessels, with potential for application in various segmentation tasks that involve projecting the input image. Implementation code: https://github.com/y8421036/LA-Net.
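The central idea of the 3D-to-2D projection path, collapsing the depth axis with learned attention so retinal vessel layers dominate the en face map, can be sketched with a simple per-voxel depth attention; this is a simplified stand-in for the paper's multi-scale layer attention module, not its implementation:

```python
import torch
import torch.nn as nn

class DepthAttentionProjection(nn.Module):
    """Collapse the depth axis of an OCT/OCTA feature volume with learned attention."""
    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1x1 convolution scores how much each depth position should contribute
        self.score = nn.Conv3d(in_channels, 1, kernel_size=1)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, channels, depth, height, width)
        weights = torch.softmax(self.score(volume), dim=2)  # normalized over depth
        return (volume * weights).sum(dim=2)                 # -> (batch, channels, H, W)

# proj = DepthAttentionProjection(in_channels=16)
# en_face = proj(torch.randn(1, 16, 64, 128, 128))  # -> (1, 16, 128, 128)
```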
Affiliation(s)
- Chaozhi Yang: College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, People's Republic of China
- Bei Li: Beijing Hospital, Institute of Geriatric Medicine, Chinese Academy of Medical Science, Beijing 100730, People's Republic of China
- Qian Xiao: College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, People's Republic of China
- Yun Bai: College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, People's Republic of China
- Yachuan Li: College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, People's Republic of China
- Zongmin Li: College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, People's Republic of China
- Hongyi Li: Beijing Hospital, Institute of Geriatric Medicine, Chinese Academy of Medical Science, Beijing 100730, People's Republic of China
- Hua Li: Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, People's Republic of China
6
Elsawy A, Keenan TD, Chen Q, Shi X, Thavikulwat AT, Bhandari S, Chew EY, Lu Z. Deep-GA-Net for Accurate and Explainable Detection of Geographic Atrophy on OCT Scans. Ophthalmol Sci 2023; 3:100311. [PMID: 37304045] [PMCID: PMC10251072] [DOI: 10.1016/j.xops.2023.100311]
Abstract
Objective To propose Deep-GA-Net, a 3-dimensional (3D) deep learning network with 3D attention layer, for the detection of geographic atrophy (GA) on spectral domain OCT (SD-OCT) scans, explain its decision making, and compare it with existing methods. Design Deep learning model development. Participants Three hundred eleven participants from the Age-Related Eye Disease Study 2 Ancillary SD-OCT Study. Methods A dataset of 1284 SD-OCT scans from 311 participants was used to develop Deep-GA-Net. Cross-validation was used to evaluate Deep-GA-Net, where each testing set contained no participant from the corresponding training set. En face heatmaps and important regions at the B-scan level were used to visualize the outputs of Deep-GA-Net, and 3 ophthalmologists graded the presence or absence of GA in them to assess the explainability (i.e., understandability and interpretability) of its detections. Main Outcome Measures Accuracy, area under receiver operating characteristic curve (AUC), area under precision-recall curve (APR). Results Compared with other networks, Deep-GA-Net achieved the best metrics, with accuracy of 0.93, AUC of 0.94, and APR of 0.91, and received the best gradings of 0.98 and 0.68 on the en face heatmap and B-scan grading tasks, respectively. Conclusions Deep-GA-Net was able to detect GA accurately from SD-OCT scans. The visualizations of Deep-GA-Net were more explainable, as suggested by 3 ophthalmologists. The code and pretrained models are publicly available at https://github.com/ncbi/Deep-GA-Net. Financial Disclosures The author(s) have no proprietary or commercial interest in any materials discussed in this article.
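The three headline detection metrics (accuracy, area under the ROC curve, and area under the precision-recall curve) can be computed directly from per-scan GA probabilities; a minimal sketch with scikit-learn (the labels and probabilities are illustrative):

```python
import numpy as np
from sklearn.metrics import accuracy_score, average_precision_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0])            # GA present/absent per OCT volume
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4])  # model probability of GA

accuracy = accuracy_score(y_true, y_prob >= 0.5)
auc_roc = roc_auc_score(y_true, y_prob)            # AUC
auc_pr = average_precision_score(y_true, y_prob)   # APR (area under precision-recall)
print(accuracy, auc_roc, auc_pr)
```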
Affiliation(s)
- Amr Elsawy: National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Tiarnan D.L. Keenan: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Qingyu Chen: National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Xioashuang Shi: Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Alisa T. Thavikulwat: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Sanjeeb Bhandari: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Emily Y. Chew: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Zhiyong Lu: National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
7
Spaide T, Jiang J, Patil J, Anegondi N, Steffen V, Kawczynski MG, Newton EM, Rabe C, Gao SS, Lee AY, Holz FG, Sadda S, Schmitz-Valckenberg S, Ferrara D. Geographic Atrophy Segmentation Using Multimodal Deep Learning. Transl Vis Sci Technol 2023; 12:10. [PMID: 37428131] [DOI: 10.1167/tvst.12.7.10]
Abstract
Purpose To examine deep learning (DL)-based methods for accurate segmentation of geographic atrophy (GA) lesions using fundus autofluorescence (FAF) and near-infrared (NIR) images. Methods This retrospective analysis utilized imaging data from study eyes of patients enrolled in Proxima A and B (NCT02479386; NCT02399072) natural history studies of GA. Two multimodal DL networks (UNet and YNet) were used to automatically segment GA lesions on FAF; segmentation accuracy was compared with annotations by experienced graders. The training data set comprised 940 image pairs (FAF and NIR) from 183 patients in Proxima B; the test data set comprised 497 image pairs from 154 patients in Proxima A. Dice coefficient scores, Bland-Altman plots, and Pearson correlation coefficient (r) were used to assess performance. Results On the test set, Dice scores for the DL network to grader comparison ranged from 0.89 to 0.92 for screening visit; Dice score between graders was 0.94. GA lesion area correlations (r) for YNet versus grader, UNet versus grader, and between graders were 0.981, 0.959, and 0.995, respectively. Longitudinal GA lesion area enlargement correlations (r) for screening to 12 months (n = 53) were lower (0.741, 0.622, and 0.890, respectively) compared with the cross-sectional results at screening. Longitudinal correlations (r) from screening to 6 months (n = 77) were even lower (0.294, 0.248, and 0.686, respectively). Conclusions Multimodal DL networks to segment GA lesions can produce accurate results comparable with expert graders. Translational Relevance DL-based tools may support efficient and individualized assessment of patients with GA in clinical research and practice.
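Agreement between network and grader lesion areas is summarized with Pearson correlation and Bland-Altman statistics; a minimal sketch of both computations (the areas are illustrative values, not study data):

```python
import numpy as np
from scipy.stats import pearsonr

model_area = np.array([2.1, 5.4, 7.8, 3.3])   # GA area (mm^2) from the network
grader_area = np.array([2.0, 5.6, 7.5, 3.4])  # GA area (mm^2) from a grader

r, p_value = pearsonr(model_area, grader_area)

# Bland-Altman summary: mean bias and 95% limits of agreement
diff = model_area - grader_area
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(r, bias, loa)
```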
Affiliation(s)
- Theodore Spaide: Roche Personalized Healthcare, Genentech, Inc., South San Francisco, CA, USA
- Jiaxiang Jiang: Clinical Imaging Group, Genentech, Inc., South San Francisco, CA, USA; Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, USA
- Jasmine Patil: Clinical Imaging Group, Genentech, Inc., South San Francisco, CA, USA
- Neha Anegondi: Roche Personalized Healthcare, Genentech, Inc., South San Francisco, CA, USA; Clinical Imaging Group, Genentech, Inc., South San Francisco, CA, USA
- Verena Steffen: Roche Personalized Healthcare, Genentech, Inc., South San Francisco, CA, USA; Biostatistics, Genentech, Inc., South San Francisco, CA, USA
- Elizabeth M Newton: Roche Personalized Healthcare, Genentech, Inc., South San Francisco, CA, USA
- Christina Rabe: Roche Personalized Healthcare, Genentech, Inc., South San Francisco, CA, USA; Biostatistics, Genentech, Inc., South San Francisco, CA, USA
- Simon S Gao: Roche Personalized Healthcare, Genentech, Inc., South San Francisco, CA, USA; Clinical Imaging Group, Genentech, Inc., South San Francisco, CA, USA
- Aaron Y Lee: Department of Ophthalmology, University of Washington, School of Medicine, Seattle, WA, USA
- Frank G Holz: Department of Ophthalmology and GRADE Reading Center, University of Bonn, Bonn, Germany
- SriniVas Sadda: Doheny Eye Institute, Los Angeles, CA, USA; Department of Ophthalmology, David Geffen School of Medicine at University of California, Los Angeles, Los Angeles, CA, USA
- Steffen Schmitz-Valckenberg: Department of Ophthalmology and GRADE Reading Center, University of Bonn, Bonn, Germany; John A. Moran Eye Center, University of Utah, Salt Lake City, UT, USA
- Daniela Ferrara: Roche Personalized Healthcare, Genentech, Inc., South San Francisco, CA, USA
8
Wei W, Southern J, Zhu K, Li Y, Cordeiro MF, Veselkov K. Deep learning to detect macular atrophy in wet age-related macular degeneration using optical coherence tomography. Sci Rep 2023; 13:8296. [PMID: 37217770] [DOI: 10.1038/s41598-023-35414-y]
Abstract
Here, we have developed a deep learning method to fully automatically detect and quantify six main clinically relevant atrophic features associated with macular atrophy (MA) using optical coherence tomography (OCT) analysis of patients with wet age-related macular degeneration (AMD). The development of MA in patients with AMD results in irreversible blindness, and there is currently no effective method for its early diagnosis, despite the recent development of unique treatments. Using an OCT dataset of 2211 B-scans from 45 volumetric scans of 8 patients, a convolutional neural network with a one-against-all strategy was trained to segment all six atrophic features, followed by a validation to evaluate the performance of the models. The models achieved a mean Dice similarity coefficient of 0.706 ± 0.039, a mean precision of 0.834 ± 0.048, and a mean sensitivity of 0.615 ± 0.051. These results show the potential of artificial intelligence-aided methods for early detection and identification of the progression of MA in wet AMD, which can further support and assist clinical decisions.
Affiliation(s)
- Wei Wei: Department of Surgery and Cancer, Imperial College London, London, UK; Ningbo Medical Center Lihuili Hospital, Ningbo, China; Imperial College Ophthalmology Research Group, London, UK
- Kexuan Zhu: Ningbo Medical Center Lihuili Hospital, Ningbo, China
- Yefeng Li: School of Cyber Science and Engineering, Ningbo University of Technology, Ningbo, China
- Maria Francesca Cordeiro: Department of Surgery and Cancer, Imperial College London, London, UK; Imperial College Ophthalmology Research Group, London, UK
- Kirill Veselkov: Department of Surgery and Cancer, Imperial College London, London, UK
9
Wei W, Anantharanjit R, Patel RP, Cordeiro MF. Detection of macular atrophy in age-related macular degeneration aided by artificial intelligence. Expert Rev Mol Diagn 2023:1-10. [PMID: 37144908] [DOI: 10.1080/14737159.2023.2208751]
Abstract
INTRODUCTION Age-related macular degeneration (AMD) is a leading cause of irreversible visual impairment worldwide. The endpoint of AMD, in both its dry and wet forms, is macular atrophy (MA), which is characterized by the permanent loss of the RPE and overlying photoreceptors. A recognized unmet need in AMD is the early detection of MA development. AREAS COVERED Artificial intelligence (AI) has demonstrated great impact in the detection of retinal diseases, especially with its robust ability to analyze the big data afforded by ophthalmic imaging modalities such as color fundus photography (CFP), fundus autofluorescence (FAF), near-infrared reflectance (NIR), and optical coherence tomography (OCT). Among these, OCT has shown great promise in identifying early MA using the criteria established in 2018. EXPERT OPINION There are few studies in which AI-OCT methods have been used to identify MA; however, the results are very promising compared with other imaging modalities. In this paper, we review the development and advances of ophthalmic imaging modalities and their combination with AI technology to detect MA in AMD. In addition, we emphasize the application of AI-OCT as an objective, cost-effective tool for the early detection and monitoring of the progression of MA in AMD.
Affiliation(s)
- Wei Wei: Department of Ophthalmology, Ningbo Medical Center Lihuili Hospital, Ningbo, China; Department of Surgery & Cancer, Imperial College London, London, UK; Imperial College Ophthalmology Research Group (ICORG), London, UK
- Rajeevan Anantharanjit: Imperial College Ophthalmology Research Group (ICORG), London, UK; Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
- Radhika Pooja Patel: Imperial College Ophthalmology Research Group (ICORG), London, UK; Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
- Maria Francesca Cordeiro: Department of Surgery & Cancer, Imperial College London, London, UK; Imperial College Ophthalmology Research Group (ICORG), London, UK; Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
10
Mai J, Lachinov D, Riedl S, Reiter GS, Vogl WD, Bogunovic H, Schmidt-Erfurth U. Clinical validation for automated geographic atrophy monitoring on OCT under complement inhibitory treatment. Sci Rep 2023; 13:7028. [PMID: 37120456] [PMCID: PMC10148818] [DOI: 10.1038/s41598-023-34139-2]
Abstract
Geographic atrophy (GA) represents a late stage of age-related macular degeneration, which leads to irreversible vision loss. With the first successful therapeutic approach, namely complement inhibition, huge numbers of patients will have to be monitored regularly. Given these perspectives, a strong need for automated GA segmentation has evolved. The main purpose of this study was the clinical validation of an artificial intelligence (AI)-based algorithm to segment a topographic 2D GA area on a 3D optical coherence tomography (OCT) volume, and to evaluate its potential for AI-based monitoring of GA progression under complement-targeted treatment. 100 GA patients from routine clinical care at the Medical University of Vienna for internal validation and 113 patients from the FILLY phase 2 clinical trial for external validation were included. Mean Dice Similarity Coefficient (DSC) was 0.86 ± 0.12 and 0.91 ± 0.05 for total GA area on the internal and external validation, respectively. Mean DSC for the GA growth area at month 12 on the external test set was 0.46 ± 0.16. Importantly, the automated segmentation by the algorithm corresponded to the outcome of the original FILLY trial measured manually on fundus autofluorescence. The proposed AI approach can reliably segment GA area on OCT with high accuracy. The availability of such tools represents an important step towards AI-based monitoring of GA progression under treatment on OCT for clinical management as well as regulatory trials.
Affiliation(s)
- Julia Mai: Laboratory for Ophthalmic Image Analysis (OPTIMA), Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Dmitrii Lachinov: Laboratory for Ophthalmic Image Analysis (OPTIMA), Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria; Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Sophie Riedl: Laboratory for Ophthalmic Image Analysis (OPTIMA), Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Gregor S Reiter: Laboratory for Ophthalmic Image Analysis (OPTIMA), Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Wolf-Dieter Vogl: Laboratory for Ophthalmic Image Analysis (OPTIMA), Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Hrvoje Bogunovic: Laboratory for Ophthalmic Image Analysis (OPTIMA), Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria; Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Ursula Schmidt-Erfurth: Laboratory for Ophthalmic Image Analysis (OPTIMA), Department of Ophthalmology and Optometry, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
11
Lu J, Cheng Y, Li J, Liu Z, Shen M, Zhang Q, Liu J, Herrera G, Hiya FE, Morin R, Joseph J, Gregori G, Rosenfeld PJ, Wang RK. Automated segmentation and quantification of calcified drusen in 3D swept source OCT imaging. Biomed Opt Express 2023; 14:1292-1306. [PMID: 36950236] [PMCID: PMC10026581] [DOI: 10.1364/boe.485999]
Abstract
Qualitative and quantitative assessments of calcified drusen are clinically important for determining the risk of disease progression in age-related macular degeneration (AMD). This paper reports the development of an automated algorithm to segment and quantify calcified drusen on swept-source optical coherence tomography (SS-OCT) images. The algorithm leverages the higher scattering property of calcified drusen compared with soft drusen. Calcified drusen have a higher optical attenuation coefficient (OAC), which results in a choroidal hypotransmission defect (hypoTD) below the calcified drusen. We show that it is possible to automatically segment calcified drusen from 3D SS-OCT scans by combining the OAC within drusen and the hypoTDs under drusen. We also propose a correction method for the segmentation of the retinal pigment epithelium (RPE) overlying calcified drusen, which automatically adjusts the RPE position by the OAC peak width along each A-line, leading to more accurate segmentation and quantification of drusen in general, and calcified drusen in particular. A total of 29 eyes with nonexudative AMD and calcified drusen imaged with SS-OCT using the 6 × 6 mm2 scanning pattern were used in this study to test the performance of the proposed automated method. The method achieved good agreement with human expert graders in identifying the area of calcified drusen (Dice similarity coefficient: 68.27 ± 11.09%; correlation coefficient of the area measurements: r = 0.9422; mean bias of the area measurements: 0.04781 mm2).
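The optical attenuation coefficient on which the algorithm relies is commonly estimated depth-resolved from each A-line with a single-scattering model, where each pixel's attenuation is proportional to its intensity divided by the signal remaining below it; a minimal sketch under that assumption (not the paper's exact implementation):

```python
import numpy as np

def attenuation_coefficients(a_line: np.ndarray, pixel_size_mm: float) -> np.ndarray:
    """Depth-resolved OAC estimate (mm^-1) for one OCT A-line, single-scattering model.

    oac[i] ~ I[i] / (2 * dz * sum of I below depth i)
    """
    intensity = a_line.astype(float)
    below = np.cumsum(intensity[::-1])[::-1] - intensity  # signal remaining below each pixel
    below = np.maximum(below, 1e-8)                       # avoid division by zero at the bottom
    return intensity / (2.0 * pixel_size_mm * below)
```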
Affiliation(s)
- Jie Lu: Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Yuxuan Cheng: Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Jianqing Li: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Ziyu Liu: Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Mengxi Shen: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Qinqin Zhang: Department of Bioengineering, University of Washington, Seattle, Washington, USA; Research and Development, Carl Zeiss Meditec, Inc., Dublin, CA, USA
- Jeremy Liu: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Gissel Herrera: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Farhan E. Hiya: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Rosalyn Morin: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Joan Joseph: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Giovanni Gregori: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Philip J. Rosenfeld: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Ruikang K. Wang: Department of Bioengineering, University of Washington, Seattle, Washington, USA; Department of Ophthalmology, University of Washington, Seattle, Washington, USA
12
Pramil V, de Sisternes L, Omlor L, Lewis W, Sheikh H, Chu Z, Manivannan N, Durbin M, Wang RK, Rosenfeld PJ, Shen M, Guymer R, Liang MC, Gregori G, Waheed NK. A Deep Learning Model for Automated Segmentation of Geographic Atrophy Imaged Using Swept-Source OCT. Ophthalmol Retina 2023; 7:127-141. [PMID: 35970318] [DOI: 10.1016/j.oret.2022.08.007]
Abstract
PURPOSE To present a deep learning algorithm for segmentation of geographic atrophy (GA) using en face swept-source OCT (SS-OCT) images that is accurate and reproducible for the assessment of GA growth over time. DESIGN Retrospective review of images obtained as part of a prospective natural history study. SUBJECTS Patients with GA (n = 90), patients with early or intermediate age-related macular degeneration (n = 32), and healthy controls (n = 16). METHODS An automated algorithm using scan volume data to generate 3 image inputs characterizing the main OCT features of GA (hypertransmission in the subretinal pigment epithelium [sub-RPE] slab, regions of RPE loss, and loss of retinal thickness) was trained using 126 images (93 with GA and 33 without GA, from the same number of eyes) with a fivefold cross-validation method and data augmentation techniques. It was tested on an independent set of 180 macular SS-OCT scans (6 × 6 mm2) consisting of 3 repeated scans of 30 eyes with GA at baseline and follow-up, as well as 45 images obtained from 42 eyes without GA. MAIN OUTCOME MEASURES The GA area, enlargement rate of GA area, square root of GA area, and square root of the enlargement rate of GA area were calculated using the automated algorithm and compared with ground truth calculations performed by 2 manual graders. The repeatability of these measurements was determined using intraclass correlation coefficients (ICCs). RESULTS There were no significant differences in the GA areas, enlargement rates of GA area, square roots of GA area, and square roots of the enlargement rates of GA area between the graders and the automated algorithm. The algorithm showed high repeatability, with ICCs of 0.99 and 0.94 for the GA area measurements and the enlargement rates of GA area, respectively. The repeatability limit for the GA area measurements made by grader 1, grader 2, and the automated algorithm was 0.28, 0.33, and 0.92 mm2, respectively. CONCLUSIONS Compared with manual methods, the proposed deep learning-based automated algorithm for GA segmentation using en face SS-OCT images accurately delineated GA and produced reproducible measurements of the enlargement rates of GA.
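GA growth is reported on the square-root scale because the square root of lesion area enlarges roughly linearly over time and is less dependent on baseline lesion size; a minimal sketch of the square-root enlargement rate (the values are illustrative):

```python
import math

def sqrt_enlargement_rate(area_baseline_mm2: float, area_followup_mm2: float,
                          interval_years: float) -> float:
    """Square-root enlargement rate of GA area, in mm/year."""
    return (math.sqrt(area_followup_mm2) - math.sqrt(area_baseline_mm2)) / interval_years

# Example: 4.0 mm^2 growing to 6.25 mm^2 in one year -> 0.5 mm/year on the sqrt scale
print(sqrt_enlargement_rate(4.0, 6.25, 1.0))
```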
Affiliation(s)
- Varsha Pramil: Tufts University School of Medicine, Boston, Massachusetts; New England Eye Center, Tufts New England Medical Center, Boston, Massachusetts
- Lars Omlor: Carl Zeiss Meditec, Inc, Dublin, California
- Warren Lewis: Carl Zeiss Meditec, Inc, Dublin, California; Bayside Photonics, Inc, Yellow Springs, Ohio
- Harris Sheikh: New England Eye Center, Tufts New England Medical Center, Boston, Massachusetts
- Zhongdi Chu: Department of Biomedical Engineering, University of Washington Seattle, Seattle, Washington
- Ruikang K Wang: Department of Biomedical Engineering, University of Washington Seattle, Seattle, Washington
- Philip J Rosenfeld: Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Mengxi Shen: Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Robyn Guymer: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, Australia
- Michelle C Liang: Tufts University School of Medicine, Boston, Massachusetts; New England Eye Center, Tufts New England Medical Center, Boston, Massachusetts
- Giovanni Gregori: Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Nadia K Waheed: Tufts University School of Medicine, Boston, Massachusetts; New England Eye Center, Tufts New England Medical Center, Boston, Massachusetts
13
Zhang Q, Shi Y, Shen M, Cheng Y, Zhou H, Feuer W, de Sisternes L, Gregori G, Rosenfeld PJ, Wang RK. Does the Outer Retinal Thickness Around Geographic Atrophy Represent Another Clinical Biomarker for Predicting Growth? Am J Ophthalmol 2022; 244:79-87. [PMID: 36002074] [DOI: 10.1016/j.ajo.2022.08.012]
Abstract
PURPOSE To determine whether the outer retinal layer (ORL) thickness around geographic atrophy (GA) could serve as a clinical biomarker to predict the annual enlargement rate (ER) of GA. DESIGN Retrospective analysis of a prospective, observational case series. METHODS Eyes with GA were imaged with a swept-source OCT 6 × 6 mm scan pattern. GA lesions were measured from customized en face OCT images and the annual ERs were calculated. The ORL was defined and segmented from the inner boundary of outer plexiform layer (OPL) to the inner boundary of retinal pigment epithelium (RPE) layer. The ORL thickness was measured at different subregions around GA. RESULTS A total of 38 eyes from 27 participants were included. The same eyes were used for the choriocapillaris (CC) flow deficit (FD) analysis and the RPE to the Bruch membrane (RPE-BM) distance measurements. A negative correlation was observed between the ORL thickness and the GA growth. The ORL thickness in a 300-μm rim around GA showed the strongest correlation with the GA growth (r = -0.457, P = .004). No correlations were found between the ORL thickness and the CC FDs; however, a significant correlation was found between the ORL thickness and the RPE-BM distances around GA (r = -0.398, P = .013). CONCLUSIONS ORL thickness showed a significant negative correlation with annual GA growth, but also showed a significant correlation with the RPE-BM distances, suggesting that they were dependently correlated with GA growth. This finding suggests that the loss of photoreceptors was associated with the formation of basal laminar deposits around GA.
Affiliation(s)
- Qinqin Zhang: Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Yingying Shi: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Mengxi Shen: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Yuxuan Cheng: Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Hao Zhou: Department of Bioengineering, University of Washington, Seattle, Washington, USA
- William Feuer: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Luis de Sisternes: Research and Development, Carl Zeiss Meditec, Inc, Dublin, California, USA
- Giovanni Gregori: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Philip J Rosenfeld: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Ruikang K Wang: Department of Bioengineering, University of Washington, Seattle, Washington, USA; Department of Ophthalmology, University of Washington, Seattle, Washington, USA
14
Wang Z, Sadda SR, Lee A, Hu ZJ. Automated segmentation and feature discovery of age-related macular degeneration and Stargardt disease via self-attended neural networks. Sci Rep 2022; 12:14565. [PMID: 36028647] [PMCID: PMC9418226] [DOI: 10.1038/s41598-022-18785-6]
Abstract
Age-related macular degeneration (AMD) and Stargardt disease are the leading causes of blindness for the elderly and young adults, respectively. Geographic atrophy (GA) of AMD and Stargardt atrophy are their end-stage outcomes. Efficient methods for segmentation and quantification of these atrophic lesions are critical for clinical research. In this study, we developed a deep convolutional neural network (CNN) with a trainable self-attended mechanism for accurate GA and Stargardt atrophy segmentation. Compared with traditional post-hoc attention mechanisms, which can only visualize CNN features, our self-attended mechanism is embedded in a fully convolutional network and directly involved in training the CNN to actively attend to key features for enhanced algorithm performance. We applied the self-attended CNN to the segmentation of AMD and Stargardt atrophic lesions on fundus autofluorescence (FAF) images. Compared with a preexisting regular fully convolutional network (the U-Net), our self-attended CNN achieved a 10.6% higher Dice coefficient and a 17% higher IoU (intersection over union) for AMD GA segmentation, and a 22% higher Dice coefficient and a 32% higher IoU for Stargardt atrophy segmentation. With longitudinal image data acquired over a longer time, the developed self-attended mechanism can also be applied to the visual discovery of early AMD and Stargardt features.
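A trainable self-attention block embedded in a fully convolutional segmentation network, as described above, follows the general pattern of non-local attention over spatial positions; this is a generic sketch of that pattern, not the authors' architecture:

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Non-local self-attention over the spatial positions of a feature map."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight of the attention branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.key(x).flatten(2)                      # (b, c', hw)
        attn = torch.softmax(q @ k, dim=-1)             # (b, hw, hw) attention map
        v = self.value(x).flatten(2)                    # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return self.gamma * out + x                     # residual connection
```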
Affiliation(s)
- Ziyuan Wang: Doheny Eye Institute, 150 N Orange Grove Blvd, Pasadena, 91103, USA; The University of California, Los Angeles, CA, 90095, USA
- Srinivas Reddy Sadda: Doheny Eye Institute, 150 N Orange Grove Blvd, Pasadena, 91103, USA; The University of California, Los Angeles, CA, 90095, USA
- Aaron Lee: The University of Washington, Seattle, WA, 98195, USA
- Zhihong Jewel Hu: Doheny Eye Institute, 150 N Orange Grove Blvd, Pasadena, 91103, USA
15
Yang J, Tao Y, Xu Q, Zhang Y, Ma X, Yuan S, Chen Q. Self-Supervised Sequence Recovery for Semi-Supervised Retinal Layer Segmentation. IEEE J Biomed Health Inform 2022; 26:3872-3883. [PMID: 35412994] [DOI: 10.1109/JBHI.2022.3166778]
Abstract
Automated layer segmentation plays an important role in retinal disease diagnosis in optical coherence tomography (OCT) images. However, severe retinal diseases degrade the performance of automated layer segmentation approaches. In this paper, we present a robust semi-supervised retinal layer segmentation network to mitigate model failures on abnormal retinas, in which we obtain lesion features from labeled images with a disease-balanced distribution and utilize unlabeled images to supplement layer structure information. Specifically, cross-consistency training is applied over the predictions of different decoders, and consistency between decoder predictions is enforced to improve the encoder's representation. We then propose a self-supervised sequence prediction branch, which predicts the position of each jigsaw puzzle piece to build a perception of the retinal layer structure. For this task, a layer spatial pyramid pooling (LSPP) module is designed to extract multi-scale layer spatial features. Furthermore, we use optical coherence tomography angiography (OCTA) to supplement information damaged by disease. Experimental results validate that our method is more robust than current supervised segmentation methods, while achieving advanced performance compared with state-of-the-art semi-supervised segmentation methods.
16
Chu Z, Shi Y, Zhou X, Wang L, Zhou H, Laiginhas R, Zhang Q, Cheng Y, Shen M, de Sisternes L, Durbin MK, Feuer W, Gregori G, Rosenfeld PJ, Wang RK. Optical Coherence Tomography Measurements of the Retinal Pigment Epithelium to Bruch Membrane Thickness Around Geographic Atrophy Correlate With Growth. Am J Ophthalmol 2022; 236:249-260. [PMID: 34780802] [DOI: 10.1016/j.ajo.2021.10.032]
Abstract
PURPOSE The retinal pigment epithelium (RPE) to Bruch membrane (BM) distance around geographic atrophy (GA) was measured using an optical attenuation coefficient (OAC) algorithm to determine whether this measurement could serve as a clinical biomarker to predict the annual square root enlargement rate (ER) of GA. DESIGN A retrospective analysis of a prospective, observational case series. METHODS Eyes with GA secondary to age-related macular degeneration (AMD) were imaged with swept-source OCT (SS-OCT) using a 6 × 6-mm scan pattern. GA lesions were identified and measured using customized en face OCT images, and GA annual square root ERs were calculated. At baseline, the OACs were calculated from OCT datasets to generate customized en face OAC images for GA visualization. RPE-BM distances were measured using OAC data from different subregions around the GA. RESULTS A total of 38 eyes from 27 patients were included in this study. Measured RPE-BM distances were the highest in the region closest to GA. The RPE-BM distances immediately around the GA were significantly correlated with GA annual square root ERs (r = 0.595, P < .001 for a 0- to 300-µm rim around the GA). No correlations were found between RPE-BM distances and previously published choriocapillaris (CC) flow deficits in any subregions. CONCLUSIONS RPE-BM distances from regions around the GA significantly correlate with the annual ERs of GA. These results suggest that an abnormally thickened RPE/BM complex contributes to GA growth and that this effect is independent of CC perfusion deficits.
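The perilesional subregions in which the RPE-BM distance is averaged can be built by dilating the GA mask outward by the desired distance and removing the lesion itself; a minimal sketch under an assumed en face pixel spacing and a hypothetical per-pixel distance map (not the study's code):

```python
import numpy as np
from scipy import ndimage

def rim_mask(ga_mask: np.ndarray, rim_um: float, um_per_px: float) -> np.ndarray:
    """Annulus extending approximately rim_um outward from the GA border."""
    ga = ga_mask.astype(bool)
    radius_px = int(round(rim_um / um_per_px))
    dilated = ndimage.binary_dilation(ga, iterations=radius_px)  # approximate distance
    return np.logical_and(dilated, ~ga)

# rim = rim_mask(ga_mask, rim_um=300, um_per_px=11.7)   # assumed ~11.7 um/px for a 6x6 mm scan
# mean_rpe_bm = rpe_bm_distance_map[rim].mean()         # rpe_bm_distance_map is hypothetical
```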
17
Chu Z, Wang L, Zhou X, Shi Y, Cheng Y, Laiginhas R, Zhou H, Shen M, Zhang Q, de Sisternes L, Lee AY, Gregori G, Rosenfeld PJ, Wang RK. Automatic geographic atrophy segmentation using optical attenuation in OCT scans with deep learning. Biomed Opt Express 2022; 13:1328-1343. [PMID: 35414972] [PMCID: PMC8973176] [DOI: 10.1364/boe.449314]
Abstract
A deep learning algorithm was developed to automatically identify, segment, and quantify geographic atrophy (GA) based on optical attenuation coefficients (OACs) calculated from optical coherence tomography (OCT) datasets. Normal eyes and eyes with GA secondary to age-related macular degeneration were imaged with swept-source OCT using 6 × 6 mm scanning patterns. OACs calculated from OCT scans were used to generate customized composite en face OAC images. GA lesions were identified and measured using customized en face sub-retinal pigment epithelium (subRPE) OCT images. Two deep learning models with the same U-Net architecture were trained using OAC images and subRPE OCT images. Model performance was evaluated using DICE similarity coefficients (DSCs). The GA areas were calculated and compared with manual segmentations using Pearson's correlation and Bland-Altman plots. In total, 80 GA eyes and 60 normal eyes were included in this study, out of which, 16 GA eyes and 12 normal eyes were used to test the models. Both models identified GA with 100% sensitivity and specificity on the subject level. With the GA eyes, the model trained with OAC images achieved significantly higher DSCs, stronger correlation to manual results and smaller mean bias than the model trained with subRPE OCT images (0.940 ± 0.032 vs 0.889 ± 0.056, p = 0.03, paired t-test, r = 0.995 vs r = 0.959, mean bias = 0.011 mm vs mean bias = 0.117 mm). In summary, the proposed deep learning model using composite OAC images effectively and accurately identified, segmented, and quantified GA using OCT scans.
Affiliation(s)
- Zhongdi Chu: Department of Bioengineering, University of Washington, Seattle, Washington, 98195, USA
- Liang Wang: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
- Xiao Zhou: Department of Bioengineering, University of Washington, Seattle, Washington, 98195, USA
- Yingying Shi: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
- Yuxuan Cheng: Department of Bioengineering, University of Washington, Seattle, Washington, 98195, USA
- Rita Laiginhas: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
- Hao Zhou: Department of Bioengineering, University of Washington, Seattle, Washington, 98195, USA
- Mengxi Shen: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
- Qinqin Zhang: Department of Bioengineering, University of Washington, Seattle, Washington, 98195, USA
- Luis de Sisternes: Research and Development, Carl Zeiss Meditec, Inc, Dublin, California, 94568, USA
- Aaron Y. Lee: Department of Ophthalmology, University of Washington, Seattle, Washington, 98195, USA
- Giovanni Gregori: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
- Philip J. Rosenfeld: Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
- Ruikang K. Wang: Department of Bioengineering, University of Washington, Seattle, Washington, 98195, USA; Department of Ophthalmology, University of Washington, Seattle, Washington, 98195, USA
18
Rahman L, Hafejee A, Anantharanjit R, Wei W, Cordeiro MF. Accelerating precision ophthalmology: recent advances. Expert Review of Precision Medicine and Drug Development 2022. [DOI: 10.1080/23808993.2022.2154146]
Affiliation(s)
- Loay Rahman: Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK; The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Ammaarah Hafejee: Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK; The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Rajeevan Anantharanjit: Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK; The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Wei Wei: Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK; The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
19
Fully-automated atrophy segmentation in dry age-related macular degeneration in optical coherence tomography. Sci Rep 2021; 11:21893. [PMID: 34751189] [PMCID: PMC8575929] [DOI: 10.1038/s41598-021-01227-0]
Abstract
Age-related macular degeneration (AMD) is a progressive retinal disease causing vision loss. A more detailed characterization of its atrophic form became possible with the introduction of Optical Coherence Tomography (OCT). However, manual atrophy quantification in 3D retinal scans is a tedious task that prevents taking full advantage of this accurate depiction of the retina. In this study, we developed a fully automated algorithm segmenting Retinal Pigment Epithelial and Outer Retinal Atrophy (RORA) in dry AMD on macular OCT. Sixty-two SD-OCT scans from eyes with atrophic AMD (57 patients) were collected and split into training and test sets. The training set was used to develop a Convolutional Neural Network (CNN). The performance of the algorithm was established by cross-validation and by comparison with the test set, whose ground truth was annotated by two graders. Additionally, the effect of using retinal layer segmentation during training was investigated. The algorithm achieved mean Dice scores of 0.881 and 0.844, sensitivity of 0.850 and 0.915, and precision of 0.928 and 0.799 in comparison with Expert 1 and Expert 2, respectively. Using retinal layer segmentation improved model performance. The proposed model identified RORA with performance matching that of human experts and has the potential to rapidly identify atrophy with high consistency.
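The agreement metrics reported above can be reproduced from a pair of binary atrophy masks. The following minimal Python sketch (function and variable names are illustrative, not taken from the paper) shows how Dice, sensitivity, and precision are typically computed from a predicted mask and an expert annotation:

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    """Compute Dice, sensitivity, and precision for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # true positives
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    sensitivity = tp / (tp + fn + eps)
    precision = tp / (tp + fp + eps)
    return dice, sensitivity, precision

# Toy example with a 4 x 4 mask pair.
pred = np.array([[0, 1, 1, 0]] * 4)
truth = np.array([[0, 1, 0, 0]] * 4)
print(overlap_metrics(pred, truth))
```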
Collapse
|
20
|
Shi X, Keenan TD, Chen Q, De Silva T, Thavikulwat AT, Broadhead G, Bhandari S, Cukras C, Chew EY, Lu Z. Improving Interpretability in Machine Diagnosis. OPHTHALMOLOGY SCIENCE 2021; 1:100038. [PMID: 36247813 PMCID: PMC9559084 DOI: 10.1016/j.xops.2021.100038] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Revised: 07/02/2021] [Accepted: 07/02/2021] [Indexed: 11/28/2022]
Abstract
Purpose Manually identifying geographic atrophy (GA) presence and location on OCT volume scans can be challenging and time consuming. This study developed a deep learning model simultaneously (1) to perform automated detection of GA presence or absence from OCT volume scans and (2) to provide interpretability by demonstrating which regions of which B-scans show GA. Design Med-XAI-Net, an interpretable deep learning model was developed to detect GA presence or absence from OCT volume scans using only volume scan labels, as well as to interpret the most relevant B-scans and B-scan regions. Participants One thousand two hundred eighty-four OCT volume scans (each containing 100 B-scans) from 311 participants, including 321 volumes with GA and 963 volumes without GA. Methods Med-XAI-Net simulates the human diagnostic process by using a region-attention module to locate the most relevant region in each B-scan, followed by an image-attention module to select the most relevant B-scans for classifying GA presence or absence in each OCT volume scan. Med-XAI-Net was trained and tested (80% and 20% participants, respectively) using gold standard volume scan labels from human expert graders. Main Outcome Measures Accuracy, area under the receiver operating characteristic (ROC) curve, F1 score, sensitivity, and specificity. Results In the detection of GA presence or absence, Med-XAI-Net obtained superior performance (91.5%, 93.5%, 82.3%, 82.8%, and 94.6% on accuracy, area under the ROC curve, F1 score, sensitivity, and specificity, respectively) to that of 2 other state-of-the-art deep learning methods. The performance of ophthalmologists grading only the 5 B-scans selected by Med-XAI-Net as most relevant (95.7%, 95.4%, 91.2%, and 100%, respectively) was almost identical to that of ophthalmologists grading all volume scans (96.0%, 95.7%, 91.8%, and 100%, respectively). Even grading only 1 region in 1 B-scan, the ophthalmologists demonstrated moderately high performance (89.0%, 87.4%, 77.6%, and 100%, respectively). Conclusions Despite using ground truth labels during training at the volume scan level only, Med-XAI-Net was effective in locating GA in B-scans and selecting relevant B-scans within each volume scan for GA diagnosis. These results illustrate the strengths of Med-XAI-Net in interpreting which regions and B-scans contribute to GA detection in the volume scan.
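Although the Med-XAI-Net code is not reproduced here, the image-attention idea, weighting B-scans by their learned relevance before forming a volume-level decision, can be illustrated with a small NumPy sketch; the feature shapes and weight vectors below are hypothetical and randomly initialised for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical per-B-scan feature vectors for one OCT volume (100 B-scans, 64-d features).
bscan_features = rng.normal(size=(100, 64))

# Stand-ins for learned parameters.
attention_vector = rng.normal(size=64)    # scores each B-scan's relevance
classifier_weights = rng.normal(size=64)  # maps the pooled feature to a GA logit

scores = bscan_features @ attention_vector   # one relevance score per B-scan
weights = softmax(scores)                    # attention weights sum to 1
volume_feature = weights @ bscan_features    # attention-weighted volume descriptor
ga_probability = 1 / (1 + np.exp(-(volume_feature @ classifier_weights)))

# The largest attention weights indicate the B-scans the model found most relevant.
top_bscans = np.argsort(weights)[::-1][:5]
print(ga_probability, top_bscans)
```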
Collapse
|
21
|
Arslan J, Samarasinghe G, Sowmya A, Benke KK, Hodgson LAB, Guymer RH, Baird PN. Deep Learning Applied to Automated Segmentation of Geographic Atrophy in Fundus Autofluorescence Images. Transl Vis Sci Technol 2021; 10:2. [PMID: 34228106 PMCID: PMC8267211 DOI: 10.1167/tvst.10.8.2] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Accepted: 05/23/2021] [Indexed: 11/02/2022] Open
Abstract
Purpose This study describes the development of a deep learning algorithm based on the U-Net architecture for automated segmentation of geographic atrophy (GA) lesions in fundus autofluorescence (FAF) images. Methods Image preprocessing and normalization by modified adaptive histogram equalization were used for image standardization to improve effectiveness of deep learning. A U-Net-based deep learning algorithm was developed and trained and tested by fivefold cross-validation using FAF images from clinical datasets. The following metrics were used for evaluating the performance for lesion segmentation in GA: dice similarity coefficient (DSC), DSC loss, sensitivity, specificity, mean absolute error (MAE), accuracy, recall, and precision. Results In total, 702 FAF images from 51 patients were analyzed. After fivefold cross-validation for lesion segmentation, the average training and validation scores were found for the most important metric, DSC (0.9874 and 0.9779), for accuracy (0.9912 and 0.9815), for sensitivity (0.9955 and 0.9928), and for specificity (0.8686 and 0.7261). Scores for testing were all similar to the validation scores. The algorithm segmented GA lesions six times more quickly than human performance. Conclusions The deep learning algorithm can be implemented using clinical data with a very high level of performance for lesion segmentation. Automation of diagnostics for GA assessment has the potential to provide savings with respect to patient visit duration, operational cost and measurement reliability in routine GA assessments. Translational Relevance A deep learning algorithm based on the U-Net architecture and image preprocessing appears to be suitable for automated segmentation of GA lesions on clinical data, producing fast and accurate results.
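As a rough illustration of the preprocessing step, contrast-limited adaptive histogram equalization (a close relative of the modified adaptive histogram equalization described above; the authors' exact modification is not shown here, and the clip limit is an arbitrary choice) can be applied to an FAF image before it is fed to the U-Net:

```python
import numpy as np
from skimage import exposure

def standardize_faf(image: np.ndarray) -> np.ndarray:
    """Normalize an FAF image to [0, 1] and apply adaptive histogram equalization."""
    image = image.astype(np.float64)
    image = (image - image.min()) / (image.max() - image.min() + 1e-8)
    # clip_limit controls how strongly local contrast is enhanced.
    return exposure.equalize_adapthist(image, clip_limit=0.02)

# Synthetic stand-in for an FAF image.
faf = np.random.default_rng(1).integers(0, 255, size=(512, 512)).astype(np.uint8)
print(standardize_faf(faf).shape)
```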
Collapse
Affiliation(s)
- Janan Arslan
- Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye & Ear Hospital, East Melbourne, Victoria, Australia
- Department of Surgery, Ophthalmology, University of Melbourne, Parkville, Victoria, Australia
| | - Gihan Samarasinghe
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
| | - Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
| | - Kurt K. Benke
- School of Engineering, University of Melbourne, Parkville, Victoria, Australia
- Centre for AgriBioscience, AgriBio, Bundoora, Victoria, Australia
| | - Lauren A. B. Hodgson
- Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye & Ear Hospital, East Melbourne, Victoria, Australia
| | - Robyn H. Guymer
- Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye & Ear Hospital, East Melbourne, Victoria, Australia
- Department of Surgery, Ophthalmology, University of Melbourne, Parkville, Victoria, Australia
| | - Paul N. Baird
- Department of Surgery, Ophthalmology, University of Melbourne, Parkville, Victoria, Australia
| |
Collapse
|
22
|
Szeskin A, Yehuda R, Shmueli O, Levy J, Joskowicz L. A column-based deep learning method for the detection and quantification of atrophy associated with AMD in OCT scans. Med Image Anal 2021; 72:102130. [PMID: 34198041 DOI: 10.1016/j.media.2021.102130] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Revised: 05/27/2021] [Accepted: 06/03/2021] [Indexed: 10/21/2022]
Abstract
The objective quantification of retinal atrophy associated with age-related macular degeneration (AMD) is required for clinical diagnosis, follow-up, treatment efficacy evaluation, and clinical research. Spectral Domain Optical Coherence Tomography (OCT) has become an essential imaging technology to evaluate the macula. This paper describes a novel automatic method for the identification and quantification of atrophy associated with AMD in OCT scans and its visualization in the corresponding infrared imaging (IR) image. The method is based on the classification of light scattering patterns in vertical pixel-wide columns (A-scans) in OCT slices (B-scans) in which atrophy appears with a custom column-based convolutional neural network (CNN). The network classifies individual columns with 3D column patches formed by adjacent neighboring columns from the volumetric OCT scan. Subsequent atrophy columns form atrophy segments which are then projected onto the IR image and are used to identify and segment atrophy lesions in the IR image and to measure their areas and distances from the fovea. Experimental results on 106 clinical OCT scans (5,207 slices) in which cRORA atrophy (the end point of advanced dry AMD) was identified in 2,952 atrophy segments and 1,046 atrophy lesions yield a mean F1 score of 0.78 (std 0.06) and an AUC of 0.937, both close to the observer variability. Automated computer-based detection and quantification of atrophy associated with AMD using a column-based CNN classification in OCT scans can be performed at expert level and may be a useful clinical decision support and research tool for the diagnosis, follow-up and treatment of retinal degenerations and dystrophies.
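The column-based idea can be sketched in a few lines: each A-scan (a vertical pixel column in a B-scan) is classified together with its neighbouring columns, forming a small 3D patch. The NumPy snippet below is only an illustration of that patch extraction; the patch size and variable names are arbitrary and not taken from the paper:

```python
import numpy as np

def column_patch(volume: np.ndarray, b: int, x: int, half_width: int = 2) -> np.ndarray:
    """Return the 3D patch of neighbouring A-scan columns around column x of B-scan b.

    volume has shape (n_bscans, depth, width); the patch keeps the full depth and
    takes `half_width` neighbouring columns and B-scans on each side (edges clamped).
    """
    n_bscans, _, width = volume.shape
    b0, b1 = max(0, b - half_width), min(n_bscans, b + half_width + 1)
    x0, x1 = max(0, x - half_width), min(width, x + half_width + 1)
    return volume[b0:b1, :, x0:x1]

# Toy OCT volume: 10 B-scans, 496 pixels deep, 512 A-scans wide.
oct_volume = np.zeros((10, 496, 512), dtype=np.float32)
patch = column_patch(oct_volume, b=5, x=100)
print(patch.shape)  # (5, 496, 5) away from the volume edges
```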
Collapse
Affiliation(s)
- Adi Szeskin
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
| | - Roei Yehuda
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
| | - Or Shmueli
- Department of Ophthalmology, Hadassah Medical Center, Jerusalem, Israel
| | - Jaime Levy
- Department of Ophthalmology, Hadassah Medical Center, Jerusalem, Israel
| | - Leo Joskowicz
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel.
| |
Collapse
|
23
|
Sarhan MH, Nasseri MA, Zapp D, Maier M, Lohmann CP, Navab N, Eslami A. Machine Learning Techniques for Ophthalmic Data Processing: A Review. IEEE J Biomed Health Inform 2020; 24:3338-3350. [PMID: 32750971 DOI: 10.1109/jbhi.2020.3012134] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Machine learning, and especially deep learning, techniques are dominating medical image and data analysis. This article reviews machine learning approaches proposed for diagnosing ophthalmic diseases during the last four years. Three diseases are addressed in this survey, namely diabetic retinopathy, age-related macular degeneration, and glaucoma. The review covers over 60 publications and 25 public datasets and challenges related to the detection, grading, and lesion segmentation of the three considered diseases. Each section provides a summary of the public datasets and challenges related to each pathology and the current methods that have been applied to the problem. Furthermore, recent machine learning approaches for retinal vessel segmentation and methods for retinal layer and fluid segmentation are reviewed. Two main imaging modalities are considered in this survey, namely color fundus imaging and optical coherence tomography. Machine learning approaches that use eye measurements and visual field data for glaucoma detection are also included. Finally, the authors provide their views and expectations, as well as the limitations of these techniques in future clinical practice.
Collapse
|
24
|
Li M, Chen Y, Ji Z, Xie K, Yuan S, Chen Q, Li S. Image Projection Network: 3D to 2D Image Segmentation in OCTA Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3343-3354. [PMID: 32365023 DOI: 10.1109/tmi.2020.2992244] [Citation(s) in RCA: 64] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
We present an image projection network (IPN), a novel end-to-end architecture that can achieve 3D-to-2D image segmentation in optical coherence tomography angiography (OCTA) images. Our key insight is to build a projection learning module (PLM) that uses a unidirectional pooling layer to perform effective feature selection and dimension reduction concurrently. By combining multiple PLMs, the proposed network takes 3D OCTA data as input and outputs 2D segmentation results such as retinal vessel maps. It provides a new approach to quantifying retinal indicators without retinal layer segmentation and without projection maps. We tested the performance of our network on two crucial retinal image segmentation tasks: retinal vessel (RV) segmentation and foveal avascular zone (FAZ) segmentation. The experimental results on 316 OCTA volumes demonstrate that the IPN is an effective implementation of 3D-to-2D segmentation networks, and that the use of multi-modality and volumetric information makes the IPN perform better than the baseline methods.
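The core of the projection learning module is a pooling operation applied along a single (depth) axis, which collapses a 3D feature volume into a 2D map while keeping the en-face dimensions. A minimal NumPy sketch of that idea follows; the shapes and the choice of max-pooling are illustrative rather than a description of the trained network:

```python
import numpy as np

def unidirectional_pool(features: np.ndarray, axis: int = 1) -> np.ndarray:
    """Collapse a 3D feature volume to a 2D map by pooling along one axis only."""
    return features.max(axis=axis)

# Hypothetical feature volume: (height_enface, depth, width_enface).
feature_volume = np.random.default_rng(2).normal(size=(304, 64, 304))
projected = unidirectional_pool(feature_volume, axis=1)
print(projected.shape)  # (304, 304): a 2D map ready for vessel/FAZ segmentation heads
```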
Collapse
|
25
|
Arslan J, Samarasinghe G, Benke KK, Sowmya A, Wu Z, Guymer RH, Baird PN. Artificial Intelligence Algorithms for Analysis of Geographic Atrophy: A Review and Evaluation. Transl Vis Sci Technol 2020; 9:57. [PMID: 33173613 PMCID: PMC7594588 DOI: 10.1167/tvst.9.2.57] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2020] [Accepted: 09/28/2020] [Indexed: 12/28/2022] Open
Abstract
Purpose The purpose of this study was to summarize and evaluate artificial intelligence (AI) algorithms used in geographic atrophy (GA) diagnostic processes (e.g. isolating lesions or disease progression). Methods The search strategy and selection of publications were both conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. PubMed and Web of Science were used to extract the literature data. The algorithms were summarized by objective, performance, and scope of coverage of GA diagnosis (e.g. lesion automation and GA progression). Results Twenty-seven studies were identified for this review. A total of 18 publications focused on lesion segmentation only, 2 were designed to detect and classify GA, 2 were designed to predict future overall GA progression, 3 focused on prediction of future spatial GA progression, and 2 focused on prediction of visual function in GA. GA-related algorithms reported sensitivities from 0.47 to 0.98, specificities from 0.73 to 0.99, accuracies from 0.42 to 0.995, and Dice coefficients from 0.66 to 0.89. Conclusions Current GA-AI publications have a predominant focus on lesion segmentation and a minor focus on classification and progression analysis. AI could be applied to other facets of GA diagnosis, such as understanding the role of hyperfluorescent areas in GA. Using AI for GA has several advantages, including improved diagnostic accuracy and faster processing speeds. Translational Relevance AI can be used to quantify GA lesions and therefore allows one to impute visual function and quality of life. However, there is a need for the development of reliable and objective models and software to predict the rate of GA progression and to quantify improvements due to interventions.
Collapse
Affiliation(s)
- Janan Arslan
- Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Department of Surgery, Ophthalmology, University of Melbourne, Victoria, Australia
| | - Gihan Samarasinghe
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
| | - Kurt K. Benke
- School of Engineering, University of Melbourne, Parkville, Victoria, Australia
- Centre for AgriBioscience, AgriBio, Bundoora, Victoria, Australia
| | - Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
| | - Zhichao Wu
- Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
| | - Robyn H. Guymer
- Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Department of Surgery, Ophthalmology, University of Melbourne, Victoria, Australia
| | - Paul N. Baird
- Department of Surgery, Ophthalmology, University of Melbourne, Victoria, Australia
| |
Collapse
|
26
|
Ma X, Ji Z, Niu S, Leng T, Rubin DL, Chen Q. MS-CAM: Multi-Scale Class Activation Maps for Weakly-Supervised Segmentation of Geographic Atrophy Lesions in SD-OCT Images. IEEE J Biomed Health Inform 2020; 24:3443-3455. [PMID: 32750923 DOI: 10.1109/jbhi.2020.2999588] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
As one of the most critical characteristics of the advanced stage of non-exudative Age-related Macular Degeneration (AMD), Geographic Atrophy (GA) is one of the significant causes of sustained visual acuity loss. Automatic localization of retinal regions affected by GA is a fundamental step for clinical diagnosis. In this paper, we present a novel weakly supervised model for GA segmentation in Spectral-Domain Optical Coherence Tomography (SD-OCT) images. A novel Multi-Scale Class Activation Map (MS-CAM) is proposed to highlight the discriminative regions for localization and detailed description. To extract the available multi-scale features, we design a Scaling and UpSampling (SUS) module to balance the information content between features of different scales. To capture more discriminative features, an Attentional Fully Connected (AFC) module is proposed by introducing the attention mechanism into the fully connected operations to enhance the significant informative features and suppress less useful ones. Based on the location cues, the final GA region prediction is obtained by the projection segmentation of the MS-CAM. The experimental results on two independent datasets demonstrate that the proposed weakly supervised model outperforms conventional GA segmentation methods and can produce similar or superior accuracy compared with fully supervised approaches. The source code has been released and is available on GitHub: https://github.com/jizexuan/Multi-Scale-Class-Activation-Map-Tensorflow.
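The class activation map mechanism behind MS-CAM can be illustrated with a minimal single-scale CAM computation; the multi-scale, SUS, and AFC components of the actual model are omitted, and array shapes and names are illustrative:

```python
import numpy as np

def class_activation_map(feature_maps: np.ndarray, class_weights: np.ndarray) -> np.ndarray:
    """Weight the last convolutional feature maps by the classifier weights of one class.

    feature_maps: (channels, h, w) activations from the final conv layer.
    class_weights: (channels,) fully connected weights for the target class.
    Returns an (h, w) map highlighting regions that drive the class score.
    """
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0)              # keep positive evidence only
    return cam / (cam.max() + 1e-8)       # normalize to [0, 1]

rng = np.random.default_rng(3)
cam = class_activation_map(rng.normal(size=(128, 16, 16)), rng.normal(size=128))
# Thresholding the (upsampled) map yields a coarse localization cue.
rough_mask = cam > 0.5
print(cam.shape, rough_mask.sum())
```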
Collapse
|
27
|
Lo J, Heisler M, Vanzan V, Karst S, Matovinović IZ, Lončarić S, Navajas EV, Beg MF, Šarunić MV. Microvasculature Segmentation and Intercapillary Area Quantification of the Deep Vascular Complex Using Transfer Learning. Transl Vis Sci Technol 2020; 9:38. [PMID: 32855842 PMCID: PMC7424950 DOI: 10.1167/tvst.9.2.38] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2019] [Accepted: 05/08/2020] [Indexed: 12/28/2022] Open
Abstract
Purpose Optical coherence tomography angiography (OCT-A) permits visualization of the changes to the retinal circulation due to diabetic retinopathy (DR), a microvascular complication of diabetes. We demonstrate accurate segmentation of the vascular morphology for the superficial capillary plexus (SCP) and deep vascular complex (DVC) using a convolutional neural network (CNN) for quantitative analysis. Methods The main CNN training dataset consisted of retinal OCT-A with a 6 × 6-mm field of view (FOV), acquired using a Zeiss PlexElite. Multiple-volume acquisition and averaging enhanced the vasculature contrast used for constructing the ground truth for neural network training. We used transfer learning from a CNN trained on smaller FOVs of the SCP acquired using different OCT instruments. Quantitative analysis of perfusion was performed on the resulting automated vasculature segmentations in representative patients with DR. Results The automated segmentations of the OCT-A images maintained the distinct morphologies of the SCP and DVC. The network segmented the SCP with an accuracy and Dice index of 0.8599 and 0.8618, respectively, and 0.7986 and 0.8139, respectively, for the DVC. The inter-rater comparisons for the SCP had an accuracy and Dice index of 0.8300 and 0.6700, respectively, and 0.6874 and 0.7416, respectively, for the DVC. Conclusions Transfer learning reduces the amount of manually annotated images required while producing high-quality automatic segmentations of the SCP and DVC that exceed inter-rater comparisons. The resulting intercapillary area quantification provides a tool for in-depth clinical analysis of retinal perfusion. Translational Relevance Accurate retinal microvasculature segmentation with the CNN results in improved perfusion analysis in diabetic retinopathy.
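Transfer learning here amounts to initializing the 6 × 6-mm network with weights learned on smaller fields of view and then fine-tuning on the new data. The sketch below reduces the weight-reuse step to plain NumPy terms; the layer names and shapes are invented for illustration and do not reflect the authors' implementation:

```python
import numpy as np

def transfer_weights(target: dict, pretrained: dict) -> dict:
    """Copy pretrained layer weights into a new model wherever names and shapes match."""
    updated = dict(target)
    for name, weights in pretrained.items():
        if name in updated and updated[name].shape == weights.shape:
            updated[name] = weights.copy()   # reuse knowledge from the small-FOV model
    return updated

# Hypothetical weight dictionaries (layer name -> array); only the encoder layer matches.
pretrained_small_fov = {"enc1.w": np.ones((16, 1, 3, 3)), "dec1.w": np.ones((1, 16, 3, 3))}
new_large_fov_model = {"enc1.w": np.zeros((16, 1, 3, 3)), "dec1.w": np.zeros((2, 16, 3, 3))}
print(transfer_weights(new_large_fov_model, pretrained_small_fov)["enc1.w"].mean())  # 1.0
```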
Collapse
Affiliation(s)
- Julian Lo
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
| | - Morgan Heisler
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
| | - Vinicius Vanzan
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada
| | - Sonja Karst
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | | | - Sven Lončarić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
| | - Eduardo V Navajas
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada
| | - Mirza Faisal Beg
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
| | - Marinko V Šarunić
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
| |
Collapse
|
28
|
Heisler M, Karst S, Lo J, Mammo Z, Yu T, Warner S, Maberley D, Beg MF, Navajas EV, Sarunic MV. Ensemble Deep Learning for Diabetic Retinopathy Detection Using Optical Coherence Tomography Angiography. Transl Vis Sci Technol 2020; 9:20. [PMID: 32818081 PMCID: PMC7396168 DOI: 10.1167/tvst.9.2.20] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2019] [Accepted: 01/23/2020] [Indexed: 02/06/2023] Open
Abstract
Purpose To evaluate the role of ensemble learning techniques with deep learning in classifying diabetic retinopathy (DR) in optical coherence tomography angiography (OCTA) images and their corresponding co-registered structural images. Methods A total of 463 volumes from 380 eyes were acquired using the 3 × 3-mm OCTA protocol on the Zeiss Plex Elite system. Enface images of the superficial and deep capillary plexus were exported from both the optical coherence tomography and OCTA data. Component neural networks were constructed using single data-types and fine-tuned using VGG19, ResNet50, and DenseNet architectures pretrained on ImageNet weights. These networks were then ensembled using majority soft voting and stacking techniques. Results were compared with a classifier using manually engineered features. Class activation maps (CAMs) were created using the original CAM algorithm and Grad-CAM. Results The networks trained with the VGG19 architecture outperformed the networks trained on deeper architectures. Ensemble networks constructed using the four fine-tuned VGG19 architectures achieved accuracies of 0.92 and 0.90 for the majority soft voting and stacking methods respectively. Both ensemble methods outperformed the highest single data-type network and the network trained on hand-crafted features. Grad-CAM was shown to more accurately highlight areas of disease. Conclusions Ensemble learning increases the predictive accuracy of CNNs for classifying referable DR on OCTA datasets. Translational Relevance Because the diagnostic accuracy of OCTA images is shown to be greater than the manually extracted features currently used in the literature, the proposed methods may be beneficial toward developing clinically valuable solutions for DR diagnoses.
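Majority soft voting simply averages the class probabilities predicted by the individual networks, whereas stacking learns a meta-classifier on top of them. A minimal sketch of soft voting follows; the probability arrays are made up for illustration:

```python
import numpy as np

def soft_vote(probabilities: list) -> np.ndarray:
    """Average per-model class probabilities and return the predicted class per sample."""
    mean_probs = np.mean(probabilities, axis=0)   # (n_samples, n_classes)
    return mean_probs.argmax(axis=1)

# Hypothetical outputs of four fine-tuned networks for 3 samples and 2 classes.
model_outputs = [
    np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]]),
    np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7]]),
    np.array([[0.7, 0.3], [0.5, 0.5], [0.4, 0.6]]),
    np.array([[0.6, 0.4], [0.3, 0.7], [0.1, 0.9]]),
]
print(soft_vote(model_outputs))  # [0 1 1]
```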
Collapse
Affiliation(s)
- Morgan Heisler
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
| | - Sonja Karst
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| | - Julian Lo
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
| | - Zaid Mammo
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| | - Timothy Yu
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
| | - Simon Warner
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| | - David Maberley
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| | - Mirza Faisal Beg
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
| | - Eduardo V Navajas
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| | - Marinko V Sarunic
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
| |
Collapse
|
29
|
Automated Quantification of Photoreceptor alteration in macular disease using Optical Coherence Tomography and Deep Learning. Sci Rep 2020; 10:5619. [PMID: 32221349 PMCID: PMC7101374 DOI: 10.1038/s41598-020-62329-9] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2019] [Accepted: 03/03/2020] [Indexed: 02/03/2023] Open
Abstract
Diabetic macular edema (DME) and retinal vein occlusion (RVO) are macular diseases in which central photoreceptors are affected by pathological accumulation of fluid. Optical coherence tomography allows clinicians to visually assess and evaluate photoreceptor integrity, whose alteration has been observed to be an important biomarker of both diseases. However, manual quantification of this layered structure is challenging, tedious, and time-consuming. In this paper we introduce a deep learning approach for automatically segmenting and characterising photoreceptor alteration. The photoreceptor layer is segmented using an ensemble of four different convolutional neural networks. En-face representations of the layer thickness are produced to characterize the photoreceptors. The pixel-wise standard deviation of the score maps produced by the individual models is also used to indicate areas of photoreceptor abnormality or ambiguous results. Experimental results showed that our ensemble is able to produce results on par with a human expert, outperforming each of its constituent models. No statistically significant differences were observed between mean thickness estimates obtained from automated and manually generated annotations. Therefore, our model is able to reliably quantify photoreceptors, which can be used to improve the prognosis and management of macular diseases.
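The ensemble-plus-uncertainty idea can be sketched directly: segmentation score maps from the individual networks are averaged for the final mask, and their pixel-wise standard deviation flags ambiguous regions. The score maps below are random stand-ins, and the thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical per-model probability maps for one B-scan (4 models, 64 x 64 pixels).
score_maps = rng.uniform(size=(4, 64, 64))

mean_scores = score_maps.mean(axis=0)    # ensemble prediction
uncertainty = score_maps.std(axis=0)     # high values mean the models disagree

segmentation = mean_scores > 0.5         # final photoreceptor mask
flagged = uncertainty > 0.25             # pixels to review or treat as abnormal
print(segmentation.mean(), flagged.mean())
```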
Collapse
|
30
|
Beyond Performance Metrics: Automatic Deep Learning Retinal OCT Analysis Reproduces Clinical Trial Outcome. Ophthalmology 2019; 127:793-801. [PMID: 32019699 DOI: 10.1016/j.ophtha.2019.12.015] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2019] [Revised: 12/10/2019] [Accepted: 12/17/2019] [Indexed: 12/12/2022] Open
Abstract
PURPOSE To validate the efficacy of a fully automatic, deep learning-based segmentation algorithm beyond conventional performance metrics by measuring the primary outcome of a clinical trial for macular telangiectasia type 2 (MacTel2). DESIGN Evaluation of diagnostic test or technology. PARTICIPANTS A total of 92 eyes from 62 participants with MacTel2 from a phase 2 clinical trial (NCT01949324) randomized to 1 of 2 treatment groups. METHODS The ellipsoid zone (EZ) defect areas were measured on spectral domain OCT images of each eye at 2 time points (baseline and month 24) by a fully automatic, deep learning-based segmentation algorithm. The change in EZ defect area from baseline to month 24 was calculated and analyzed according to the clinical trial protocol. MAIN OUTCOME MEASURE Difference in the change in EZ defect area from baseline to month 24 between the 2 treatment groups. RESULTS The difference in the change in EZ defect area from baseline to month 24 between the 2 treatment groups measured by the fully automatic segmentation algorithm was 0.072±0.035 mm2 (P = 0.021). This was comparable to the outcome of the clinical trial using semiautomatic measurements by expert readers, 0.065±0.033 mm2 (P = 0.025). CONCLUSIONS The fully automatic segmentation algorithm was as accurate as semiautomatic expert segmentation to assess EZ defect areas and was able to reliably reproduce the statistically significant primary outcome measure of the clinical trial. This approach, to validate the performance of an automatic segmentation algorithm on the primary clinical trial end point, provides a robust gauge of its clinical applicability.
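A rough sketch of the outcome computation follows: per-eye EZ defect areas are derived from segmentation masks at the two time points, and the baseline-to-month-24 changes of the two treatment arms are then compared. The pixel area, synthetic values, and use of Welch's t-test are assumptions for illustration only; the trial's actual statistical model may differ.

```python
import numpy as np
from scipy import stats

PIXEL_AREA_MM2 = 0.01 ** 2   # assumed en-face pixel area, for illustration only

def ez_defect_area(mask: np.ndarray) -> float:
    """Area (in mm^2) of an EZ defect from a binary en-face segmentation mask."""
    return float(mask.sum() * PIXEL_AREA_MM2)

# Toy example: change for one eye between baseline and month 24.
baseline_mask = np.zeros((500, 500), dtype=bool); baseline_mask[200:300, 200:300] = True
month24_mask = np.zeros((500, 500), dtype=bool); month24_mask[195:305, 195:305] = True
print(ez_defect_area(month24_mask) - ez_defect_area(baseline_mask))  # growth in mm^2

# Comparing per-eye changes between two synthetic treatment arms.
rng = np.random.default_rng(5)
change_a = rng.normal(0.20, 0.10, size=46)
change_b = rng.normal(0.28, 0.10, size=46)
print(stats.ttest_ind(change_a, change_b, equal_var=False))
```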
Collapse
|
31
|
Guo M, Zhao M, Cheong AMY, Dai H, Lam AKC, Zhou Y. Automatic quantification of superficial foveal avascular zone in optical coherence tomography angiography implemented with deep learning. Vis Comput Ind Biomed Art 2019; 2:21. [PMID: 32240395 PMCID: PMC7099561 DOI: 10.1186/s42492-019-0031-8] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Accepted: 11/12/2019] [Indexed: 12/17/2022] Open
Abstract
Accurate segmentation and quantification of the superficial foveal avascular zone (sFAZ) is important to facilitate the diagnosis and treatment of many retinal diseases, such as diabetic retinopathy and retinal vein occlusion. We proposed a deep learning-based method for the automatic segmentation and quantification of the sFAZ in optical coherence tomography angiography (OCTA) images that is robust to brightness and contrast (B/C) variations. A dataset of 405 OCTA images from 45 participants was acquired with a Zeiss Cirrus HD-OCT 5000, and the ground truth (GT) was subsequently segmented manually. A deep learning network with an encoder-decoder architecture was created to classify each pixel into an sFAZ or non-sFAZ class. Subsequently, we applied largest-connected-region extraction and hole-filling to refine the automatic segmentation results. A maximum mean Dice similarity coefficient (DSC) of 0.976 ± 0.011 was obtained when the automatic segmentation results were compared against the GT. The correlation coefficient between the area calculated from the automatic segmentation results and that calculated from the GT was 0.997. In all nine parameter groups with varied brightness/contrast, all DSCs of the proposed method were higher than 0.96. The proposed method achieved better performance in sFAZ segmentation and quantification than two previously reported methods. In conclusion, we proposed and successfully verified an automatic sFAZ segmentation and quantification method based on deep learning that is robust to B/C variations. For clinical applications, this represents important progress toward automated segmentation and quantification suitable for clinical analysis.
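The two post-processing steps mentioned above are standard morphological operations; a minimal SciPy sketch is shown below (the probability threshold is an illustrative choice, not the authors' setting):

```python
import numpy as np
from scipy import ndimage

def postprocess(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Keep the largest connected component of a thresholded map and fill its holes."""
    binary = prob_map > threshold
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    largest = labels == (int(np.argmax(sizes)) + 1)
    return ndimage.binary_fill_holes(largest)

# Toy probability map with a main blob (one hole inside) and a small spurious blob.
prob = np.zeros((64, 64)); prob[20:40, 20:40] = 0.9; prob[30, 30] = 0.1; prob[5:8, 5:8] = 0.8
print(postprocess(prob).sum())  # 400: the hole is filled and the small blob removed
```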
Collapse
Affiliation(s)
- Menglin Guo
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen University Xili Campus, Room 208, Block A2, Taoyuan Street, Shenzhen, 518055, China
| | - Mei Zhao
- Centre for Myopia Research, School of Optometry, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
| | - Allen M Y Cheong
- Centre for Myopia Research, School of Optometry, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
| | - Houjiao Dai
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen University Xili Campus, Room 208, Block A2, Taoyuan Street, Shenzhen, 518055, China
| | - Andrew K C Lam
- Centre for Myopia Research, School of Optometry, Faculty of Health and Social Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China.
| | - Yongjin Zhou
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen University Xili Campus, Room 208, Block A2, Taoyuan Street, Shenzhen, 518055, China.
| |
Collapse
|
32
|
Wu M, Cai X, Chen Q, Ji Z, Niu S, Leng T, Rubin DL, Park H. Geographic atrophy segmentation in SD-OCT images using synthesized fundus autofluorescence imaging. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 182:105101. [PMID: 31600644 DOI: 10.1016/j.cmpb.2019.105101] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/11/2019] [Revised: 09/04/2019] [Accepted: 09/27/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Accurate assessment of geographic atrophy (GA) is critical for diagnosis and therapy of non-exudative age-related macular degeneration (AMD). Herein, we propose a novel GA segmentation framework for spectral-domain optical coherence tomography (SD-OCT) images that employs synthesized fundus autofluorescence (FAF) images. METHODS An en-face OCT image is created via the restricted sub-volume projection of three-dimensional OCT data. A GA region-aware conditional generative adversarial network is employed to generate a plausible FAF image from the en-face OCT image. The network balances the consistency between the entire synthesized FAF image and the lesion. We use a fully convolutional deep network architecture to segment the GA region using the multimodal images, where the features of the en-face OCT and synthesized FAF images are fused at the front end of the network. RESULTS Experimental results for 56 SD-OCT scans with GA indicate that our synthesis algorithm can generate high-quality synthesized FAF images and that the proposed segmentation network achieves a Dice similarity coefficient, an overlap ratio, and an absolute area difference of 87.2%, 77.9%, and 11.0%, respectively. CONCLUSION We report an automatic GA segmentation method utilizing synthesized FAF images. SIGNIFICANCE Our method is effective for multimodal segmentation of the GA region and can improve AMD treatment.
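The en-face image generation step can be sketched simply: intensities are averaged over a restricted axial sub-volume for every A-scan, collapsing the volume to a 2D image. The slab bounds, shapes, and names below are illustrative assumptions rather than the paper's exact projection rule:

```python
import numpy as np

def enface_projection(volume: np.ndarray, z_top: int, z_bottom: int) -> np.ndarray:
    """Mean projection of an OCT volume over a restricted axial slab.

    volume: (n_bscans, depth, width); z_top/z_bottom bound the slab along depth.
    Returns an (n_bscans, width) en-face image.
    """
    return volume[:, z_top:z_bottom, :].mean(axis=1)

oct_volume = np.random.default_rng(6).uniform(size=(128, 496, 512))
enface = enface_projection(oct_volume, z_top=300, z_bottom=360)
print(enface.shape)  # (128, 512)
```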
Collapse
Affiliation(s)
- Menglin Wu
- School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
| | - Xinxin Cai
- School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
| | - Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
| | - Zexuan Ji
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
| | - Sijie Niu
- School of Information Science and Engineering, University of Jinan, Jinan, China
| | - Theodore Leng
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, CA, USA
| | - Daniel L Rubin
- Department of Radiology and Medicine (Biomedical Informatics Research) and Ophthalmology, Stanford University School of Medicine, Stanford, CA 94305, USA
| | - Hyunjin Park
- School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, South Korea; Center for Neuroscience Imaging Research, Institute of Basic Science, Suwon, South Korea.
| |
Collapse
|
33
|
Hamwood J, Alonso-Caneiro D, Sampson DM, Collins MJ, Chen FK. Automatic Detection of Cone Photoreceptors With Fully Convolutional Networks. Transl Vis Sci Technol 2019; 8:10. [PMID: 31737434 PMCID: PMC6855369 DOI: 10.1167/tvst.8.6.10] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Accepted: 09/10/2019] [Indexed: 11/30/2022] Open
Abstract
PURPOSE To develop a fully automatic method, based on deep learning algorithms, for determining the locations of cone photoreceptors within adaptive optics scanning laser ophthalmoscope images and to evaluate its performance against a dataset of manually segmented images. METHODS A fully convolutional network (FCN) based on the U-Net architecture was used to generate prediction probability maps, and a localization algorithm was then used to reduce each prediction map to a collection of points. The proposed method was trained and tested on two publicly available datasets of different imaging modalities, with Dice overlap, false discovery rate, and true positive rate reported to assess performance. RESULTS The proposed method achieves a Dice coefficient of 0.989, true positive rate of 0.987, and false discovery rate of 0.009 on the first confocal dataset; and a Dice coefficient of 0.926, true positive rate of 0.909, and false discovery rate of 0.051 on the second split detector dataset. Results compare favorably with those of a previously proposed method, while this method evaluates images considerably faster (about 25 times). CONCLUSIONS The proposed FCN-based method demonstrates that deep learning algorithms can achieve accurate cone localizations, almost comparable to those of a human expert labeling the images. TRANSLATIONAL RELEVANCE Manual cone photoreceptor identification is a time-consuming task due to the large number of cones present within a single image; using the proposed FCN-based method could support the image analysis task, drastically reducing the need for manual assessment of the photoreceptor mosaic.
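The localization step that turns a probability map into cone coordinates can be approximated by thresholding the map and taking the centroid of each connected blob; the paper's actual localization algorithm may differ, and the threshold below is an illustrative choice:

```python
import numpy as np
from scipy import ndimage

def probability_map_to_points(prob_map: np.ndarray, threshold: float = 0.5):
    """Convert an FCN probability map into a list of (row, col) cone locations."""
    binary = prob_map > threshold
    labels, n = ndimage.label(binary)
    return ndimage.center_of_mass(prob_map, labels, index=range(1, n + 1))

# Toy probability map with two bright spots standing in for cones.
prob = np.zeros((32, 32)); prob[5:8, 5:8] = 0.9; prob[20:23, 25:28] = 0.8
print(probability_map_to_points(prob))  # approximately [(6.0, 6.0), (21.0, 26.0)]
```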
Collapse
Affiliation(s)
- Jared Hamwood
- School of Optometry & Vision Science, Queensland University of Technology, Queensland, Australia
| | - David Alonso-Caneiro
- School of Optometry & Vision Science, Queensland University of Technology, Queensland, Australia
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Perth, Western Australia, Australia
| | - Danuta M. Sampson
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Perth, Western Australia, Australia
- Surrey Biophotonics, Centre for Vision, Speech and Signal Processing and School of Biosciences and Medicine, The University of Surrey, Guildford, UK
| | - Michael J. Collins
- School of Optometry & Vision Science, Queensland University of Technology, Queensland, Australia
| | - Fred K. Chen
- Centre for Ophthalmology and Visual Science (incorporating Lions Eye Institute), The University of Western Australia, Perth, Western Australia, Australia
- Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia
| |
Collapse
|
34
|
Expert-level Automated Biomarker Identification in Optical Coherence Tomography Scans. Sci Rep 2019; 9:13605. [PMID: 31537854 PMCID: PMC6753124 DOI: 10.1038/s41598-019-49740-7] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2019] [Accepted: 08/29/2019] [Indexed: 12/20/2022] Open
Abstract
In ophthalmology, retinal biological markers, or biomarkers, play a critical role in the management of chronic eye conditions and in the development of new therapeutics. While many imaging technologies used today can visualize these biomarkers, Optical Coherence Tomography (OCT) is often the tool of choice due to its ability to image retinal structures in three dimensions at micrometer resolution. However, with widespread use in clinical routine and the growing prevalence of chronic retinal conditions, the quantity of scans acquired worldwide is surpassing the capacity of retinal specialists to inspect them in meaningful ways. Instead, automated analysis of scans using machine learning algorithms provides a cost-effective and reliable alternative to assist ophthalmologists in clinical routine and research. We present a machine learning method capable of consistently identifying a wide range of common retinal biomarkers from OCT scans. Our approach avoids the need for costly segmentation annotations and allows scans to be characterized by biomarker distributions. These can then be used to classify scans based on their underlying pathology in a device-independent way.
Collapse
|
35
|
[Deep learning and neural networks in ophthalmology: Applications in the field of optical coherence tomography]. Ophthalmologe 2019; 115:714-721. [PMID: 29675699 DOI: 10.1007/s00347-018-0706-0] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
Abstract
Deep learning is increasingly becoming a focus of various imaging methods in medicine. Owing to its large number of different imaging modalities, ophthalmology is particularly well suited to this field of application. This article gives a general overview of deep learning and its current applications in the field of optical coherence tomography. For the benefit of the reader, it focuses on clinical rather than technical aspects.
Collapse
|
36
|
George N, Jiji C. Two stage contour evolution for automatic segmentation of choroid and cornea in OCT images. Biocybern Biomed Eng 2019. [DOI: 10.1016/j.bbe.2019.05.012] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
37
|
Ervin AM, Strauss RW, Ahmed MI, Birch D, Cheetham J, Ferris FL, Ip MS, Jaffe GJ, Maguire MG, Schönbach EM, Sadda SR, West SK, Scholl HP, for the ProgStar Study Group. A Workshop on Measuring the Progression of Atrophy Secondary to Stargardt Disease in the ProgStar Studies: Findings and Lessons Learned. Transl Vis Sci Technol 2019; 8:16. [PMID: 31019847 PMCID: PMC6469878 DOI: 10.1167/tvst.8.2.16] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Accepted: 02/12/2019] [Indexed: 11/24/2022] Open
Abstract
The Progression of Atrophy Secondary to Stargardt Disease (ProgStar) studies were designed to measure the progression of Stargardt disease through the use of fundus autofluorescence imaging, optical coherence tomography, and microperimetry. The overarching objectives of the studies were to document the natural course of Stargardt disease and identify the most appropriate clinical outcome measures for clinical trials assessing the efficacy and safety of upcoming treatments for Stargardt disease. A workshop organized by the Foundation Fighting Blindness Clinical Research Institute was held on June 11, 2018, in Baltimore, MD, USA. Invited speakers discussed spectral-domain optical coherence tomography, fundus autofluorescence, and microperimetry methods and findings in the ProgStar prospective study. The workshop concluded with a panel discussion of optimal endpoints for measuring treatment efficacy in Stargardt disease. We summarize the workshop presentations in light of the most current literature on Stargardt disease and discuss potential clinical outcome measures and endpoints for future treatment trials.
Collapse
Affiliation(s)
- Ann-Margret Ervin
- Wilmer Eye Institute, The Johns Hopkins School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA
| | - Rupert W. Strauss
- Wilmer Eye Institute, The Johns Hopkins School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Moorfields Eye Hospital NHS Foundation Trust, and UCL Institute of Ophthalmology, University College London, London, UK
- Department of Ophthalmology, Kepler University Clinic, Linz, Austria
- Department of Ophthalmology, Medical University Graz, Graz, Austria
| | - Mohamed I. Ahmed
- Wilmer Eye Institute, The Johns Hopkins School of Medicine, Johns Hopkins University, Baltimore, MD, USA
| | - David Birch
- Retina Foundation of the Southwest, Dallas, TX, USA
| | - Janet Cheetham
- Foundation Fighting Blindness Clinical Research Institute, Columbia, MD, USA
| | | | - Michael S. Ip
- Doheny Imaging Reading Center, Doheny Eye Institute, David Geffen School of Medicine at University of California Los Angeles, CA, USA
| | - Glenn J. Jaffe
- Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
| | - Maureen G. Maguire
- Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Etienne M. Schönbach
- Wilmer Eye Institute, The Johns Hopkins School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Case Western Reserve University, Cleveland, OH, USA
| | - SriniVas R. Sadda
- Doheny Imaging Reading Center, Doheny Eye Institute, David Geffen School of Medicine at University of California Los Angeles, CA, USA
| | - Sheila K. West
- Wilmer Eye Institute, The Johns Hopkins School of Medicine, Johns Hopkins University, Baltimore, MD, USA
| | - Hendrik P.N. Scholl
- Wilmer Eye Institute, The Johns Hopkins School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Department of Ophthalmology, University of Basel, Basel, Switzerland
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
| | - for the ProgStar Study Group
- Wilmer Eye Institute, The Johns Hopkins School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA
- Moorfields Eye Hospital NHS Foundation Trust, and UCL Institute of Ophthalmology, University College London, London, UK
- Department of Ophthalmology, Kepler University Clinic, Linz, Austria
- Department of Ophthalmology, Medical University Graz, Graz, Austria
- Retina Foundation of the Southwest, Dallas, TX, USA
- Foundation Fighting Blindness Clinical Research Institute, Columbia, MD, USA
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Doheny Imaging Reading Center, Doheny Eye Institute, David Geffen School of Medicine at University of California Los Angeles, CA, USA
- Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Case Western Reserve University, Cleveland, OH, USA
- Department of Ophthalmology, University of Basel, Basel, Switzerland
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
| |
Collapse
|
38
|
Automated geographic atrophy segmentation for SD-OCT images based on two-stage learning model. Comput Biol Med 2019; 105:102-111. [DOI: 10.1016/j.compbiomed.2018.12.013] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2018] [Revised: 12/27/2018] [Accepted: 12/27/2018] [Indexed: 01/19/2023]
|
39
|
Abdolmanafi A, Duong L, Dahdah N, Adib IR, Cheriet F. Characterization of coronary artery pathological formations from OCT imaging using deep learning. BIOMEDICAL OPTICS EXPRESS 2018; 9:4936-4960. [PMID: 30319913 PMCID: PMC6179392 DOI: 10.1364/boe.9.004936] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2018] [Revised: 09/13/2018] [Accepted: 09/14/2018] [Indexed: 05/18/2023]
Abstract
Coronary artery disease is the number one health hazard leading to pathological formations in coronary artery tissue. In severe cases, these can lead to myocardial infarction and sudden death. Optical Coherence Tomography (OCT) is an interferometric imaging modality that has recently been used in cardiology to characterize coronary artery tissue, providing high resolution in the range of 10 to 20 µm. In this study, we investigate different deep learning models for robust tissue characterization to learn the various intracoronary pathological formations caused by Kawasaki disease (KD) from OCT imaging. The experiments are performed on 33 retrospective cases comprising pullbacks of intracoronary cross-sectional images obtained from different pediatric patients with KD. Our approach evaluates deep features computed from three different pre-trained convolutional networks. Then, a majority voting approach is applied to provide the final classification result. The results demonstrate high values of accuracy, sensitivity, and specificity for each tissue (up to 0.99 ± 0.01). Hence, deep learning models, and especially the majority voting method, are robust tools for the automatic interpretation of OCT images.
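The majority voting step over the three per-network predictions can be written in a few lines; the class labels and arrays below are invented for illustration and are not the study's tissue taxonomy:

```python
import numpy as np

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """Return, for each sample, the class predicted by most of the classifiers.

    predictions: (n_classifiers, n_samples) integer class labels.
    """
    n_classes = predictions.max() + 1
    votes = np.apply_along_axis(np.bincount, 0, predictions, minlength=n_classes)
    return votes.argmax(axis=0)

# Hypothetical tissue labels (0, 1, 2) from three networks for four image regions.
per_network = np.array([
    [0, 1, 2, 2],
    [0, 1, 1, 2],
    [1, 1, 2, 0],
])
print(majority_vote(per_network))  # [0 1 2 2]
```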
Collapse
Affiliation(s)
- Atefeh Abdolmanafi
- Dept. of Software and IT Engineering, École de technologie supérieure, Montréal, Canada
| | - Luc Duong
- Dept. of Software and IT Engineering, École de technologie supérieure, Montréal, Canada
| | - Nagib Dahdah
- Div. of Pediatric Cardiology and Research Center, Centre Hospitalier Universitaire Sainte-Justine, Montréal, Canada
| | | - Farida Cheriet
- Dept. of Computer Engineering, École Polytechnique de Montréal, Montréal, Canada
| |
Collapse
|
40
|
Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, Waldstein SM, Bogunović H. Artificial intelligence in retina. Prog Retin Eye Res 2018; 67:1-29. [PMID: 30076935 DOI: 10.1016/j.preteyeres.2018.07.004] [Citation(s) in RCA: 421] [Impact Index Per Article: 60.1] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2018] [Revised: 07/24/2018] [Accepted: 07/31/2018] [Indexed: 02/08/2023]
Abstract
Major advances in diagnostic technologies are offering unprecedented insight into the condition of the retina and beyond ocular disease. Digital images providing millions of morphological datasets can be analyzed quickly, non-invasively, and comprehensively using artificial intelligence (AI). Methods based on machine learning (ML) and particularly deep learning (DL) are able to identify, localize, and quantify pathological features in almost every macular and retinal disease. Convolutional neural networks thereby mimic the way the human brain recognizes objects, either by learning pathological features from training sets (supervised ML) or by extrapolating from patterns recognized independently (unsupervised ML). The methods of AI-based retinal analysis are diverse and differ widely in their applicability, interpretability, and reliability across datasets and diseases. Fully automated AI-based systems have recently been approved for screening of diabetic retinopathy (DR). The overall potential of ML/DL includes screening, diagnostic grading, and guidance of therapy, with automated detection of disease activity and recurrences, quantification of therapeutic effects, and identification of relevant targets for novel therapeutic approaches. Prediction and prognostic conclusions further expand the potential benefit of AI in the retina, enabling personalized health care as well as large-scale management and empowering the ophthalmologist to provide high-quality diagnosis and therapy and to deal successfully with the complexity of 21st-century ophthalmology.
Collapse
Affiliation(s)
- Ursula Schmidt-Erfurth
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
| | - Amir Sadeghipour
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
| | - Bianca S Gerendas
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
| | - Sebastian M Waldstein
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
| | - Hrvoje Bogunović
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
| |
Collapse
|
41
|
Venhuizen FG, van Ginneken B, Liefers B, van Asten F, Schreur V, Fauser S, Hoyng C, Theelen T, Sánchez CI. Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography. BIOMEDICAL OPTICS EXPRESS 2018; 9:1545-1569. [PMID: 29675301 PMCID: PMC5905905 DOI: 10.1364/boe.9.001545] [Citation(s) in RCA: 85] [Impact Index Per Article: 12.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/15/2017] [Revised: 01/13/2018] [Accepted: 01/31/2018] [Indexed: 05/18/2023]
Abstract
We developed a deep learning algorithm for the automatic segmentation and quantification of intraretinal cystoid fluid (IRC) in spectral domain optical coherence tomography (SD-OCT) volumes independent of the device used for acquisition. A cascade of neural networks was introduced to include prior information on the retinal anatomy, boosting performance significantly. The proposed algorithm approached human performance reaching an overall Dice coefficient of 0.754 ± 0.136 and an intraclass correlation coefficient of 0.936, for the task of IRC segmentation and quantification, respectively. The proposed method allows for fast quantitative IRC volume measurements that can be used to improve patient care, reduce costs, and allow fast and reliable analysis in large population studies.
Collapse
Affiliation(s)
- Freerk G. Venhuizen
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Bart Liefers
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Freekje van Asten
- Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Vivian Schreur
- Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Sascha Fauser
- Roche Pharma Research and Early Development, F. Hoffmann-La Roche Ltd, Basel, Switzerland
- Cologne University Eye Clinic, Cologne, Germany
| | - Carel Hoyng
- Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Thomas Theelen
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Clara I. Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
| |
Collapse
|