1

Li F, Pan W, Xiang W, Zou H. Automatic segmentation of multitype retinal fluid from optical coherence tomography images using semisupervised deep learning network. Br J Ophthalmol 2023; 107:1350-1355. [PMID: 35697498] [DOI: 10.1136/bjophthalmol-2022-321348]
Abstract
BACKGROUND/AIMS To develop and validate a deep learning model for automated segmentation of multitype retinal fluid using optical coherence tomography (OCT) images. METHODS We retrospectively collected a total of 2814 completely anonymised OCT images with subretinal fluid (SRF) and intraretinal fluid (IRF) from 141 patients between July 2018 and June 2020, constituting our in-house retinal OCT dataset. On this dataset, we developed a novel semisupervised retinal fluid segmentation deep network (Ref-Net) to automatically identify SRF and IRF in a coarse-to-refine fashion. We performed quantitative and qualitative analyses of the model's performance and verified its generalisation ability by using our in-house retinal OCT dataset for training and the unseen Kermany dataset for testing. We also determined the importance of the major components of the semisupervised Ref-Net through extensive ablation. The main outcome measures were Dice similarity coefficient (Dice), sensitivity (Sen), specificity (Spe) and mean absolute error (MAE). RESULTS Our model, trained on only a handful of labelled OCT images, achieved higher performance (Dice: 81.2%, Sen: 87.3%, Spe: 98.8% and MAE: 1.1% for SRF; Dice: 78.0%, Sen: 83.6%, Spe: 99.3% and MAE: 0.5% for IRF) than most cutting-edge segmentation models. It obtained expert-level performance with only 80 labelled OCT images and even exceeded two out of three ophthalmologists with 160 labelled OCT images. Its satisfactory generalisation capability across an unseen dataset was also demonstrated. CONCLUSION The semisupervised Ref-Net required only a few labelled OCT images to achieve outstanding performance in automated segmentation of multitype retinal fluid, giving it the potential to assist clinicians in the management of ocular disease.
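For reference, the outcome measures reported in this abstract (Dice, sensitivity, specificity) can be computed from binary segmentation masks as in the following NumPy sketch; the function names are ours, not the paper's, and the empty-mask convention is an assumption:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

def sensitivity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of true fluid pixels recovered (TP / (TP + FN))."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn) if (tp + fn) else 1.0

def specificity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of background pixels correctly rejected (TN / (TN + FP))."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tn / (tn + fp) if (tn + fp) else 1.0
```

Applied per image and averaged over a test set, these are the quantities behind figures such as "Dice: 81.2%, Sen: 87.3%, Spe: 98.8%".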
Affiliation(s)
- Feng Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- WenZhe Pan
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Wenjie Xiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Haidong Zou
- Shanghai Eye Disease Prevention and Treatment Center, Shanghai, China
- Shanghai General Hospital, Shanghai, China
2

Zhao PY, Bommakanti N, Yu G, Aaberg MT, Patel TP, Paulus YM. Deep learning for automated detection of neovascular leakage on ultra-widefield fluorescein angiography in diabetic retinopathy. Sci Rep 2023; 13:9165. [PMID: 37280345] [DOI: 10.1038/s41598-023-36327-6]
Abstract
Diabetic retinopathy is a leading cause of blindness in working-age adults worldwide. Neovascular leakage on fluorescein angiography indicates progression to the proliferative stage of diabetic retinopathy, which is an important distinction that requires timely ophthalmic intervention with laser or intravitreal injection treatment to reduce the risk of severe, permanent vision loss. In this study, we developed a deep learning algorithm to detect neovascular leakage on ultra-widefield fluorescein angiography images obtained from patients with diabetic retinopathy. The algorithm, an ensemble of three convolutional neural networks, was able to accurately classify neovascular leakage and distinguish this disease marker from other angiographic disease features. With additional real-world validation and testing, our algorithm could facilitate identification of neovascular leakage in the clinical setting, allowing timely intervention to reduce the burden of blinding diabetic eye disease.
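The abstract describes an ensemble of three convolutional neural networks but does not state how their outputs are combined; a common choice, shown here as a hedged NumPy sketch (not necessarily the authors' method), is to average the per-model predicted probabilities and threshold the mean:

```python
import numpy as np

def ensemble_predict(prob_maps, threshold=0.5):
    """Combine per-model leakage probabilities by simple averaging.

    prob_maps: list of arrays of per-image probabilities, one per model.
    Returns (binary labels, mean probability).
    """
    mean_prob = np.mean(np.stack(prob_maps), axis=0)
    labels = (mean_prob >= threshold).astype(int)
    return labels, mean_prob
```

With three models scoring two images as [0.9, 0.2], [0.6, 0.2], and [0.3, 0.2], the mean probabilities are [0.6, 0.2], so only the first image is flagged as neovascular leakage.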
Affiliation(s)
- Peter Y Zhao
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Nikhil Bommakanti
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Gina Yu
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Michael T Aaberg
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Tapan P Patel
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
- Yannis M Paulus
- Department of Ophthalmology and Visual Sciences, W.K. Kellogg Eye Center, University of Michigan, 1000 Wall Street, Ann Arbor, MI, 48105, USA
3

Wu Y, Olvera-Barrios A, Yanagihara R, Kung TPH, Lu R, Leung I, Mishra AV, Nussinovitch H, Grimaldi G, Blazes M, Lee CS, Egan C, Tufail A, Lee AY. Training Deep Learning Models to Work on Multiple Devices by Cross-Domain Learning with No Additional Annotations. Ophthalmology 2023; 130:213-222. [PMID: 36154868] [PMCID: PMC9868052] [DOI: 10.1016/j.ophtha.2022.09.014]
Abstract
PURPOSE To create an unsupervised cross-domain segmentation algorithm for segmenting intraretinal fluid and retinal layers on normal and pathologic macular OCT images from different manufacturers and camera devices. DESIGN We sought to use generative adversarial networks (GANs) to generalize a segmentation model trained on one OCT device to segment B-scans obtained from a different OCT device manufacturer in a fully unsupervised approach, without labeled data from the latter manufacturer. PARTICIPANTS A total of 732 OCT B-scans from 4 different OCT devices (Heidelberg Spectralis, Topcon 1000, Maestro2, and Zeiss Plex Elite 9000). METHODS We developed an unsupervised GAN model, GANSeg, to segment 7 retinal layers and intraretinal fluid in Topcon 1000 OCT images (domain B) with access only to labeled data on Heidelberg Spectralis images (domain A). GANSeg was unsupervised because it had access only to 110 labeled Heidelberg OCTs and 556 raw, unlabeled Topcon 1000 OCTs. To validate GANSeg segmentations, 3 masked graders independently manually segmented 60 OCTs from an external Topcon 1000 test dataset. To test the limits of GANSeg, graders also manually segmented 3 OCTs from the Zeiss Plex Elite 9000 and Topcon Maestro2. A U-Net trained on the same labeled Heidelberg images served as a baseline. The GANSeg repository with labeled annotations is at https://github.com/uw-biomedical-ml/ganseg. MAIN OUTCOME MEASURES Dice scores comparing segmentation results from GANSeg and the U-Net model with the manually segmented images. RESULTS Although GANSeg and the U-Net achieved Dice scores comparable with those of human experts on the labeled Heidelberg test dataset, only GANSeg maintained comparable performance on the Topcon 1000 test dataset, with the best Dice score for the ganglion cell layer plus inner plexiform layer (90%; 95% confidence interval [CI], 68%-96%) and the worst for intraretinal fluid (58%; 95% CI, 18%-89%), which was statistically similar to that of human graders (79%; 95% CI, 43%-94%). GANSeg significantly outperformed the U-Net model. Moreover, GANSeg generalized to both the Zeiss and Topcon Maestro2 swept-source OCT domains, which it had never encountered before. CONCLUSIONS GANSeg enables the transfer of supervised deep learning algorithms across OCT devices without labeled data, thereby greatly expanding the applicability of deep learning algorithms.
Affiliation(s)
- Yue Wu
- Department of Ophthalmology, University of Washington, Seattle, Washington
- Abraham Olvera-Barrios
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
- Ryan Yanagihara
- Department of Ophthalmology, University of Washington, Seattle, Washington
- Randy Lu
- Department of Ophthalmology, University of Washington, Seattle, Washington
- Irene Leung
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Amit V Mishra
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Gabriela Grimaldi
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Marian Blazes
- Department of Ophthalmology, University of Washington, Seattle, Washington
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington; Roger and Angie Karalis Johnson Retina Center, Seattle, Washington
- Catherine Egan
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington; Roger and Angie Karalis Johnson Retina Center, Seattle, Washington
4

Halfpenny W, Baxter SL. Towards effective data sharing in ophthalmology: data standardization and data privacy. Curr Opin Ophthalmol 2022; 33:418-424. [PMID: 35819893] [PMCID: PMC9357189] [DOI: 10.1097/icu.0000000000000878]
Abstract
PURPOSE OF REVIEW The purpose of this review is to provide an overview of updates in data standardization and data privacy in ophthalmology. These topics represent two key aspects of medical information sharing and are important knowledge areas given trends in data-driven healthcare. RECENT FINDINGS Standardization and privacy can be seen as complementary aspects of data sharing. Standardization promotes the ease and efficacy with which data are shared. Privacy considerations ensure that data sharing is appropriate and sufficiently controlled. There is active development in both areas, including government regulations and common data models to advance standardization, and application of technologies such as blockchain and synthetic data to help tackle privacy issues. These advancements have seen use in ophthalmology, but there are areas where further work is required. SUMMARY Information sharing is fundamental to both research and care delivery, and standardization and privacy are key constituent considerations. Therefore, widespread engagement with, and development of, data standardization and privacy ecosystems stands to offer great benefit to ophthalmology.
Affiliation(s)
- Sally L. Baxter
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Health Department of Biomedical Informatics, University of California San Diego, La Jolla, CA, USA
5

Yaghy A, Lee AY, Keane PA, Keenan TDL, Mendonca LSM, Lee CS, Cairns AM, Carroll J, Chen H, Clark J, Cukras CA, de Sisternes L, Domalpally A, Durbin MK, Goetz KE, Grassmann F, Haines JL, Honda N, Hu ZJ, Mody C, Orozco LD, Owsley C, Poor S, Reisman C, Ribeiro R, Sadda SR, Sivaprasad S, Staurenghi G, Ting DS, Tumminia SJ, Zalunardo L, Waheed NK. Artificial intelligence-based strategies to identify patient populations and advance analysis in age-related macular degeneration clinical trials. Exp Eye Res 2022; 220:109092. [PMID: 35525297] [PMCID: PMC9405680] [DOI: 10.1016/j.exer.2022.109092]
Affiliation(s)
- Antonio Yaghy
- New England Eye Center, Tufts University Medical Center, Boston, MA, USA
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA; Karalis Johnson Retina Center, Seattle, WA, USA
- Pearse A Keane
- Moorfields Eye Hospital & UCL Institute of Ophthalmology, London, UK
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA; Karalis Johnson Retina Center, Seattle, WA, USA
- Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, 925 N 87th Street, Milwaukee, WI, 53226, USA
- Hao Chen
- Genentech, South San Francisco, CA, USA
- Catherine A Cukras
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Amitha Domalpally
- Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
- Kerry E Goetz
- Office of the Director, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Jonathan L Haines
- Department of Population and Quantitative Health Sciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA; Cleveland Institute of Computational Biology, Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Zhihong Jewel Hu
- Doheny Eye Institute, University of California, Los Angeles, CA, USA
- Luz D Orozco
- Department of Bioinformatics, Genentech, South San Francisco, CA, 94080, USA
- Cynthia Owsley
- Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA
- Stephen Poor
- Department of Ophthalmology, Novartis Institutes for Biomedical Research, Cambridge, MA, USA
- Srinivas R Sadda
- Doheny Eye Institute, David Geffen School of Medicine, University of California-Los Angeles, Los Angeles, CA, USA
- Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
- Giovanni Staurenghi
- Department of Biomedical and Clinical Sciences Luigi Sacco, Luigi Sacco Hospital, University of Milan, Italy
- Daniel SW Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore
- Santa J Tumminia
- Office of the Director, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Nadia K Waheed
- New England Eye Center, Tufts University Medical Center, Boston, MA, USA
6

Liu TYA, Wu JH. The Ethical and Societal Considerations for the Rise of Artificial Intelligence and Big Data in Ophthalmology. Front Med (Lausanne) 2022; 9:845522. [PMID: 35836952] [PMCID: PMC9273876] [DOI: 10.3389/fmed.2022.845522]
Abstract
Medical specialties with access to a large amount of imaging data, such as ophthalmology, have been at the forefront of the artificial intelligence (AI) revolution in medicine, driven by deep learning (DL) and big data. With the rise of AI and big data, there has also been increasing concern about the issues of bias and privacy, which can be partially addressed by low-shot learning, generative DL, federated learning and a "model-to-data" approach, as demonstrated by various groups of investigators in ophthalmology. However, to adequately tackle the ethical and societal challenges associated with the rise of AI in ophthalmology, a more comprehensive approach is preferable. Specifically, AI should be viewed as sociotechnical, meaning that this technology shapes, and is shaped by, social phenomena.
Affiliation(s)
- T. Y. Alvin Liu
- Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, United States
- Jo-Hsuan Wu
- Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, CA, United States
7

Dow ER, Keenan TDL, Lad EM, Lee AY, Lee CS, Loewenstein A, Eydelman MB, Chew EY, Keane PA, Lim JI. From Data to Deployment: The Collaborative Community on Ophthalmic Imaging Roadmap for Artificial Intelligence in Age-Related Macular Degeneration. Ophthalmology 2022; 129:e43-e59. [PMID: 35016892] [PMCID: PMC9859710] [DOI: 10.1016/j.ophtha.2022.01.002]
Abstract
OBJECTIVE Health care systems worldwide are challenged to provide adequate care for the 200 million individuals with age-related macular degeneration (AMD). Artificial intelligence (AI) has the potential to make a significant, positive impact on the diagnosis and management of patients with AMD; however, the development of effective AI devices for clinical care faces numerous considerations and challenges, a fact evidenced by the current absence of Food and Drug Administration (FDA)-approved AI devices for AMD. PURPOSE To delineate the state of AI for AMD, including current data, standards, achievements, and challenges. METHODS Members of the Collaborative Community on Ophthalmic Imaging Working Group for AI in AMD attended an inaugural meeting on September 7, 2020, to discuss the topic. Subsequently, they undertook a comprehensive review of the medical literature relevant to the topic. Members engaged in meetings and discussion through December 2021 to synthesize the information and arrive at a consensus. RESULTS Existing infrastructure for robust AI development for AMD includes several large, labeled datasets of color fundus photography and OCT images; however, image data often do not contain the metadata necessary for the development of reliable, valid, and generalizable models. Data sharing for AMD model development is made difficult by restrictions on data privacy and security, although potential solutions are under investigation. Computing resources may be adequate for current applications, but knowledge of machine learning development may be scarce in many clinical ophthalmology settings. Despite these challenges, researchers have produced promising AI models for AMD screening, diagnosis, prediction, and monitoring. Future goals include defining benchmarks to facilitate regulatory authorization and subsequent generalization to clinical settings. CONCLUSIONS Delivering an FDA-authorized, AI-based device for clinical care in AMD involves numerous considerations, including the identification of an appropriate clinical application; acquisition and development of a large, high-quality dataset; development of the AI architecture; training and validation of the model; and functional interactions between the model output and the clinical end user. The research efforts undertaken to date represent starting points for the medical devices that eventually will benefit providers, health care systems, and patients.
Affiliation(s)
- Eliot R Dow
- Byers Eye Institute, Stanford University, Palo Alto, California
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Eleonora M Lad
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
- Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Medical Center, Tel Aviv, Israel
- Malvina B Eydelman
- Office of Health Technology 1, Center for Devices and Radiological Health, Food and Drug Administration, Silver Spring, Maryland
- Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Pearse A Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Jennifer I Lim
- Department of Ophthalmology, University of Illinois at Chicago, Chicago, Illinois
8

Lim JS, Hong M, Lam WST, Zhang Z, Teo ZL, Liu Y, Ng WY, Foo LL, Ting DSW. Novel technical and privacy-preserving technology for artificial intelligence in ophthalmology. Curr Opin Ophthalmol 2022; 33:174-187. [PMID: 35266894] [DOI: 10.1097/icu.0000000000000846]
Abstract
PURPOSE OF REVIEW The application of artificial intelligence (AI) in medicine and ophthalmology has experienced exponential breakthroughs in recent years in diagnosis, prognosis, and aiding clinical decision-making. The use of digital data has also heralded the need for privacy-preserving technology to protect patient confidentiality and to guard against threats such as adversarial attacks. Hence, this review aims to outline novel AI-based systems for ophthalmology use, privacy-preserving measures, potential challenges, and the future directions of each. RECENT FINDINGS Several key AI algorithms used to improve disease detection and outcomes include data-driven, image-driven, natural language processing (NLP)-driven, genomics-driven, and multimodality algorithms. However, deep learning systems are susceptible to adversarial attacks, and the use of data for training models is associated with privacy concerns. Several data protection methods address these concerns in the form of blockchain technology, federated learning, and generative adversarial networks. SUMMARY AI applications have vast potential to meet many eye-care needs, consequently reducing the burden on scarce healthcare resources. A pertinent challenge would be to maintain data privacy and confidentiality while supporting AI endeavors, where data protection methods would need to evolve rapidly with AI technology needs. Ultimately, for AI to succeed in medicine and ophthalmology, a balance would need to be found between innovation and privacy.
Affiliation(s)
- Jane S Lim
- Singapore National Eye Centre, Singapore Eye Research Institute
- Walter S T Lam
- Yong Loo Lin School of Medicine, National University of Singapore
- Zheting Zhang
- Lee Kong Chian School of Medicine, Nanyang Technological University
- Zhen Ling Teo
- Singapore National Eye Centre, Singapore Eye Research Institute
- Yong Liu
- National University of Singapore, Duke-NUS Medical School, Singapore
- Wei Yan Ng
- Singapore National Eye Centre, Singapore Eye Research Institute
- Li Lian Foo
- Singapore National Eye Centre, Singapore Eye Research Institute
- Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute
9

Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2021; 90:101034. [PMID: 34902546] [DOI: 10.1016/j.preteyeres.2021.101034]
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving performance close or even superior to that of experts, there is a critical gap between the development and the integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI in closing that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not the responsibility of a sole stakeholder. There is an impending necessity for a collaborative approach where the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establishing such multi-stakeholder interaction and the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
10

Federated Learning for Microvasculature Segmentation and Diabetic Retinopathy Classification of OCT Data. Ophthalmology Science 2021; 1:100069. [PMID: 36246944] [PMCID: PMC9559956] [DOI: 10.1016/j.xops.2021.100069]
Abstract
Purpose To evaluate the performance of a federated learning framework for deep neural network-based retinal microvasculature segmentation and referable diabetic retinopathy (RDR) classification using OCT and OCT angiography (OCTA). Design Retrospective analysis of clinical OCT and OCTA scans of control participants and patients with diabetes. Participants The 153 OCTA en face images used for microvasculature segmentation were acquired from 4 OCT instruments with fields of view ranging from 2 × 2-mm to 6 × 6-mm. The 700 eyes used for RDR classification consisted of OCTA en face images and structural OCT projections acquired from 2 commercial OCT systems. Methods OCT angiography images used for microvasculature segmentation were delineated manually and verified by retina experts. Diabetic retinopathy (DR) severity was evaluated by retinal specialists and was condensed into 2 classes: non-RDR and RDR. The federated learning configuration was demonstrated via simulation using 4 clients for microvasculature segmentation and was compared with other collaborative training methods. Subsequently, federated learning was applied over multiple institutions for RDR classification and was compared with models trained and tested on data from the same institution (internal models) and different institutions (external models). Main Outcome Measures For microvasculature segmentation, we measured the accuracy and Dice similarity coefficient (DSC). For severity classification, we measured accuracy, area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve, balanced accuracy, F1 score, sensitivity, and specificity. Results For both applications, federated learning achieved performance similar to that of the internal models. Specifically, for microvasculature segmentation, the federated learning model achieved similar performance (mean DSC across all test sets, 0.793) to models trained on a fully centralized dataset (mean DSC, 0.807). For RDR classification, federated learning achieved mean AUROCs of 0.954 and 0.960; the internal models attained mean AUROCs of 0.956 and 0.973. Similar results are reflected in the other calculated evaluation metrics. Conclusions Federated learning showed results similar to traditional deep learning in both segmentation and classification while maintaining data privacy. The evaluation metrics highlight the potential of collaborative learning for increasing domain diversity and the generalizability of models used for the classification of OCT data.
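The abstract does not specify the aggregation rule used across clients; the canonical choice for this kind of federated configuration is federated averaging (FedAvg), sketched below in NumPy under that assumption (the function name is ours). Each client trains locally, and the server combines parameters weighted by local dataset size, so no raw images leave an institution:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: size-weighted average of per-client parameter lists.

    client_weights: one list of numpy arrays per client
                    (one array per model layer/parameter tensor).
    client_sizes:   number of local training samples per client.
    """
    total = float(sum(client_sizes))
    averaged = []
    for layer_group in zip(*client_weights):  # same layer across clients
        weighted = sum(w * (n / total) for w, n in zip(layer_group, client_sizes))
        averaged.append(weighted)
    return averaged
```

For example, with two clients holding 1 and 3 samples and single-layer weights [1.0, 2.0] and [3.0, 6.0], the server-side average is 0.25 × [1.0, 2.0] + 0.75 × [3.0, 6.0] = [2.5, 5.0].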
11

Chen JS, Coyner AS, Chan RP, Hartnett ME, Moshfeghi DM, Owen LA, Kalpathy-Cramer J, Chiang MF, Campbell JP. Deepfakes in Ophthalmology. Ophthalmology Science 2021; 1:100079. [PMID: 36246951] [PMCID: PMC9562356] [DOI: 10.1016/j.xops.2021.100079]
Abstract
Purpose Generative adversarial networks (GANs) are deep learning (DL) models that can create and modify realistic-appearing synthetic images, or deepfakes, from real images. The purpose of our study was to evaluate the ability of experts to discern synthesized retinal fundus images from real fundus images and to review the current uses and limitations of GANs in ophthalmology. Design Development and expert evaluation of a GAN and an informal review of the literature. Participants A total of 4282 image pairs of fundus images and retinal vessel maps acquired from a multicenter ROP screening program. Methods Pix2Pix HD, a high-resolution GAN, was first trained and validated on fundus and vessel map image pairs and subsequently used to generate 880 images from a held-out test set. Fifty synthetic images from this test set and 50 different real images were presented to 4 expert ROP ophthalmologists using a custom online system for evaluation of whether the images were real or synthetic. Literature was reviewed on PubMed and Google Scholars using combinations of the terms ophthalmology, GANs, generative adversarial networks, ophthalmology, images, deepfakes, and synthetic. Ancestor search was performed to broaden results. Main Outcome Measures Expert ability to discern real versus synthetic images was evaluated using percent accuracy. Statistical significance was evaluated using a Fisher exact test, with P values ≤ 0.05 thresholded for significance. Results The expert majority correctly identified 59% of images as being real or synthetic (P = 0.1). Experts 1 to 4 correctly identified 54%, 58%, 49%, and 61% of images (P = 0.505, 0.158, 1.000, and 0.043, respectively). These results suggest that the majority of experts could not discern between real and synthetic images. Additionally, we identified 20 implementations of GANs in the ophthalmology literature, with applications in a variety of imaging modalities and ophthalmic diseases. 
Conclusions Generative adversarial networks can create synthetic fundus images that are indiscernible from real fundus images by expert ROP ophthalmologists. Synthetic images may improve dataset augmentation for DL, may be used in trainee education, and may have implications for patient privacy.
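The per-grader significance test above reduces to a Fisher exact test on a 2 × 2 contingency table (true class versus judged class). As a minimal, self-contained sketch with made-up counts (not the study's data), the two-sided test can be computed directly from the hypergeometric distribution:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test on the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins that are
    no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def prob(x):
        # Hypergeometric probability that cell (0, 0) equals x.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # Small tolerance so ties with p_obs are counted despite float error.
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Hypothetical grader: of 50 real images, 31 judged real;
# of 50 synthetic images, 21 judged real.
p = fisher_exact_two_sided(31, 19, 21, 29)
```

In practice `scipy.stats.fisher_exact` performs the same computation; the hand-rolled version here just keeps the sketch dependency-free.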
Affiliation(s)
- Jimmy S. Chen: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Aaron S. Coyner: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- R.V. Paul Chan: Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, Illinois
- M. Elizabeth Hartnett: Department of Ophthalmology, John A. Moran Eye Center, University of Utah, Salt Lake City, Utah
- Darius M. Moshfeghi: Byers Eye Institute, Horngren Family Vitreoretinal Center, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, California
- Leah A. Owen: Department of Ophthalmology, John A. Moran Eye Center, University of Utah, Salt Lake City, Utah
- Jayashree Kalpathy-Cramer: Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, Massachusetts; Massachusetts General Hospital & Brigham and Women's Hospital Center for Clinical Data Science, Boston, Massachusetts
- Michael F. Chiang: National Eye Institute, National Institutes of Health, Bethesda, Maryland
- J. Peter Campbell: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Correspondence: J. Peter Campbell, MD, MPH, Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, 515 SW Campus Drive, Portland, OR 97239
12
Development and Validation of an Explainable Artificial Intelligence Framework for Macular Disease Diagnosis Based on OCT Images. Retina 2021; 42:456-464. [PMID: 34723902 DOI: 10.1097/iae.0000000000003325] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 11/26/2022]
Abstract
PURPOSE To develop and validate an artificial intelligence framework for identifying multiple retinal lesions at image level and performing an explainable macular disease diagnosis at eye level in optical coherence tomography (OCT) images. METHODS In total, 26,815 OCT images were collected from 865 eyes, and ophthalmologists labelled 9 retinal lesions and 3 macular diseases, including diabetic macular edema (DME) and dry/wet age-related macular degeneration (dry/wet AMD). We applied deep learning to classify retinal lesions at image level and random forests to produce an explainable disease diagnosis at eye level. The performance of the integrated two-stage framework was evaluated and compared with human experts. RESULTS On a testing dataset of 2,480 OCT images from 80 eyes, the deep learning model achieved an average area under the curve (AUC) of 0.978 (95% CI, 0.971-0.983) for lesion classification. The random forests produced an accurate disease diagnosis with a 0% error rate, matching the accuracy of one of the human experts and exceeding the other 3. The analysis also revealed that detection of specific lesions in the center of the macular region contributed more to macular disease diagnosis. CONCLUSIONS The integrated method achieved high accuracy and interpretability in retinal lesion classification and macular disease diagnosis in OCT images, and has the potential to facilitate clinical diagnosis.
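The two-stage design (image-level lesion classification feeding an eye-level random forest) hinges on pooling per-image outputs into one feature vector per eye. A hedged sketch of one plausible pooling step; the lesion names and max-pooling rule below are illustrative assumptions, not the paper's definitions (the study used 9 lesion types):

```python
# Illustrative lesion names (the study labelled 9 lesion types).
LESIONS = ("IRF", "SRF", "PED", "drusen")

def eye_level_features(per_image_probs):
    """Pool image-level lesion probabilities (one dict per B-scan) into
    an eye-level feature vector by taking, for each lesion, the maximum
    probability over all B-scans of the eye."""
    return {les: max(p[les] for p in per_image_probs) for les in LESIONS}

scans = [
    {"IRF": 0.10, "SRF": 0.80, "PED": 0.20, "drusen": 0.05},
    {"IRF": 0.70, "SRF": 0.30, "PED": 0.10, "drusen": 0.02},
]
features = eye_level_features(scans)  # input row for the eye-level classifier
```

Max-pooling is one defensible choice among several (mean or count pooling would also work); the pooled vector is what a random forest can then explain via feature importances.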
13
Chen JS, Coyner AS, Ostmo S, Sonmez K, Bajimaya S, Pradhan E, Valikodath N, Cole ED, Al-Khaled T, Chan RVP, Singh P, Kalpathy-Cramer J, Chiang MF, Campbell JP. Deep Learning for the Diagnosis of Stage in Retinopathy of Prematurity: Accuracy and Generalizability across Populations and Cameras. Ophthalmol Retina 2021; 5:1027-1035. [PMID: 33561545 PMCID: PMC8364291 DOI: 10.1016/j.oret.2020.12.013] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Received: 09/15/2020] [Revised: 12/02/2020] [Accepted: 12/16/2020] [Indexed: 12/23/2022]
Abstract
PURPOSE Stage is an important feature to identify in retinal images of infants at risk of retinopathy of prematurity (ROP). The purpose of this study was to implement a convolutional neural network (CNN) for binary detection of stages 1, 2, and 3 in ROP and to evaluate its generalizability across different populations and camera systems. DESIGN Diagnostic validation study of CNN for stage detection. PARTICIPANTS Retinal fundus images obtained from preterm infants during routine ROP screenings. METHODS Two datasets were used: 5943 fundus images obtained by RetCam camera (Natus Medical, Pleasanton, CA) from 9 North American institutions and 5049 images obtained by 3nethra camera (Forus Health Incorporated, Bengaluru, India) from 4 hospitals in Nepal. Images were labeled based on the presence of stage by 1 to 3 expert graders. Three CNN models were trained using 5-fold cross-validation on datasets from North America alone, Nepal alone, and a combined dataset and were evaluated on 2 held-out test sets consisting of 708 and 247 images from the Nepali and North American datasets, respectively. MAIN OUTCOME MEASURES Convolutional neural network performance was evaluated using area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), sensitivity, and specificity. RESULTS Both the North American- and Nepali-trained models demonstrated high performance on a test set from the same population: AUROC, 0.99; AUPRC, 0.98; sensitivity, 94%; and AUROC, 0.97; AUPRC, 0.91; and sensitivity, 73%; respectively. However, the performance of each model decreased to AUROC of 0.96 and AUPRC of 0.88 (sensitivity, 52%) and AUROC of 0.62 and AUPRC of 0.36 (sensitivity, 44%) when evaluated on a test set from the other population. 
Compared with the models trained on individual datasets, the model trained on a combined dataset achieved improved performance on each respective test set: sensitivity improved from 94% to 98% on the North American test set and from 73% to 82% on the Nepali test set. CONCLUSIONS A CNN can accurately identify the presence of ROP stage in retinal images, but performance depends on the similarity between training and testing populations. We demonstrated that internal and external performance can be improved by increasing the heterogeneity of the training dataset, in this case by combining images from different populations and cameras.
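The 5-fold protocol used above partitions each training set so every image is held out exactly once. A minimal stdlib sketch of the index bookkeeping (the shuffle seed and interleaved fold shape are illustrative choices, not the study's exact split):

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Return k (train, test) index splits for k-fold cross-validation;
    every sample appears in exactly one test fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k roughly equal folds
    return [([j for other in folds if other is not fold for j in other], fold)
            for fold in folds]

# 5943 matches the North American training-set size reported above.
splits = kfold_indices(5943, k=5)
```

In practice a library splitter such as `sklearn.model_selection.KFold` would replace this, but the invariants it must satisfy (disjoint folds, full coverage) are the ones asserted below.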
Affiliation(s)
- Jimmy S Chen: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Aaron S Coyner: Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- Susan Ostmo: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Kemal Sonmez: Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, Portland, Oregon
- Eli Pradhan: Tilganga Institute of Ophthalmology, Kathmandu, Nepal
- Nita Valikodath: Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Emily D Cole: Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Tala Al-Khaled: Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- R V Paul Chan: Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Praveer Singh: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
- Jayashree Kalpathy-Cramer: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
- Michael F Chiang: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- J Peter Campbell: Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
14
Using deep learning to identify the recurrent laryngeal nerve during thyroidectomy. Sci Rep 2021; 11:14306. [PMID: 34253767 PMCID: PMC8275665 DOI: 10.1038/s41598-021-93202-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 01/03/2021] [Accepted: 06/22/2021] [Indexed: 11/16/2022] Open
Abstract
Surgeons must visually distinguish soft tissues, such as nerves, from surrounding anatomy to prevent complications and optimize patient outcomes. An accurate nerve segmentation and analysis tool could provide useful insight for surgical decision-making. Here, we present an end-to-end, automatic deep learning computer vision algorithm to segment and measure nerves. Unlike traditional medical imaging, our unconstrained setup with accessible handheld digital cameras, along with the unstructured open surgery scene, makes this task uniquely challenging. We investigate one common procedure, thyroidectomy, during which surgeons must avoid damaging the recurrent laryngeal nerve (RLN), which is responsible for human speech. We evaluate our segmentation algorithm on a diverse dataset across varied and challenging settings of operating room image capture, and show strong segmentation performance in the optimal image capture condition. This work lays the foundation for future research in real-time tissue discrimination and integration of accessible, intelligent tools into open surgery to provide actionable insights.
15
Wilson M, Chopra R, Wilson MZ, Cooper C, MacWilliams P, Liu Y, Wulczyn E, Florea D, Hughes CO, Karthikesalingam A, Khalid H, Vermeirsch S, Nicholson L, Keane PA, Balaskas K, Kelly CJ. Validation and Clinical Applicability of Whole-Volume Automated Segmentation of Optical Coherence Tomography in Retinal Disease Using Deep Learning. JAMA Ophthalmol 2021; 139:964-973. [PMID: 34236406 PMCID: PMC8444027 DOI: 10.1001/jamaophthalmol.2021.2273] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Indexed: 12/14/2022]
Abstract
Question Is deep learning–based segmentation of macular disease in optical coherence tomography (OCT) suitable for clinical use? Findings In this diagnostic study of OCT data from 173 patients with age-related macular degeneration or diabetic macular edema, model segmentations qualitatively ranked better or comparable for clinical applicability to 1 or more expert grader segmentations in 127 scans (73%) by a panel of 3 retinal specialists. Scans with high quantitative accuracy scores were not reliably associated with higher rankings. Meaning These findings suggest that qualitative evaluation adds to quantitative approaches when assessing clinical applicability of segmentation tools and clinician satisfaction in practice. Importance Quantitative volumetric measures of retinal disease in optical coherence tomography (OCT) scans are infeasible to perform owing to the time required for manual grading. Expert-level deep learning systems for automatic OCT segmentation have recently been developed. However, the potential clinical applicability of these systems is largely unknown. Objective To evaluate a deep learning model for whole-volume segmentation of 4 clinically important pathological features and assess clinical applicability. Design, Setting, Participants This diagnostic study used OCT data from 173 patients with a total of 15 558 B-scans, treated at Moorfields Eye Hospital. The data set included 2 common OCT devices and 2 macular conditions: wet age-related macular degeneration (107 scans) and diabetic macular edema (66 scans), covering the full range of severity, and from 3 points during treatment. Two expert graders performed pixel-level segmentations of intraretinal fluid, subretinal fluid, subretinal hyperreflective material, and pigment epithelial detachment, including all B-scans in each OCT volume, taking as long as 50 hours per scan. Quantitative evaluation of whole-volume model segmentations was performed. 
Qualitative evaluation of clinical applicability by 3 retinal experts was also conducted. Data were collected from June 1, 2012, to January 31, 2017, for set 1 and from January 1 to December 31, 2017, for set 2; graded between November 2018 and January 2020; and analyzed from February 2020 to November 2020. Main Outcomes and Measures Rating and stack ranking for clinical applicability by retinal specialists, model-grader agreement for voxelwise segmentations, and total volume evaluated using Dice similarity coefficients, Bland-Altman plots, and intraclass correlation coefficients. Results Among the 173 patients included in the analysis (92 [53%] women), qualitative assessment found that automated whole-volume segmentation ranked better than or comparable to at least 1 expert grader in 127 scans (73%; 95% CI, 66%-79%). A neutral or positive rating was given to 135 model segmentations (78%; 95% CI, 71%-84%) and 309 expert gradings (2 per scan) (89%; 95% CI, 86%-92%). The model was rated neutrally or positively in 86% to 92% of diabetic macular edema scans and 53% to 87% of age-related macular degeneration scans. Intraclass correlations ranged from 0.33 (95% CI, 0.08-0.96) to 0.96 (95% CI, 0.90-0.99). Dice similarity coefficients ranged from 0.43 (95% CI, 0.29-0.66) to 0.78 (95% CI, 0.57-0.85). Conclusions and Relevance This deep learning–based segmentation tool provided clinically useful measures of retinal disease that would otherwise be infeasible to obtain. Qualitative evaluation was additionally important to reveal clinical applicability for both care management and research.
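The Dice similarity coefficient, the headline voxelwise agreement metric in the study above, is straightforward to compute from two binary masks. A minimal stdlib sketch over flat 0/1 sequences:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two
    binary masks given as equal-length flat sequences of 0/1."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks agree perfectly.
    return 2 * intersection / total if total else 1.0
```

For a 3D OCT volume the masks are simply the flattened voxel labels for one fluid type; the metric is identical.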
Affiliation(s)
- Reena Chopra: Google Health, London, United Kingdom; National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom; University College London Institute of Ophthalmology, London, United Kingdom
- Yun Liu: Google Health, Palo Alto, California
- Daniela Florea: National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom; University College London Institute of Ophthalmology, London, United Kingdom
- Hagar Khalid: National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom; University College London Institute of Ophthalmology, London, United Kingdom
- Sandra Vermeirsch: National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom; University College London Institute of Ophthalmology, London, United Kingdom
- Luke Nicholson: National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom; University College London Institute of Ophthalmology, London, United Kingdom
- Pearse A Keane: National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom; University College London Institute of Ophthalmology, London, United Kingdom
- Konstantinos Balaskas: National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom; University College London Institute of Ophthalmology, London, United Kingdom
16
Montazerin M, Sajjadifar Z, Khalili Pour E, Riazi-Esfahani H, Mahmoudi T, Rabbani H, Movahedian H, Dehghani A, Akhlaghi M, Kafieh R. Livelayer: a semi-automatic software program for segmentation of layers and diabetic macular edema in optical coherence tomography images. Sci Rep 2021; 11:13794. [PMID: 34215763 PMCID: PMC8253852 DOI: 10.1038/s41598-021-92713-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 02/16/2021] [Accepted: 06/15/2021] [Indexed: 11/09/2022] Open
Abstract
Given the capacity of Optical Coherence Tomography (OCT) imaging to display structural changes in a wide variety of eye diseases and neurological disorders, the need for OCT image segmentation and the corresponding data interpretation is now felt more than ever. In this paper, we address this need with a semi-automatic software program for reliable segmentation of 8 different macular layers as well as outlining retinal pathologies such as diabetic macular edema. The software implements a novel graph-based semi-automatic method, called "Livelayer", designed for straightforward segmentation of retinal layers and fluids. This method is chiefly based on Dijkstra's Shortest Path First (SPF) algorithm and the Live-wire function, together with some preprocessing operations on the to-be-segmented images. The software is suitable for obtaining detailed segmentation of layers, exact localization of clear or unclear fluid objects and the ground truth, demanding far less effort than a common manual segmentation method. It is also valuable as a tool for calculating the irregularity index in deformed OCT images. The time (in seconds) that Livelayer required for segmentation of the Inner Limiting Membrane, Inner Plexiform Layer-Inner Nuclear Layer and Outer Plexiform Layer-Outer Nuclear Layer was much less than that for manual segmentation: 5 s for the ILM (minimum) and 15.57 s for the OPL-ONL (maximum). The unsigned errors (in pixels) between the semi-automatically labeled and gold-standard data were on average 2.7, 1.9 and 2.1 for the ILM, IPL-INL and OPL-ONL, respectively. The Bland-Altman plots indicated perfect concordance between Livelayer and the manual algorithm and that they could be used interchangeably. The repeatability error was around one pixel for the OPL-ONL and < 1 for the other two.
The unsigned errors between Livelayer and the manual algorithm were 1.33 for the ILM and 1.53 for the Nerve Fiber Layer-Ganglion Cell Layer in peripapillary B-scans. The Dice scores for comparing the two algorithms and for assessing repeatability on segmentation of fluid objects were at acceptable levels.
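At the heart of methods like Livelayer is Dijkstra's shortest path over a cost image, where low cost traces a layer boundary. A toy stdlib sketch of that idea on a 2D grid, restricted to left-to-right moves; the real method builds its graph from image gradients and Live-wire weights, so this is an analogue, not the published algorithm:

```python
import heapq

def min_cost_boundary(cost):
    """Dijkstra's shortest path across a 2D cost grid, entering at any
    row of the first column and exiting at any row of the last column,
    stepping right, right-up, or right-down. Returns (total_cost, rows)."""
    n_rows, n_cols = len(cost), len(cost[0])
    heap = [(cost[r][0], r, 0, (r,)) for r in range(n_rows)]
    heapq.heapify(heap)
    settled = set()
    while heap:
        c, r, col, path = heapq.heappop(heap)
        if (r, col) in settled:
            continue
        settled.add((r, col))
        if col == n_cols - 1:
            return c, path
        for dr in (-1, 0, 1):
            nr = r + dr
            if 0 <= nr < n_rows and (nr, col + 1) not in settled:
                heapq.heappush(heap, (c + cost[nr][col + 1], nr, col + 1,
                                      path + (nr,)))

grid = [[1, 9, 1],
        [9, 1, 9]]
total, rows = min_cost_boundary(grid)  # follows the low-cost zigzag
```

On a gradient-derived cost image the same search returns the pixel rows of a layer boundary, one row per column.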
Affiliation(s)
- Mansooreh Montazerin: Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
- Zahra Sajjadifar: Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
- Elias Khalili Pour: Retina Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hamid Riazi-Esfahani: Retina Service, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Tahereh Mahmoudi: Department of Biomedical Systems and Medical Physics, Tehran University of Medical Sciences, Tehran, Iran
- Hossein Rabbani: School of Advanced Technologies in Medicine, Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Hossein Movahedian: Isfahan Eye Research Center, Department of Ophthalmology, Isfahan University of Medical Sciences, Isfahan, Iran
- Alireza Dehghani: Isfahan Eye Research Center, Department of Ophthalmology, Isfahan University of Medical Sciences, Isfahan, Iran
- Mohammadreza Akhlaghi: Isfahan Eye Research Center, Department of Ophthalmology, Isfahan University of Medical Sciences, Isfahan, Iran
- Rahele Kafieh: School of Advanced Technologies in Medicine, Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
17
Arslan J, Samarasinghe G, Sowmya A, Benke KK, Hodgson LAB, Guymer RH, Baird PN. Deep Learning Applied to Automated Segmentation of Geographic Atrophy in Fundus Autofluorescence Images. Transl Vis Sci Technol 2021; 10:2. [PMID: 34228106 PMCID: PMC8267211 DOI: 10.1167/tvst.10.8.2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 08/17/2020] [Accepted: 05/23/2021] [Indexed: 11/02/2022] Open
Abstract
Purpose This study describes the development of a deep learning algorithm based on the U-Net architecture for automated segmentation of geographic atrophy (GA) lesions in fundus autofluorescence (FAF) images. Methods Image preprocessing and normalization by modified adaptive histogram equalization were used for image standardization to improve the effectiveness of deep learning. A U-Net-based deep learning algorithm was developed, then trained and tested by fivefold cross-validation using FAF images from clinical datasets. The following metrics were used for evaluating performance on lesion segmentation in GA: Dice similarity coefficient (DSC), DSC loss, sensitivity, specificity, mean absolute error (MAE), accuracy, recall, and precision. Results In total, 702 FAF images from 51 patients were analyzed. After fivefold cross-validation for lesion segmentation, the average training and validation scores were 0.9874 and 0.9779 for the most important metric, DSC; 0.9912 and 0.9815 for accuracy; 0.9955 and 0.9928 for sensitivity; and 0.8686 and 0.7261 for specificity. Testing scores were all similar to the validation scores. The algorithm segmented GA lesions six times more quickly than human graders. Conclusions The deep learning algorithm can be implemented using clinical data with a very high level of performance for lesion segmentation. Automation of diagnostics for GA assessment has the potential to reduce patient visit duration and operational cost and to improve measurement reliability in routine GA assessments. Translational Relevance A deep learning algorithm based on the U-Net architecture and image preprocessing appears suitable for automated segmentation of GA lesions on clinical data, producing fast and accurate results.
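Histogram-based standardization, of which the modified adaptive equalization above is a refined variant, maps intensities through the image's cumulative histogram. A hedged stdlib sketch of the plain global version, not the adaptive algorithm the authors used:

```python
def equalize_histogram(pixels, levels=256):
    """Global histogram equalization of a flat grayscale image with
    integer values in [0, levels): remap each value through the
    normalized cumulative histogram so intensities spread over the
    full range."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    cdf, running = [0] * levels, 0
    for v, h in enumerate(hist):
        running += h
        cdf[v] = running
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to equalize
        return list(pixels)
    return [round((cdf[v] - cdf_min) * (levels - 1) / (n - cdf_min))
            for v in pixels]

flat = equalize_histogram([0, 64, 128, 255])  # spreads to the full range
```

The adaptive variants (e.g. `skimage.exposure.equalize_adapthist`) apply the same remapping within local tiles, which is what makes them effective on unevenly illuminated FAF images.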
Affiliation(s)
- Janan Arslan: Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye & Ear Hospital, East Melbourne, Victoria, Australia; Department of Surgery, Ophthalmology, University of Melbourne, Parkville, Victoria, Australia
- Gihan Samarasinghe: School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Arcot Sowmya: School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Kurt K. Benke: School of Engineering, University of Melbourne, Parkville, Victoria, Australia; Centre for AgriBioscience, AgriBio, Bundoora, Victoria, Australia
- Lauren A. B. Hodgson: Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye & Ear Hospital, East Melbourne, Victoria, Australia
- Robyn H. Guymer: Centre for Eye Research Australia, University of Melbourne, Royal Victorian Eye & Ear Hospital, East Melbourne, Victoria, Australia; Department of Surgery, Ophthalmology, University of Melbourne, Parkville, Victoria, Australia
- Paul N. Baird: Department of Surgery, Ophthalmology, University of Melbourne, Parkville, Victoria, Australia
18
Schmidt-Erfurth U, Reiter GS, Riedl S, Seeböck P, Vogl WD, Blodi BA, Domalpally A, Fawzi A, Jia Y, Sarraf D, Bogunović H. AI-based monitoring of retinal fluid in disease activity and under therapy. Prog Retin Eye Res 2021; 86:100972. [PMID: 34166808 DOI: 10.1016/j.preteyeres.2021.100972] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 02/22/2021] [Revised: 05/11/2021] [Accepted: 05/13/2021] [Indexed: 12/21/2022]
Abstract
Retinal fluid as the major biomarker in exudative macular disease is accurately visualized by high-resolution three-dimensional optical coherence tomography (OCT), which is used world-wide as a diagnostic gold standard largely replacing clinical examination. Artificial intelligence (AI) with its capability to objectively identify, localize and quantify fluid introduces fully automated tools into OCT imaging for personalized disease management. Deep learning performance has already proven superior to human experts, including physicians and certified readers, in terms of accuracy and speed. Reproducible measurement of retinal fluid relies on precise AI-based segmentation methods that assign a label to each OCT voxel denoting its fluid type such as intraretinal fluid (IRF) and subretinal fluid (SRF) or pigment epithelial detachment (PED) and its location within the central 1-, 3- and 6-mm macular area. Such reliable analysis is most relevant to reflect differences in pathophysiological mechanisms and impacts on retinal function, and the dynamics of fluid resolution during therapy with different regimens and substances. Yet, an in-depth understanding of the mode of action of supervised and unsupervised learning, the functionality of a convolutional neural net (CNN) and various network architectures is needed. Greater insight regarding adequate methods for performance, validation assessment, and device- and scanning-pattern-dependent variations is necessary to empower ophthalmologists to become qualified AI users. Fluid/function correlation can lead to a better definition of valid fluid variables relevant for optimal outcomes on an individual and a population level. AI-based fluid analysis opens the way for precision medicine in real-world practice of the leading retinal diseases of modern times.
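Once every OCT voxel carries a fluid label as described above, quantification reduces to a per-label voxel count scaled by the voxel volume. A minimal sketch; the label names and voxel size below are illustrative, and a real pipeline would additionally restrict counts to the 1-, 3- and 6-mm macular zones:

```python
from collections import Counter

def fluid_volumes_mm3(voxel_labels, voxel_volume_mm3,
                      fluid_types=("IRF", "SRF", "PED")):
    """Turn a voxel-wise segmentation (flat iterable of label strings,
    one per voxel) into per-fluid-type volumes in cubic millimetres."""
    counts = Counter(voxel_labels)
    return {t: counts[t] * voxel_volume_mm3 for t in fluid_types}

# Tiny illustrative volume: 4 voxels, hypothetical 0.001 mm^3 each.
volumes = fluid_volumes_mm3(["IRF", "IRF", "SRF", "background"],
                            voxel_volume_mm3=0.001)
```

The same counting, run per visit, yields the fluid-resolution trajectories under therapy that the review discusses.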
Affiliation(s)
- Ursula Schmidt-Erfurth: Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Gregor S Reiter: Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Sophie Riedl: Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Philipp Seeböck: Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Wolf-Dieter Vogl: Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Barbara A Blodi: Fundus Photograph Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
- Amitha Domalpally: Fundus Photograph Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
- Amani Fawzi: Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Yali Jia: Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- David Sarraf: Stein Eye Institute, University of California Los Angeles, Los Angeles, CA, USA
- Hrvoje Bogunović: Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
19
Ran A, Cheung CY. Deep Learning-Based Optical Coherence Tomography and Optical Coherence Tomography Angiography Image Analysis: An Updated Summary. Asia Pac J Ophthalmol (Phila) 2021; 10:253-260. [PMID: 34383717 DOI: 10.1097/apo.0000000000000405] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Indexed: 12/14/2022] Open
Abstract
Deep learning (DL) is a subset of artificial intelligence based on deep neural networks. It has made remarkable breakthroughs in medical imaging, particularly for image classification and pattern recognition. In ophthalmology, there is rising interest in applying DL methods to analyze optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA) images. Studies showed that OCT and OCTA image evaluation by DL algorithms achieved good performance for disease detection, prognosis prediction, and image quality control, suggesting that the incorporation of DL technology could potentially enhance the accuracy of disease evaluation and the efficiency of clinical workflow. However, substantial issues, such as small training sample size, data preprocessing standardization, model robustness, results explanation, and performance cross-validation, are yet to be tackled before these DL models can be deployed in real-world clinics. This review summarizes recent studies on DL-based image analysis models for OCT and OCTA images and discusses the potential challenges of clinical deployment and future research directions.
Affiliation(s)
- Anran Ran: Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong SAR