1
Moraes G, Struyven R, Wagner SK, Liu T, Chong D, Abbas A, Chopra R, Patel PJ, Balaskas K, Keenan TD, Keane PA. Quantifying Changes on OCT in Eyes Receiving Treatment for Neovascular Age-Related Macular Degeneration. Ophthalmology Science 2024; 4:100570. PMID: 39224530; PMCID: PMC11367487; DOI: 10.1016/j.xops.2024.100570.
Abstract
Purpose: To apply artificial intelligence (AI) to macular OCT scans to segment and quantify volumetric change in anatomical and pathological features during intravitreal treatment for neovascular age-related macular degeneration (AMD).
Design: Retrospective analysis of OCT images from the Moorfields Eye Hospital AMD Database.
Participants: A total of 2115 eyes from 1801 patients starting anti-VEGF treatment between June 1, 2012, and June 30, 2017.
Methods: The Moorfields Eye Hospital neovascular AMD database was queried for first and second eyes that received anti-VEGF treatment and had an OCT scan at baseline and at 12 months. Follow-up scans were input into the AI system, and volumes of OCT variables were studied at different time points and compared with baseline volume groups. Cross-sectional comparisons between time points were conducted using the Mann-Whitney U test.
Main Outcome Measures: Volume outputs of the following variables were studied: intraretinal fluid, subretinal fluid, pigment epithelial detachment (PED), subretinal hyperreflective material (SHRM), hyperreflective foci, neurosensory retina, and retinal pigment epithelium.
Results: Mean volumes of the analyzed features decreased significantly from baseline to both 4 and 12 months, in both first-treated and second-treated eyes. Pathological features that reflect exudation, including pure fluid components (intraretinal fluid and subretinal fluid) and those with both fluid and fibrovascular tissue (PED and SHRM), displayed similar responses to treatment over 12 months. Mean PED and SHRM volumes showed less pronounced but still substantial decreases over the first 2 months, reaching a plateau after the loading phase, with minimal change to 12 months. Both neurosensory retina and retinal pigment epithelium volumes showed gradual reductions over time that were not as substantial as those of the exudative features.
Conclusions: We report a quantitative analysis of change in segmented retinal features over time, enabled by an AI segmentation system. Cross-sectional analysis at multiple time points demonstrated significant associations between baseline OCT-derived segmented features and the volume of biomarkers at follow-up. Demonstrating how certain OCT biomarkers progress with treatment, and how pretreatment retinal morphology affects different structural volumes, may provide novel insights into disease mechanisms and aid the personalization of care. Data will be made public for future studies.
Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
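The cross-sectional comparison described in the Methods (Mann-Whitney U tests between time points) can be sketched in a few lines. This is an illustrative stand-in, not the authors' analysis code: the volume figures below are invented, and production work would normally call scipy.stats.mannwhitneyu rather than this hand-rolled version, which uses the large-sample normal approximation without tie correction.

```python
import math

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic by direct pair counting (fine for small samples).

    Returns U for sample x and a two-sided p-value from the large-sample
    normal approximation (no tie correction).
    """
    n1, n2 = len(x), len(y)
    # U counts pairs where x beats y; ties count half.
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided
    return u, p

# Hypothetical feature volumes (mm^3): baseline vs. month 12 for one biomarker.
baseline = [0.41, 0.55, 0.38, 0.62, 0.47]
month12 = [0.05, 0.12, 0.02, 0.09, 0.07]
u, p = mann_whitney_u(baseline, month12)
```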
Affiliation(s)
- Gabriella Moraes
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Robbert Struyven
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Siegfried K. Wagner
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Timing Liu
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- David Chong
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Abdallah Abbas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Reena Chopra
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Praveen J. Patel
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Konstantinos Balaskas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Tiarnan D.L. Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Pearse A. Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
2
Liu Z, Han X, Gao L, Chen S, Huang W, Li P, Wu Z, Wang M, Zheng Y. Cost-effectiveness of incorporating self-imaging optical coherence tomography into fundus photography-based diabetic retinopathy screening. NPJ Digit Med 2024; 7:225. PMID: 39181938; PMCID: PMC11344775; DOI: 10.1038/s41746-024-01222-5.
Abstract
Diabetic macular edema (DME) has emerged as the foremost cause of vision loss in people with diabetes. Early detection of DME is paramount, yet the prevailing screening relies on two-dimensional, labor-intensive fundus photography (FP), resulting in frequent unwarranted referrals and overlooked diagnoses. Self-imaging optical coherence tomography (SI-OCT), offering fully automated, three-dimensional macular imaging, holds the potential to enhance diabetic retinopathy (DR) screening. We conducted an observational study within a cohort of 1822 participants with diabetes, who received comprehensive assessments including visual acuity testing, FP, and SI-OCT examination. We compared the performance of three screening strategies: the conventional FP-based strategy, a combination strategy of FP and SI-OCT, and a simulated combination strategy of FP and manual spectral-domain OCT (SD-OCT). Additionally, we undertook a cost-effectiveness analysis using Markov models to evaluate the costs and benefits of the three strategies for referable DR. We found that the FP + SI-OCT strategy demonstrated superior sensitivity (87.69% vs 61.53%) and specificity (98.29% vs 92.47%) in detecting DME compared with the FP-based strategy. Importantly, the FP + SI-OCT strategy outperformed the FP-based strategy, with an incremental cost-effectiveness ratio (ICER) of $8016 per quality-adjusted life year (QALY), whereas the FP + SD-OCT strategy was less cost-effective, with an ICER of $45,754/QALY. Our results were robust to extensive sensitivity analyses, with the FP + SI-OCT strategy standing as the dominant choice in 69.36% of simulations at the current willingness-to-pay threshold. In summary, incorporating SI-OCT into FP-based screening offers substantial gains in sensitivity and specificity for detecting DME and, most notably, in cost-effectiveness for DR screening.
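The headline cost-effectiveness figure here is an incremental cost-effectiveness ratio. The formula itself is standard (extra cost divided by extra QALYs gained when switching strategies); the numbers in the example are hypothetical, not the paper's Markov-model inputs.

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained
    when moving from the reference strategy to the new one."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical per-patient figures: the new strategy costs $4000 more
# and yields 0.5 extra QALYs, so the ICER is $8000/QALY.
ratio = icer(cost_new=12_000, qaly_new=10.5, cost_ref=8_000, qaly_ref=10.0)
```

A strategy is then judged cost-effective when its ICER falls below the chosen willingness-to-pay threshold.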
Affiliation(s)
- Zitian Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
- Xiaotong Han
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Le Gao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Shida Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Wenyong Huang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Peng Li
- MOPTIM Imaging Technique Co. Ltd, Shenzhen, China
- Zhiyan Wu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Mengchi Wang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
3
Niu Z, Deng Z, Gao W, Bai S, Gong Z, Chen C, Rong F, Li F, Ma L. FNeXter: A Multi-Scale Feature Fusion Network Based on ConvNeXt and Transformer for Retinal OCT Fluid Segmentation. Sensors (Basel) 2024; 24:2425. PMID: 38676042; PMCID: PMC11054479; DOI: 10.3390/s24082425.
Abstract
The accurate segmentation and quantification of retinal fluid in Optical Coherence Tomography (OCT) images are crucial for the diagnosis and treatment of ophthalmic diseases such as age-related macular degeneration. However, accurate segmentation of retinal fluid is challenging due to significant variations in the size, position, and shape of fluid regions, as well as their complex, curved boundaries. To address these challenges, we propose FNeXter, a novel multi-scale feature fusion attention network based on ConvNeXt and Transformer, for OCT fluid segmentation. In FNeXter, we introduce a novel global multi-scale hybrid encoder module that integrates ConvNeXt, Transformer, and region-aware spatial attention. This module can capture long-range dependencies and non-local similarities while also focusing on local features. Moreover, it possesses spatial region-awareness, enabling it to focus adaptively on lesion regions. Additionally, we propose a novel self-adaptive multi-scale feature fusion attention module to enhance the skip connections between the encoder and the decoder. This module elevates the model's capacity to learn global features and multi-scale contextual information effectively. Finally, we conduct comprehensive experiments to evaluate the performance of the proposed FNeXter. Experimental results demonstrate that our approach outperforms other state-of-the-art methods on the task of fluid segmentation.
Affiliation(s)
- Lan Ma
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China; (Z.N.); (Z.D.); (W.G.); (S.B.); (Z.G.); (C.C.); (F.R.); (F.L.)
4
Seeböck P, Orlando JI, Michl M, Mai J, Schmidt-Erfurth U, Bogunović H. Anomaly guided segmentation: Introducing semantic context for lesion segmentation in retinal OCT using weak context supervision from anomaly detection. Med Image Anal 2024; 93:103104. PMID: 38350222; DOI: 10.1016/j.media.2024.103104.
Abstract
Automated lesion detection in retinal optical coherence tomography (OCT) scans has shown promise for several clinical applications, including diagnosis, monitoring and guidance of treatment decisions. However, segmentation models still struggle to achieve the desired results for some complex lesions or datasets that commonly occur in real-world settings, e.g. due to variability in lesion phenotypes, image quality or disease appearance. While several techniques have been proposed to improve them, one line of research that has not yet been investigated is the incorporation of additional semantic context through the application of anomaly detection models. In this study we show experimentally that incorporating weak anomaly labels into standard segmentation models consistently improves lesion segmentation results. This can be done relatively easily by detecting anomalies with a separate model and then adding its output masks as an extra class when training the segmentation model, providing additional semantic context without requiring extra manual labels. We empirically validated this strategy on two in-house and two publicly available retinal OCT datasets for multiple lesion targets, demonstrating the potential of this generic anomaly-guided segmentation approach as an extra tool for improving lesion detection models.
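One way to read the strategy above ("adding these output masks as an extra class for training the segmentation model") is to relabel anomalous-but-unannotated pixels with a new class index. The exact fusion rule the authors use is not spelled out in the abstract, so this sketch is one plausible interpretation, not their implementation:

```python
import numpy as np

def add_anomaly_class(lesion_labels, anomaly_mask, anomaly_class=99):
    """Merge an anomaly detector's output into a label map as an extra class.

    lesion_labels: int array of manual lesion labels (0 = background).
    anomaly_mask:  bool array from a separate anomaly-detection model.
    Pixels the anomaly model flags but that carry no manual lesion label
    become a new "anomalous" class, giving the segmentation model extra
    semantic context without extra manual annotation.
    """
    merged = lesion_labels.copy()
    merged[(lesion_labels == 0) & anomaly_mask] = anomaly_class
    return merged
```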
Affiliation(s)
- Philipp Seeböck
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Computational Imaging Research Lab, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Austria.
- José Ignacio Orlando
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Yatiris Group at PLADEMA Institute, CONICET, Universidad Nacional del Centro de la Provincia de Buenos Aires, Gral. Pinto 399, Tandil, Buenos Aires, Argentina
- Martin Michl
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Julia Mai
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Ursula Schmidt-Erfurth
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Hrvoje Bogunović
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
5
Opoku M, Weyori BA, Adekoya AF, Adu K. CLAHE-CapsNet: Efficient retina optical coherence tomography classification using capsule networks with contrast limited adaptive histogram equalization. PLoS One 2023; 18:e0288663. PMID: 38032915; PMCID: PMC10688733; DOI: 10.1371/journal.pone.0288663.
Abstract
Manual detection of eye diseases from retinal Optical Coherence Tomography (OCT) images by ophthalmologists is time-consuming, error-prone and tedious. Previous researchers have developed computer-aided systems using deep learning-based convolutional neural networks (CNNs) to aid faster detection of retinal diseases. However, these methods struggle to achieve high classification performance due to noise in the OCT images. Moreover, the pooling operations in CNNs reduce the resolution of the image, which limits the performance of the model. The contributions of this paper are twofold. First, it provides a comprehensive literature review establishing the current state-of-the-art methods successfully applied to retinal OCT image classification. Second, it proposes a capsule network coupled with contrast limited adaptive histogram equalization (CLAHE-CapsNet) for retinal OCT image classification. CLAHE was implemented as layers to minimize noise in the retinal image for better model performance. A three-layer convolutional capsule network was designed with carefully chosen hyperparameters. The dataset used for this study was released by the University of California San Diego (UCSD); it consists of 84,495 OCT images (JPEG) in 4 categories (NORMAL, CNV, DME, and DRUSEN). The images went through a grading system consisting of multiple layers of trained graders for verification and correction of image labels. Evaluation experiments were conducted and results were compared with state-of-the-art models to find the best-performing model. The evaluation metrics accuracy, sensitivity, precision, specificity, and AUC were used to determine the performance of the models. The results show that the proposed model performs best, achieving 97.7% overall accuracy (OA), 99.5% overall sensitivity (OS), and 99.3% overall precision (OP).
These results indicate that the proposed model can be adopted to help ophthalmologists detect retinal diseases on OCT.
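For context, CLAHE builds on ordinary histogram equalization, which remaps intensities through the image's cumulative histogram; CLAHE additionally tiles the image, clips each tile's histogram at a limit, and interpolates between tiles (OpenCV exposes this as cv2.createCLAHE). The sketch below shows only the core global remapping, not the paper's CLAHE layers, and assumes an 8-bit, non-constant image:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit image.

    Builds a lookup table from the normalized cumulative histogram and
    remaps every pixel through it. Assumes img is uint8 and not constant.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero bin of the CDF
    lut = np.round(
        np.clip(cdf - cdf_min, 0, None) * 255.0 / (cdf[-1] - cdf_min)
    ).astype(np.uint8)
    return lut[img]
```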
Affiliation(s)
- Michael Opoku
- Department of Computer Science and Informatics, University of Energy and Natural Resource, Sunyani, Ghana
- Benjamin Asubam Weyori
- Department of Computer Science and Informatics, University of Energy and Natural Resource, Sunyani, Ghana
- Adebayo Felix Adekoya
- Department of Computer Science and Informatics, University of Energy and Natural Resource, Sunyani, Ghana
- Kwabena Adu
- Department of Computer Science and Informatics, University of Energy and Natural Resource, Sunyani, Ghana
6
Ye X, He S, Zhong X, Yu J, Yang S, Shen Y, Chen Y, Wang Y, Huang X, Shen L. OIMHS: An Optical Coherence Tomography Image Dataset Based on Macular Hole Manual Segmentation. Sci Data 2023; 10:769. PMID: 37932307; PMCID: PMC10628143; DOI: 10.1038/s41597-023-02675-1.
Abstract
Macular holes, among the most common macular diseases, require timely treatment. The morphological changes visible on optical coherence tomography (OCT) images provide an opportunity for direct observation of the disease, and accurate segmentation is needed to identify and quantify the lesions. Development of such algorithms has been obstructed by a lack of high-quality datasets (OCT images with corresponding gold-standard macular hole segmentation labels), especially for supervised learning-based segmentation algorithms. In this context, we established a large OCT image macular hole segmentation (OIMHS) dataset of 3859 B-scan images from 119 patients, with each image providing four segmentation labels: retina, macular hole, intraretinal cysts, and choroid. This dataset offers an excellent opportunity for investigating the accuracy and reliability of different segmentation algorithms for macular holes, and new research insight for the further development of clinical research on macular diseases, with the retina, lesions, and choroid included in quantitative analyses.
Affiliation(s)
- Xin Ye
- Center for Rehabilitation Medicine, Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Shucheng He
- Center for Rehabilitation Medicine, Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Xiaxing Zhong
- Wenzhou Medical University, Wenzhou, Zhejiang, China
- Jiafeng Yu
- Center for Rehabilitation Medicine, Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Yingjiao Shen
- Center for Rehabilitation Medicine, Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Yiqi Chen
- Center for Rehabilitation Medicine, Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Yaqi Wang
- College of Media Engineering, Communication University of Zhejiang, Hangzhou, China
- Xingru Huang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Lijun Shen
- Center for Rehabilitation Medicine, Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
7
Li D, Ran AR, Cheung CY, Prince JL. Deep learning in optical coherence tomography: Where are the gaps? Clin Exp Ophthalmol 2023; 51:853-863. PMID: 37245525; PMCID: PMC10825778; DOI: 10.1111/ceo.14258.
Abstract
Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides rapid, high-resolution, cross-sectional morphology of the macular area and optic nerve head for the diagnosis and management of different eye diseases. However, interpreting OCT images requires expertise in both OCT imaging and eye diseases, since many factors such as artefacts and concomitant diseases can affect the accuracy of quantitative measurements made by post-processing algorithms. Currently, there is growing interest in applying deep learning (DL) methods to analyse OCT images automatically. This review summarises the trends in DL-based OCT image analysis in ophthalmology, discusses the current gaps, and suggests potential research directions. DL in OCT analysis shows promising performance in several tasks: (1) layer and feature segmentation and quantification; (2) disease classification; (3) disease progression and prognosis; and (4) referral triage level prediction. Different studies and trends in the development of DL-based OCT image analysis are described, and the following challenges are identified: (1) public OCT data are scarce and scattered; (2) models show performance discrepancies in real-world settings; (3) models lack transparency; (4) societal acceptance and regulatory standards are lacking; and (5) OCT is still not widely available in underprivileged areas. More work is needed to tackle these challenges and gaps before DL is further applied in OCT image analysis for clinical use.
Affiliation(s)
- Dawei Li
- College of Future Technology, Peking University, Beijing, China
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Jerry L. Prince
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, USA
8
Li F, Pan W, Xiang W, Zou H. Automatic segmentation of multitype retinal fluid from optical coherence tomography images using semisupervised deep learning network. Br J Ophthalmol 2023; 107:1350-1355. PMID: 35697498; DOI: 10.1136/bjophthalmol-2022-321348.
Abstract
BACKGROUND/AIMS To develop and validate a deep learning model for automated segmentation of multitype retinal fluid using optical coherence tomography (OCT) images. METHODS We retrospectively collected a total of 2814 fully anonymised OCT images with subretinal fluid (SRF) and intraretinal fluid (IRF) from 141 patients between July 2018 and June 2020, constituting our in-house retinal OCT dataset. On this dataset, we developed a novel semisupervised retinal fluid segmentation deep network (Ref-Net) to automatically identify SRF and IRF in a coarse-to-refine fashion. We performed quantitative and qualitative analyses of the model's performance and verified its generalisation ability by using our in-house retinal OCT dataset for training and an unseen Kermany dataset for testing. We also determined the importance of the major components in the semisupervised Ref-Net through extensive ablation. The main outcome measures were Dice similarity coefficient (Dice), sensitivity (Sen), specificity (Spe) and mean absolute error (MAE). RESULTS Our model, trained on a handful of labelled OCT images, achieved higher performance (Dice: 81.2%, Sen: 87.3%, Spe: 98.8% and MAE: 1.1% for SRF; Dice: 78.0%, Sen: 83.6%, Spe: 99.3% and MAE: 0.5% for IRF) than most cutting-edge segmentation models. It obtained expert-level performance with only 80 labelled OCT images and even exceeded two out of three ophthalmologists with 160 labelled OCT images. Its satisfactory generalisation capability on an unseen dataset was also demonstrated. CONCLUSION The semisupervised Ref-Net required only a few labelled OCT images to achieve outstanding performance in automated segmentation of multitype retinal fluid, and has the potential to assist clinicians in the management of ocular disease.
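The outcome measures above (Dice, sensitivity, specificity) all reduce to counts of true/false positives and negatives over binary masks. A minimal sketch, not the authors' evaluation code:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Dice, sensitivity, and specificity for one fluid class.

    pred, gt: boolean masks of equal shape (prediction and ground truth).
    """
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.sum(pred & gt)    # fluid pixels correctly detected
    fp = np.sum(pred & ~gt)   # background wrongly flagged as fluid
    fn = np.sum(~pred & gt)   # fluid pixels missed
    tn = np.sum(~pred & ~gt)  # background correctly left alone
    dice = 2.0 * tp / (2.0 * tp + fp + fn)
    sen = tp / (tp + fn)
    spe = tn / (tn + fp)
    return dice, sen, spe
```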
Affiliation(s)
- Feng Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- WenZhe Pan
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Wenjie Xiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Haidong Zou
- Shanghai Eye Disease Prevention and Treatment Center, Shanghai, China
- Shanghai General Hospital, Shanghai, China
9
Zhang H, Yang J, Zheng C, Zhao S, Zhang A. Annotation-efficient learning for OCT segmentation. Biomed Opt Express 2023; 14:3294-3307. PMID: 37497504; PMCID: PMC10368022; DOI: 10.1364/boe.486276.
Abstract
Deep learning has been successfully applied to OCT segmentation. However, for data from different manufacturers and imaging protocols, and for different regions of interest (ROIs), it requires laborious and time-consuming data annotation and training, which is undesirable in many scenarios, such as surgical navigation and multi-center clinical trials. Here we propose an annotation-efficient learning method for OCT segmentation that could significantly reduce annotation costs. Leveraging self-supervised generative learning, we train a Transformer-based model to learn the OCT imagery. Then we connect the trained Transformer-based encoder to a CNN-based decoder, to learn the dense pixel-wise prediction in OCT segmentation. These training phases use open-access data and thus incur no annotation costs, and the pre-trained model can be adapted to different data and ROIs without re-training. Based on the greedy approximation for the k-center problem, we also introduce an algorithm for the selective annotation of the target data. We verified our method on publicly-available and private OCT datasets. Compared to the widely-used U-Net model with 100% training data, our method only requires ∼10% of the data for achieving the same segmentation accuracy, and it speeds the training up to ∼3.5 times. Furthermore, our proposed method outperforms other potential strategies that could improve annotation efficiency. We think this emphasis on learning efficiency may help improve the intelligence and application penetration of OCT-based technologies.
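The selective-annotation step is described as a greedy approximation to the k-center problem. The classic greedy 2-approximation (farthest-point sampling) looks like the sketch below; the feature embeddings and seed choice are illustrative assumptions, not details from the paper:

```python
import numpy as np

def greedy_k_center(features, k, first=0):
    """Greedy 2-approximation for the k-center problem.

    Starting from one seed, repeatedly select the sample farthest (in
    Euclidean distance) from everything selected so far.
    features: (n, d) array of per-image embeddings; returns k indices
    whose images would be sent for manual annotation.
    """
    features = np.asarray(features, dtype=float)
    selected = [first]
    dist = np.linalg.norm(features - features[first], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())  # farthest point from the current set
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return selected
```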
Affiliation(s)
- Haoran Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jianlong Yang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ce Zheng
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shiqing Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Aili Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
10
Feng H, Chen J, Zhang Z, Lou Y, Zhang S, Yang W. A bibliometric analysis of artificial intelligence applications in macular edema: exploring research hotspots and Frontiers. Front Cell Dev Biol 2023; 11:1174936. PMID: 37255600; PMCID: PMC10225517; DOI: 10.3389/fcell.2023.1174936.
Abstract
Background: Artificial intelligence (AI) is used in ophthalmological disease screening and diagnostics, medical image diagnostics, and predicting late-disease progression rates. We reviewed all AI publications associated with macular edema (ME) research between 2011 and 2022 and performed modeling, quantitative, and qualitative investigations. Methods: On 1 February 2023, we screened the Web of Science Core Collection for AI applications related to ME, from which 297 studies were identified and analyzed (2011-2022). We collected information on publications, institutions, country/region, keywords, journal name, references, and research hotspots. Literature clustering networks and frontier knowledge bases were investigated using the bibliometrix-BiblioShiny, VOSviewer, and CiteSpace bibliometric platforms. We used the R "bibliometrix" package to synopsize our observations, enumerate keywords, visualize collaboration networks between countries/regions, and generate a topic trends plot. VOSviewer was used to examine cooperation between institutions and identify citation relationships between journals. We used CiteSpace to identify clustering keywords over the timeline and the keywords with the strongest citation bursts. Results: In total, 47 countries published AI studies related to ME; the United States had the highest H-index and thus the greatest influence. China and the United States cooperated most closely of all countries. Also, 613 institutions generated publications; the Medical University of Vienna had the highest number of studies, and this publication record and H-index made it the most influential institution in the ME field.
Reference clusters were categorized into 10 headings: retinal optical coherence tomography (OCT) fluid detection, convolutional network models, deep learning (DL)-based single-shot predictions, retinal vascular disease, diabetic retinopathy (DR), convolutional neural networks (CNNs), automated macular pathology diagnosis, dry age-related macular degeneration (DARMD), class weight, and advanced DL architecture systems. Frontier keywords were represented by diabetic macular edema (DME) (2021-2022). Conclusion: Our review of the AI-related ME literature was comprehensive, systematic, and objective, and identified future trends and current hotspots. With increased DL outputs, the ME research focus has gradually shifted from manual ME examinations to automatic ME detection and associated symptoms. In this review, we present a comprehensive and dynamic overview of AI in ME and identify future research areas.
Affiliation(s)
- Haiwen Feng
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang, Liaoning, China
- Jiaqi Chen
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang, Liaoning, China
- Zhichang Zhang
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Yan Lou
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Shaochong Zhang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Weihua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
11
Rasti R, Biglari A, Rezapourian M, Yang Z, Farsiu S. RetiFluidNet: A Self-Adaptive and Multi-Attention Deep Convolutional Network for Retinal OCT Fluid Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1413-1423. [PMID: 37015695 DOI: 10.1109/tmi.2022.3228285] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Optical coherence tomography (OCT) helps ophthalmologists assess macular edema, accumulation of fluids, and lesions at microscopic resolution. Quantification of retinal fluids is necessary for OCT-guided treatment management, which relies on a precise image segmentation step. As manual analysis of retinal fluids is a time-consuming, subjective, and error-prone task, there is increasing demand for fast and robust automatic solutions. In this study, a new convolutional neural architecture named RetiFluidNet is proposed for multi-class retinal fluid segmentation. The model benefits from hierarchical representation learning of textural, contextual, and edge features using a new self-adaptive dual-attention (SDA) module, multiple self-adaptive attention-based skip connections (SASC), and a novel multi-scale deep self-supervision learning (DSL) scheme. The attention mechanism in the proposed SDA module enables the model to automatically extract deformation-aware representations at different levels, and the introduced SASC paths further consider spatial-channel interdependencies for concatenation of counterpart encoder and decoder units, which improve representational capability. RetiFluidNet is also optimized using a joint loss function comprising a weighted version of dice overlap and edge-preserved connectivity-based losses, where several hierarchical stages of multi-scale local losses are integrated into the optimization process. The model is validated based on three publicly available datasets: RETOUCH, OPTIMA, and DUKE, with comparisons against several baselines. Experimental results on the datasets prove the effectiveness of the proposed model in retinal OCT fluid segmentation and reveal that the suggested method is more effective than existing state-of-the-art fluid segmentation algorithms in adapting to retinal OCT scans recorded by various image scanning instruments.
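RetiFluidNet's joint loss includes a weighted Dice overlap term. As a hedged sketch (the class names and weights below are illustrative, and the paper's full loss additionally includes edge-preserving connectivity terms and multi-scale supervision), a weighted multi-class soft Dice loss over flattened masks could look like:

```python
def weighted_dice_loss(pred, target, weights, eps=1e-6):
    """Weighted multi-class soft Dice loss.

    pred, target: dict mapping class name -> flattened mask
                  (probabilities in [0, 1] / binary labels).
    weights: dict mapping class name -> class weight.
    """
    loss = 0.0
    for cls, w in weights.items():
        p, t = pred[cls], target[cls]
        inter = sum(pi * ti for pi, ti in zip(p, t))
        dice = (2.0 * inter + eps) / (sum(p) + sum(t) + eps)
        loss += w * (1.0 - dice)  # perfect overlap contributes zero loss
    return loss
```

A perfect prediction gives a loss near 0; fully disjoint masks give a loss near the sum of the class weights.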
12
Song S, Jin K, Wang S, Yang C, Zhou J, Chen Z, Ye J. Retinal fluid is associated with cytokines of aqueous humor in age-related macular degeneration using automatic 3-dimensional quantification. Front Cell Dev Biol 2023; 11:1157497. [PMID: 36968207 PMCID: PMC10030496 DOI: 10.3389/fcell.2023.1157497] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Accepted: 02/27/2023] [Indexed: 03/29/2023] Open
Abstract
Background: To explore the biological role of cytokines in the eye and their possible role in the pathogenesis of neovascular age-related macular degeneration (nAMD) by correlating aqueous humor cytokine concentrations with retinal fluid on optical coherence tomography (OCT). Methods: Spectral-domain OCT (SD-OCT) images and aqueous humor samples were collected from 20 nAMD patients across three clinical visits. Retinal fluid volume on OCT was automatically quantified using a deep learning model (DeepLabv3+). Eighteen cytokines were measured in aqueous humor using Luminex technology. OCT fluid volume measurements were correlated with changes in aqueous humor cytokine levels using Pearson's correlation coefficient (PCC). Results: The patients with intraretinal fluid (IRF) showed significantly lower levels of cytokines such as C-X-C motif chemokine ligand 2 (CXCL2) (p = 0.03) and CXCL11 (p = 0.009) compared with the patients without IRF. The IRF volume was negatively correlated with CXCL2 (r = -0.407, p = 0.048) and CXCL11 (r = -0.410, p = 0.046) concentrations in the patients with IRF. Meanwhile, the subretinal fluid (SRF) volume was positively correlated with vascular endothelial growth factor (VEGF) concentration (r = 0.299, p = 0.027) and negatively correlated with interleukin (IL)-36β concentration (r = -0.295, p = 0.029) in the patients with SRF. Conclusion: A decreased level of VEGF was associated with decreased OCT-based retinal fluid volume in nAMD patients, while increased levels of CXCL2, CXCL11, and IL-36β were associated with decreased OCT-based retinal fluid volume, which may suggest a role for inflammatory cytokines in the retinal morphological changes and pathogenesis of nAMD.
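The fluid-cytokine associations above are plain Pearson correlations between a volume series and a concentration series. A minimal self-contained sketch of the coefficient (the study would have used standard statistical software; the toy data below is illustrative):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)  # undefined if either sequence is constant
```

Perfectly proportional series give r = 1; perfectly inversely proportional series give r = -1.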
Affiliation(s)
- Siyuan Song
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Kai Jin
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Shuai Wang
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, China
- School of Cyberspace, Hangzhou Dianzi University, Hangzhou, China
- Ce Yang
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, China
- Jingxin Zhou
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Zhiqing Chen
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Juan Ye
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
13
Li X, Niu S, Gao X, Zhou X, Dong J, Zhao H. Self-training adversarial learning for cross-domain retinal OCT fluid segmentation. Comput Biol Med 2023; 155:106650. [PMID: 36821970 DOI: 10.1016/j.compbiomed.2023.106650] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 11/22/2022] [Accepted: 02/07/2023] [Indexed: 02/12/2023]
Abstract
Accurate measurements of the size, shape and volume of macular edema can provide important biomarkers to jointly assess disease progression and treatment outcome. Although many deep learning-based algorithms have achieved remarkable success in semantic segmentation, these methods have difficulty obtaining satisfactory results in retinal optical coherence tomography (OCT) fluid segmentation tasks due to low contrast, blurred boundaries, and varied distributions. Moreover, directly applying a model well trained on one device to images from other devices may cause performance degradation in the joint analysis of multi-domain OCT images. In this paper, we propose a self-training adversarial learning framework for unsupervised domain adaptation in retinal OCT fluid segmentation tasks. Specifically, we develop an image style transfer module and a fine-grained feature transfer module to reduce discrepancies in the appearance and high-level features of images from different devices. Importantly, we transfer the target images to the appearance of source images to ensure that no image information of the source domain available for supervised training is lost. To capture specific features of the target domain, we design a self-training module based on a discrepancy and similarity strategy to select the images with better segmentation results from the target domain and then introduce them into the source domain for iterative training of the segmentation model. Extensive experiments on two challenging datasets demonstrate the effectiveness of our proposed method. In particular, our method achieves results on cross-domain retinal OCT fluid segmentation comparable with those of state-of-the-art methods.
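The self-training module selects target-domain images whose predictions look reliable enough to reuse as pseudo-labels, based on a discrepancy and similarity strategy. One plausible minimal sketch of such a selection rule follows; the score names and thresholds are assumptions for illustration, not the paper's exact criteria:

```python
def select_for_self_training(scores, max_discrepancy, min_similarity):
    """Pick target-domain images whose predictions look trustworthy enough
    to feed back into training as pseudo-labeled examples.

    scores: list of (image_id, discrepancy, similarity) tuples, where a lower
    discrepancy and a higher similarity suggest a more reliable prediction.
    """
    return [img for img, d, s in scores
            if d <= max_discrepancy and s >= min_similarity]
```

Selected images would then join the source domain for the next training round, while rejected ones wait for a later, better model.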
Affiliation(s)
- Xiaohui Li
- Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan, 250022, Shandong, China
- Sijie Niu
- Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan, 250022, Shandong, China
- Xizhan Gao
- Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan, 250022, Shandong, China
- Xueying Zhou
- Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan, 250022, Shandong, China
- Jiwen Dong
- Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan, 250022, Shandong, China
- Hui Zhao
- Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan, 250022, Shandong, China
14
An Automatic Image Processing Method Based on Artificial Intelligence for Locating the Key Boundary Points in the Central Serous Chorioretinopathy Lesion Area. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2023; 2023:1839387. [PMID: 36818580 PMCID: PMC9937763 DOI: 10.1155/2023/1839387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 10/08/2022] [Accepted: 01/25/2023] [Indexed: 02/12/2023]
Abstract
Accurately and rapidly measuring the diameter of the central serous chorioretinopathy (CSCR) lesion area is key to judging the severity of CSCR and evaluating the efficacy of the corresponding treatments. Currently, manual measurement based on a single or a small number of optical coherence tomography (OCT) B-scan images is unreliable. Although manually measuring the diameters on all OCT B-scan images of a single patient can alleviate this issue, it is inefficient. Additionally, manual operation is subject to the subjective judgment of ophthalmologists, resulting in unrepeatable measurements. Therefore, an automatic image processing method (a joint framework) based on artificial intelligence (AI) is proposed for locating the key boundary points of the CSCR lesion area to assist diameter measurement. First, an initial location module (ILM) benefiting from multitask learning is adjusted to achieve a preliminary location of the key boundary points. Second, the location task is formulated as a Markov decision process, aiming to further improve location accuracy with a single-agent reinforcement learning module (SARLM). Finally, a joint framework based on the ILM and SARLM is established, in which the ILM provides an initial starting point for the SARLM to narrow the agent's active region, and the SARLM compensates for the low generalization of the ILM by virtue of the agent's independent exploration ability.
Experiments reveal that the AI-based method, which joins the multitask learning and single-agent reinforcement learning paradigms, enables the agent to work in a local region (alleviating the time-consuming nature of the SARLM), performs the location task in a global scope, and improves the location accuracy of the ILM, reflecting its effectiveness and clinical value for rapidly and accurately measuring the diameter of CSCR lesions.
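Formulating point location as a Markov decision process means the agent repeatedly moves on the pixel grid and is rewarded for approaching the boundary point. A toy sketch of one environment step (the four grid actions and the Manhattan-distance reward are illustrative assumptions, not the paper's exact design):

```python
def mdp_step(pos, action, target):
    """One step of a toy boundary-point-location MDP on a pixel grid.

    Reward is the decrease in Manhattan distance to the target boundary point,
    so a move toward the target earns +1 and a move away earns -1.
    """
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves[action]
    new_pos = (pos[0] + dx, pos[1] + dy)
    dist = lambda p: abs(p[0] - target[0]) + abs(p[1] - target[1])
    return new_pos, dist(pos) - dist(new_pos)
```

In the paper's framework, the ILM would supply the starting `pos`, keeping the agent's exploration local.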
15
TSSK-Net: Weakly supervised biomarker localization and segmentation with image-level annotation in retinal OCT images. Comput Biol Med 2023; 153:106467. [PMID: 36584602 DOI: 10.1016/j.compbiomed.2022.106467] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 11/16/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
The localization and segmentation of biomarkers in OCT images are critical steps in retina-related disease diagnosis. Although fully supervised deep learning models can segment pathological regions, their performance relies on labor-intensive pixel-level annotations. Compared with dense pixel-level annotation, image-level annotation can reduce the burden of manual annotation. Existing methods for image-level annotation are usually based on class activation maps (CAMs). However, current methods still suffer from model collapse, training instability, and anatomical mismatch due to the considerable variation in retinal biomarkers' shape, texture, and size. This paper proposes a novel weakly supervised biomarker localization and segmentation method requiring only image-level annotations: a teacher-student network with joint self-supervised contrastive learning and knowledge distillation-based anomaly localization, named TSSK-Net. Specifically, we treat retinal biomarker regions as abnormal regions distinct from normal regions. First, we propose a novel pre-training strategy based on supervised contrastive learning that encourages the model to learn the anatomical structure of normal OCT images. Second, we design a fine-tuning module and propose a novel hybrid network structure. The network includes a supervised contrastive loss for feature learning and a cross-entropy loss for classification learning. To further improve performance, we propose an efficient strategy that combines these two losses to preserve the anatomical structure and enhance the encoded feature representation. Finally, we design a knowledge distillation-based anomaly segmentation method that is effectively combined with the previous model to alleviate the challenge of insufficient supervision. Experimental results on a local dataset and a public dataset demonstrate the effectiveness of the proposed method, which can effectively reduce the annotation burden of ophthalmologists in OCT images.
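Knowledge-distillation-based anomaly localization commonly scores each spatial position by the discrepancy between teacher and student features, on the premise that a student trained only on normal anatomy imitates the teacher well only on normal regions. A minimal sketch of that scoring step (the feature layout is an assumption, not TSSK-Net's exact design):

```python
def anomaly_scores(teacher_feats, student_feats):
    """Per-position squared L2 distance between teacher and student features.

    Positions where the student fails to match the teacher get high scores,
    flagging them as candidate biomarker (abnormal) regions.
    """
    return [sum((t - s) ** 2 for t, s in zip(tv, sv))
            for tv, sv in zip(teacher_feats, student_feats)]
```

Thresholding the resulting score map would yield a coarse biomarker segmentation without pixel-level labels.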
16
Philippi D, Rothaus K, Castelli M. A vision transformer architecture for the automated segmentation of retinal lesions in spectral domain optical coherence tomography images. Sci Rep 2023; 13:517. [PMID: 36627357 PMCID: PMC9832034 DOI: 10.1038/s41598-023-27616-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Accepted: 01/04/2023] [Indexed: 01/12/2023] Open
Abstract
Neovascular age-related macular degeneration (nAMD) is one of the major causes of irreversible blindness and is characterized by accumulations of different lesions inside the retina. AMD biomarkers enable experts to grade the AMD and could be used for therapy prognosis and individualized treatment decisions. In particular, intra-retinal fluid (IRF), sub-retinal fluid (SRF), and pigment epithelium detachment (PED) are prominent biomarkers for grading neovascular AMD. Spectral-domain optical coherence tomography (SD-OCT) revolutionized nAMD early diagnosis by providing cross-sectional images of the retina. Automatic segmentation and quantification of IRF, SRF, and PED in SD-OCT images can be extremely useful for clinical decision-making. Despite the excellent performance of convolutional neural network (CNN)-based methods, the task still presents some challenges due to relevant variations in the location, size, shape, and texture of the lesions. This work adopts a transformer-based method to automatically segment retinal lesions from SD-OCT images and evaluates its performance qualitatively and quantitatively against CNN-based methods. The method combines the efficient long-range feature extraction and aggregation capabilities of vision transformers with the data-efficient training of CNNs. The proposed method was tested on a private dataset containing 3842 2-dimensional SD-OCT retina images, manually labeled by experts of the Franziskus Eye-Center, Muenster. While one of the competitors achieves a better Dice score, the proposed method is significantly less computationally expensive. Thus, future research will focus on the proposed network's architecture to increase its segmentation performance while maintaining its computational efficiency.
Affiliation(s)
- Daniel Philippi
- NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, 1070-312 Lisbon, Portugal
- Kai Rothaus
- Department of Ophthalmology, St. Franziskus Hospital, 48145 Muenster, Germany
- Mauro Castelli
- NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, 1070-312 Lisbon, Portugal
- School of Economics and Business, University of Ljubljana, Ljubljana, Slovenia
17
Pavithra K, Kumar P, Geetha M, Bhandary SV. Computer aided diagnosis of diabetic macular edema in retinal fundus and OCT images: A review. Biocybern Biomed Eng 2023. [DOI: 10.1016/j.bbe.2022.12.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
18
He X, Zhong Z, Fang L, He M, Sebe N. Structure-Guided Cross-Attention Network for Cross-Domain OCT Fluid Segmentation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; PP:309-320. [PMID: 37015552 DOI: 10.1109/tip.2022.3228163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Accurate retinal fluid segmentation on optical coherence tomography (OCT) images plays an important role in diagnosing and treating various eye diseases. State-of-the-art deep models have shown promising performance on OCT image segmentation given pixel-wise annotated training data. However, a learned model will perform poorly on OCT images obtained from different devices (domains) due to the domain shift issue. This problem largely limits the real-world application of OCT image segmentation, since the types of devices usually differ across hospitals. In this paper, we study the task of cross-domain OCT fluid segmentation, where we are given a labeled dataset from the source device (domain) and an unlabeled dataset from the target device (domain). The goal is to learn a model that performs well on the target domain. To solve this problem, we propose a novel Structure-guided Cross-Attention Network (SCAN), which leverages the retinal layer structure to facilitate domain alignment. SCAN is inspired by the fact that the retinal layer structure is robust to domain shift and can reflect the regions that are important to fluid segmentation. In light of this, we build SCAN in a multi-task manner by jointly learning retinal structure prediction and fluid segmentation. To exploit the mutual benefit between layer structure and fluid segmentation, we further introduce a cross-attention module to measure the correlation between the layer-specific and fluid-specific features, encouraging the model to concentrate on highly relevant regions during domain alignment. Moreover, an adaptation difficulty map is computed from the retinal structure predictions of the different domains, which forces the model to focus on hard regions during structure-aware adversarial learning.
Extensive experiments on the three domains of the RETOUCH dataset demonstrate the effectiveness of the proposed method and show that our approach produces state-of-the-art performance on cross-domain OCT fluid segmentation.
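SCAN's cross-attention module correlates layer-specific and fluid-specific features. Stripped of all architectural detail, single-head scaled dot-product cross-attention over lists of feature vectors can be sketched as follows (the toy vectors in the usage below are illustrative, not the paper's features):

```python
import math

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention.

    Each query (e.g., a fluid-branch feature) attends over the keys
    (e.g., layer-branch features) and returns a weighted sum of the values.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(scores)                        # numerically stable softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A query aligned with the first key pulls the output toward the first value vector, which is the mechanism that lets one branch highlight regions relevant to the other.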
19
Wongchaisuwat P, Thamphithak R, Jitpukdee P, Wongchaisuwat N. Application of Deep Learning for Automated Detection of Polypoidal Choroidal Vasculopathy in Spectral Domain Optical Coherence Tomography. Transl Vis Sci Technol 2022; 11:16. [PMID: 36219163 PMCID: PMC9580222 DOI: 10.1167/tvst.11.10.16] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Accepted: 08/29/2022] [Indexed: 11/25/2022] Open
Abstract
Objective To develop an automated polypoidal choroidal vasculopathy (PCV) screening model to distinguish PCV from wet age-related macular degeneration (wet AMD). Methods A retrospective review of spectral domain optical coherence tomography (SD-OCT) images was undertaken. The included SD-OCT images were classified into two distinct categories (PCV or wet AMD) prior to the development of the PCV screening model. The automated detection of PCV using the developed model was compared with the results of gold-standard fundus fluorescein angiography and indocyanine green (FFA + ICG) angiography. A framework of SHapley Additive exPlanations was used to interpret the results from the model. Results A total of 2334 SD-OCT images were enrolled for training purposes, and an additional 1171 SD-OCT images were used for external validation. The ResNet attention model yielded superior performance, with average area under the curve values of 0.8 and 0.81 for the training and external validation data sets, respectively. The sensitivity/specificity calculated at a patient level was 100%/60% and 85%/71% for the training and external validation data sets, respectively. Conclusions A conventional FFA + ICG investigation to differentiate PCV from wet AMD requires intense health care resources and adversely affects patients. A deep learning algorithm is proposed to automatically distinguish PCV from wet AMD. The developed algorithm exhibited promising performance for further development into an alternative PCV screening tool. Enhancement of the model's performance with additional data is needed prior to implementation of this diagnostic tool in real-world clinical practice. The invisibility of disease signs within SD-OCT images is the main limitation of the proposed model. Translational Relevance Deep learning algorithms were applied to differentiate PCV from wet AMD based on OCT images, benefiting the diagnostic process and minimizing the risks of ICG angiography.
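The patient-level sensitivity and specificity figures above reduce to confusion-matrix counts over binary labels. A minimal sketch, assuming label 1 means PCV and 0 means wet AMD:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (recall on positives) and specificity (recall on negatives)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

For example, catching every PCV patient while misclassifying one of three wet-AMD patients gives 100% sensitivity and 67% specificity, the kind of trade-off reported for the training set.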
Affiliation(s)
- Papis Wongchaisuwat
- Department of Industrial Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand
- Ranida Thamphithak
- Department of Ophthalmology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Peerakarn Jitpukdee
- Department of Industrial Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand
- Nida Wongchaisuwat
- Department of Ophthalmology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
20

21
Valmaggia P, Friedli P, Hörmann B, Kaiser P, Scholl HPN, Cattin PC, Sandkühler R, Maloca PM. Feasibility of Automated Segmentation of Pigmented Choroidal Lesions in OCT Data With Deep Learning. Transl Vis Sci Technol 2022; 11:25. [PMID: 36156729 PMCID: PMC9526362 DOI: 10.1167/tvst.11.9.25] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose To evaluate the feasibility of automated segmentation of pigmented choroidal lesions (PCLs) in optical coherence tomography (OCT) data and compare the performance of different deep neural networks. Methods Swept-source OCT image volumes were annotated pixel-wise for PCLs and background. Three deep neural network architectures were applied to the data: the multi-dimensional gated recurrent units (MD-GRU), the V-Net, and the nnU-Net. The nnU-Net was used to compare the performance of two-dimensional (2D) versus three-dimensional (3D) predictions. Results A total of 121 OCT volumes were analyzed (100 normal and 21 PCLs). Automated PCL segmentations were successful with all neural networks. The 3D nnU-Net predictions showed the highest recall with a mean of 0.77 ± 0.22 (MD-GRU, 0.60 ± 0.31; V-Net, 0.61 ± 0.25). The 3D nnU-Net predicted PCLs with a Dice coefficient of 0.78 ± 0.13, outperforming MD-GRU (0.62 ± 0.23) and V-Net (0.59 ± 0.24). The smallest distance to the manual annotation was found using 3D nnU-Net with a mean maximum Hausdorff distance of 315 ± 172 µm (MD-GRU, 1542 ± 1169 µm; V-Net, 2408 ± 1060 µm). The 3D nnU-Net showed a superior performance compared with stacked 2D predictions. Conclusions The feasibility of automated deep learning segmentation of PCLs was demonstrated in OCT data. The neural network architecture had a relevant impact on PCL predictions. Translational Relevance This work serves as proof of concept for segmentations of choroidal pathologies in volumetric OCT data; improvements are conceivable to meet clinical demands for the diagnosis, monitoring, and treatment evaluation of PCLs.
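The maximum Hausdorff distance used above to compare predicted and manually annotated PCL boundaries has a direct definition over two point sets. A minimal brute-force sketch (fine for illustration, far too slow for dense 3D OCT masks):

```python
import math

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two non-empty point sets.

    For each point in one set, find its nearest neighbor in the other set;
    the Hausdorff distance is the worst such nearest-neighbor distance,
    taken over both directions.
    """
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))
```

A large value means some part of one contour lies far from everything in the other, which is why it complements overlap measures such as the Dice coefficient.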
Affiliation(s)
- Philippe Valmaggia
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland; Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland; Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Hendrik P N Scholl
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland; Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Philippe C Cattin
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Robin Sandkühler
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Peter M Maloca
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland; Department of Ophthalmology, University Hospital Basel, Basel, Switzerland; Moorfields Eye Hospital NHS Foundation Trust, London, EC1V 2PD, UK
22
Ma F, Dai C, Meng J, Li Y, Zhao J, Zhang Y, Wang S, Zhang X, Cheng R. Classification-based framework for binarization on mice eye image in vivo with optical coherence tomography. JOURNAL OF BIOPHOTONICS 2022; 15:e202100336. [PMID: 35305080 DOI: 10.1002/jbio.202100336] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Revised: 02/27/2022] [Accepted: 03/16/2022] [Indexed: 06/14/2023]
Abstract
Optical coherence tomography (OCT) angiography has drawn much attention in the medical imaging field. Binarization plays an important role in the quantitative analysis of the eye with OCT. To address the problem of few training samples and contrast-limited scenes, we propose a new binarization framework with a specific-patch SVM (SPSVM) for low-intensity OCT images, which is an open, classification-based framework. The framework contains two phases: model training and binarization thresholding. In the training phase, patches of target and background are first extracted from the few training samples as the ROI and the background, respectively. Then, PCA is conducted on all patches to reduce the dimensionality and learn the eigenvector subspace. Finally, a classification model is trained on the patch features to obtain the target value of different patches. In the testing phase, the learned eigenvector subspace is applied to the pixels of each patch, and the binarization threshold of the patch is obtained with the learned SVM model. We acquired a new OCT mice eye (OCT-ME) database, which is publicly available at https://mip2019.github.io/spsvm. Extensive experiments demonstrate the effectiveness of the proposed SPSVM framework.
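The framework operates patch-wise, so its first step is tiling each B-scan into patches. A minimal sketch of that step (the non-overlapping tiling and the list-of-rows image layout are assumptions for illustration; the PCA and SVM stages are omitted):

```python
def extract_patches(image, size):
    """Tile a 2D image (list of rows) into non-overlapping size x size patches,
    scanning left-to-right, top-to-bottom. Edge remainders are dropped."""
    rows, cols = len(image), len(image[0])
    return [[row[c:c + size] for row in image[r:r + size]]
            for r in range(0, rows - size + 1, size)
            for c in range(0, cols - size + 1, size)]
```

Each returned patch would then be projected onto the learned eigenvector subspace and classified to pick its binarization threshold.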
Affiliation(s)
- Fei Ma
- School of Computer Science, Qufu Normal University, Shandong, China
- Cuixia Dai
- College Science, Shanghai Institute of Technology, Shanghai, China
- Jing Meng
- School of Computer Science, Qufu Normal University, Shandong, China
- Ying Li
- School of Computer Science, Qufu Normal University, Shandong, China
- Jingxiu Zhao
- School of Computer Science, Qufu Normal University, Shandong, China
- Yuanke Zhang
- School of Computer Science, Qufu Normal University, Shandong, China
- Shengbo Wang
- School of Computer Science, Qufu Normal University, Shandong, China
- Xueting Zhang
- School of Computer Science, Qufu Normal University, Shandong, China
- Ronghua Cheng
- School of Computer Science, Qufu Normal University, Shandong, China
Collapse
|
23
|
Tang W, Ye Y, Chen X, Shi F, Xiang D, Chen Z, Zhu W. Multi-class retinal fluid joint segmentation based on cascaded convolutional neural networks. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Accepted: 05/25/2022] [Indexed: 11/12/2022]
Abstract
Objective. Retinal fluid mainly includes intra-retinal fluid (IRF), sub-retinal fluid (SRF) and pigment epithelial detachment (PED), whose accurate segmentation in optical coherence tomography (OCT) images is of great importance to the diagnosis and treatment of the related fundus diseases. Approach. In this paper, a novel two-stage multi-class retinal fluid joint segmentation framework based on cascaded convolutional neural networks is proposed. In the pre-segmentation stage, a U-shape encoder–decoder network is adopted to acquire the retinal mask and generate a retinal relative distance map, which provides spatial prior information for the subsequent fluid segmentation. In the fluid segmentation stage, an improved context attention and fusion network based on a context shrinkage encode module and a multi-scale and multi-category semantic supervision module (named ICAF-Net) is proposed to jointly segment IRF, SRF and PED. Main results. The proposed segmentation framework was evaluated on the RETOUCH challenge dataset. The average Dice similarity coefficient, intersection over union and accuracy (Acc) reach 76.39%, 64.03% and 99.32%, respectively. Significance. The proposed framework achieves good performance in the joint segmentation of multi-class fluid in retinal OCT images and outperforms some state-of-the-art segmentation networks.
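The Dice, intersection-over-union, and accuracy figures above all derive from the same per-pixel confusion counts. A minimal sketch for a single binary class over flattened masks (the multi-class results in the paper average such per-class scores):

```python
def segmentation_metrics(pred, target):
    """Dice, IoU, and pixel accuracy for flattened binary masks."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 0)
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return dice, iou, acc
```

Note how accuracy can be near 99% while Dice and IoU are much lower: fluid occupies few pixels, so the dominant true-negative background inflates accuracy.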
24
Xing G, Chen L, Wang H, Zhang J, Sun D, Xu F, Lei J, Xu X. Multi-Scale Pathological Fluid Segmentation in OCT With a Novel Curvature Loss in Convolutional Neural Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1547-1559. [PMID: 35015634 DOI: 10.1109/tmi.2022.3142048] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The segmentation of pathological fluid lesions in optical coherence tomography (OCT), including intraretinal fluid, subretinal fluid, and pigment epithelial detachment, is of great importance for the diagnosis and treatment of various eye diseases such as neovascular age-related macular degeneration and diabetic macular edema. Although significant progress has been achieved with the rapid development of fully convolutional networks (FCNs) in recent years, some important issues remain unsolved. First, pathological fluid lesions in OCT show large variations in location, size, and shape, imposing challenges on the design of the FCN architecture. Second, fluid lesions should be continuous regions without holes inside, but current architectures lack the capability to preserve this shape prior. In this study, we introduce an FCN architecture for the simultaneous segmentation of three types of pathological fluid lesions in OCT. First, attention gate and spatial pyramid pooling modules are employed to improve the ability of the network to extract multi-scale objects. Then, we introduce a novel curvature regularization term in the loss function to incorporate shape prior information. The proposed method was extensively evaluated on public and clinical datasets, with significantly improved performance compared with the state-of-the-art methods.
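A curvature term penalizes ragged, hole-ridden lesion boundaries in favor of smooth, connected regions. As a rough illustration only (a discrete turning-angle proxy along a polyline boundary, not the paper's actual curvature loss formulation):

```python
import math

def turning_angle_penalty(points):
    """Sum of squared turning angles along a polyline boundary.

    Zero for a straight boundary, large for a jagged one: a crude
    discrete proxy for integrated squared curvature."""
    total = 0.0
    for i in range(1, len(points) - 1):
        ax, ay = points[i][0] - points[i - 1][0], points[i][1] - points[i - 1][1]
        bx, by = points[i + 1][0] - points[i][0], points[i + 1][1] - points[i][1]
        # signed angle between consecutive segments
        angle = math.atan2(ax * by - ay * bx, ax * bx + ay * by)
        total += angle * angle
    return total
```

Added to a segmentation loss with a small weight, a penalty of this flavor pushes predictions toward smooth, simply connected fluid regions.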
25.
López-Varela E, Vidal PL, Pascual NO, Novo J, Ortega M. Fully-Automatic 3D Intuitive Visualization of Age-Related Macular Degeneration Fluid Accumulations in OCT Cubes. J Digit Imaging 2022; 35:1271-1282. [PMID: 35513586 PMCID: PMC9582110 DOI: 10.1007/s10278-022-00643-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Revised: 04/06/2022] [Accepted: 04/13/2022] [Indexed: 11/16/2022] Open
Abstract
Age-related macular degeneration (AMD) is the leading cause of vision loss in developed countries, and wet-type AMD requires rapid diagnosis and urgent treatment because it causes rapid, irreversible vision loss. Currently, AMD diagnosis is mainly carried out using images obtained by optical coherence tomography. This diagnostic process is performed by human clinicians, so human error may occur in some cases; fully automatic methodologies are therefore highly desirable, adding a layer of robustness to the diagnosis. In this work, a novel computer-aided diagnosis and visualization methodology is proposed for the rapid identification and visualization of wet AMD. We adapted a convolutional neural network trained for segmentation in a similar medical imaging domain to the problem of wet AMD segmentation, taking advantage of transfer learning, which allows us to work with a reduced number of samples. We generate a 3D visualization in which the existence, position and severity of the fluid are represented in a clear and intuitive way to facilitate analysis by clinicians. The 3D visualization is robust and accurate, obtaining satisfactory Dice coefficients of 0.949 and 0.960 in the different evaluated OCT cube configurations, allowing clinicians to quickly assess the presence and extension of the fluid associated with wet AMD.
Affiliation(s)
- Emilio López-Varela
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
- Plácido L. Vidal
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
- Nuria Olivier Pascual
- Servizo de Oftalmoloxía, Complexo Hospitalario Universitario de Ferrol, CHUF, Av. da Residencia, S/N, Ferrol, 15405 Spain
- Jorge Novo
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
- Marcos Ortega
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain

26.
Recent Advanced Deep Learning Architectures for Retinal Fluid Segmentation on Optical Coherence Tomography Images. SENSORS 2022; 22:s22083055. [PMID: 35459040 PMCID: PMC9029682 DOI: 10.3390/s22083055] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Revised: 04/10/2022] [Accepted: 04/13/2022] [Indexed: 11/16/2022]
Abstract
With its non-invasive and high-resolution properties, optical coherence tomography (OCT) has been widely used as a retinal imaging modality for the effective diagnosis of ophthalmic diseases. Retinal fluid is often segmented by medical experts as a pivotal biomarker to assist in the clinical diagnosis of age-related macular diseases, diabetic macular edema, and retinal vein occlusion. In recent years, advanced machine learning methods, such as deep learning paradigms, have attracted increasing attention for retinal fluid segmentation. Automatic retinal fluid segmentation based on deep learning can improve the accuracy and efficiency of macular change analysis, with potential clinical implications for ophthalmic pathology detection. This article summarizes several deep learning paradigms reported in the recent literature for retinal fluid segmentation in OCT images. The architectures covered include convolutional neural network (CNN) backbones, the fully convolutional network (FCN), the U-shape network (U-Net), and other hybrid computational methods. The article also surveys the prevailing OCT image datasets used in recent retinal segmentation investigations. Future perspectives and some potential retinal segmentation directions are discussed in the conclusion.
27.
He X, Fang L, Tan M, Chen X. Intra- and Inter-Slice Contrastive Learning for Point Supervised OCT Fluid Segmentation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:1870-1881. [PMID: 35139015 DOI: 10.1109/tip.2022.3148814] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
OCT fluid segmentation is a crucial task for diagnosis and therapy in ophthalmology. Current convolutional neural networks (CNNs) supervised by pixel-wise annotated masks achieve great success in OCT fluid segmentation. However, obtaining pixel-wise masks for OCT images is time-consuming, expensive, and requires expertise. This paper proposes an Intra- and inter-Slice Contrastive Learning Network (ISCLNet) for OCT fluid segmentation with only point supervision. ISCLNet learns visual representations through contrastive tasks that exploit the inherent similarity or dissimilarity in unlabeled OCT data. Specifically, we propose an intra-slice contrastive learning strategy to leverage the fluid-background similarity and the retinal layer-background dissimilarity. Moreover, we construct an inter-slice contrastive learning architecture to learn the similarity of adjacent OCT slices from one OCT volume. Finally, an end-to-end model combining the intra- and inter-slice contrastive learning processes learns to segment fluid under point supervision. Experimental results on two public OCT fluid segmentation datasets (AI Challenger and RETOUCH) demonstrate that ISCLNet bridges the gap between fully supervised and weakly supervised OCT fluid segmentation and outperforms other well-known point-supervised segmentation methods.
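Contrastive objectives of this kind are commonly implemented as an InfoNCE-style loss. The following is a generic sketch (not the ISCLNet loss itself) that pulls an anchor embedding toward a positive and away from negatives:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE loss for one anchor: -log of the softmax weight
    assigned to the positive among positive + negatives (cosine similarity)."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

anchor = np.array([1.0, 0.0])
good = info_nce(anchor, np.array([0.9, 0.1]), [np.array([0.0, 1.0])])
bad = info_nce(anchor, np.array([0.0, 1.0]), [np.array([0.9, 0.1])])
# an aligned positive yields a much smaller loss than a misaligned one
```

In the intra-slice case described above, positives and negatives would come from fluid versus background regions of the same B-scan; in the inter-slice case, from adjacent slices of one volume.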
28.
Stankiewicz A, Marciniak T, Dabrowski A, Stopa M, Marciniak E, Obara B. Segmentation of Preretinal Space in Optical Coherence Tomography Images Using Deep Neural Networks. SENSORS 2021; 21:s21227521. [PMID: 34833597 PMCID: PMC8623441 DOI: 10.3390/s21227521] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Revised: 11/08/2021] [Accepted: 11/09/2021] [Indexed: 02/01/2023]
Abstract
This paper proposes an efficient segmentation of the preretinal space between the inner limiting membrane (ILM) and the posterior cortical vitreous (PCV) of the human eye in images obtained with optical coherence tomography (OCT). The research was carried out using a database of three-dimensional OCT scans acquired with the Optovue RTVue XR Avanti device. Various neural networks (UNet, Attention UNet, ReLayNet, LFUNet) were tested for semantic segmentation; their effectiveness was assessed using the Dice coefficient and compared to graph-theory techniques. Improvement in segmentation efficiency was achieved through the use of relative distance maps. We also show that selecting a larger kernel size for convolutional layers can improve segmentation quality, depending on the neural network model. For the PCV, we obtain an effectiveness of up to 96.35%. The proposed solution can be widely used to diagnose vitreomacular traction changes, which is not yet available in scientific or commercial OCT imaging solutions.
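Relative distance maps of the kind mentioned in this entry (and in the ICAF-Net and LF-UNet entries) are typically built by normalizing each pixel's row position between two segmented boundaries. A minimal sketch, with a hypothetical binary retina mask rather than any paper's implementation:

```python
import numpy as np

def relative_distance_map(mask):
    """Per column, map rows between the first and last True pixel to [0, 1];
    rows above/below the mask clip to 0/1, empty columns stay 0."""
    h, w = mask.shape
    out = np.zeros((h, w), dtype=float)
    rows = np.arange(h, dtype=float)
    for c in range(w):
        idx = np.flatnonzero(mask[:, c])
        if idx.size == 0:
            continue                        # no retina in this A-scan
        top, bot = idx[0], idx[-1]
        span = max(bot - top, 1)
        out[:, c] = np.clip((rows - top) / span, 0.0, 1.0)
    return out

mask = np.zeros((8, 2), dtype=bool)
mask[2:7, 0] = True                         # column 0: retina spans rows 2..6
rdm = relative_distance_map(mask)
```

Fed as an extra input channel, such a map gives the network an explicit spatial prior on where a pixel sits within the retina, which both entries report as improving segmentation.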
Affiliation(s)
- Agnieszka Stankiewicz
- Division of Electronic Systems and Signal Processing, Institute of Automatic Control and Robotics, Poznan University of Technology, 60-965 Poznan, Poland; (A.S.); (A.D.)
- Tomasz Marciniak
- Division of Electronic Systems and Signal Processing, Institute of Automatic Control and Robotics, Poznan University of Technology, 60-965 Poznan, Poland; (A.S.); (A.D.)
- Correspondence:
- Adam Dabrowski
- Division of Electronic Systems and Signal Processing, Institute of Automatic Control and Robotics, Poznan University of Technology, 60-965 Poznan, Poland; (A.S.); (A.D.)
- Marcin Stopa
- Department of Ophthalmology, Chair of Ophthalmology and Optometry, Heliodor Swiecicki University Hospital, Poznan University of Medical Sciences, 60-780 Poznan, Poland; (M.S.); (E.M.)
- Elzbieta Marciniak
- Department of Ophthalmology, Chair of Ophthalmology and Optometry, Heliodor Swiecicki University Hospital, Poznan University of Medical Sciences, 60-780 Poznan, Poland; (M.S.); (E.M.)
- Boguslaw Obara
- School of Computing, Newcastle University, Newcastle upon Tyne NE4 5TG, UK
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, UK

29.
Ma D, Lu D, Chen S, Heisler M, Dabiri S, Lee S, Lee H, Ding GW, Sarunic MV, Beg MF. LF-UNet - A novel anatomical-aware dual-branch cascaded deep neural network for segmentation of retinal layers and fluid from optical coherence tomography images. Comput Med Imaging Graph 2021; 94:101988. [PMID: 34717264 DOI: 10.1016/j.compmedimag.2021.101988] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 08/31/2021] [Accepted: 09/11/2021] [Indexed: 11/17/2022]
Abstract
Computer-assisted diagnosis of retinal disease relies heavily on the accurate detection of retinal boundaries and other pathological features such as fluid accumulation. Optical coherence tomography (OCT) is a non-invasive ophthalmological imaging technique that has become a standard modality in the field due to its ability to detect cross-sectional retinal pathologies at the micrometer level. In this work, we present a novel framework for simultaneous retinal layer and fluid segmentation. A dual-branch deep neural network, termed LF-UNet, is proposed, which combines the expansion path of the U-Net and the original fully convolutional network with a dilated network. In addition, we introduce a cascaded network framework to include the anatomical awareness embedded in the volumetric image. Cross-validation experiments showed that the proposed LF-UNet has superior performance compared to state-of-the-art methods, and that incorporating the relative positional map as structural prior information could further improve performance regardless of the network. The generalizability of the proposed network was demonstrated on independent datasets acquired from the same type of device with a different field of view, or from a different device.
Affiliation(s)
- Da Ma
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Donghuan Lu
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada; Tencent Jarvis Lab, Shenzhen, China
- Shuo Chen
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Morgan Heisler
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Setareh Dabiri
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Sieun Lee
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Hyunwoo Lee
- Division of Neurology, Department of Medicine, University of British Columbia, Canada
- Gavin Weiguang Ding
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Marinko V Sarunic
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Mirza Faisal Beg
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada

30.
Prediction of postoperative visual acuity after vitrectomy for macular hole using deep learning-based artificial intelligence. Graefes Arch Clin Exp Ophthalmol 2021; 260:1113-1123. [PMID: 34636995 DOI: 10.1007/s00417-021-05427-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 08/20/2021] [Accepted: 09/19/2021] [Indexed: 10/20/2022] Open
Abstract
PURPOSE To create a model for predicting postoperative visual acuity (VA) after vitrectomy for macular hole (MH) treatment from preoperative optical coherence tomography (OCT) images, using deep learning (DL)-based artificial intelligence. METHODS This was a retrospective single-center study. We evaluated 259 eyes that underwent vitrectomy for MHs. We divided the eyes into four groups based on their 6-month postoperative Snellen VA values: (A) ≥ 20/20; (B) 20/25-20/32; (C) 20/32-20/63; and (D) ≤ 20/100. Training data were randomly selected, comprising 20 eyes in each group. Test data were also randomly selected, comprising 52 eyes in the same proportions as each group in the total database. Preoperative OCT images with corresponding postoperative VA values were used to train the original DL network. The final prediction of postoperative VA was obtained by regression analysis on the DL network output. For comparison, we created a model predicting postoperative VA from preoperative VA, MH size, and age using multivariate linear regression. Precision values were determined, and correlation coefficients between predicted and actual postoperative VA values were calculated for the two models. RESULTS The DL and multivariate models had precision values of 46% and 40%, respectively. The postoperative VA values predicted by DL and by preoperative VA and MH size were correlated with actual postoperative VA at 6 months (P < .0001 and P < .0001; r = .62 and r = .55, respectively). CONCLUSION Postoperative VA after MH treatment could be predicted via DL using preoperative OCT images with greater accuracy than multivariate linear regression using preoperative VA, MH size, and age.
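The multivariate linear regression baseline described above (postoperative VA from preoperative VA, MH size, and age) can be sketched with ordinary least squares. The predictor values and coefficients below are synthetic illustrations, not values from the study:

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with an intercept term."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# hypothetical predictors per eye: pre-op VA (logMAR), MH size (um), age (years)
X = np.array([[0.5, 300, 65],
              [0.8, 450, 70],
              [0.3, 250, 60],
              [1.0, 500, 72]])
# post-op VA generated from made-up coefficients, so the fit should recover them
y = 0.1 + 0.6 * X[:, 0] + 0.0004 * X[:, 1] - 0.001 * X[:, 2]
coef = fit_linear(X, y)  # [intercept, VA weight, size weight, age weight]
```

In practice the fitted model's predictions would then be correlated against observed 6-month VA, as the study does for both the DL and regression models.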
31.
Wu M, Chen W, Chen Q, Park H. Noise Reduction for SD-OCT Using a Structure-Preserving Domain Transfer Approach. IEEE J Biomed Health Inform 2021; 25:3460-3472. [PMID: 33822730 DOI: 10.1109/jbhi.2021.3071421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Spectral-domain optical coherence tomography (SD-OCT) images inevitably suffer from multiplicative speckle noise caused by random interference. This study proposes an unsupervised domain adaptation approach for noise reduction that translates SD-OCT images to corresponding high-quality enhanced depth imaging (EDI)-OCT images. We propose a structure-preserving cycle-consistent generative adversarial network for unpaired image-to-image translation, which can be applied to imbalanced unpaired data and can effectively preserve retinal details based on a structure-specific cross-domain description. It also imposes smoothness by penalizing the intensity variation of the low-reflectivity region between consecutive slices. Our approach was tested on a local dataset of 268 SD-OCT volumes and two public independent validation datasets comprising 20 SD-OCT volumes and 17 B-scans, respectively. Experimental results show that our method can effectively suppress noise and maintain the retinal structure, compared with other traditional approaches and deep learning methods, in terms of qualitative and quantitative assessments. Our proposed method shows good performance for speckle noise reduction and can assist downstream OCT analysis tasks.
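Cycle consistency, the core constraint enabling such unpaired translation, can be sketched generically (the translator functions F and G below are hypothetical stand-ins for the paper's networks):

```python
import numpy as np

def cycle_consistency_loss(x, F, G):
    """L1 cycle-consistency: mapping to the other domain (F) and back (G)
    should reconstruct the input, even without paired training data."""
    return float(np.mean(np.abs(G(F(x)) - x)))

x = np.linspace(0.0, 1.0, 10)
# an exact inverse pair reconstructs x perfectly, so the loss is zero
perfect = cycle_consistency_loss(x, lambda a: 2.0 * a, lambda a: a / 2.0)
# a lossy forward map (rounding) destroys information, so the loss is positive
lossy = cycle_consistency_loss(x, lambda a: np.round(a), lambda a: a)
```

In a structure-preserving variant like the one described here, this term would be combined with adversarial losses and additional structural penalties so that retinal anatomy survives the round trip.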
32.
Yang SD, Zhao YQ, Zhang F, Liao M, Yang Z, Wang YJ, Yu LL. An efficient two-step multi-organ registration on abdominal CT via deep-learning based segmentation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103027] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
33.
Hassan B, Qin S, Ahmed R, Hassan T, Taguri AH, Hashmi S, Werghi N. Deep learning based joint segmentation and characterization of multi-class retinal fluid lesions on OCT scans for clinical use in anti-VEGF therapy. Comput Biol Med 2021; 136:104727. [PMID: 34385089 DOI: 10.1016/j.compbiomed.2021.104727] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2021] [Revised: 07/31/2021] [Accepted: 08/01/2021] [Indexed: 11/19/2022]
Abstract
BACKGROUND In anti-vascular endothelial growth factor (anti-VEGF) therapy, an accurate estimation of multi-class retinal fluid (MRF) is required for the activity prescription and intravitreal dose. This study proposes an end-to-end deep learning-based retinal fluid segmentation network (RFS-Net) to segment and recognize three MRF lesion manifestations, namely intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED), from multi-vendor optical coherence tomography (OCT) imagery. The proposed image analysis tool will optimize anti-VEGF therapy and contribute to reducing inter- and intra-observer variability. METHOD The proposed RFS-Net architecture integrates atrous spatial pyramid pooling (ASPP), residual, and inception modules in the encoder path to learn better features and conserve more global information for precise segmentation and characterization of MRF lesions. The RFS-Net model is trained and validated using OCT scans from multiple vendors (Topcon, Cirrus, Spectralis), collected from three publicly available datasets. The first dataset, consisting of OCT volumes from 112 subjects (a total of 11,334 B-scans), is used for both training and evaluation. The remaining two datasets, containing a total of 1572 OCT B-scans from 1255 subjects, are used only for evaluation, to check the trained RFS-Net's generalizability on unseen OCT scans. The performance of the proposed RFS-Net model is assessed through various evaluation metrics. RESULTS The proposed RFS-Net model achieved mean F1 scores of 0.762, 0.796, and 0.805 for segmenting IRF, SRF, and PED, respectively. Moreover, with automated segmentation of the three retinal manifestations, RFS-Net brings a considerable gain in efficiency compared to the tedious and demanding manual segmentation of MRF.
CONCLUSIONS The proposed RFS-Net is a potential diagnostic tool for the automatic segmentation of MRF (IRF, SRF, and PED) lesions. It is expected to strengthen inter-observer agreement and to support standardization of dosimetry.
Affiliation(s)
- Bilal Hassan
- School of Automation Science and Electrical Engineering, Beihang University (BUAA), Beijing, 100191, China
- Shiyin Qin
- School of Automation Science and Electrical Engineering, Beihang University (BUAA), Beijing, 100191, China; School of Electrical Engineering and Intelligentization, Dongguan University of Technology, Dongguan, 523808, China
- Ramsha Ahmed
- School of Computer and Communication Engineering, University of Science and Technology Beijing (USTB), Beijing, 100083, China
- Taimur Hassan
- Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi, 127788, United Arab Emirates
- Abdel Hakeem Taguri
- Abu Dhabi Healthcare Company (SEHA), Abu Dhabi, 127788, United Arab Emirates
- Shahrukh Hashmi
- Abu Dhabi Healthcare Company (SEHA), Abu Dhabi, 127788, United Arab Emirates
- Naoufel Werghi
- Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi, 127788, United Arab Emirates

34.
Tian L, Hunt B, Bell MAL, Yi J, Smith JT, Ochoa M, Intes X, Durr NJ. Deep Learning in Biomedical Optics. Lasers Surg Med 2021; 53:748-775. [PMID: 34015146 PMCID: PMC8273152 DOI: 10.1002/lsm.23414] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 04/02/2021] [Accepted: 04/15/2021] [Indexed: 01/02/2023]
Abstract
This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized. Lasers Surg. Med. © 2021 Wiley Periodicals LLC.
Affiliation(s)
- L. Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- B. Hunt
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- M. A. L. Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- J. Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, USA
- J. T. Smith
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- M. Ochoa
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- X. Intes
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- N. J. Durr
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA

35.
García-Ordás MT, Alaiz-Moretón H, Benítez-Andrades JA, García-Rodríguez I, García-Olalla O, Benavides C. Sentiment analysis in non-fixed length audios using a Fully Convolutional Neural Network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102946] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
36.

37.
Wilson M, Chopra R, Wilson MZ, Cooper C, MacWilliams P, Liu Y, Wulczyn E, Florea D, Hughes CO, Karthikesalingam A, Khalid H, Vermeirsch S, Nicholson L, Keane PA, Balaskas K, Kelly CJ. Validation and Clinical Applicability of Whole-Volume Automated Segmentation of Optical Coherence Tomography in Retinal Disease Using Deep Learning. JAMA Ophthalmol 2021; 139:964-973. [PMID: 34236406 PMCID: PMC8444027 DOI: 10.1001/jamaophthalmol.2021.2273] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Question Is deep learning–based segmentation of macular disease in optical coherence tomography (OCT) suitable for clinical use? Findings In this diagnostic study of OCT data from 173 patients with age-related macular degeneration or diabetic macular edema, model segmentations qualitatively ranked better or comparable for clinical applicability to 1 or more expert grader segmentations in 127 scans (73%) by a panel of 3 retinal specialists. Scans with high quantitative accuracy scores were not reliably associated with higher rankings. Meaning These findings suggest that qualitative evaluation adds to quantitative approaches when assessing clinical applicability of segmentation tools and clinician satisfaction in practice. Importance Quantitative volumetric measures of retinal disease in optical coherence tomography (OCT) scans are infeasible to perform owing to the time required for manual grading. Expert-level deep learning systems for automatic OCT segmentation have recently been developed. However, the potential clinical applicability of these systems is largely unknown. Objective To evaluate a deep learning model for whole-volume segmentation of 4 clinically important pathological features and assess clinical applicability. Design, Setting, Participants This diagnostic study used OCT data from 173 patients with a total of 15 558 B-scans, treated at Moorfields Eye Hospital. The data set included 2 common OCT devices and 2 macular conditions: wet age-related macular degeneration (107 scans) and diabetic macular edema (66 scans), covering the full range of severity, and from 3 points during treatment. Two expert graders performed pixel-level segmentations of intraretinal fluid, subretinal fluid, subretinal hyperreflective material, and pigment epithelial detachment, including all B-scans in each OCT volume, taking as long as 50 hours per scan. Quantitative evaluation of whole-volume model segmentations was performed. 
Qualitative evaluation of clinical applicability by 3 retinal experts was also conducted. Data were collected from June 1, 2012, to January 31, 2017, for set 1 and from January 1 to December 31, 2017, for set 2; graded between November 2018 and January 2020; and analyzed from February 2020 to November 2020. Main Outcomes and Measures Rating and stack ranking for clinical applicability by retinal specialists, model-grader agreement for voxelwise segmentations, and total volume evaluated using Dice similarity coefficients, Bland-Altman plots, and intraclass correlation coefficients. Results Among the 173 patients included in the analysis (92 [53%] women), qualitative assessment found that automated whole-volume segmentation ranked better than or comparable to at least 1 expert grader in 127 scans (73%; 95% CI, 66%-79%). A neutral or positive rating was given to 135 model segmentations (78%; 95% CI, 71%-84%) and 309 expert gradings (2 per scan) (89%; 95% CI, 86%-92%). The model was rated neutrally or positively in 86% to 92% of diabetic macular edema scans and 53% to 87% of age-related macular degeneration scans. Intraclass correlations ranged from 0.33 (95% CI, 0.08-0.96) to 0.96 (95% CI, 0.90-0.99). Dice similarity coefficients ranged from 0.43 (95% CI, 0.29-0.66) to 0.78 (95% CI, 0.57-0.85). Conclusions and Relevance This deep learning–based segmentation tool provided clinically useful measures of retinal disease that would otherwise be infeasible to obtain. Qualitative evaluation was additionally important to reveal clinical applicability for both care management and research.
Affiliation(s)
- Reena Chopra
- Google Health, London, United Kingdom
- National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom
- University College London Institute of Ophthalmology, London, United Kingdom
- Yun Liu
- Google Health, Palo Alto, California
- Daniela Florea
- National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom
- University College London Institute of Ophthalmology, London, United Kingdom
- Hagar Khalid
- National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom
- University College London Institute of Ophthalmology, London, United Kingdom
- Sandra Vermeirsch
- National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom
- University College London Institute of Ophthalmology, London, United Kingdom
- Luke Nicholson
- National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom
- University College London Institute of Ophthalmology, London, United Kingdom
- Pearse A Keane
- National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom
- University College London Institute of Ophthalmology, London, United Kingdom
- Konstantinos Balaskas
- National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS (National Health Service) Foundation Trust, London, United Kingdom
- University College London Institute of Ophthalmology, London, United Kingdom

38.
Automated segmentation of macular edema for the diagnosis of ocular disease using deep learning method. Sci Rep 2021; 11:13392. [PMID: 34183684 PMCID: PMC8238965 DOI: 10.1038/s41598-021-92458-8] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2021] [Accepted: 06/08/2021] [Indexed: 11/21/2022] Open
Abstract
Macular edema is a major cause of visual loss and blindness in patients with ocular fundus diseases. Optical coherence tomography (OCT) is a non-invasive, high-resolution imaging technique that has been widely applied for diagnosing macular edema. However, practical applications remain challenging due to the distorted retinal morphology and blurred boundaries near macular edema. Herein, we developed a novel deep learning model for the segmentation of macular edema in OCT images based on the DeepLab framework (OCT-DeepLab). In this model, we used atrous spatial pyramid pooling (ASPP) to detect macular edema at multiple feature scales and a fully connected conditional random field (CRF) to refine the boundary of macular edema. OCT-DeepLab was compared against traditional hand-crafted methods (C-V and SBG) and end-to-end methods (FCN, PSPnet, and U-net) to estimate segmentation performance. OCT-DeepLab showed a clear advantage over both the hand-crafted and end-to-end methods, as reflected in higher precision, sensitivity, specificity, and F1-score. Its segmentation performance was comparable to that of manual labeling, with an average area under the curve (AUC) of 0.963, superior to the other end-to-end methods. Collectively, the OCT-DeepLab model is suitable for the segmentation of macular edema and can assist ophthalmologists in the management of ocular disease.
|
39
|
Mantel I, Mosinska A, Bergin C, Polito MS, Guidotti J, Apostolopoulos S, Ciller C, De Zanet S. Automated Quantification of Pathological Fluids in Neovascular Age-Related Macular Degeneration, and Its Repeatability Using Deep Learning. Transl Vis Sci Technol 2021; 10:17. [PMID: 34003996] [PMCID: PMC8083067] [DOI: 10.1167/tvst.10.4.17] [Indexed: 12/21/2022]
Abstract
Purpose To develop a reliable algorithm for the automated identification, localization, and volume measurement of exudative manifestations in neovascular age-related macular degeneration (nAMD), including intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelium detachment (PED), using a deep-learning approach. Methods One hundred seven spectral-domain optical coherence tomography (OCT) cube volumes were extracted from nAMD eyes. Manual annotation of IRF, SRF, and PED was performed. Ninety-two OCT volumes served as the training and validation set, and 15 OCT volumes from different patients served as the test set. The performance of our fluid segmentation method was quantified by means of pixel-wise metrics and volume correlations and compared with other methods. Repeatability was tested on 42 other eyes with five OCT volume scans acquired on the same day. Results The fully automated algorithm achieved good performance for the detection of IRF, SRF, and PED. The area under the curve for detection, sensitivity, and specificity was 0.97, 0.95, and 0.99, respectively. The correlation coefficients for the fluid volumes were 0.99, 0.99, and 0.91, respectively. The Dice scores were 0.73, 0.67, and 0.82, respectively; for the largest volume quartiles the Dice scores were >0.90. Including retinal layer segmentation contributed positively to the performance. The repeatability of volume prediction showed standard deviations of 4.0 nL, 3.5 nL, and 20.0 nL for IRF, SRF, and PED, respectively. Conclusions The deep-learning algorithm can simultaneously achieve a high level of performance for the identification and volume measurement of IRF, SRF, and PED in nAMD, providing accurate and repeatable predictions. Including layer segmentation during training and a squeeze-and-excitation block in the network architecture were shown to boost the performance.
Translational Relevance Potential applications include measurements of specific fluid compartments with high reproducibility, assistance in treatment decisions, and the diagnostic or scientific evaluation of relevant subgroups.
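The per-class Dice scores reported above measure voxel overlap between automated and manual segmentations; a minimal pure-Python sketch (toy voxel sets, not the study's implementation):

```python
def dice(a, b):
    """Dice coefficient between two binary voxel masks given as sets of
    voxel indices: 2*|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))

manual = {(0, 1), (0, 2), (1, 1), (1, 2)}   # annotated fluid voxels
auto   = {(0, 2), (1, 1), (1, 2), (2, 2)}   # predicted fluid voxels
print(dice(manual, auto))  # 0.75
```

The observation that Dice exceeds 0.90 for the largest volume quartiles is consistent with this definition: boundary disagreements weigh less as the shared interior grows.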
Affiliation(s)
- Irmela Mantel
- Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Ciara Bergin
- Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Maria Sole Polito
- Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Jacopo Guidotti
- Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
|
40
|
Terry L, Trikha S, Bhatia KK, Graham MS, Wood A. Evaluation of Automated Multiclass Fluid Segmentation in Optical Coherence Tomography Images Using the Pegasus Fluid Segmentation Algorithms. Transl Vis Sci Technol 2021; 10:27. [PMID: 34008019] [PMCID: PMC9354552] [DOI: 10.1167/tvst.10.1.27] [Indexed: 11/24/2022]
Abstract
Purpose To evaluate the performance of the Pegasus-OCT (Visulytix Ltd) multiclass automated fluid segmentation algorithms on independent spectral-domain optical coherence tomography (SD-OCT) data sets. Methods The Pegasus automated fluid segmentation algorithms were applied to three data sets with edematous pathology, comprising 750, 600, and 110 b-scans, respectively. Intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED) were automatically segmented by Pegasus-OCT for each b-scan where ground truth from the data set owners was available. Detection performance was assessed by calculating sensitivities and specificities, while Dice coefficients were used to assess agreement between the segmentation methods. Results For two data sets, IRF detection yielded promising sensitivities (0.98 and 0.94) and specificities (1.00 and 0.98) but less consistent agreement with the ground truth (Dice coefficients 0.81 and 0.59); likewise, SRF detection showed high sensitivity (0.86 and 0.98) and specificity (0.83 and 0.89) but less consistent agreement (Dice coefficients 0.59 and 0.78). PED detection on the first data set showed moderate agreement (0.66) with high sensitivity (0.97) and specificity (0.98). IRF detection in the third data set yielded less favorable agreement (0.46-0.57) and sensitivity (0.59-0.68), attributed to image quality and ground-truth grader discordance. Conclusions The Pegasus automated fluid segmentation algorithms were able to detect IRF, SRF, and PED in SD-OCT b-scans acquired across multiple independent data sets. The Dice coefficients, sensitivities, and specificities indicate the potential for application to automated detection and monitoring of retinal diseases such as age-related macular degeneration and diabetic macular edema. Translational Relevance The potential of Pegasus-OCT for automated fluid quantification and differentiation of IRF, SRF, and PED in OCT images has application to both clinical practice and research.
Affiliation(s)
- Louise Terry
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Sameer Trikha
- King's College Hospital NHS Foundation Trust, London, UK
- Mark S Graham
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Ashley Wood
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
|
41
|
Schmidt-Erfurth U, Reiter GS, Riedl S, Seeböck P, Vogl WD, Blodi BA, Domalpally A, Fawzi A, Jia Y, Sarraf D, Bogunović H. AI-based monitoring of retinal fluid in disease activity and under therapy. Prog Retin Eye Res 2021; 86:100972. [PMID: 34166808] [DOI: 10.1016/j.preteyeres.2021.100972] [Received: 02/22/2021] [Revised: 05/11/2021] [Accepted: 05/13/2021] [Indexed: 12/21/2022]
Abstract
Retinal fluid as the major biomarker in exudative macular disease is accurately visualized by high-resolution three-dimensional optical coherence tomography (OCT), which is used world-wide as a diagnostic gold standard largely replacing clinical examination. Artificial intelligence (AI) with its capability to objectively identify, localize and quantify fluid introduces fully automated tools into OCT imaging for personalized disease management. Deep learning performance has already proven superior to human experts, including physicians and certified readers, in terms of accuracy and speed. Reproducible measurement of retinal fluid relies on precise AI-based segmentation methods that assign a label to each OCT voxel denoting its fluid type such as intraretinal fluid (IRF) and subretinal fluid (SRF) or pigment epithelial detachment (PED) and its location within the central 1-, 3- and 6-mm macular area. Such reliable analysis is most relevant to reflect differences in pathophysiological mechanisms and impacts on retinal function, and the dynamics of fluid resolution during therapy with different regimens and substances. Yet, an in-depth understanding of the mode of action of supervised and unsupervised learning, the functionality of a convolutional neural net (CNN) and various network architectures is needed. Greater insight regarding adequate methods for performance, validation assessment, and device- and scanning-pattern-dependent variations is necessary to empower ophthalmologists to become qualified AI users. Fluid/function correlation can lead to a better definition of valid fluid variables relevant for optimal outcomes on an individual and a population level. AI-based fluid analysis opens the way for precision medicine in real-world practice of the leading retinal diseases of modern times.
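Voxel-wise fluid labels of the kind described here become clinically meaningful volume measurements once multiplied by the scanner's voxel dimensions; a minimal sketch under assumed, illustrative scan parameters (the spacing values below are not from the article and vary by device and scan pattern):

```python
def fluid_volume_nl(voxel_count, dx_um, dy_um, dz_um):
    """Convert a count of fluid-labelled voxels into nanolitres.
    One nanolitre equals 1e6 cubic micrometres (1 nL = 10^-3 mm^3)."""
    return voxel_count * dx_um * dy_um * dz_um / 1e6

# illustrative spacing: lateral x, B-scan spacing y, axial z, all in um
print(fluid_volume_nl(10_000, 12, 120, 4))  # 57.6 (nL)
```

Restricting the count to voxels inside the central 1-, 3-, or 6-mm macular zones, as the review describes, is then just a spatial filter applied before counting.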
Affiliation(s)
- Ursula Schmidt-Erfurth
- Department of Ophthalmology Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
- Gregor S Reiter
- Department of Ophthalmology Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
- Sophie Riedl
- Department of Ophthalmology Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
- Philipp Seeböck
- Department of Ophthalmology Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
- Wolf-Dieter Vogl
- Department of Ophthalmology Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
- Barbara A Blodi
- Fundus Photograph Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA.
- Amitha Domalpally
- Fundus Photograph Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA.
- Amani Fawzi
- Feinberg School of Medicine, Northwestern University, Chicago, IL, USA.
- Yali Jia
- Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA.
- David Sarraf
- Stein Eye Institute, University of California Los Angeles, Los Angeles, CA, USA.
- Hrvoje Bogunović
- Department of Ophthalmology Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria.
|
42
|
Liu Y, Han L, Wang H, Yin B. Classification of papillary thyroid carcinoma histological images based on deep learning. Journal of Intelligent & Fuzzy Systems 2021. [DOI: 10.3233/jifs-210100] [Indexed: 11/15/2022]
Abstract
Papillary thyroid carcinoma (PTC) is a common thyroid carcinoma. Many benign thyroid nodules have a papillary structure that is easily confused with PTC morphologically, so pathologists must spend considerable time on the differential diagnosis of PTC in addition to relying on personal diagnostic experience, which is undoubtedly subjective and makes consistency among observers difficult to achieve. To address this issue, we applied deep learning to the differential diagnosis of PTC and proposed a histological image classification method for PTC based on an Inception-Residual convolutional neural network (IRCNN) and a support vector machine (SVM). First, to expand the dataset and solve the problem of histological image color inconsistency, a pre-processing module was constructed that included color transfer and mirror transforms. Then, to alleviate overfitting of the deep learning model, we optimized the convolutional neural network by combining an Inception network and a residual network to extract image features. Finally, the SVM was trained on the image features extracted by the IRCNN to perform the classification task. Experimental results show the effectiveness of the proposed method in the classification of PTC histological images.
Affiliation(s)
- Yaning Liu
- College of Information Science and Engineering, Ocean University of China, Qingdao, China
- Lin Han
- School of Information and Control Engineering, Qingdao University of Technology, Qingdao, China
- Hexiang Wang
- Department of Pathology, Qingdao Hospital of Traditional Chinese Medicine, Qingdao, China
- Bo Yin
- College of Information Science and Engineering, Ocean University of China, Qingdao, China
|
43
|
Sappa LB, Okuwobi IP, Li M, Zhang Y, Xie S, Yuan S, Chen Q. RetFluidNet: Retinal Fluid Segmentation for SD-OCT Images Using Convolutional Neural Network. J Digit Imaging 2021; 34:691-704. [PMID: 34080105] [PMCID: PMC8329142] [DOI: 10.1007/s10278-021-00459-w] [Received: 05/08/2020] [Revised: 12/03/2020] [Accepted: 04/29/2021] [Indexed: 11/25/2022]
Abstract
Age-related macular degeneration (AMD) is one of the leading causes of irreversible blindness and is characterized by fluid-related accumulations such as intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED). Spectral-domain optical coherence tomography (SD-OCT) is the primary modality used to diagnose AMD, yet it lacks algorithms that directly detect and quantify the fluid. This work presents an improved convolutional neural network (CNN)-based architecture called RetFluidNet to segment three types of fluid abnormalities from SD-OCT images. The model combines different skip-connect operations with atrous spatial pyramid pooling (ASPP) to integrate multi-scale contextual information and thereby achieve the best performance. This work also investigates which hyperparameters and skip-connect techniques are consequential for fluid segmentation from SD-OCT images and which are comparatively inconsequential, indicating a starting point for future related research. RetFluidNet was trained and tested on SD-OCT images from 124 patients and achieved accuracies of 80.05%, 92.74%, and 95.53% for IRF, PED, and SRF, respectively. RetFluidNet showed significant improvement over competing methods, with accuracy and time efficiency reasonable enough to be clinically applicable. RetFluidNet is a fully automated method that can support early detection and follow-up of AMD.
Affiliation(s)
- Loza Bekalo Sappa
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Idowu Paul Okuwobi
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Mingchao Li
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Yuhan Zhang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Sha Xie
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Songtao Yuan
- Department of Ophthalmology, The First Affiliated Hospital With Nanjing Medical University, 300 Guangzhou Road, Nanjing, 210029, China
- Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China.
|
44
|
Yoo TK, Choi JY, Kim HK, Ryu IH, Kim JK. Adopting low-shot deep learning for the detection of conjunctival melanoma using ocular surface images. Computer Methods and Programs in Biomedicine 2021; 205:106086. [PMID: 33862570] [DOI: 10.1016/j.cmpb.2021.106086] [Received: 11/28/2020] [Accepted: 03/30/2021] [Indexed: 05/05/2023]
Abstract
BACKGROUND AND OBJECTIVE The purpose of the present study was to investigate low-shot deep learning models applied to conjunctival melanoma detection using a small dataset of ocular surface images. METHODS A dataset was composed of anonymized images of four classes: conjunctival melanoma (136), nevus or melanosis (93), pterygium (75), and normal conjunctiva (94). Before training the conventional deep learning models, two generative adversarial networks (GANs) were constructed to augment the training dataset for low-shot learning. The collected data were randomly divided into training (70%), validation (10%), and test (20%) datasets. Moreover, 3D melanoma phantoms were designed to build an external validation set captured using a smartphone. The GoogleNet, InceptionV3, NASNet, ResNet50, and MobileNetV2 architectures were trained through transfer learning and validated using the test and external validation datasets. RESULTS The deep learning models demonstrated a significant improvement in the classification accuracy of conjunctival lesions when using synthetic images generated by the GAN models. MobileNetV2 with GAN-based augmentation displayed the highest accuracy: 87.5% in the four-class classification and 97.2% in the binary classification for the detection of conjunctival melanoma. It showed an accuracy of 94.0% on 3D melanoma phantom images captured with a smartphone camera. CONCLUSIONS The present study describes a low-shot deep learning model that can detect conjunctival melanoma using ocular surface images. To the best of our knowledge, this study is the first to develop a deep learning model that detects conjunctival melanoma using a digital imaging device such as a smartphone camera.
Affiliation(s)
- Tae Keun Yoo
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, Republic of Korea.
- Joon Yul Choi
- Epilepsy Center, Neurological Institute, Cleveland Clinic, Cleveland, OH, USA.
- Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
- Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Jin Kuk Kim
- B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
|
45
|
Chen D, Zhang X, Mei Y, Liao F, Xu H, Li Z, Xiao Q, Guo W, Zhang H, Yan T, Xiong J, Ventikos Y. Multi-stage learning for segmentation of aortic dissections using a prior aortic anatomy simplification. Med Image Anal 2020; 69:101931. [PMID: 33618153] [DOI: 10.1016/j.media.2020.101931] [Received: 04/05/2020] [Revised: 11/20/2020] [Accepted: 11/27/2020] [Indexed: 12/30/2022]
Abstract
Aortic dissection (AD) is a life-threatening cardiovascular disease with a high mortality rate. Accurate and generalizable 3-D reconstruction of AD from CT angiography can effectively assist clinical procedures and surgical planning, but it is clinically unavailable owing to the lack of efficient tools. In this study, we present a novel multi-stage segmentation framework for type B AD that extracts the true lumen (TL), false lumen (FL), and all branches (BR) as different classes. Two cascaded neural networks were used to segment the aortic trunk and branches and to separate the dual lumen, respectively. An aortic straightening method was designed based on the prior vascular anatomy of AD, simplifying the curved aortic shape before the second network. The straightening-based method achieved mean Dice scores of 0.96, 0.95, and 0.89 for TL, FL, and BR on a multi-center dataset of 120 patients, outperforming end-to-end multi-class methods and multi-stage methods without straightening on the dual-lumen segmentation, even when different network architectures were used. Both the global volumetric features of the aorta and the local characteristics of the primary tear could be better identified and quantified based on the straightening. Compared with previous deep learning methods for AD segmentation, the proposed framework offers advantages in segmentation accuracy.
Affiliation(s)
- Duanduan Chen
- School of Life Science, Beijing Institute of Technology, Beijing, China.
- Xuyang Zhang
- School of Life Science, Beijing Institute of Technology, Beijing, China
- Yuqian Mei
- School of Life Science, Beijing Institute of Technology, Beijing, China
- Fangzhou Liao
- Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
- Huanming Xu
- School of Life Science, Beijing Institute of Technology, Beijing, China
- Zhenfeng Li
- School of Life Science, Beijing Institute of Technology, Beijing, China
- Qianjiang Xiao
- Shukun (Beijing) Network Technology Co.Ltd., Beijing, China
- Wei Guo
- Department of Vascular and Endovascular Surgery, Chinese PLA General Hospital, Beijing, China
- Hongkun Zhang
- Department of Vascular Surgery, First Affiliated Hospital of Medical College, Zhejiang University, Hangzhou, China
- Tianyi Yan
- School of Life Science, Beijing Institute of Technology, Beijing, China.
- Jiang Xiong
- Department of Vascular and Endovascular Surgery, Chinese PLA General Hospital, Beijing, China.
- Yiannis Ventikos
- Department of Mechanical Engineering, University College London, London, UK; School of Life Science, Beijing Institute of Technology, Beijing, China
|
46
|
Zhong P, Wang J, Guo Y, Fu X, Wang R. Multiclass retinal disease classification and lesion segmentation in OCT B-scan images using cascaded convolutional networks. Applied Optics 2020; 59:10312-10320. [PMID: 33361962] [DOI: 10.1364/ao.409414] [Received: 09/07/2020] [Accepted: 10/24/2020] [Indexed: 06/12/2023]
Abstract
Disease classification and lesion segmentation of retinal optical coherence tomography images play important roles in ophthalmic computer-aided diagnosis. However, existing methods achieve the two tasks separately, which is insufficient for clinical application and ignores the internal relation of disease and lesion features. In this paper, a framework of cascaded convolutional networks is proposed to jointly classify retinal diseases and segment lesions. First, we adopt an auxiliary binary classification network to identify normal and abnormal images. Then a novel, to the best of our knowledge, U-shaped multi-task network, BDA-Net, combined with a bidirectional decoder and self-attention mechanism, is used to further analyze abnormal images. Experimental results show that the proposed method reaches an accuracy of 0.9913 in classification and achieves an improvement of around 3% in Dice compared to the baseline U-shaped model in segmentation.
|
47
|
Devalla SK, Pham TH, Panda SK, Zhang L, Subramanian G, Swaminathan A, Yun CZ, Rajan M, Mohan S, Krishnadas R, Senthil V, De Leon JMS, Tun TA, Cheng CY, Schmetterer L, Perera S, Aung T, Thiéry AH, Girard MJA. Towards label-free 3D segmentation of optical coherence tomography images of the optic nerve head using deep learning. Biomedical Optics Express 2020; 11:6356-6378. [PMID: 33282495] [PMCID: PMC7687952] [DOI: 10.1364/boe.395934] [Received: 05/07/2020] [Revised: 08/17/2020] [Accepted: 08/19/2020] [Indexed: 05/06/2023]
Abstract
Recently proposed deep learning (DL) algorithms for the segmentation of optical coherence tomography (OCT) images to quantify the morphological changes to the optic nerve head (ONH) tissues during glaucoma have seen limited clinical adoption due to their device-specific nature and the difficulty of preparing manual segmentations (training data). We propose a DL-based 3D segmentation framework that is easily translatable across OCT devices in a label-free manner (i.e., without the need to manually re-segment data for each device). Specifically, we developed two sets of DL networks: the 'enhancer' (to enhance OCT image quality and harmonize image characteristics from three devices) and the 'ONH-Net' (for 3D segmentation of six ONH tissues). We found that only when the 'enhancer' was used to preprocess the OCT images could the 'ONH-Net' trained on any of the three devices successfully segment ONH tissues from the other two unseen devices with high performance (Dice coefficients > 0.92). We demonstrate that it is possible to automatically segment OCT images from new devices without ever needing manual segmentation data from them.
Affiliation(s)
- Sripad Krishna Devalla
- Ophthalmic Engineering & Innovation Laboratory, Department of Biomedical Engineering, Faculty of Engineering, National University of Singapore, Singapore
- Tan Hung Pham
- Ophthalmic Engineering & Innovation Laboratory, Department of Biomedical Engineering, Faculty of Engineering, National University of Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Satish Kumar Panda
- Ophthalmic Engineering & Innovation Laboratory, Department of Biomedical Engineering, Faculty of Engineering, National University of Singapore, Singapore
- Liang Zhang
- Ophthalmic Engineering & Innovation Laboratory, Department of Biomedical Engineering, Faculty of Engineering, National University of Singapore, Singapore
- Giridhar Subramanian
- Ophthalmic Engineering & Innovation Laboratory, Department of Biomedical Engineering, Faculty of Engineering, National University of Singapore, Singapore
- Anirudh Swaminathan
- Ophthalmic Engineering & Innovation Laboratory, Department of Biomedical Engineering, Faculty of Engineering, National University of Singapore, Singapore
- Chin Zhi Yun
- Ophthalmic Engineering & Innovation Laboratory, Department of Biomedical Engineering, Faculty of Engineering, National University of Singapore, Singapore
- John Mark S De Leon
- Department of Health Eye Center, East Avenue Medical Center, Quezon City, Philippines
- Tin A Tun
- Ophthalmic Engineering & Innovation Laboratory, Department of Biomedical Engineering, Faculty of Engineering, National University of Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Nanyang Technological University, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Institute of Clinical and Molecular Ophthalmology, Basel, Switzerland
- Shamira Perera
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Duke-NUS Graduate Medical School, 8 College Rd, Singapore 169857, Singapore
- Tin Aung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Duke-NUS Graduate Medical School, 8 College Rd, Singapore 169857, Singapore
- Alexandre H Thiéry
- Department of Statistics and Applied Probability, National University of Singapore, Singapore
- Michaël J A Girard
- Ophthalmic Engineering and Innovation Laboratory (OEIL), Singapore Eye Research Institute, 20 College Road, Singapore 169856, Singapore
|
48
|
Guo Y, Hormel TT, Xiong H, Wang J, Hwang TS, Jia Y. Automated Segmentation of Retinal Fluid Volumes From Structural and Angiographic Optical Coherence Tomography Using Deep Learning. Transl Vis Sci Technol 2020; 9:54. [PMID: 33110708] [PMCID: PMC7552937] [DOI: 10.1167/tvst.9.2.54] [Received: 03/25/2020] [Accepted: 09/07/2020] [Indexed: 01/08/2023]
Abstract
Purpose We proposed a deep convolutional neural network (CNN), named Retinal Fluid Segmentation Network (ReF-Net), to segment retinal fluid in diabetic macular edema (DME) in optical coherence tomography (OCT) volumes. Methods The 3- × 3-mm OCT scans were acquired on one eye by a 70-kHz OCT commercial AngioVue system (RTVue-XR; Optovue, Inc., Fremont, CA, USA) from 51 participants in a clinical diabetic retinopathy (DR) study (45 with retinal edema and six healthy controls, age 61.3 ± 10.1 (mean ± SD), 33% female, and all DR cases were diagnosed as severe NPDR or PDR). A CNN with U-Net-like architecture was constructed to detect and segment the retinal fluid. Cross-sectional OCT and angiography (OCTA) scans were used for training and testing ReF-Net. The effect of including OCTA data for retinal fluid segmentation was investigated in this study. Volumetric retinal fluid can be constructed using the output of ReF-Net. Area-under-receiver-operating-characteristic-curve, intersection-over-union (IoU), and F1-score were calculated to evaluate the performance of ReF-Net. Results ReF-Net shows high accuracy (F1 = 0.864 ± 0.084) in retinal fluid segmentation. The performance can be further improved (F1 = 0.892 ± 0.038) by including information from both OCTA and structural OCT. ReF-Net also shows strong robustness to shadow artifacts. Volumetric retinal fluid can provide more comprehensive information than the two-dimensional (2D) area, whether cross-sectional or en face projections. Conclusions A deep-learning-based method can accurately segment retinal fluid volumetrically on OCT/OCTA scans with strong robustness to shadow artifacts. OCTA data can improve retinal fluid segmentation. Volumetric representations of retinal fluid are superior to 2D projections. Translational Relevance Using a deep learning method to segment retinal fluid volumetrically has the potential to improve the diagnostic accuracy of diabetic macular edema by OCT systems.
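The intersection-over-union (IoU) and F1-score reported for ReF-Net are deterministically related for binary segmentations (F1 = 2·IoU / (1 + IoU)); a small sketch with toy pixel sets (not the study's data) illustrates both metrics and the identity:

```python
def iou_and_f1(pred, truth):
    """IoU and F1 (Dice) for two segmentations given as sets of pixel indices."""
    inter = len(pred & truth)
    union = len(pred | truth)
    iou = inter / union if union else 1.0
    f1 = 2 * inter / (len(pred) + len(truth)) if pred or truth else 1.0
    return iou, f1

pred, truth = {1, 2, 3, 4}, {3, 4, 5, 6}
iou, f1 = iou_and_f1(pred, truth)
assert abs(f1 - 2 * iou / (1 + iou)) < 1e-12  # the identity holds
print(iou, f1)
```

Because the two metrics are monotonically related, they rank segmentations identically; reporting both mainly eases comparison across papers.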
Affiliation(s)
- Yukun Guo
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Tristan T Hormel
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Honglian Xiong
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA; School of Physics and Optoelectronic Engineering, Foshan University, Foshan, Guangdong, China
- Jie Wang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR, USA
- Thomas S Hwang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR, USA
|
49
|
Moraes G, Fu DJ, Wilson M, Khalid H, Wagner SK, Korot E, Ferraz D, Faes L, Kelly CJ, Spitz T, Patel PJ, Balaskas K, Keenan TDL, Keane PA, Chopra R. Quantitative Analysis of OCT for Neovascular Age-Related Macular Degeneration Using Deep Learning. Ophthalmology 2020; 128:693-705. [PMID: 32980396] [PMCID: PMC8528155] [DOI: 10.1016/j.ophtha.2020.09.025] [Received: 06/29/2020] [Revised: 08/25/2020] [Accepted: 09/21/2020] [Indexed: 12/12/2022]
Abstract
PURPOSE To apply a deep learning algorithm for automated, objective, and comprehensive quantification of OCT scans to a large real-world dataset of eyes with neovascular age-related macular degeneration (AMD) and make the raw segmentation output data openly available for further research. DESIGN Retrospective analysis of OCT images from the Moorfields Eye Hospital AMD Database. PARTICIPANTS A total of 2473 first-treated eyes and 493 second-treated eyes that commenced therapy for neovascular AMD between June 2012 and June 2017. METHODS A deep learning algorithm was used to segment all baseline OCT scans. Volumes were calculated for segmented features such as neurosensory retina (NSR), drusen, intraretinal fluid (IRF), subretinal fluid (SRF), subretinal hyperreflective material (SHRM), retinal pigment epithelium (RPE), hyperreflective foci (HRF), fibrovascular pigment epithelium detachment (fvPED), and serous PED (sPED). Analyses included comparisons between first- and second-treated eyes by visual acuity (VA) and race/ethnicity and correlations between volumes. MAIN OUTCOME MEASURES Volumes of segmented features (mm3) and central subfield thickness (CST) (μm). RESULTS In first-treated eyes, the majority had both IRF and SRF (54.7%). First-treated eyes had greater volumes for all segmented tissues, with the exception of drusen, which was greater in second-treated eyes. In first-treated eyes, older age was associated with lower volumes for RPE, SRF, NSR, and sPED; in second-treated eyes, older age was associated with lower volumes of NSR, RPE, sPED, fvPED, and SRF. Eyes from Black individuals had higher SRF, RPE, and serous PED volumes compared with other ethnic groups. Greater volumes of the majority of features were associated with worse VA. CONCLUSIONS We report the results of large-scale automated quantification of a novel range of baseline features in neovascular AMD. 
We highlight major differences between first- and second-treated eyes, differences associated with increasing age, and differences between ethnicities. In the coming years, enhanced, automated OCT segmentation may assist personalization of real-world care and the detection of novel structure-function correlations. These data will be made publicly available for replication and future investigation by the AMD research community.
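As an illustrative sketch only (not the authors' pipeline): once a deep learning model has produced a binary segmentation mask for a feature such as SRF or SHRM, the volume in mm3 reduces to counting foreground voxels and multiplying by the physical voxel size. The function name and the voxel spacings below are hypothetical values chosen for the example.

```python
import numpy as np

def feature_volume_mm3(mask: np.ndarray, voxel_dims_mm: tuple) -> float:
    """Volume of a binary segmentation mask in mm^3.

    mask: 3D boolean/0-1 array (B-scans x depth x width).
    voxel_dims_mm: physical size of one voxel along each axis, in mm
                   (hypothetical spacings; real values depend on the device).
    """
    voxel_volume = float(np.prod(voxel_dims_mm))  # mm^3 per voxel
    return float(mask.sum()) * voxel_volume

# Example: a 49 x 496 x 512 OCT volume with assumed voxel spacing
mask = np.zeros((49, 496, 512), dtype=bool)
mask[20:25, 200:260, 100:300] = True  # toy "fluid" region
print(feature_volume_mm3(mask, (0.120, 0.0026, 0.0117)))  # ~0.219 mm^3
```

In practice the per-axis spacing comes from the scan protocol metadata, and per-feature volumes like these are what the comparisons across age, ethnicity, and VA operate on.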
Affiliation(s)
- Gabriella Moraes
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Dun Jack Fu
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Hagar Khalid
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Siegfried K Wagner
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Edward Korot
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Daniel Ferraz
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom; Department of Ophthalmology, Federal University São Paulo, São Paulo, Brazil
- Livia Faes
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Praveen J Patel
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Konstantinos Balaskas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Pearse A Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Reena Chopra
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom; Google Health, London, United Kingdom
50
Tan B, Sim R, Chua J, Wong DWK, Yao X, Garhöfer G, Schmidl D, Werkmeister RM, Schmetterer L. Approaches to quantify optical coherence tomography angiography metrics. Annals of Translational Medicine 2020; 8:1205. [PMID: 33241054 PMCID: PMC7576021 DOI: 10.21037/atm-20-3246] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Accepted: 06/16/2020] [Indexed: 12/13/2022]
Abstract
Optical coherence tomography (OCT) has revolutionized the field of ophthalmology over the last three decades. As an extension of OCT, OCT angiography (OCTA) uses a fast OCT system to detect motion contrast in ocular tissue and provides a three-dimensional representation of the ocular vasculature in a non-invasive, dye-free manner. The first OCT machine equipped with OCTA function was approved by the U.S. Food and Drug Administration in 2016, and the technology is now widely applied in clinics. To date, numerous methods have been developed to aid OCTA interpretation and quantification. In this review, we focus on the workflow of OCTA-based interpretation, beginning with the generation of OCTA images using signal decorrelation, whose methods we divide into intensity-based, phase-based, and phasor-based approaches. We then discuss methods to address image artifacts commonly observed in clinical settings, followed by algorithms for image enhancement, binarization, and OCTA metrics extraction. We believe a better grasp of these technical aspects of OCTA will enhance understanding of the technology and its potential application in disease diagnosis and management. Moreover, future studies may also explore the use of ocular OCTA as a window linking the ocular vasculature to the function of other organs such as the kidney and brain.
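The intensity-based decorrelation step described in the review can be sketched, in simplified form, as a pixelwise decorrelation between repeated B-scans acquired at the same location: static tissue stays correlated across repeats, while flowing blood decorrelates. The function name and normalization below are our own illustration, not a specific published algorithm (real methods such as SSADA add spectrum splitting and further averaging).

```python
import numpy as np

def intensity_decorrelation(bscans: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Pixelwise decorrelation across N repeated B-scans at the same location.

    bscans: array of shape (N, H, W) of OCT intensity images.
    Returns an (H, W) map in [0, 1]: static tissue -> ~0, flow -> higher values.
    """
    n = bscans.shape[0]
    # Decorrelation of each adjacent frame pair; AM >= GM keeps each term >= 0.
    pairs = [
        1.0 - (bscans[i] * bscans[i + 1])
        / (0.5 * (bscans[i] ** 2 + bscans[i + 1] ** 2) + eps)
        for i in range(n - 1)
    ]
    # Average over the N-1 adjacent pairs to suppress noise.
    return np.mean(pairs, axis=0)
```

Downstream steps from the review's workflow (artifact handling, enhancement, binarization, metrics extraction) would then operate on maps like this one.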
Affiliation(s)
- Bingyao Tan
- Institute for Health Technologies, Nanyang Technological University, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore, Singapore
- Ralene Sim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Jacqueline Chua
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Damon W. K. Wong
- Institute for Health Technologies, Nanyang Technological University, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore, Singapore
- Xinwen Yao
- Institute for Health Technologies, Nanyang Technological University, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore, Singapore
- Gerhard Garhöfer
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Doreen Schmidl
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- René M. Werkmeister
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore, Singapore
- Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore, Singapore
- Department of Ophthalmology, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland