1. Zhang Y, Chung ACS. Retinal Vessel Segmentation by a Transformer-U-Net Hybrid Model With Dual-Path Decoder. IEEE J Biomed Health Inform 2024;28:5347-5359. PMID: 38669172. DOI: 10.1109/jbhi.2024.3394151.
Abstract
This paper introduces an effective and efficient framework for retinal vessel segmentation. First, we design a Transformer-CNN hybrid model in which a Transformer module is inserted inside the U-Net to capture long-range interactions. Second, we design a dual-path decoder in the U-Net framework, which contains two decoding paths for multi-task outputs. Specifically, we train the extra decoder to predict vessel skeletons as an auxiliary task, which helps the model learn balanced features. The proposed framework, named TSNet, not only achieves good performance in a fully supervised learning manner but also enables a rough skeleton annotation process: annotators only need to roughly delineate vessel skeletons instead of giving precise pixel-wise vessel annotations. To learn from rough skeleton annotations plus a few precise vessel annotations, we propose a skeleton semi-supervised learning scheme. We adopt a mean teacher model to produce pseudo vessel annotations and conduct annotation correction for roughly labeled skeleton annotations. This learning scheme achieves promising performance with less annotation effort. We have evaluated TSNet through extensive experiments on five benchmark datasets. Experimental results show that TSNet yields state-of-the-art performance on retinal vessel segmentation and provides an efficient training scheme in practice.
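The mean teacher scheme mentioned in the abstract is a generic semi-supervised technique: the teacher's weights track an exponential moving average (EMA) of the student's, and the teacher's predictions serve as pseudo labels. A minimal sketch of the EMA update (not the authors' code; the list-of-arrays weight representation and the `alpha` value are illustrative):

```python
import numpy as np

def ema_update(teacher_weights, student_weights, alpha=0.99):
    """Mean teacher update: teacher <- alpha * teacher + (1 - alpha) * student."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_weights, student_weights)]

# Toy one-layer "networks" represented as lists of weight arrays.
teacher = [np.zeros(3)]
student = [np.ones(3)]  # pretend the student just took a gradient step
teacher = ema_update(teacher, student, alpha=0.9)  # each weight becomes 0.1
```

In a real training loop this update runs after every student optimization step, so the teacher is a smoothed, more stable copy whose vessel predictions are used as pseudo annotations.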
2. Noman MK, Shamsul Islam SM, Jafar Jalali SM, Abu-Khalaf J, Lavery P. BAOS-CNN: A novel deep neuroevolution algorithm for multispecies seagrass detection. PLoS One 2024;19:e0281568. PMID: 38917071. PMCID: PMC11198790. DOI: 10.1371/journal.pone.0281568.
Abstract
Deep learning, a subset of machine learning that utilizes neural networks, has seen significant advancements in recent years. These advancements have led to breakthroughs in a wide range of fields, from natural language processing to computer vision, and have the potential to revolutionize many industries. They have also demonstrated exceptional performance in the identification and mapping of seagrass images. However, deep learning models, particularly the popular Convolutional Neural Networks (CNNs), require architectural engineering and hyperparameter tuning. This paper proposes a Deep Neuroevolutionary (DNE) model that can automate the architectural engineering and hyperparameter tuning of CNN models by developing and using a novel metaheuristic algorithm, named 'Boosted Atomic Orbital Search (BAOS)'. The proposed BAOS is an improved version of the recently proposed Atomic Orbital Search (AOS) algorithm, which is based on principles of the atomic model and quantum mechanics. The proposed algorithm leverages the Lévy flight technique to boost the performance of the AOS algorithm. The proposed DNE algorithm (BAOS-CNN) is trained, evaluated and compared with six popular optimisation algorithms on a patch-based multi-species seagrass dataset. The BAOS-CNN model achieves the highest overall accuracy (97.48%) among the seven evolutionary-based CNN models. It also achieves state-of-the-art overall accuracy of 92.30% and 93.5% on the publicly available four-class and five-class versions of the 'DeepSeagrass' dataset, respectively. The multi-species seagrass dataset is available at: https://ro.ecu.edu.au/datasets/141/.
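Lévy flights, used above to boost AOS, are commonly sampled with Mantegna's algorithm; a sketch under that assumption (the paper's exact step-size handling inside BAOS may differ):

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Lévy-flight step via Mantegna's algorithm (stability index beta)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)       # heavy-tailed numerator
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)    # occasional large jumps escape local optima

# e.g. perturb a candidate hyperparameter vector x as: x + 0.01 * levy_step(len(x))
step = levy_step(5)
```

The heavy tail of the resulting distribution is what lets a search occasionally take long jumps, which is the usual rationale for adding Lévy flights to a metaheuristic.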
Affiliation(s)
- Md Kislu Noman
- School of Science, Edith Cowan University, Perth, Australia
- Paul Lavery
- School of Science, Edith Cowan University, Perth, Australia
3. Wang X, Li H, Zheng H, Sun G, Wang W, Yi Z, Xu A, He L, Wang H, Jia W, Li Z, Li C, Ye M, Du B, Chen C. Automatic Detection of 30 Fundus Diseases Using Ultra-Widefield Fluorescein Angiography with Deep Experts Aggregation. Ophthalmol Ther 2024;13:1125-1144. PMID: 38416330. DOI: 10.1007/s40123-024-00900-7.
Abstract
INTRODUCTION Inaccurate or untimely diagnosis of fundus diseases leads to vision-threatening complications and even blindness. We built a deep learning platform (DLP) for the automatic detection of 30 fundus diseases using ultra-widefield fluorescein angiography (UWFFA) with deep experts aggregation. METHODS This retrospective, cross-sectional database study included a total of 61,609 UWFFA images dating from 2016 to 2021, involving more than 3364 subjects in multiple centers across China. All subjects were divided into 30 different groups. The state-of-the-art convolutional neural network architecture ConvNeXt was chosen as the backbone; the proposed system was trained and its receiver operating characteristic (ROC) curve evaluated on the test data and external test data. We compared the classification performance of the proposed system with that of ophthalmologists, including two retinal specialists. RESULTS We built a DLP to analyze UWFFA that can detect up to 30 fundus diseases, with a frequency-weighted average area under the receiver operating characteristic curve (AUC) of 0.940 on the primary test dataset and 0.954 on the external multi-hospital test dataset. The tool shows accuracy comparable with retina specialists in diagnosis and evaluation. CONCLUSIONS This is the first study on a large-scale UWFFA dataset for multi-retina-disease classification. We believe that our UWFFA DLP advances diagnosis by artificial intelligence (AI) in various retinal diseases and would contribute to labor saving and precision medicine, especially in remote areas.
Affiliation(s)
- Xiaoling Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- He Li
- National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, 430072, Hubei, China
- Hongmei Zheng
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- Gongpeng Sun
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- Wenyu Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- Zuohuizi Yi
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- A'min Xu
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- Lu He
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
- Haiyan Wang
- Shaanxi Eye Hospital, Xi'an People's Hospital (Xi'an Fourth Hospital), No. 21, Jiefang Road, Xi'an, 710004, Shaanxi, China
- Wei Jia
- Shaanxi Eye Hospital, Xi'an People's Hospital (Xi'an Fourth Hospital), No. 21, Jiefang Road, Xi'an, 710004, Shaanxi, China
- Zhiqing Li
- Tianjin Medical University Eye Hospital, No. 251, Fukang Road, Nankai District, Tianjin, 300384, China
- Chang Li
- Tianjin Medical University Eye Hospital, No. 251, Fukang Road, Nankai District, Tianjin, 300384, China
- Mang Ye
- National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, 430072, Hubei, China
- Bo Du
- National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, 430072, Hubei, China
- Changzheng Chen
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China
4. Shi D, Zhou Y, He S, Wagner SK, Huang Y, Keane PA, Ting DS, Zhang L, Zheng Y, He M. Cross-modality Labeling Enables Noninvasive Capillary Quantification as a Sensitive Biomarker for Assessing Cardiovascular Risk. Ophthalmol Sci 2024;4:100441. PMID: 38420613. PMCID: PMC10899028. DOI: 10.1016/j.xops.2023.100441.
Abstract
Purpose We aim to use fundus fluorescein angiography (FFA) to label the capillaries on color fundus (CF) photographs and train a deep learning model to quantify retinal capillaries noninvasively from CF and apply it to cardiovascular disease (CVD) risk assessment. Design Cross-sectional and longitudinal study. Participants A total of 90732 pairs of CF-FFA images from 3893 participants for segmentation model development, and 49229 participants in the UK Biobank for association analysis. Methods We matched the vessels extracted from FFA and CF, and used vessels from FFA as labels to train a deep learning model (RMHAS-FA) to segment retinal capillaries using CF. We tested the model's accuracy on a manually labeled internal test set (FundusCapi). For external validation, we tested the segmentation model on 7 vessel segmentation datasets, and investigated the clinical value of the segmented vessels in predicting CVD events in the UK Biobank. Main Outcome Measures Area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity for segmentation. Hazard ratio (HR; 95% confidence interval [CI]) for Cox regression analysis. Results On the FundusCapi dataset, the segmentation performance was AUC = 0.95, accuracy = 0.94, sensitivity = 0.90, and specificity = 0.93. Smaller vessel skeleton density had a stronger correlation with CVD risk factors and incidence (P < 0.01). Reduced density of small vessel skeletons was strongly associated with an increased risk of CVD incidence and mortality for women (HR [95% CI] = 0.91 [0.84-0.98] and 0.68 [0.54-0.86], respectively). Conclusions Using paired CF-FFA images, we automated the laborious manual labeling process and enabled noninvasive capillary quantification from CF, supporting its potential as a sensitive screening method for identifying individuals at high risk of future CVD events. 
Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
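The "vessel skeleton density" analyzed above is, in generic terms, the fraction of image pixels occupied by the skeletonized vessel map (in practice the skeleton would come from something like `skimage.morphology.skeletonize`); the study's exact small-vessel definition may differ. A minimal sketch:

```python
import numpy as np

def skeleton_density(skeleton_mask: np.ndarray) -> float:
    """Fraction of pixels belonging to the (already skeletonized) vessel map."""
    return float(skeleton_mask.sum()) / float(skeleton_mask.size)

# Toy 4x4 image with one horizontal skeleton line.
mask = np.zeros((4, 4), dtype=bool)
mask[2, :] = True
density = skeleton_density(mask)  # 4 / 16 = 0.25
```

Per-subject densities like this are the kind of scalar covariate that can then enter a Cox regression against CVD outcomes.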
Affiliation(s)
- Danli Shi
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yukun Zhou
- Centre for Medical Image Computing, University College London, London, UK
- Shuang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Siegfried K. Wagner
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Yu Huang
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
- Pearse A. Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Daniel S.W. Ting
- Singapore National Eye Center, Singapore Eye Research Institute, and Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Lei Zhang
- Faculty of Medicine, Central Clinical School, Monash University, Melbourne, Victoria, Australia
- Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Mingguang He
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
5. Ma X, Ji Z, Chen Q, Ge L, Wang X, Chen C, Fan W. Controllable editing via diffusion inversion on ultra-widefield fluorescein angiography for the comprehensive analysis of diabetic retinopathy. Biomed Opt Express 2024;15:1831-1846. PMID: 38495723. PMCID: PMC10942674. DOI: 10.1364/boe.517819.
Abstract
By incorporating multiple indicators that facilitate clinical decision making and effective management of diabetic retinopathy (DR), a comprehensive understanding of the progression of the disease can be achieved. However, the diversity of DR complications poses challenges to the automatic analysis of the varied information within images. This study aims to establish a deep learning system designed to examine various metrics linked to DR in ultra-widefield fluorescein angiography (UWFA) images. We have developed a unified model based on image generation that transforms input images into corresponding disease-free versions. By incorporating an image-level supervised training process, the model significantly reduces the need for extensive manual involvement in clinical applications. Furthermore, the quality of our generated images is significantly superior to that of competing methods.
Affiliation(s)
- Xiao Ma
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 XiaoLinwei, Nanjing, Jiangsu 210094, China
- Zexuan Ji
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 XiaoLinwei, Nanjing, Jiangsu 210094, China
- Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 XiaoLinwei, Nanjing, Jiangsu 210094, China
- Lexin Ge
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, 300 Guangzhou Road, Nanjing, Jiangsu 210029, China
- Xiaoling Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, Hubei 430060, China
- Changzheng Chen
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, Hubei 430060, China
- Wen Fan
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, 300 Guangzhou Road, Nanjing, Jiangsu 210029, China
6. Shi D, He S, Yang J, Zheng Y, He M. One-shot Retinal Artery and Vein Segmentation via Cross-modality Pretraining. Ophthalmol Sci 2024;4:100363. PMID: 37868792. PMCID: PMC10585631. DOI: 10.1016/j.xops.2023.100363.
Abstract
Purpose To perform one-shot retinal artery and vein segmentation with cross-modality artery-vein (AV) soft-label pretraining. Design Cross-sectional study. Subjects The study included 6479 color fundus photography (CFP) and arterial-venous fundus fluorescein angiography (FFA) pairs from 1964 participants for pretraining, and 6 AV segmentation data sets with various image sources (RITE, HRF, LES-AV, AV-WIDE, PortableAV, and DRSplusAV) for one-shot finetuning and testing. Methods We structurally matched the arterial and venous phases of FFA with CFP, automatically generated AV soft labels by utilizing the fluorescein intensity difference between the arterial- and venous-phase FFA images, and then used the soft labels to train a generative adversarial network to generate AV soft segmentations from CFP images as input. We then finetuned the pretrained model to perform AV segmentation using only one image from each of the AV segmentation data sets and tested it on the remainder. To investigate the effect and reliability of one-shot finetuning, we conducted experiments without finetuning and, under the same experimental setting, by finetuning the pretrained model on an iteratively different single image for each data set and testing on the remaining images. Main Outcome Measures AV segmentation was assessed by area under the receiver operating characteristic curve (AUC), accuracy, Dice score, sensitivity, and specificity. Results After the FFA-AV soft-label pretraining, our method required only one exemplar image from each camera or modality and achieved performance similar to full-data training, with AUC ranging from 0.901 to 0.971, accuracy from 0.959 to 0.980, Dice score from 0.585 to 0.773, sensitivity from 0.574 to 0.763, and specificity from 0.981 to 0.991. Compared with no finetuning, segmentation performance improved after one-shot finetuning. When finetuned on different images in each data set, the standard deviation of the segmentation results across models ranged from 0.001 to 0.10. Conclusions This study presents the first one-shot approach to retinal artery and vein segmentation. The proposed labeling method is time-saving and efficient, demonstrating a promising direction for retinal-vessel segmentation and enabling the potential for widespread application. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
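The soft-label idea above exploits the fact that arteries fill with fluorescein before veins, so the arterial-minus-venous intensity difference indicates vessel type. A hypothetical sketch (the logistic squashing and masking are illustrative assumptions, not the paper's exact rule):

```python
import numpy as np

def av_soft_label(arterial, venous, vessel_mask):
    """Map the arterial-minus-venous FFA intensity difference to a soft
    artery score in (0, 1); a larger difference means more artery-like."""
    diff = arterial.astype(float) - venous.astype(float)
    score = 1.0 / (1.0 + np.exp(-diff))       # logistic squashing (assumed)
    return np.where(vessel_mask, score, 0.0)  # zero outside the vessel mask

arterial = np.full((2, 2), 5.0)   # bright in the arterial phase
venous = np.zeros((2, 2))
mask = np.array([[True, True], [False, False]])
label = av_soft_label(arterial, venous, mask)
```

Soft labels like these can supervise a generator without any manual pixel-wise AV annotation, which is the labor saving the study reports.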
Affiliation(s)
- Danli Shi
- Centre for Eye and Vision Research (CEVR), Hong Kong SAR, China
- The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Shuang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Jiancheng Yang
- Swiss Federal Institute of Technology in Lausanne (EPFL), Lausanne, Switzerland
- Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Mingguang He
- Centre for Eye and Vision Research (CEVR), Hong Kong SAR, China
- The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
7. Liu X, Wu J, Shao A, Shen W, Ye P, Wang Y, Ye J, Jin K, Yang J. Uncovering Language Disparity of ChatGPT on Retinal Vascular Disease Classification: Cross-Sectional Study. J Med Internet Res 2024;26:e51926. PMID: 38252483. PMCID: PMC10845019. DOI: 10.2196/51926.
Abstract
BACKGROUND Benefiting from rich knowledge and an exceptional ability to understand text, large language models such as ChatGPT have shown great potential in English clinical environments. However, the performance of ChatGPT in non-English clinical settings, as well as its reasoning, has not been explored in depth. OBJECTIVE This study aimed to evaluate ChatGPT's diagnostic performance and inference abilities for retinal vascular diseases in a non-English clinical environment. METHODS In this cross-sectional study, we collected 1226 fundus fluorescein angiography reports and corresponding diagnoses written in Chinese and tested ChatGPT with 4 prompting strategies (direct diagnosis, or diagnosis with a step-by-step reasoning process, in Chinese or English). RESULTS ChatGPT using English prompts for direct diagnosis achieved the best diagnostic performance (F1-score 80.05%), versus 70.47% with Chinese prompts for direct diagnosis; this was inferior to ophthalmologists (89.35%) but close to ophthalmologist interns (82.69%). As for its inference abilities, although ChatGPT derived reasoning processes with a low error rate (0.4 per report) for both Chinese and English prompts, ophthalmologists found that English prompts produced more reasoning steps with less incompleteness (44.31%), misinformation (1.96%), and hallucination (0.59%) (all P<.001). Analysis of the robustness of ChatGPT with different language prompts also indicated significant differences in recall (P=.03) and F1-score (P=.04) between Chinese and English prompts. In short, when prompted in English, ChatGPT exhibited enhanced diagnostic and inference capabilities for retinal vascular disease classification based on Chinese fundus fluorescein angiography reports.
CONCLUSIONS ChatGPT can serve as a helpful medical assistant to provide diagnosis in non-English clinical environments, but there are still performance gaps, language disparities, and errors compared to professionals, which demonstrate the potential limitations and the need to continually explore more robust large language models in ophthalmology practice.
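The F1-scores compared above are computed from predicted versus reference diagnoses; the abstract does not state which averaging was used across disease classes, so the sketch below assumes macro averaging:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1 over all labels appearing in either sequence."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical mini-example with two retinal vascular disease labels.
score = macro_f1(["BRVO", "BRVO", "DR"], ["BRVO", "DR", "DR"])  # 2/3
```

Running the same scorer over the Chinese-prompt and English-prompt predictions is what makes the cross-language comparison direct.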
Affiliation(s)
- Xiaocong Liu
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- School of Public Health, Zhejiang University School of Medicine, Zhejiang, China
- Jiageng Wu
- School of Public Health, Zhejiang University School of Medicine, Zhejiang, China
- An Shao
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- Wenyue Shen
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- Panpan Ye
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- Yao Wang
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- Juan Ye
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- Kai Jin
- Eye Center, The Second Affiliated Hospital, Zhejiang University, Zhejiang, China
- Jie Yang
- School of Public Health, Zhejiang University School of Medicine, Zhejiang, China
8. Chen JS, Marra KV, Robles-Holmes HK, Ly KB, Miller J, Wei G, Aguilar E, Bucher F, Ideguchi Y, Coyner AS, Ferrara N, Campbell JP, Friedlander M, Nudleman E. Applications of Deep Learning: Automated Assessment of Vascular Tortuosity in Mouse Models of Oxygen-Induced Retinopathy. Ophthalmol Sci 2024;4:100338. PMID: 37869029. PMCID: PMC10585474. DOI: 10.1016/j.xops.2023.100338.
Abstract
Objective To develop a generative adversarial network (GAN) to segment major blood vessels from retinal flat-mount images from oxygen-induced retinopathy (OIR) models and to demonstrate the utility of these GAN-generated vessel segmentations in quantifying vascular tortuosity. Design Development and validation of a GAN. Subjects Three datasets containing 1084, 50, and 20 flat-mount mouse retina images, with various stains and ages at sacrifice, acquired from previously published manuscripts. Methods Four graders manually segmented major blood vessels from flat-mount images of retinas from OIR mice. Pix2Pix, a high-resolution GAN, was trained on 984 pairs of raw flat-mount images and manual vessel segmentations and then tested on 100 and 50 image pairs from a held-out and an external test set, respectively. GAN-generated and manual vessel segmentations were then used as input to a previously published algorithm (iROP-Assist) to generate a vascular cumulative tortuosity index (CTI) for 20 image pairs containing mouse eyes treated with aflibercept versus control. Main Outcome Measures Mean Dice coefficients were used to compare segmentation accuracy between the GAN-generated and manually annotated segmentation maps. For the image pairs treated with aflibercept versus control, mean CTIs were also calculated for both GAN-generated and manual vessel maps. Statistical significance was evaluated using Wilcoxon signed-rank tests (P ≤ 0.05 threshold for significance). Results The Dice coefficient for the GAN-generated versus manual vessel segmentations was 0.75 ± 0.27 and 0.77 ± 0.17 for the held-out test set and external test set, respectively.
The mean CTI generated from the GAN-generated and manual vessel segmentations was 1.12 ± 0.07 versus 1.03 ± 0.02 (P = 0.003) and 1.06 ± 0.04 versus 1.01 ± 0.01 (P < 0.001), respectively, for eyes treated with aflibercept versus control, demonstrating that vascular tortuosity was rescued by aflibercept when quantified by GAN-generated and manual vessel segmentations. Conclusions GANs can be used to accurately generate vessel map segmentations from flat-mount images. These vessel maps may be used to evaluate novel metrics of vascular tortuosity in OIR, such as CTI, and have the potential to accelerate research in treatments for ischemic retinopathies. Financial Disclosures The author(s) have no proprietary or commercial interest in any materials discussed in this article.
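The Dice coefficient reported above measures overlap between two binary masks, 2|A∩B|/(|A|+|B|); a minimal sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy example: predicted mask covers 2 pixels, reference covers 1, overlap 1.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
score = dice(a, b)  # 2*1 / (2+1) ≈ 0.667
```

The `eps` term keeps the score defined when both masks are empty, a common convention in segmentation evaluation.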
Affiliation(s)
- Jimmy S. Chen
- Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Kyle V. Marra
- Molecular Medicine, the Scripps Research Institute, San Diego, California
- School of Medicine, University of California San Diego, San Diego, California
- Hailey K. Robles-Holmes
- Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Kristine B. Ly
- College of Optometry, Pacific University, Forest Grove, Oregon
- Joseph Miller
- Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Guoqin Wei
- Molecular Medicine, the Scripps Research Institute, San Diego, California
- Edith Aguilar
- Molecular Medicine, the Scripps Research Institute, San Diego, California
- Felicitas Bucher
- Eye Center, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Yoichi Ideguchi
- Molecular Medicine, the Scripps Research Institute, San Diego, California
- Aaron S. Coyner
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Napoleone Ferrara
- Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- J. Peter Campbell
- Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Martin Friedlander
- Molecular Medicine, the Scripps Research Institute, San Diego, California
- Eric Nudleman
- Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
9. Gao Z, Pan X, Shao J, Jiang X, Su Z, Jin K, Ye J. Automatic interpretation and clinical evaluation for fundus fluorescein angiography images of diabetic retinopathy patients by deep learning. Br J Ophthalmol 2023;107:1852-1858. PMID: 36171054. DOI: 10.1136/bjo-2022-321472.
Abstract
BACKGROUND/AIMS Fundus fluorescein angiography (FFA) is an important technique to evaluate diabetic retinopathy (DR) and other retinal diseases. The interpretation of FFA images is complex and time-consuming, and diagnostic ability varies among ophthalmologists. The aim of the study is to develop a clinically usable multilevel classification deep learning model for FFA images, including prediagnosis assessment and lesion classification. METHODS A total of 15,599 FFA images of 1558 eyes from 845 patients diagnosed with DR were collected and annotated. Three convolutional neural network (CNN) models were trained to generate labels for image quality, location, laterality of eye, phase and five lesions. Performance of the models was evaluated by accuracy, F1 score, the area under the curve and human-machine comparison. The images with false positive and false negative results were analysed in detail. RESULTS Compared with LeNet-5 and VGG16, ResNet18 achieved the best performance, with an accuracy of 80.79%-93.34% for prediagnosis assessment and an accuracy of 63.67%-88.88% for lesion detection. The human-machine comparison showed that the CNN had accuracy similar to that of junior ophthalmologists. The false positive and false negative analysis indicated a direction for improvement. CONCLUSION This is the first study to perform automated standardised labelling on FFA images. Our model can be applied in clinical practice and will contribute to the development of intelligent diagnosis of FFA images.
Affiliation(s)
- Zhiyuan Gao
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Xiangji Pan
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Ji Shao
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Xiaoyu Jiang
- College of Control Science and Engineering, Zhejiang University, Hangzhou, Zhejiang, China
- Zhaoan Su
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Kai Jin
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Juan Ye
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
10. Zhao X, Lin Z, Yu S, Xiao J, Xie L, Xu Y, Tsui CK, Cui K, Zhao L, Zhang G, Zhang S, Lu Y, Lin H, Liang X, Lin D. An artificial intelligence system for the whole process from diagnosis to treatment suggestion of ischemic retinal diseases. Cell Rep Med 2023;4:101197. PMID: 37734379. PMCID: PMC10591037. DOI: 10.1016/j.xcrm.2023.101197.
Abstract
Ischemic retinal diseases (IRDs) are a series of common blinding diseases that depend on accurate fundus fluorescein angiography (FFA) image interpretation for diagnosis and treatment. An artificial intelligence system (Ai-Doctor) was developed to interpret FFA images. Ai-Doctor performed well in image phase identification (area under the curve [AUC], 0.991-0.999), diabetic retinopathy (DR) and branch retinal vein occlusion (BRVO) diagnosis (AUC, 0.979-0.992), and non-perfusion area segmentation (Dice similarity coefficient [DSC], 89.7%-90.1%) and quantification. The segmentation model was extended to previously unencountered IRDs (central RVO and retinal vasculitis), with DSCs of 89.2% and 83.6%, respectively. A clinically applicable ischemia index (CAII) was proposed to evaluate ischemic degree; patients with CAII values exceeding 0.17 in BRVO and 0.08 in DR may be more likely to require laser therapy. Ai-Doctor is expected to achieve accurate FFA image interpretation for IRDs, potentially reducing reliance on retinal specialists.
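The abstract does not define the CAII formula; a plausible ischemia measure, shown purely as an illustration, is the ratio of segmented non-perfusion area to total retinal area on an FFA frame (the study's actual index may be computed differently):

```python
import numpy as np

def ischemia_index(nonperfusion_mask, retina_mask):
    """Illustrative ischemia index: non-perfused fraction of the retinal area."""
    return float(nonperfusion_mask.sum()) / float(retina_mask.sum())

# Toy 4x4 retina in which the top row (4 of 16 pixels) is non-perfused.
retina = np.ones((4, 4), dtype=bool)
nonperf = np.zeros((4, 4), dtype=bool)
nonperf[0, :] = True
idx = ischemia_index(nonperf, retina)  # 4 / 16 = 0.25
```

An area-ratio index of this kind would pair naturally with the system's non-perfusion segmentation output and with thresholds such as the 0.17 (BRVO) and 0.08 (DR) values quoted above.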
Affiliation(s)
- Xinyu Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China; Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
- Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Jun Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Liqiong Xie
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Yue Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Ching-Kit Tsui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Kaixuan Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Guoming Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
- Shaochong Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
- Yan Lu
- Foshan Second People's Hospital, Foshan 528001, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou 570311, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510080, China
- Xiaoling Liang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
11
Wei X, Ye F, Wan H, Xu J, Min W. TANet: Triple Attention Network for medical image segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104608]
12
Bhambra N, Antaki F, Malt FE, Xu A, Duval R. Deep learning for ultra-widefield imaging: a scoping review. Graefes Arch Clin Exp Ophthalmol 2022; 260:3737-3778. [PMID: 35857087] [DOI: 10.1007/s00417-022-05741-3]
Abstract
PURPOSE This article is a scoping review of published and peer-reviewed articles using deep-learning (DL) applied to ultra-widefield (UWF) imaging. This study provides an overview of the published uses of DL and UWF imaging for the detection of ophthalmic and systemic diseases, generative image synthesis, quality assessment of images, and segmentation and localization of ophthalmic image features. METHODS A literature search was performed up to August 31st, 2021 using PubMed, Embase, Cochrane Library, and Google Scholar. The inclusion criteria were as follows: (1) deep learning, (2) ultra-widefield imaging. The exclusion criteria were as follows: (1) articles published in any language other than English, (2) articles not peer-reviewed (usually preprints), (3) no full-text availability, (4) articles using machine learning algorithms other than deep learning. No study design was excluded from consideration. RESULTS A total of 36 studies were included. Twenty-three studies discussed ophthalmic disease detection and classification, 5 discussed segmentation and localization of ultra-widefield images (UWFIs), 3 discussed generative image synthesis, 3 discussed ophthalmic image quality assessment, and 2 discussed detecting systemic diseases via UWF imaging. CONCLUSION The application of DL to UWF imaging has demonstrated significant effectiveness in the diagnosis and detection of ophthalmic diseases including diabetic retinopathy, retinal detachment, and glaucoma. DL has also been applied in the generation of synthetic ophthalmic images. This scoping review highlights and discusses the current uses of DL with UWF imaging, and the future of DL applications in this field.
Affiliation(s)
- Nishaant Bhambra
- Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Fares Antaki
- Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
- Farida El Malt
- Faculty of Medicine, McGill University, Montréal, Québec, Canada
- AnQi Xu
- Faculty of Medicine, Université de Montréal, Montréal, Québec, Canada
- Renaud Duval
- Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
13
Segmentation of macular neovascularization and leakage in fluorescein angiography images in neovascular age-related macular degeneration using deep learning. Eye (Lond) 2022; 37:1439-1444. [PMID: 35778604] [PMCID: PMC10169785] [DOI: 10.1038/s41433-022-02156-6]
Abstract
BACKGROUND/OBJECTIVES We aim to develop an objective, fully automated artificial intelligence (AI) algorithm for macular neovascularization (MNV) lesion size and leakage area segmentation on fluorescein angiography (FA) in patients with neovascular age-related macular degeneration (nAMD). SUBJECTS/METHODS Two FA image datasets collected from large prospective multicentre trials, consisting of 4710 images from 513 patients and 4558 images from 514 patients, were used to develop and evaluate a deep learning-based algorithm to detect CNV lesion size and leakage area automatically. Manual segmentation was performed by certified FA graders of the Vienna Reading Center. Precision, recall and F1 score between AI predictions and manual annotations were computed. In addition, two masked retina experts conducted a clinical-applicability evaluation, comparing the quality of AI-based and manual segmentations. RESULTS For CNV lesion size and leakage area segmentation, we obtained F1 scores of 0.73 and 0.65, respectively. Expert review resulted in a slight preference for the automated segmentations in both datasets; their quality was slightly more often judged as good compared with the manual annotations. CONCLUSIONS CNV lesion size and leakage area can be segmented by our automated model at human-level performance, its output being well accepted during clinical applicability testing. The results provide proof of concept that an automated deep learning approach can improve the efficacy of objective biomarker analysis in FA images and will be well suited for clinical application.
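The precision, recall and F1 scores reported here reduce to pixel-wise counts of true positives, false positives and false negatives between the predicted and manual masks; a minimal sketch (function name ours):

```python
import numpy as np

def precision_recall_f1(pred, truth):
    """Pixel-wise precision, recall and F1 between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()   # predicted and present
    fp = np.logical_and(pred, ~truth).sum()  # predicted but absent
    fn = np.logical_and(~pred, truth).sum()  # present but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

F1 is the harmonic mean of precision and recall, so a score of 0.73 requires both quantities to be reasonably high at once.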
14
Hofer D, Schmidt-Erfurth U, Orlando JI, Goldbach F, Gerendas BS, Seeböck P. Improving foveal avascular zone segmentation in fluorescein angiograms by leveraging manual vessel labels from public color fundus pictures. Biomed Opt Express 2022; 13:2566-2580. [PMID: 35774310] [PMCID: PMC9203117] [DOI: 10.1364/boe.452873]
Abstract
In clinical routine, ophthalmologists frequently analyze the shape and size of the foveal avascular zone (FAZ) to detect and monitor retinal diseases. In order to extract those parameters, the contours of the FAZ need to be segmented, which is normally achieved by analyzing the retinal vasculature (RV) around the macula in fluorescein angiograms (FA). Computer-aided segmentation methods based on deep learning (DL) can automate this task. However, current approaches for segmenting the FAZ are often tailored to a specific dataset or require manual initialization. Furthermore, they do not take the variability and challenges of clinical FA into account, which are often of low quality and difficult to analyze. In this paper we propose a DL-based framework to automatically segment the FAZ in challenging FA scans from clinical routine. Our approach mimics the workflow of retinal experts by using additional RV labels as a guidance during training. Hence, our model is able to produce RV segmentations simultaneously. We minimize the annotation work by using a multi-modal approach that leverages already available public datasets of color fundus pictures (CFPs) and their respective manual RV labels. Our experimental evaluation on two datasets with FA from 1) clinical routine and 2) large multicenter clinical trials shows that the addition of weak RV labels as a guidance during training improves the FAZ segmentation significantly with respect to using only manual FAZ annotations.
Affiliation(s)
- Dominik Hofer
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Ursula Schmidt-Erfurth
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- José Ignacio Orlando
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Yatiris Group, PLADEMA Institute, CONICET, Universidad Nacional del Centro de la Provincia de Buenos Aires, Gral. Pinto 399, Tandil, Buenos Aires, Argentina
- Felix Goldbach
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Bianca S. Gerendas
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Philipp Seeböck
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
15
Hatamizadeh A, Hosseini H, Patel N, Choi J, Pole CC, Hoeferlin CM, Schwartz SD, Terzopoulos D. RAVIR: A Dataset and Methodology for the Semantic Segmentation and Quantitative Analysis of Retinal Arteries and Veins in Infrared Reflectance Imaging. IEEE J Biomed Health Inform 2022; 26:3272-3283. [PMID: 35349464] [DOI: 10.1109/jbhi.2022.3163352]
Abstract
The retinal vasculature provides important clues in the diagnosis and monitoring of systemic diseases including hypertension and diabetes. The microvascular system is of primary involvement in such conditions, and the retina is the only anatomical site where the microvasculature can be directly observed. The objective assessment of retinal vessels has long been considered a surrogate biomarker for systemic vascular diseases, and with recent advancements in retinal imaging and computer vision technologies, this topic has become the subject of renewed attention. In this paper, we present a novel dataset, dubbed RAVIR, for the semantic segmentation of Retinal Arteries and Veins in Infrared Reflectance (IR) imaging. It enables the creation of deep learning-based models that distinguish extracted vessel type without extensive post-processing. We propose a novel deep learning-based methodology, denoted as SegRAVIR, for the semantic segmentation of retinal arteries and veins and the quantitative measurement of the widths of segmented vessels. Our extensive experiments validate the effectiveness of SegRAVIR and demonstrate its superior performance in comparison to state-of-the-art models. Additionally, we propose a knowledge distillation framework for the domain adaptation of RAVIR pretrained networks on color images. We demonstrate that our pretraining procedure yields new state-of-the-art benchmarks on the DRIVE, STARE, and CHASE_DB1 datasets. Dataset link: https://ravirdataset.github.io/data.
16
Shi T, Boutry N, Xu Y, Geraud T. Local Intensity Order Transformation for Robust Curvilinear Object Segmentation. IEEE Trans Image Process 2022; 31:2557-2569. [PMID: 35275816] [DOI: 10.1109/tip.2022.3155954]
Abstract
Segmentation of curvilinear structures is important in many applications, such as retinal blood vessel segmentation for early detection of vessel diseases and pavement crack segmentation for road condition evaluation and maintenance. Currently, deep learning-based methods have achieved impressive performance on these tasks. Yet, most of them mainly focus on finding powerful deep architectures and ignore the inherent curvilinear structure feature (e.g., the curvilinear structure is darker than the context) that would give a more robust representation. In consequence, performance usually drops considerably in cross-dataset evaluation, which poses great challenges in practice. In this paper, we aim to improve generalizability by introducing a novel local intensity order transformation (LIOT). Specifically, we transform a gray-scale image into a contrast-invariant four-channel image based on the intensity order between each pixel and its nearby pixels along the four (horizontal and vertical) directions. This results in a representation that preserves the inherent characteristic of the curvilinear structure while being robust to contrast changes. Cross-dataset evaluation on three retinal blood vessel segmentation datasets demonstrates that LIOT improves the generalizability of some state-of-the-art methods. Additionally, the cross-dataset evaluation between retinal blood vessel segmentation and pavement crack segmentation shows that LIOT is able to preserve the inherent characteristic of curvilinear structure even with large appearance gaps. An implementation of the proposed method is available at https://github.com/TY-Shi/LIOT.
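The transformation described above can be sketched in a few lines. This is a simplified illustration of the idea, not the paper's exact implementation: we use np.roll (so image borders wrap around), and the 8-pixel comparison range per direction is an assumption:

```python
import numpy as np

def liot(image, reach: int = 8) -> np.ndarray:
    """Simplified local intensity order transformation.

    For each pixel, compare its intensity with the `reach` pixels along
    each of the four axis directions; each "neighbour is darker" outcome
    sets one bit, yielding a contrast-invariant 4-channel uint8 image.
    """
    img = np.asarray(image, dtype=np.int32)
    out = np.zeros((4,) + img.shape, dtype=np.uint8)
    directions = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
    for c, (dy, dx) in enumerate(directions):
        for d in range(1, reach + 1):
            neighbour = np.roll(img, shift=(dy * d, dx * d), axis=(0, 1))
            out[c] |= (neighbour < img).astype(np.uint8) << (d - 1)
    return out
```

Because only the order of intensities matters, any strictly increasing contrast change leaves the output unchanged, which is the property the paper exploits for cross-dataset robustness.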
17
Gour N, Tanveer M, Khanna P. Challenges for ocular disease identification in the era of artificial intelligence. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06770-5]
18
Ding L, Kuriyan AE, Ramchandran RS, Wykoff CC, Sharma G. Weakly-Supervised Vessel Detection in Ultra-Widefield Fundus Photography via Iterative Multi-Modal Registration and Learning. IEEE Trans Med Imaging 2021; 40:2748-2758. [PMID: 32991281] [PMCID: PMC8513803] [DOI: 10.1109/tmi.2020.3027665]
Abstract
We propose a deep-learning based annotation-efficient framework for vessel detection in ultra-widefield (UWF) fundus photography (FP) that does not require de novo labeled UWF FP vessel maps. Our approach utilizes concurrently captured UWF fluorescein angiography (FA) images, for which effective deep learning approaches have recently become available, and iterates between a multi-modal registration step and a weakly-supervised learning step. In the registration step, the UWF FA vessel maps detected with a pre-trained deep neural network (DNN) are registered with the UWF FP via parametric chamfer alignment. The warped vessel maps can be used as the tentative training data but inevitably contain incorrect (noisy) labels due to the differences between FA and FP modalities and the errors in the registration. In the learning step, a robust learning method is proposed to train DNNs with noisy labels. The detected FP vessel maps are used for the registration in the following iteration. The registration and the vessel detection benefit from each other and are progressively improved. Once trained, the UWF FP vessel detection DNN from the proposed approach allows FP vessel detection without requiring concurrently captured UWF FA images. We validate the proposed framework on a new UWF FP dataset, PRIME-FP20, and on existing narrow-field FP datasets. Experimental evaluation, using both pixel-wise metrics and the CAL metrics designed to provide better agreement with human assessment, shows that the proposed approach provides accurate vessel detection, without requiring manually labeled UWF FP training data.
19

20
Nanodiagnostics and Nanotherapeutics for age-related macular degeneration. J Control Release 2021; 329:1262-1282. [DOI: 10.1016/j.jconrel.2020.10.054]
21
Wang X, Ji Z, Ma X, Zhang Z, Yi Z, Zheng H, Fan W, Chen C. Automated Grading of Diabetic Retinopathy with Ultra-Widefield Fluorescein Angiography and Deep Learning. J Diabetes Res 2021; 2021:2611250. [PMID: 34541004] [PMCID: PMC8445732] [DOI: 10.1155/2021/2611250]
Abstract
PURPOSE The objective of this study was to establish diagnostic technology to automatically grade the severity of diabetic retinopathy (DR) according to the ischemic index and leakage index with ultra-widefield fluorescein angiography (UWFA) and the Early Treatment Diabetic Retinopathy Study (ETDRS) 7-standard field (7-SF). METHODS This is a cross-sectional study. UWFA samples from 280 diabetic patients and 119 normal patients were used to train and test an artificial intelligence model to differentiate proliferative DR (PDR) from non-proliferative DR (NPDR) based on the ischemic index and leakage index with UWFA. A panel of retinal specialists determined the ground truth for our dataset before experimentation. A confusion matrix was used to measure the precision of our algorithm, and a simple linear regression function was implemented to explore how well the indexes discriminate between DR grades. In addition, the model was tested with simulated 7-SF. RESULTS The model classification of DR in the original UWFA images achieved 88.50% accuracy, and 73.68% accuracy in the simulated 7-SF images. A simple linear regression function demonstrated a significant relationship between the ischemic index and leakage index and the severity of DR. Thresholds on these two indexes were set to classify the grade of DR, achieving 76.8% accuracy. CONCLUSIONS The optimization of the cycle generative adversarial network (CycleGAN) and convolutional neural network (CNN) model classifier achieved DR grading based on the ischemic index and leakage index with UWFA and simulated 7-SF and provided accurate inference results. The classification accuracy with UWFA is slightly higher than that of simulated 7-SF.
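The final grading step amounts to comparing the two indexes against fitted cut-offs; a toy sketch of such a rule (the function name and the threshold values below are placeholders, not the values fitted in the study):

```python
def grade_dr(ischemic_index: float, leakage_index: float,
             ischemia_cut: float = 0.25, leakage_cut: float = 0.15) -> str:
    """Classify DR severity as PDR when either index exceeds its cut-off.

    The cut-off defaults are illustrative only; in the study they were
    fitted from the regression of index values against DR grade.
    """
    if ischemic_index >= ischemia_cut or leakage_index >= leakage_cut:
        return "PDR"
    return "NPDR"
```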
Affiliation(s)
- Xiaoling Wang
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, China
- Zexuan Ji
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Xiao Ma
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Ziyue Zhang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Zuohuizi Yi
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, China
- Hongmei Zheng
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, China
- Wen Fan
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Changzheng Chen
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, China
22
Rodrigues EO, Conci A, Liatsis P. ELEMENT: Multi-Modal Retinal Vessel Segmentation Based on a Coupled Region Growing and Machine Learning Approach. IEEE J Biomed Health Inform 2020; 24:3507-3519. [PMID: 32750920] [DOI: 10.1109/jbhi.2020.2999257]
Abstract
Vascular structures in the retina contain important information for the detection and analysis of ocular diseases, including age-related macular degeneration, diabetic retinopathy and glaucoma. Commonly used modalities in diagnosis of these diseases are fundus photography, scanning laser ophthalmoscope (SLO) and fluorescein angiography (FA). Typically, retinal vessel segmentation is carried out either manually or interactively, which makes it time consuming and prone to human errors. In this research, we propose a new multi-modal framework for vessel segmentation called ELEMENT (vEsseL sEgmentation using Machine lEarning and coNnecTivity). This framework consists of feature extraction and pixel-based classification using region growing and machine learning. The proposed features capture complementary evidence based on grey level and vessel connectivity properties. The latter information is seamlessly propagated through the pixels at the classification phase. ELEMENT reduces inconsistencies and speeds up the segmentation throughput. We analyze and compare the performance of the proposed approach against state-of-the-art vessel segmentation algorithms in three major groups of experiments, for each of the ocular modalities. Our method produced higher overall performance, with an overall accuracy of 97.40%, compared to 25 of the 26 state-of-the-art approaches, including six works based on deep learning, evaluated on the widely known DRIVE fundus image dataset. In the case of the STARE, CHASE-DB, VAMPIRE FA, IOSTAR SLO and RC-SLO datasets, the proposed framework outperformed all of the state-of-the-art methods with accuracies of 98.27%, 97.78%, 98.34%, 98.04% and 98.35%, respectively.
23
Bawany MH, Ding L, Ramchandran RS, Sharma G, Wykoff CC, Kuriyan AE. Automated vessel density detection in fluorescein angiography images correlates with vision in proliferative diabetic retinopathy. PLoS One 2020; 15:e0238958. [PMID: 32915904] [PMCID: PMC7485882] [DOI: 10.1371/journal.pone.0238958]
Abstract
Purpose To investigate the correlation between quantifiable vessel density, computed in an automated fashion from ultra-widefield fluorescein angiography (UWFFA) images of patients with proliferative diabetic retinopathy (PDR), and visual acuity and macular thickness. Methods We performed a secondary analysis of a prospective randomized controlled trial. We designed and trained an algorithm to automate retinal vessel detection from input UWFFA images, then used it to study the correlation between baseline vessel density and best corrected visual acuity (BCVA) and central retinal thickness (CRT). Reliability of the algorithm was tested using the intraclass correlation coefficient (ICC). Forty-two patients from the Intravitreal Aflibercept for Retinal Non-Perfusion in Proliferative Diabetic Retinopathy (RECOVERY) trial who had both baseline UWFFA images and optical coherence tomography (OCT) data were included in our study; these patients had PDR without significant center-involving diabetic macular edema (CRT ≤320 μm). Results Our algorithm analyzed UWFFA images with a reliability measure (ICC) of 0.98. A positive correlation (r = 0.4071, p = 0.0075) was found between vessel density and BCVA. No correlation was found between vessel density and CRT. Conclusions Our algorithm is capable of reliably quantifying vessel density in an automated fashion from baseline UWFFA images. We found a positive correlation between computed vessel density and BCVA in PDR patients without center-involving macular edema, but no correlation with CRT. Translational relevance Our work is the first to offer an algorithm capable of quantifying vessel density in an automated fashion from UWFFA images, allowing us to work toward studying the relationship between retinal vascular changes and important clinical endpoints, including visual acuity, in ischemic eye diseases.
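The correlation statistic reported above is an ordinary Pearson r between per-eye vessel density and the clinical endpoint; a minimal sketch (function name ours):

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()  # center both samples
    return float((xc * yc).sum() / np.sqrt((xc * xc).sum() * (yc * yc).sum()))
```

An r of about 0.41, as found here, indicates a moderate positive association; the accompanying p-value would come from a t-test on r with n - 2 degrees of freedom.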
Affiliation(s)
- Mohammad H. Bawany
- University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- Li Ding
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, New York, United States of America
- Rajeev S. Ramchandran
- Department of Ophthalmology, University of Rochester Medical Center, Rochester, New York, United States of America
- Gaurav Sharma
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, New York, United States of America
- Charles C. Wykoff
- Retina Consultants of Houston, Houston, Texas, United States of America
- Blanton Eye Institute, Houston Methodist Hospital & Weill Cornell Medical College, Houston, Texas, United States of America
- Ajay E. Kuriyan
- Department of Ophthalmology, University of Rochester Medical Center, Rochester, New York, United States of America
- Retina Service, Wills Eye Hospital, Philadelphia, Pennsylvania, United States of America
- Center for Visual Science, University of Rochester, Rochester, New York, United States of America
24
Kalavar M, Al-Khersan H, Sridhar J, Gorniak RJ, Lakhani PC, Flanders AE, Kuriyan AE. Applications of Artificial Intelligence for the Detection, Management, and Treatment of Diabetic Retinopathy. Int Ophthalmol Clin 2020; 60:127-145. [PMID: 33093322] [PMCID: PMC8514105] [DOI: 10.1097/iio.0000000000000333]
Abstract
Rates of diabetic retinopathy (DR) and diabetic macular edema (DME), a common ocular complication of diabetes mellitus, are increasing worldwide. There is a substantial burden concerning the detection and management of this condition, particularly in low-resource settings, due to limitations such as the time, cost, and labor associated with current screening and treatment methods. Artificial intelligence (AI) is a modality of pattern recognition that has the potential to combat these limitations in a reliable and cost-effective way. This review explores the various applications of AI on the screening, management, and treatment of DR and DME. AI applications for detecting referable DR and DME have been the most thoroughly researched applications for this condition. While some studies exist using AI to stratify DR patients based on the risk of progression, predict treatment outcomes to anti-VEGF therapy, and explore the utilization of AI for clinical trials to develop new treatments for DR, further validation studies on larger datasets are warranted.
Affiliation(s)
- Meghana Kalavar
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL
- Hasenin Al-Khersan
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL
- Jayanth Sridhar
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL
- Paras C. Lakhani
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA
- Adam E. Flanders
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA
- Ajay E. Kuriyan
- Mid Atlantic Retina, Philadelphia, PA
- The Retina Service, Wills Eye Hospital, Philadelphia, PA
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA