1. Chen S, Hoang L, Kashani AH, Yi J. Robust semi-automatic vessel tracing in the human retinal image by an instance segmentation neural network. Sci Adv 2025; 11:eado8268. [PMID: 40184448] [PMCID: PMC11970458] [DOI: 10.1126/sciadv.ado8268]
Abstract
Vasculature morphology and hierarchy are essential for blood perfusion. The human retinal circulation is an intricate vascular system that emerges from and converges back to the optic nerve head (ONH). Tracing retinal vascular branching from the ONH allows detailed morphological quantification, yet it remains a challenging task. We present a robust semi-automatic vessel tracing algorithm for human fundus images based on an instance segmentation neural network (InSegNN). InSegNN separates and labels individual vascular trees, enabling each tree to be traced throughout its branching. Three strategies improve robustness and accuracy: pseudotemporal learning, spatial multisampling, and a dynamic probability map. We achieved 83% specificity and a 50% improvement in symmetric best dice (SBD) compared with the literature, outperformed a baseline U-Net, and reached 91% precision with 71% sensitivity. We demonstrate tracing of individual vessel trees from fundus images while simultaneously retaining vessel hierarchy information. InSegNN paves the way for subsequent analysis of vascular morphology in relation to retinal diseases.
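The spatial multisampling strategy mentioned in the abstract amounts to aggregating predictions from overlapping crops into a single probability map. A minimal sketch of that idea follows; the patch size, stride, and the stand-in model are illustrative assumptions, not InSegNN's actual configuration.

```python
import torch

def multisample_probability_map(model, image, patch=256, stride=128):
    """Average overlapping patch predictions into one full-image probability map.

    image: float tensor of shape (C, H, W); model maps (1, C, p, p) -> (1, 1, p, p) logits.
    """
    _, H, W = image.shape
    prob_sum = torch.zeros(H, W)
    count = torch.zeros(H, W)
    model.eval()
    with torch.no_grad():
        for y in range(0, max(H - patch, 0) + 1, stride):
            for x in range(0, max(W - patch, 0) + 1, stride):
                crop = image[:, y:y + patch, x:x + patch].unsqueeze(0)
                prob = torch.sigmoid(model(crop))[0, 0]
                prob_sum[y:y + patch, x:x + patch] += prob
                count[y:y + patch, x:x + patch] += 1
    return prob_sum / count.clamp(min=1)

# Toy usage with a stand-in "model" (a single convolution layer).
toy_model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
fundus = torch.rand(3, 512, 512)
prob_map = multisample_probability_map(toy_model, fundus)
print(prob_map.shape)  # torch.Size([512, 512])
```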
Affiliation(s)
- Siyi Chen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21231, USA
- Linh Hoang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21231, USA
- Amir H. Kashani: Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, MD 21231, USA
- Ji Yi: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21231, USA; Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, MD 21231, USA
2. Chen Q, Peng J, Zhao S, Liu W. Automatic artery/vein classification methods for retinal blood vessel: A review. Comput Med Imaging Graph 2024; 113:102355. [PMID: 38377630] [DOI: 10.1016/j.compmedimag.2024.102355]
Abstract
Automatic retinal arteriovenous classification can assist ophthalmologists in the early diagnosis of disease. Deep learning-based methods and topological graph-based methods have become the main solutions for retinal arteriovenous classification in recent years. This paper reviews automatic retinal arteriovenous classification methods from 2003 to 2022. Firstly, we compare the different methods and provide summary comparison tables. Secondly, we categorise the public arteriovenous classification datasets and provide annotation development tables for the different datasets. Finally, we sort out the challenges of the evaluation methods and provide a comprehensive evaluation system. Quantitative and qualitative analyses reveal the evolution of research hotspots over time, highlighting the significance of exploring the integration of deep learning with topological information in future research.
Affiliation(s)
- Qihan Chen: School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Jianqing Peng: School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China; Guangdong Provincial Key Laboratory of Fire Science and Technology, Guangzhou 510006, China
- Shen Zhao: School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Wanquan Liu: School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
3. Zhou Y, Xu M, Hu Y, Blumberg SB, Zhao A, Wagner SK, Keane PA, Alexander DC. CF-Loss: Clinically-relevant feature optimised loss function for retinal multi-class vessel segmentation and vascular feature measurement. Med Image Anal 2024; 93:103098. [PMID: 38320370] [DOI: 10.1016/j.media.2024.103098]
Abstract
Characterising clinically-relevant vascular features, such as vessel density and fractal dimension, can benefit biomarker discovery and disease diagnosis for both ophthalmic and systemic diseases. In this work, we explicitly encode vascular features into an end-to-end loss function for multi-class vessel segmentation, categorising pixels into artery, vein, uncertain pixels, and background. This clinically-relevant feature optimised loss function (CF-Loss) regulates networks to segment accurate multi-class vessel maps that produce precise vascular features. Our experiments first verify that CF-Loss significantly improves both multi-class vessel segmentation and vascular feature estimation, with two standard segmentation networks, on three publicly available datasets. We reveal that pixel-based segmentation performance is not always positively correlated with accuracy of vascular features, thus highlighting the importance of optimising vascular features directly via CF-Loss. Finally, we show that improved vascular features from CF-Loss, as biomarkers, can yield quantitative improvements in the prediction of ischaemic stroke, a real-world clinical downstream task. The code is available at https://github.com/rmaphoh/feature-loss.
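The core idea of CF-Loss, optimising a clinically relevant feature alongside the pixel loss, can be illustrated with a hedged sketch that uses vessel density (foreground fraction) as the feature. This is only an illustration; the authors' implementation and exact feature definitions are in the linked repository, and the class ordering and weighting below are assumptions.

```python
import torch
import torch.nn.functional as F

def feature_aware_loss(logits, target, lambda_feat=1.0):
    """Pixel cross-entropy plus an L1 penalty on a derived vascular feature.

    logits: (B, 4, H, W) raw scores; assumed class order: artery, vein, uncertain, background.
    target: (B, H, W) integer class labels in {0, 1, 2, 3}.
    """
    pixel_loss = F.cross_entropy(logits, target)

    probs = torch.softmax(logits, dim=1)
    # Soft vessel density: predicted artery+vein fraction vs. ground-truth fraction.
    pred_density = probs[:, 0:2].sum(dim=1).mean(dim=(1, 2))
    true_density = (target < 2).float().mean(dim=(1, 2))
    feature_loss = F.l1_loss(pred_density, true_density)

    return pixel_loss + lambda_feat * feature_loss

# Toy usage
logits = torch.randn(2, 4, 64, 64, requires_grad=True)
target = torch.randint(0, 4, (2, 64, 64))
loss = feature_aware_loss(logits, target)
loss.backward()
print(float(loss))
```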
Affiliation(s)
- Yukun Zhou: Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK; Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- MouCheng Xu: Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- Yipeng Hu: Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, UK
- Stefano B Blumberg: Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Computer Science, University College London, London WC1E 6BT, UK
- An Zhao: Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Computer Science, University College London, London WC1E 6BT, UK
- Siegfried K Wagner: NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK; Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- Pearse A Keane: NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK; Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- Daniel C Alexander: Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Computer Science, University College London, London WC1E 6BT, UK
4. Shi D, He S, Yang J, Zheng Y, He M. One-shot Retinal Artery and Vein Segmentation via Cross-modality Pretraining. Ophthalmol Sci 2024; 4:100363. [PMID: 37868792] [PMCID: PMC10585631] [DOI: 10.1016/j.xops.2023.100363]
Abstract
Purpose: To perform one-shot retinal artery and vein segmentation with cross-modality artery-vein (AV) soft-label pretraining.
Design: Cross-sectional study.
Subjects: The study included 6479 color fundus photography (CFP) and arterial-venous fundus fluorescein angiography (FFA) pairs from 1964 participants for pretraining, and 6 AV segmentation datasets with various image sources (RITE, HRF, LES-AV, AV-WIDE, PortableAV, and DRSplusAV) for one-shot finetuning and testing.
Methods: We structurally matched the arterial and venous phases of FFA with CFP. AV soft labels were generated automatically from the fluorescein intensity difference between the arterial-phase and venous-phase FFA images, and the soft labels were then used to train a generative adversarial network to generate AV soft segmentations from CFP images. We then finetuned the pretrained model to perform AV segmentation using only one image from each of the AV segmentation datasets and tested on the remainder. To investigate the effect and reliability of one-shot finetuning, we conducted experiments without finetuning and by finetuning the pretrained model on an iteratively different single image for each dataset under the same experimental setting, testing the models on the remaining images.
Main Outcome Measures: AV segmentation was assessed by area under the receiver operating characteristic curve (AUC), accuracy, Dice score, sensitivity, and specificity.
Results: After the FFA-AV soft-label pretraining, our method required only one exemplar image from each camera or modality and achieved performance similar to full-data training, with AUC ranging from 0.901 to 0.971, accuracy from 0.959 to 0.980, Dice score from 0.585 to 0.773, sensitivity from 0.574 to 0.763, and specificity from 0.981 to 0.991. Compared with no finetuning, segmentation performance improved after one-shot finetuning. When finetuned on different images in each dataset, the standard deviation of the segmentation results across models ranged from 0.001 to 0.10.
Conclusions: This study presents the first one-shot approach to retinal artery and vein segmentation. The proposed labeling method is time-saving and efficient, demonstrating a promising direction for retinal-vessel segmentation and enabling the potential for widespread application.
Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
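A minimal sketch of the soft-label idea described above: derive a per-pixel artery probability from the fluorescein intensity difference between registered arterial-phase and venous-phase FFA frames. The normalisation, temperature, and vessel-mask handling are assumptions, not the study's exact procedure.

```python
import numpy as np

def av_soft_labels(ffa_arterial, ffa_venous, vessel_mask, temperature=0.1):
    """Soft artery probability per vessel pixel from registered FFA phase images.

    Arteries fill with fluorescein earlier than veins, so the arterial-phase minus
    venous-phase intensity, pushed through a sigmoid, yields a value in (0, 1):
    close to 1 for arteries, close to 0 for veins. All inputs share one shape,
    with image intensities scaled to [0, 1].
    """
    diff = ffa_arterial - ffa_venous
    artery_prob = 1.0 / (1.0 + np.exp(-diff / temperature))
    return artery_prob * vessel_mask  # zero outside the vessel mask

# Toy usage with random arrays standing in for registered FFA frames.
rng = np.random.default_rng(0)
art_phase = rng.random((128, 128))
ven_phase = rng.random((128, 128))
mask = (rng.random((128, 128)) > 0.9).astype(float)
soft = av_soft_labels(art_phase, ven_phase, mask)
print(soft.min(), soft.max())
```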
Affiliation(s)
- Danli Shi: Centre for Eye and Vision Research (CEVR), Hong Kong SAR, China; The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Shuang He: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Jiancheng Yang: Swiss Federal Institute of Technology in Lausanne (EPFL), Lausanne, Switzerland
- Yingfeng Zheng: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Mingguang He: Centre for Eye and Vision Research (CEVR), Hong Kong SAR, China; The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
5. Gratacos G, Chakrabarti A, Ju T. Tree Recovery by Dynamic Programming. IEEE Trans Pattern Anal Mach Intell 2023; 45:15870-15882. [PMID: 37505999] [DOI: 10.1109/tpami.2023.3299868]
Abstract
Tree-like structures are common, naturally occurring objects that are of interest to many fields of study, such as plant science and biomedicine. Analysis of these structures is typically based on skeletons extracted from captured data, which often contain spurious cycles that need to be removed. We propose a dynamic programming algorithm for solving the NP-hard tree recovery problem formulated by Estrada et al. (2015), which seeks a least-cost partitioning of the graph nodes that yields a directed tree. Our algorithm finds the optimal solution by iteratively contracting the graph via node merging until the problem can be trivially solved. By carefully designing the merging sequence, our algorithm can efficiently recover optimal trees for many real-world data where the method of Estrada et al. (2015) produces only sub-optimal solutions. We also propose an approximate variant of the dynamic programming using beam search, which can process graphs containing thousands of cycles with significantly improved optimality and efficiency compared with Estrada et al. (2015).
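For context, the sketch below removes spurious cycles with a classical minimum-cost spanning arborescence (Chu-Liu/Edmonds, as implemented in networkx). This is a simpler, related formulation for recovering a directed tree from a cyclic graph, not the node-merging dynamic program proposed in the paper.

```python
import networkx as nx

# Small directed graph with edge costs and a spurious cycle (b -> c -> d -> b).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("root", "a", 1.0),
    ("a", "b", 1.0),
    ("b", "c", 2.0),
    ("c", "d", 1.0),
    ("d", "b", 5.0),   # cycle-closing edge
    ("a", "d", 3.0),
])

# Minimum-cost spanning arborescence: a directed tree reaching every node once.
tree = nx.minimum_spanning_arborescence(G)
print(sorted(tree.edges()))
```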
6. Suman S, Tiwari AK, Singh K. Computer-aided diagnostic system for hypertensive retinopathy: A review. Comput Methods Programs Biomed 2023; 240:107627. [PMID: 37320942] [DOI: 10.1016/j.cmpb.2023.107627]
Abstract
Hypertensive retinopathy (HR) is a retinal disease caused by elevated blood pressure sustained over a prolonged period. There are no obvious signs in the early stages of high blood pressure, but over time it affects various parts of the body, including the eyes. HR is a biomarker for several illnesses, including retinal diseases, atherosclerosis, stroke, kidney disease, and cardiovascular risk. Early microcirculation abnormalities in chronic diseases can be diagnosed through retinal examination before the onset of major clinical consequences. Computer-aided diagnosis (CAD) plays a vital role in the early identification of HR with improved diagnostic accuracy, is time-efficient, and demands fewer resources. Recently, numerous studies have been reported on the automatic identification of HR. This paper provides a comprehensive review of the automated tasks of artery-vein (A/V) classification, arteriovenous ratio (AVR) computation, HR detection (binary classification), and HR severity grading. The review is conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. The paper discusses the clinical features of HR, the availability of datasets, existing methods for A/V classification, AVR computation, HR detection, and severity grading, and performance evaluation metrics. The reviewed articles are summarized with classifier details, the methodologies adopted, performance comparisons, dataset details, their pros and cons, and the computational platform used. For each task, a summary and a critical in-depth analysis are provided, along with common research issues and challenges in the existing studies. Finally, the paper proposes future research directions to overcome challenges associated with dataset availability, HR detection, and severity grading.
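Of the tasks reviewed above, AVR computation is the most formulaic: the six largest arterioles and venules are reduced to the central retinal artery and vein equivalents (CRAE, CRVE), and their ratio is reported. The sketch below uses the widely cited Knudtson revised pairing formulas (constants 0.88 for arterioles and 0.95 for venules); the calibres in the example are made up.

```python
def combined_caliber(widths, k):
    """Iteratively pair the widest with the narrowest vessel (Knudtson revised formula)."""
    widths = sorted(widths, reverse=True)
    while len(widths) > 1:
        w_big, w_small = widths.pop(0), widths.pop(-1)
        widths.append(k * (w_big ** 2 + w_small ** 2) ** 0.5)
        widths.sort(reverse=True)
    return widths[0]

def arteriovenous_ratio(arteriole_widths, venule_widths):
    crae = combined_caliber(arteriole_widths, k=0.88)  # central retinal artery equivalent
    crve = combined_caliber(venule_widths, k=0.95)     # central retinal vein equivalent
    return crae / crve

# Example with six fictitious calibres (in micrometres) per vessel type.
arterioles = [110, 105, 98, 95, 90, 85]
venules = [140, 132, 128, 120, 115, 110]
print(round(arteriovenous_ratio(arterioles, venules), 3))
```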
Affiliation(s)
- Supriya Suman: Interdisciplinary Research Platform (IDRP): Smart Healthcare, Indian Institute of Technology, N.H. 62, Nagaur Road, Karwar, Jodhpur, Rajasthan 342030, India
- Anil Kumar Tiwari: Department of Electrical Engineering, Indian Institute of Technology, N.H. 62, Nagaur Road, Karwar, Jodhpur, Rajasthan 342030, India
- Kuldeep Singh: Department of Pediatrics, All India Institute of Medical Sciences, Basni Industrial Area Phase-2, Jodhpur, Rajasthan 342005, India
7. Gosak M, Milojević M, Duh M, Skok K, Perc M. Uncovering the secrets of nature's design: Reply to comments on "Networks behind the morphology and structural design of living systems". Phys Life Rev 2023; 46:65-68. [PMID: 37263120] [DOI: 10.1016/j.plrev.2023.05.007]
Affiliation(s)
- Marko Gosak: Faculty of Natural Sciences and Mathematics, University of Maribor, Koroška cesta 160, 2000 Maribor, Slovenia; Faculty of Medicine, University of Maribor, Taborska ulica 8, 2000 Maribor, Slovenia; Alma Mater Europaea, Slovenska ulica 17, 2000 Maribor, Slovenia
- Marko Milojević: Faculty of Medicine, University of Maribor, Taborska ulica 8, 2000 Maribor, Slovenia
- Maja Duh: Faculty of Natural Sciences and Mathematics, University of Maribor, Koroška cesta 160, 2000 Maribor, Slovenia
- Kristijan Skok: Faculty of Medicine, University of Maribor, Taborska ulica 8, 2000 Maribor, Slovenia
- Matjaž Perc: Faculty of Natural Sciences and Mathematics, University of Maribor, Koroška cesta 160, 2000 Maribor, Slovenia; Alma Mater Europaea, Slovenska ulica 17, 2000 Maribor, Slovenia; Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 404332, Taiwan; Complexity Science Hub Vienna, Josefstädterstrasse 39, 1080 Vienna, Austria
8. Yi J, Chen C. Multi-Task Segmentation and Classification Network for Artery/Vein Classification in Retina Fundus. Entropy (Basel) 2023; 25:1148. [PMID: 37628178] [PMCID: PMC10453284] [DOI: 10.3390/e25081148]
Abstract
Automatic classification of arteries and veins (A/V) in fundus images has gained considerable attention from researchers due to its potential to detect vascular abnormalities and facilitate the diagnosis of some systemic diseases. However, the variability in vessel structures and the marginal distinction between arteries and veins pose challenges to accurate A/V classification. This paper proposes a novel Multi-task Segmentation and Classification Network (MSC-Net) that utilizes vessel features extracted by a dedicated module to improve A/V classification and alleviate the aforementioned limitations. The proposed method introduces three modules: a Multi-scale Vessel Extraction (MVE) module, which distinguishes vessel pixels from background using vessel semantics; a Multi-structure A/V Extraction (MAE) module, which classifies arteries and veins by combining the original image with the vessel features produced by the MVE module; and a Multi-source Feature Integration (MFI) module, which merges the outputs of the former two modules to obtain the final A/V classification results. Extensive empirical experiments verify the high performance of the proposed MSC-Net for retinal A/V classification over state-of-the-art methods on several public datasets.
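The vessel-then-A/V staging described above can be caricatured in a few lines of PyTorch: a first branch predicts a vessel map, the A/V branch sees the image concatenated with that map, and a final layer fuses both outputs. The tiny convolutional blocks below are placeholders, not the MVE/MAE/MFI modules of MSC-Net.

```python
import torch
import torch.nn as nn

class TwoStageAVNet(nn.Module):
    """Toy stand-in for a vessel-extraction branch feeding an A/V branch."""

    def __init__(self, in_ch=3):
        super().__init__()
        self.vessel_branch = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                       # vessel logit map
        )
        self.av_branch = nn.Sequential(
            nn.Conv2d(in_ch + 1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 1),                       # artery / vein / background logits
        )
        self.fuse = nn.Conv2d(3 + 1, 3, 1)             # merge both outputs

    def forward(self, x):
        vessel = self.vessel_branch(x)
        av = self.av_branch(torch.cat([x, torch.sigmoid(vessel)], dim=1))
        out = self.fuse(torch.cat([av, vessel], dim=1))
        return out, vessel

net = TwoStageAVNet()
av_logits, vessel_logits = net(torch.rand(1, 3, 64, 64))
print(av_logits.shape, vessel_logits.shape)  # (1, 3, 64, 64) (1, 1, 64, 64)
```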
Affiliation(s)
- Chouyu Chen: Department of Computer Science and Technology, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
9. Krzywicki T, Brona P, Zbrzezny AM, Grzybowski AE. A Global Review of Publicly Available Datasets Containing Fundus Images: Characteristics, Barriers to Access, Usability, and Generalizability. J Clin Med 2023; 12:3587. [PMID: 37240693] [DOI: 10.3390/jcm12103587]
Abstract
This article provides a comprehensive and up-to-date overview of repositories that contain color fundus images. We analyzed them regarding availability and legality, presented the datasets' characteristics, and identified labeled and unlabeled image sets. The study aimed to compile all publicly available color fundus image datasets into a central catalog.
Affiliation(s)
- Tomasz Krzywicki: Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland
- Piotr Brona: Department of Ophthalmology, Poznan City Hospital, 61-285 Poznań, Poland
- Agnieszka M Zbrzezny: Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland; Faculty of Design, SWPS University of Social Sciences and Humanities, Chodakowska 19/31, 03-815 Warsaw, Poland
- Andrzej E Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, 60-836 Poznań, Poland
10. End-to-End Automatic Classification of Retinal Vessel Based on Generative Adversarial Networks with Improved U-Net. Diagnostics (Basel) 2023; 13:1148. [PMID: 36980456] [PMCID: PMC10047448] [DOI: 10.3390/diagnostics13061148]
Abstract
The retinal vessels are the only vessels in the human body that can be observed directly by non-invasive imaging techniques. Retinal vessel morphology and structure are important objects of concern for physicians in the early diagnosis and treatment of related diseases, and the classification of retinal vessels has important guiding significance at the basic stage of diagnosis and treatment. This paper proposes a novel method based on generative adversarial networks with an improved U-Net, which achieves synchronous automatic segmentation and classification of blood vessels in a single end-to-end network. The proposed method removes the dependence on intermediate segmentation results in the multi-class classification task. Moreover, it accurately classifies arteries and veins while also identifying arteriovenous crossings. The validity of the proposed method is evaluated on the RITE dataset: the overall classification accuracy reaches 96.87%, and the sensitivity and specificity of arteriovenous classification reach 91.78% and 97.25%, respectively. The results verify the effectiveness of the proposed method and show competitive classification performance.
11. Xu X, Yang P, Wang H, Xiao Z, Xing G, Zhang X, Wang W, Xu F, Zhang J, Lei J. AV-casNet: Fully Automatic Arteriole-Venule Segmentation and Differentiation in OCT Angiography. IEEE Trans Med Imaging 2023; 42:481-492. [PMID: 36227826] [DOI: 10.1109/tmi.2022.3214291]
Abstract
Automatic segmentation and differentiation of retinal arterioles and venules (AV), defined as the small blood vessels directly before and after the capillary plexus, are of great importance for the diagnosis of various eye diseases and systemic diseases, such as diabetic retinopathy, hypertension, and cardiovascular diseases. Optical coherence tomography angiography (OCTA) is a recent imaging modality that provides capillary-level blood flow information. However, OCTA lacks the colorimetric and geometric differences between arterioles and venules that fundus photography provides. Various methods have been proposed to differentiate AV in OCTA, but they typically need the guidance of other imaging modalities. In this study, we propose a cascaded neural network to automatically segment and differentiate AV solely based on OCTA. A convolutional neural network (CNN) module is first applied to generate an initial segmentation, followed by a graph neural network (GNN) that improves the connectivity of the initial segmentation. Various CNN and GNN architectures are employed and compared. The proposed method is evaluated on multi-center clinical datasets, including 3 × 3 mm² and 6 × 6 mm² OCTA scans. The proposed method holds the potential to enrich OCTA image information for the diagnosis of various diseases.
12. Iqbal S, Khan TM, Naveed K, Naqvi SS, Nawaz SJ. Recent trends and advances in fundus image analysis: A review. Comput Biol Med 2022; 151:106277. [PMID: 36370579] [DOI: 10.1016/j.compbiomed.2022.106277]
Abstract
Automated retinal image analysis holds prime significance in the accurate diagnosis of various critical eye diseases, including diabetic retinopathy (DR), age-related macular degeneration (AMD), atherosclerosis, and glaucoma. Manual diagnosis of retinal diseases by ophthalmologists takes time, effort, and financial resources, and is prone to error in comparison with computer-aided diagnosis systems. In this context, robust classification and segmentation of retinal images are primary operations that aid clinicians in the early screening of patients to ensure the prevention and/or treatment of these diseases. This paper conducts an extensive review of the state-of-the-art methods for the detection and segmentation of retinal image features. Existing notable techniques for the detection of retinal features are categorized into essential groups and compared in depth. Additionally, a summary of quantifiable performance measures for various important stages of retinal image analysis, such as image acquisition and preprocessing, is provided. Finally, the datasets widely used in the literature for analyzing retinal images are described and their significance is emphasized.
Affiliation(s)
- Shahzaib Iqbal: Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Tariq M Khan: School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
- Khuram Naveed: Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan; Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
- Syed S Naqvi: Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Syed Junaid Nawaz: Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
13. García-Sierra R, López-Lifante VM, Isusquiza Garcia E, Heras A, Besada I, Verde Lopez D, Alzamora MT, Forés R, Montero-Alia P, Ugarte Anduaga J, Torán-Monserrat P. Automated Systems for Calculating Arteriovenous Ratio in Retinographies: A Scoping Review. Diagnostics (Basel) 2022; 12:2865. [PMID: 36428925] [PMCID: PMC9689345] [DOI: 10.3390/diagnostics12112865]
Abstract
There is evidence of an association between hypertension and retinal arteriolar narrowing. Manual measurement of retinal vessels comes with additional variability, which can be eliminated using automated software. This scoping review aims to summarize research on automated retinal vessel analysis systems. Searches were performed on Medline, Scopus, and Cochrane to find studies examining automated systems for the diagnosis of retinal vascular alterations caused by hypertension using the following keywords: diagnosis; diagnostic screening programs; image processing, computer-assisted; artificial intelligence; electronic data processing; hypertensive retinopathy; hypertension; retinal vessels; arteriovenous ratio and retinal image analysis. The searches generated 433 articles. Of these, 25 articles published from 2010 to 2022 were included in the review. The retinographies analyzed were extracted from international databases and real scenarios. Automated systems to detect alterations in the retinal vasculature are being introduced into clinical practice for diagnosis in ophthalmology and other medical specialties due to the association of such changes with various diseases. These systems make the classification of hypertensive retinopathy and cardiovascular risk more reliable. They also make it possible for diagnosis to be performed in primary care, thus optimizing ophthalmological visits.
Affiliation(s)
- Rosa García-Sierra: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain; Multidisciplinary Research Group in Health and Society GREMSAS (2017 SGR 917), 08007 Barcelona, Spain; Nursing Department, Faculty of Medicine, Universitat Autònoma de Barcelona, Campus Bellaterra, 08193 Barcelona, Spain; Primary Care Group, Germans Trias i Pujol Research Institute (IGTP), 08916 Badalona, Spain
- Victor M. López-Lifante: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain; Palau-solità i Plegamans Primary Healthcare Centre, Palau-solità i Plegamans, Gerència d’Àmbit d’Atenció Primària Metropolitana Nord, Institut Català de la Salut, 08184 Barcelona, Spain
- Antonio Heras: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain; Primary Healthcare Centre Riu Nord-Riu Sud, Gerència d’Àmbit d’Atenció Primària Metropolitana Nord, Institut Català de la Salut, Santa Coloma de Gramenet, 08921 Barcelona, Spain
- Idoia Besada: ULMA Medical Technologies, S. Coop, 20560 Onati, Spain
- David Verde Lopez: Institut Universitari d’Investigació en Atenció Primària Jordi Gol (IDIAP Jordi Gol), 08007 Barcelona, Spain
- Maria Teresa Alzamora: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain; Primary Healthcare Centre Riu Nord-Riu Sud, Gerència d’Àmbit d’Atenció Primària Metropolitana Nord, Institut Català de la Salut, Santa Coloma de Gramenet, 08921 Barcelona, Spain
- Rosa Forés: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain
- Pilar Montero-Alia: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain
- Pere Torán-Monserrat: Research Support Unit Metropolitana Nord, Primary Care Research Institut Jordi Gol (IDIAPJGol), 08303 Mataró, Spain; Multidisciplinary Research Group in Health and Society GREMSAS (2017 SGR 917), 08007 Barcelona, Spain; Primary Care Group, Germans Trias i Pujol Research Institute (IGTP), 08916 Badalona, Spain; Department of Medicine, Faculty of Medicine, Universitat de Girona, 17004 Girona, Spain
14. Chowdhury AZME, Mann G, Morgan WH, Vukmirovic A, Mehnert A, Sohel F. MSGANet-RAV: A multiscale guided attention network for artery-vein segmentation and classification from optic disc and retinal images. J Optom 2022; 15 Suppl 1:S58-S69. [PMID: 36396540] [PMCID: PMC9732479] [DOI: 10.1016/j.optom.2022.11.001]
Abstract
Background: Retinal and optic disc images are used to assess changes in the retinal vasculature. These can be changes associated with diseases such as diabetic retinopathy and glaucoma, or changes induced using ophthalmodynamometry to measure arterial and venous pressure. Key steps toward automating the assessment of these changes are the segmentation and classification of the veins and arteries. However, such segmentation and classification are still labelled manually by experts; automating this labelling is challenging because of the complex morphology, anatomical variations, alterations due to disease, and the scarcity of labelled data for algorithm development. We present a deep machine learning solution called the multiscale guided attention network for retinal artery and vein segmentation and classification (MSGANet-RAV).
Methods: MSGANet-RAV was developed and tested on 383 colour clinical optic disc images from LEI-CENTRAL, constructed in-house, and 40 colour fundus images from the public AV-DRIVE dataset. The datasets have a mean optic disc occupancy per image of 60.6% and 2.18%, respectively. MSGANet-RAV is a U-shaped encoder-decoder network, where the encoder extracts multiscale features and the decoder includes a sequence of self-attention modules. The self-attention modules explore, guide, and incorporate vessel-specific structural and contextual feature information to segment and classify central optic disc and retinal vessel pixels.
Results: MSGANet-RAV achieved a pixel classification accuracy of 93.15%, sensitivity of 92.19%, and specificity of 94.13% on LEI-CENTRAL, outperforming several reference models. It performed similarly well on AV-DRIVE, with an accuracy, sensitivity, and specificity of 95.48%, 93.59%, and 97.27%, respectively.
Conclusion: The results show the efficacy of MSGANet-RAV for identifying central optic disc and retinal arteries and veins. The method can be used in automated systems designed to quantitatively assess vascular changes in retinal and optic disc images.
Affiliation(s)
- A Z M Ehtesham Chowdhury: School of Information Technology, Murdoch University, 90 South Street, Murdoch, WA 6150, Australia
- Graham Mann: School of Information Technology, Murdoch University, 90 South Street, Murdoch, WA 6150, Australia
- William Huxley Morgan: Lions Eye Institute, 2 Verdun Street, Nedlands, WA 6009, Australia; Centre for Ophthalmology and Visual Science, The University of Western Australia, 35 Stirling Highway, Perth, WA 6009, Australia
- Aleksandar Vukmirovic: Lions Eye Institute, 2 Verdun Street, Nedlands, WA 6009, Australia; Centre for Ophthalmology and Visual Science, The University of Western Australia, 35 Stirling Highway, Perth, WA 6009, Australia
- Andrew Mehnert: Lions Eye Institute, 2 Verdun Street, Nedlands, WA 6009, Australia; Centre for Ophthalmology and Visual Science, The University of Western Australia, 35 Stirling Highway, Perth, WA 6009, Australia
- Ferdous Sohel: School of Information Technology, Murdoch University, 90 South Street, Murdoch, WA 6150, Australia
15. Zhou Y, Wagner SK, Chia MA, Zhao A, Woodward-Court P, Xu M, Struyven R, Alexander DC, Keane PA. AutoMorph: Automated Retinal Vascular Morphology Quantification Via a Deep Learning Pipeline. Transl Vis Sci Technol 2022; 11:12. [PMID: 35833885] [PMCID: PMC9290317] [DOI: 10.1167/tvst.11.7.12]
Abstract
Purpose: To externally validate a deep learning pipeline (AutoMorph) for automated analysis of retinal vascular morphology on fundus photographs. AutoMorph has been made publicly available, facilitating widespread research in ophthalmic and systemic diseases.
Methods: AutoMorph consists of four functional modules: image preprocessing, image quality grading, anatomical segmentation (including binary vessel, artery/vein, and optic disc/cup segmentation), and vascular morphology feature measurement. Image quality grading and anatomical segmentation use the most recent deep learning techniques. We employ a model ensemble strategy to achieve robust results and analyze the prediction confidence to rectify false gradable cases in image quality grading. We externally validate the performance of each module on several independent publicly available datasets.
Results: The EfficientNet-b4 architecture used in the image grading module achieves performance comparable to that of the state of the art for EyePACS-Q, with an F1-score of 0.86. The confidence analysis reduces the number of images incorrectly assessed as gradable by 76%. Binary vessel segmentation achieves an F1-score of 0.73 on AV-WIDE and 0.78 on DR HAGIS. Artery/vein scores are 0.66 on IOSTAR-AV, and disc segmentation achieves 0.94 in IDRID. Vascular morphology features measured from the AutoMorph segmentation map and expert annotation show good to excellent agreement.
Conclusions: AutoMorph modules perform well even when external validation data show domain differences from training data (e.g., with different imaging devices). This fully automated pipeline can thus allow detailed, efficient, and comprehensive analysis of retinal vascular morphology on color fundus photographs.
Translational Relevance: By making AutoMorph publicly available and open source, we hope to facilitate ophthalmic and systemic disease research, particularly in the emerging field of oculomics.
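One of the vascular morphology features such a pipeline measures, fractal dimension, is commonly estimated by box counting on the binary vessel map. The sketch below is a generic box-counting estimate, not AutoMorph's exact implementation.

```python
import numpy as np

def box_counting_dimension(binary_map, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate fractal dimension as the slope of log(count) versus log(1 / box size)."""
    counts = []
    H, W = binary_map.shape
    for s in box_sizes:
        count = 0
        for y in range(0, H, s):
            for x in range(0, W, s):
                if binary_map[y:y + s, x:x + s].any():
                    count += 1
        counts.append(count)
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy vessel map: a straight line should give a dimension close to 1.
vessels = np.zeros((128, 128), dtype=bool)
vessels[64, :] = True
print(round(box_counting_dimension(vessels), 2))
```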
Affiliation(s)
- Yukun Zhou: Centre for Medical Image Computing, University College London, London, UK; NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Siegfried K. Wagner: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Mark A. Chia: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- An Zhao: Centre for Medical Image Computing, University College London, London, UK; Department of Computer Science, University College London, London, UK
- Peter Woodward-Court: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK; Institute of Health Informatics, University College London, London, UK
- Moucheng Xu: Centre for Medical Image Computing, University College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Robbert Struyven: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Daniel C. Alexander: Centre for Medical Image Computing, University College London, London, UK; Department of Computer Science, University College London, London, UK
- Pearse A. Keane: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
16. Lee AX, Saxena A, Chua J, Schmetterer L, Tan B. Automated Retinal Vascular Topological Information Extraction From OCTA. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1839-1842. [PMID: 36086557] [DOI: 10.1109/embc48229.2022.9871160]
Abstract
The retinal vascular system adapts and reacts rapidly to ocular diseases such as glaucoma, diabetic retinopathy, and age-related macular degeneration. Here we present a combination of methods to further extract vascular information from [Formula: see text] wide-field optical coherence tomography angiography (OCTA). An integrated U-Net for the segmentation and classification of arteries and veins reached a segmentation IoU of 0.7095±0.0224, and classification IoUs of 0.8793±0.1049 and 0.8928±0.0929, respectively. A correction algorithm that uses topological information was developed to fix vessel misclassification and connectivity, yielding an average IoU increase of 8.29%. Finally, vessel morphometry by branch order was extracted, allowing direct comparison of arteries/veins, arterioles/venules, and capillaries.
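For reference, the IoU figures quoted above follow the standard intersection-over-union definition, which can be computed directly from two binary masks:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two boolean masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
print(iou(a, b))  # 9 / 23 ≈ 0.391
```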
17. Khan MZ, Lee Y. Stacked Ensemble Network to Assess the Structural Variations in Retina: A Bio-marker for Early Disease Diagnosis. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3222-3226. [PMID: 36085628] [DOI: 10.1109/embc48229.2022.9871379]
Abstract
The retina is a unique tissue that extends the human brain, transducing incoming light into neural spikes. Researchers collaborating with domain experts have proposed numerous deep networks to extract vessels from the retina; however, these techniques respond poorly to micro-vessels. The proposed method develops a stacked ensemble of deep neural architectures for precise vessel extraction. It uses a bidirectional LSTM to fill gaps in disjoint vessels and applies W-Net for boundary refinement, emphasizing local regions to achieve better results for micro-vessel extraction. The platform combines the strengths of several networks to improve the automated screening process and shows promising results on benchmark datasets.
18. State-of-the-art retinal vessel segmentation with minimalistic models. Sci Rep 2022; 12:6174. [PMID: 35418576] [PMCID: PMC9007957] [DOI: 10.1038/s41598-022-09675-y]
Abstract
The segmentation of retinal vasculature from eye fundus images is a fundamental task in retinal image analysis. Over recent years, increasingly complex approaches based on sophisticated Convolutional Neural Network architectures have been pushing performance on well-established benchmark datasets. In this paper, we take a step back and analyze the real need of such complexity. We first compile and review the performance of 20 different techniques on some popular databases, and we demonstrate that a minimalistic version of a standard U-Net with several orders of magnitude less parameters, carefully trained and rigorously evaluated, closely approximates the performance of current best techniques. We then show that a cascaded extension (W-Net) reaches outstanding performance on several popular datasets, still using orders of magnitude less learnable weights than any previously published work. Furthermore, we provide the most comprehensive cross-dataset performance analysis to date, involving up to 10 different databases. Our analysis demonstrates that the retinal vessel segmentation is far from solved when considering test images that differ substantially from the training data, and that this task represents an ideal scenario for the exploration of domain adaptation techniques. In this context, we experiment with a simple self-labeling strategy that enables moderate enhancement of cross-dataset performance, indicating that there is still much room for improvement in this area. Finally, we test our approach on Artery/Vein and vessel segmentation from OCTA imaging problems, where we again achieve results well-aligned with the state-of-the-art, at a fraction of the model complexity available in recent literature. Code to reproduce the results in this paper is released.
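A "minimalistic" U-Net of the kind the paper argues for can be written in a few dozen lines. The sketch below uses an assumed base width of 8 channels and only two resolution levels, which is illustrative rather than the authors' released configuration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net with very few parameters for vessel segmentation."""

    def __init__(self, in_ch=3, base=8):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = conv_block(base * 2 + base, base)
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)  # vessel logits

model = TinyUNet()
print(sum(p.numel() for p in model.parameters()))  # only a few thousand weights
logits = model(torch.rand(1, 3, 64, 64))
print(logits.shape)  # (1, 1, 64, 64)
```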
19. Hu J, Wang H, Wu G, Cao Z, Mou L, Zhao Y, Zhang J. Multi-scale Interactive Network with Artery/Vein Discriminator for Retinal Vessel Classification. IEEE J Biomed Health Inform 2022; 26:3896-3905. [PMID: 35394918] [DOI: 10.1109/jbhi.2022.3165867]
Abstract
Automatic classification of retinal arteries and veins plays an important role in assisting clinicians to diagnose cardiovascular and eye-related diseases. However, owing to the high degree of anatomical variation across the population and to inconsistent labels arising from annotators' subjective judgment in the available training data, most existing methods suffer from blood vessel discontinuity and arteriovenous confusion, so the artery/vein (A/V) classification task still faces great challenges. In this work, we propose a multi-scale interactive network with an A/V discriminator for retinal artery and vein recognition, which reduces arteriovenous confusion and alleviates the disturbance of noisy labels. A multi-scale interaction (MI) module is designed in the encoder to realize cross-space multi-scale feature interaction in fundus images, effectively integrating high-level and low-level context information. In particular, we design an A/V discriminator (AVD) that utilizes the independent and shared information between arteries and veins, combined with a topology loss, to further strengthen the model's ability to resolve arteriovenous confusion. In addition, we adopt a sample re-weighting (SW) strategy to effectively alleviate the disturbance from data labeling errors. The proposed model is verified on three publicly available fundus image datasets (AV-DRIVE, HRF, LES-AV) and a private dataset, achieving accuracies of 97.47%, 96.91%, 97.79%, and 98.18%, respectively. Extensive experimental results demonstrate that our method achieves competitive performance compared with state-of-the-art methods for A/V classification. To address the problem of training data scarcity, we publicly release 100 fundus images with A/V annotations to promote relevant research in the community.
20. TW-GAN: Topology and width aware GAN for retinal artery/vein classification. Med Image Anal 2022; 77:102340. [DOI: 10.1016/j.media.2021.102340]
21. Hatamizadeh A, Hosseini H, Patel N, Choi J, Pole CC, Hoeferlin CM, Schwartz SD, Terzopoulos D. RAVIR: A Dataset and Methodology for the Semantic Segmentation and Quantitative Analysis of Retinal Arteries and Veins in Infrared Reflectance Imaging. IEEE J Biomed Health Inform 2022; 26:3272-3283. [PMID: 35349464] [DOI: 10.1109/jbhi.2022.3163352]
Abstract
The retinal vasculature provides important clues in the diagnosis and monitoring of systemic diseases including hypertension and diabetes. The microvascular system is of primary involvement in such conditions, and the retina is the only anatomical site where the microvasculature can be directly observed. The objective assessment of retinal vessels has long been considered a surrogate biomarker for systemic vascular diseases, and with recent advancements in retinal imaging and computer vision technologies, this topic has become the subject of renewed attention. In this paper, we present a novel dataset, dubbed RAVIR, for the semantic segmentation of Retinal Arteries and Veins in Infrared Reflectance (IR) imaging. It enables the creation of deep learning-based models that distinguish extracted vessel type without extensive post-processing. We propose a novel deep learning-based methodology, denoted as SegRAVIR, for the semantic segmentation of retinal arteries and veins and the quantitative measurement of the widths of segmented vessels. Our extensive experiments validate the effectiveness of SegRAVIR and demonstrate its superior performance in comparison to state-of-the-art models. Additionally, we propose a knowledge distillation framework for the domain adaptation of RAVIR pretrained networks on color images. We demonstrate that our pretraining procedure yields new state-of-the-art benchmarks on the DRIVE, STARE, and CHASE_DB1 datasets. Dataset link: https://ravirdataset.github.io/data.
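The knowledge-distillation step mentioned above transfers soft predictions from a RAVIR-pretrained teacher to a student trained on colour images. The sketch below is the standard temperature-scaled distillation loss; the temperature and weighting are assumed, and it is not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft-label KL divergence to the teacher.

    Logits have shape (B, C, H, W); target has shape (B, H, W) with class indices.
    """
    hard = F.cross_entropy(student_logits, target)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft

student = torch.randn(2, 3, 32, 32, requires_grad=True)
teacher = torch.randn(2, 3, 32, 32)
labels = torch.randint(0, 3, (2, 32, 32))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```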
22. Shi D, Lin Z, Wang W, Tan Z, Shang X, Zhang X, Meng W, Ge Z, He M. A Deep Learning System for Fully Automated Retinal Vessel Measurement in High Throughput Image Analysis. Front Cardiovasc Med 2022; 9:823436. [PMID: 35391847] [PMCID: PMC8980780] [DOI: 10.3389/fcvm.2022.823436]
Abstract
Motivation: The retinal microvasculature is a unique window for predicting and monitoring major cardiovascular diseases, but high-throughput, deep learning-based tools for detailed retinal vessel analysis are lacking. We therefore aim to develop and validate an artificial intelligence system (Retina-based Microvascular Health Assessment System, RMHAS) for fully automated vessel segmentation and quantification of the retinal microvasculature.
Results: RMHAS achieved good segmentation accuracy across datasets with diverse eye conditions and image resolutions, with AUCs of 0.91, 0.88, 0.95, 0.93, 0.97, 0.95, and 0.94 for artery segmentation and 0.92, 0.90, 0.96, 0.95, 0.97, 0.95, and 0.96 for vein segmentation on the AV-WIDE, AVRDB, HRF, IOSTAR, LES-AV, RITE, and our internal datasets, respectively. Agreement and repeatability analyses supported the robustness of the algorithm. For quantitative vessel analysis, less than 2 s were needed to complete all required analyses.
Affiliation(s)
- Danli Shi: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhihong Lin: Faculty of Engineering, Monash University, Melbourne, VIC, Australia
- Wei Wang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zachary Tan: Centre for Eye Research Australia, East Melbourne, VIC, Australia
- Xianwen Shang: Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Xueli Zhang: Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Wei Meng: Guangzhou Vision Tech Medical Technology Co., Ltd., Guangzhou, China
- Zongyuan Ge: Research Center and Faculty of Engineering, Monash University, Melbourne, VIC, Australia
- Mingguang He (corresponding author): State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Centre for Eye Research Australia, East Melbourne, VIC, Australia; Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
23. Networks behind the morphology and structural design of living systems. Phys Life Rev 2022; 41:1-21. [DOI: 10.1016/j.plrev.2022.03.001]
24. Mishra S, Wang YX, Wei CC, Chen DZ, Hu XS. VTG-Net: A CNN Based Vessel Topology Graph Network for Retinal Artery/Vein Classification. Front Med (Lausanne) 2021; 8:750396. [PMID: 34820394] [PMCID: PMC8606556] [DOI: 10.3389/fmed.2021.750396]
Abstract
From diagnosing cardiovascular diseases to analyzing the progression of diabetic retinopathy, accurate retinal artery/vein (A/V) classification is critical. Promising approaches for A/V classification, ranging from conventional graph-based methods to recent convolutional neural network (CNN)-based models, have been proposed. However, the inability of traditional graph-based methods to utilize the deep hierarchical features extracted by CNNs, and the limitations of current CNN-based methods in incorporating vessel topology information, hinder their effectiveness. In this paper, we propose a new CNN-based framework, VTG-Net (vessel topology graph network), for retinal A/V classification that incorporates vessel topology information. VTG-Net exploits retinal vessel topology along with CNN features to improve A/V classification accuracy. Specifically, we transform vessel features extracted by a CNN in the image domain into a graph representation that preserves the vessel topology. Then, by exploiting a graph convolutional network (GCN), we enable our model to learn both CNN features and vessel topological features simultaneously. The final prediction is obtained by fusing the CNN and GCN outputs. Using the publicly available AV-DRIVE dataset and an in-house dataset, we verify the high performance of VTG-Net for retinal A/V classification over state-of-the-art methods (with ~2% improvement in accuracy on the AV-DRIVE dataset).
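The transformation from a vessel map to a topology-preserving graph, as described above, can be sketched by skeletonising the segmentation and linking neighbouring skeleton pixels. This is a generic illustration of the graph construction, not VTG-Net's feature-graph.

```python
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize

def vessel_graph(binary_vessels):
    """Build a graph whose nodes are skeleton pixels and edges link 8-neighbours."""
    skel = skeletonize(binary_vessels)
    G = nx.Graph()
    ys, xs = np.nonzero(skel)
    pixels = set(zip(ys.tolist(), xs.tolist()))
    G.add_nodes_from(pixels)
    for (y, x) in pixels:
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dy or dx) and (y + dy, x + dx) in pixels:
                    G.add_edge((y, x), (y + dy, x + dx))
    return G

# Toy vessel map: a thick horizontal bar skeletonises to a line of connected nodes.
vessels = np.zeros((32, 32), dtype=bool)
vessels[14:18, 4:28] = True
G = vessel_graph(vessels)
print(G.number_of_nodes(), G.number_of_edges())
```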
Affiliation(s)
- Suraj Mishra: Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- Ya Xing Wang: Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Chuan Chuan Wei: Department of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Danny Z. Chen: Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- X. Sharon Hu: Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
25. A Hybrid Method to Enhance Thick and Thin Vessels for Blood Vessel Segmentation. Diagnostics (Basel) 2021; 11:2017. [PMID: 34829365] [PMCID: PMC8621384] [DOI: 10.3390/diagnostics11112017]
Abstract
Retinal blood vessels have been shown to provide evidence of ophthalmic disease through changes in tortuosity, branching angles, or diameter. Although many enhancement filters are widely used, the Jerman filter responds particularly effectively at vessels, edges, and bifurcations and improves the visualization of structures. The curvelet transform, in contrast, is specifically designed to associate scale with orientation and can be used to recover from noisy data by curvelet shrinkage. This paper describes a method that further improves the performance of the curvelet transform: a distinctive fusion of the curvelet transform and the Jerman filter for retinal blood vessel segmentation. Mean-C thresholding is employed for segmentation. The suggested method achieves average accuracies of 0.9600 and 0.9559 on DRIVE and CHASE_DB1, respectively. Simulation results establish better performance and faster implementation of the suggested scheme in comparison with similar approaches in the literature.
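Mean-C thresholding, used for the final segmentation step above, keeps a pixel when it exceeds the local mean of its neighbourhood minus a constant C. OpenCV exposes this directly; the neighbourhood size and C below are illustrative, not the paper's values.

```python
import cv2
import numpy as np

# Stand-in for an enhanced (Jerman/curvelet-filtered) vessel image, values in [0, 255].
rng = np.random.default_rng(0)
enhanced = (rng.random((256, 256)) * 255).astype(np.uint8)
enhanced[100:110, :] = 220  # a bright band mimicking a vessel

# Mean-C thresholding: pixel > (local mean - C) -> foreground (255).
# Arguments: source, max value, adaptive method, threshold type, block size (odd), C.
segmented = cv2.adaptiveThreshold(
    enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 25, 10
)
print(segmented.dtype, np.unique(segmented))
```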
Collapse
|
26
|
Li C, Ma W, Sun L, Ding X, Huang Y, Wang G, Yu Y. Hierarchical deep network with uncertainty-aware semi-supervised learning for vessel segmentation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06578-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
27
|
Simultaneous segmentation and classification of the retinal arteries and veins from color fundus images. Artif Intell Med 2021; 118:102116. [PMID: 34412839 DOI: 10.1016/j.artmed.2021.102116] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 05/20/2021] [Accepted: 05/21/2021] [Indexed: 01/25/2023]
Abstract
BACKGROUND AND OBJECTIVES The study of the retinal vasculature represents a fundamental stage in the screening and diagnosis of many high-incidence diseases, both systemic and ophthalmic. A complete retinal vascular analysis requires the segmentation of the vascular tree along with the classification of the blood vessels into arteries and veins. Early automatic methods approach these complementary segmentation and classification tasks in two sequential stages. Currently, however, the two tasks are approached as a joint semantic segmentation problem, because the classification results depend heavily on the effectiveness of the vessel segmentation. In that regard, we propose a novel approach for the simultaneous segmentation and classification of the retinal arteries and veins from eye fundus images. METHODS We propose a novel method that, unlike previous approaches, and thanks to a novel loss, decomposes the joint task into three segmentation problems targeting arteries, veins and the whole vascular tree. This configuration allows vessel crossings to be handled intuitively and directly provides accurate segmentation masks of the different target vascular trees. RESULTS The ablation study on the public Retinal Images vessel Tree Extraction (RITE) dataset demonstrates that the proposed method provides satisfactory performance, particularly in the segmentation of the different structures. Furthermore, the comparison with the state of the art shows that our method achieves highly competitive results in artery/vein classification while significantly improving the vascular segmentation. CONCLUSIONS The proposed multi-segmentation method detects more vessels and better segments the different structures, while achieving competitive classification performance. In these terms, our approach outperforms several reference works. Moreover, in contrast with previous approaches, the proposed method directly detects vessel crossings and preserves the continuity of both arteries and veins at these complex locations.
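A minimal sketch of the decomposition idea, under the assumption that the network emits three sigmoid maps (arteries, veins, whole tree) and the loss simply sums a binary cross-entropy per map; the paper's actual loss may weight or structure these terms differently.

```python
# Hedged sketch: sum a binary cross-entropy over artery, vein, and whole-tree maps.
import numpy as np

def bce(pred, target, eps=1e-7):
    pred = np.clip(pred, eps, 1.0 - eps)
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean()

def triple_segmentation_loss(pred_a, pred_v, pred_bv, gt_a, gt_v):
    gt_bv = np.maximum(gt_a, gt_v)        # whole tree = union of artery and vein masks
    return bce(pred_a, gt_a) + bce(pred_v, gt_v) + bce(pred_bv, gt_bv)

rng = np.random.default_rng(0)
gt_a = (rng.random((64, 64)) > 0.9).astype(float)   # toy artery ground truth
gt_v = (rng.random((64, 64)) > 0.9).astype(float)   # toy vein ground truth
pred = rng.random((3, 64, 64))                       # toy sigmoid outputs (a, v, tree)
print(triple_segmentation_loss(pred[0], pred[1], pred[2], gt_a, gt_v))
```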
Collapse
|
28
|
Hu J, Wang H, Cao Z, Wu G, Jonas JB, Wang YX, Zhang J. Automatic Artery/Vein Classification Using a Vessel-Constraint Network for Multicenter Fundus Images. Front Cell Dev Biol 2021; 9:659941. [PMID: 34178986 PMCID: PMC8226261 DOI: 10.3389/fcell.2021.659941] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Accepted: 04/19/2021] [Indexed: 11/24/2022] Open
Abstract
Retinal blood vessel morphological abnormalities are generally associated with cardiovascular, cerebrovascular, and systemic diseases, so automatic artery/vein (A/V) classification is particularly important for medical image analysis and clinical decision making. However, current methods still have limitations in A/V classification, especially errors at vessel edges and ends caused by single-scale processing and the blurred boundaries between arteries and veins. To alleviate these problems, we propose a vessel-constraint network (VC-Net) that utilizes information on vessel distribution and edges to enhance A/V classification; it is a high-precision A/V classification model based on data fusion. In particular, the VC-Net introduces a vessel-constraint (VC) module that combines local and global vessel information to generate a weight map that constrains the A/V features, suppressing background-prone features and enhancing the edge and end features of blood vessels. In addition, the VC-Net employs a multiscale feature (MSF) module to extract blood vessel information at different scales to improve the feature extraction capability and robustness of the model, and it produces vessel segmentation results simultaneously. The proposed method is tested on publicly available fundus image datasets of different scales, namely DRIVE, LES, and HRF, and validated on two newly created multicenter datasets: Tongren and Kailuan. We achieve a balanced accuracy of 0.9554 and F1 scores of 0.7616 and 0.7971 for arteries and veins, respectively, on the DRIVE dataset. The experimental results show that the proposed model achieves competitive performance in A/V classification and vessel segmentation compared with state-of-the-art methods. Finally, we test on the Kailuan dataset using models trained on the other fused datasets, and the results also show good robustness. To promote research in this area, the Tongren dataset and source code will be made publicly available at https://github.com/huawang123/VC-Net.
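The sketch below only illustrates the general mechanism of a vessel-derived weight map modulating A/V feature maps, with the global context reduced to a pooled mean; the actual VC module is more elaborate, and all array shapes and the blending coefficient are assumptions.

```python
# Hedged sketch: blend local vessel evidence with a global pooled statistic into
# per-pixel weights that re-scale the A/V feature maps (mechanism only, not the paper's module).
import numpy as np

def vessel_constraint_weights(vessel_prob, alpha=0.5):
    """Blend a local vessel probability map with its global mean into a weight map."""
    global_evidence = vessel_prob.mean()
    return alpha * vessel_prob + (1.0 - alpha) * global_evidence

rng = np.random.default_rng(0)
vessel_prob = rng.random((32, 32))          # output of an auxiliary vessel head (toy)
features = rng.normal(size=(8, 32, 32))     # A/V feature maps, channels first (toy)
weighted = features * vessel_constraint_weights(vessel_prob)[None, :, :]
print(weighted.shape)
```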
Collapse
Affiliation(s)
- Jingfei Hu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China.,Hefei Innovation Research Institute, Beihang University, Hefei, China.,Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China.,School of Biomedical Engineering, Anhui Medical University, Hefei, China
| | - Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China.,Hefei Innovation Research Institute, Beihang University, Hefei, China.,Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China.,School of Biomedical Engineering, Anhui Medical University, Hefei, China
| | - Zhaohui Cao
- Hefei Innovation Research Institute, Beihang University, Hefei, China
| | - Guang Wu
- Hefei Innovation Research Institute, Beihang University, Hefei, China
| | - Jost B Jonas
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China.,Department of Ophthalmology, Medical Faculty Mannheim of the Ruprecht-Karls-University Heidelberg, Mannheim, Germany
| | - Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
| | - Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China.,Hefei Innovation Research Institute, Beihang University, Hefei, China.,Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China.,School of Biomedical Engineering, Anhui Medical University, Hefei, China.,Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China
| |
Collapse
|
29
|
Fukutsu K, Saito M, Noda K, Murata M, Kase S, Shiba R, Isogai N, Asano Y, Hanawa N, Dohke M, Kase M, Ishida S. A Deep Learning Architecture for Vascular Area Measurement in Fundus Images. OPHTHALMOLOGY SCIENCE 2021; 1:100004. [PMID: 36246007 PMCID: PMC9560649 DOI: 10.1016/j.xops.2021.100004] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Revised: 02/06/2021] [Accepted: 02/16/2021] [Indexed: 12/27/2022]
Abstract
Purpose To develop a novel evaluation system for retinal vessel alterations caused by hypertension using a deep learning algorithm. Design Retrospective study. Participants Fundus photographs (n = 10 571) of health-check participants (n = 5598). Methods The participants were analyzed using a fully automatic architecture assisted by a deep learning system, and the total area of retinal arterioles and venules was assessed separately. The retinal vessels were extracted automatically from each photograph and categorized as arterioles or venules. Subsequently, the total arteriolar area (AA) and total venular area (VA) were measured. The correlations among AA, VA, age, systolic blood pressure (SBP), and diastolic blood pressure were analyzed. Six ophthalmologists manually evaluated the arteriovenous ratio (AVR) in fundus images (n = 102), and the correlation between SBP and the manually graded AVR was assessed. Main Outcome Measures Total arteriolar area (AA) and total venular area (VA). Results The deep learning algorithm demonstrated favorable properties of vessel segmentation and arteriovenous classification, comparable with pre-existing techniques. Using the algorithm, a significant positive correlation was found between AA and VA. Both AA and VA demonstrated negative correlations with age and blood pressure. Furthermore, the SBP showed a higher negative correlation with AA measured by the algorithm than with AVR. Conclusions The current data demonstrated that the retinal vascular area measured with the deep learning system could be a novel index of hypertension-related vascular changes.
Collapse
Affiliation(s)
- Kanae Fukutsu
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
| | - Michiyuki Saito
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
| | - Kousuke Noda
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ocular Circulation and Metabolism, Hokkaido University, Sapporo, Japan
- Correspondence: Kousuke Noda, MD, PhD, Department of Ophthalmology, Hokkaido University Graduate School of Medicine, N-15, W-7, Kita-ku, Sapporo 060-8638, Japan.
| | - Miyuki Murata
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ocular Circulation and Metabolism, Hokkaido University, Sapporo, Japan
| | - Satoru Kase
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
| | | | | | | | | | | | - Manabu Kase
- Department of Ophthalmology, Teine Keijinkai Hospital, Sapporo, Japan
| | - Susumu Ishida
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ocular Circulation and Metabolism, Hokkaido University, Sapporo, Japan
| |
Collapse
|
30
|
Alam MN, Le D, Yao X. Differential artery-vein analysis in quantitative retinal imaging: a review. Quant Imaging Med Surg 2021; 11:1102-1119. [PMID: 33654680 PMCID: PMC7829162 DOI: 10.21037/qims-20-557] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2020] [Accepted: 06/19/2020] [Indexed: 11/06/2022]
Abstract
Quantitative retinal imaging is essential for eye disease detection, staging classification, and treatment assessment. It is known that different eye diseases or severity stages can affect the artery and vein systems in different ways. Therefore, differential artery-vein (AV) analysis can improve the performance of quantitative retinal imaging. In this article, we provide a brief summary of technical rationales and clinical applications of differential AV analysis in fundus photography, optical coherence tomography (OCT), and OCT angiography (OCTA).
Collapse
Affiliation(s)
- Minhaj Nur Alam
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
| | - David Le
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
| | - Xincheng Yao
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
| |
Collapse
|
31
|
|
32
|
Relan D, Relan R. Unsupervised sorting of retinal vessels using locally consistent Gaussian mixtures. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 199:105894. [PMID: 33341476 DOI: 10.1016/j.cmpb.2020.105894] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2019] [Accepted: 11/26/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVES Retinal blood vessel classification into arterioles and venules is a major task for biomarker identification. Clustering of retinal blood vessels is especially challenging due to factors affecting the images such as contrast variability and non-uniform illumination. Hence, a high-performance automatic retinal vessel classification system is highly prized. In this paper, we propose a novel unsupervised methodology to classify retinal vessels extracted from fundus camera images into arterioles and venules. METHODS The proposed method utilises homomorphic filtering (HF) to preprocess the input image for non-uniform illumination and denoising. In the next step, an unsupervised multiscale line operator segmentation technique is used to segment the retinal vasculature before extracting the discriminating features. Finally, the Locally Consistent Gaussian Mixture Model (LCGMM) is utilised for unsupervised sorting of retinal vessels. RESULTS The performance of the proposed unsupervised method was assessed using three publicly accessible databases: INSPIRE-AVR, VICAVR, and MESSIDOR. The proposed framework achieved 90.14%, 90.3%, and 93.8% classification rates in zone B for the three datasets, respectively. CONCLUSIONS The proposed clustering framework provided a higher classification rate than the conventional Gaussian mixture model with Expectation-Maximisation (GMM-EM), and thus has great potential to enhance computer-assisted diagnosis and research in the field of biomarker discovery.
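For orientation, the following sketch clusters toy per-pixel colour features into two groups with a standard GMM fitted by EM (scikit-learn); the locally consistent regulariser that distinguishes LCGMM from plain GMM-EM is not reproduced, and the features and the artery-selection heuristic are illustrative assumptions.

```python
# Hedged sketch: two-component GMM-EM clustering of toy vessel-pixel colour features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# toy features per centreline pixel, e.g. mean red/green intensity in a small window
arteries = rng.normal(loc=[0.80, 0.45], scale=0.05, size=(200, 2))
veins = rng.normal(loc=[0.60, 0.30], scale=0.05, size=(200, 2))
X = np.vstack([arteries, veins])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)
# heuristic: the cluster with the higher mean red intensity is taken as arteries
artery_cluster = int(np.argmax(gmm.means_[:, 0]))
print("fraction labelled artery:", (labels == artery_cluster).mean())
```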
Collapse
Affiliation(s)
- D Relan
- Department of Computer Science, BML Munjal University, Gurgaon, India.
| | - R Relan
- Department of Applied Mathematics and Computer Science (DTU Compute), Technical University of Denmark, Kongens Lyngby, Denmark; Siemens Energy, Gurgaon, India.
| |
Collapse
|
33
|
Irshad S, Yin X, Zhang Y. A new approach for retinal vessel differentiation using binary particle swarm optimization. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2021. [DOI: 10.1080/21681163.2020.1870001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Affiliation(s)
- Samra Irshad
- School of Software and Electrical Engineering, Swinburne University of Technology, Melbourne, Australia
| | - Xiaoxia Yin
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, China
| | - Yanchun Zhang
- Institute for Sustainable Industries and Liveable Cities, Victoria University, Melbourne, Australia
| |
Collapse
|
34
|
Kim TH, Le D, Son T, Yao X. Vascular morphology and blood flow signatures for differential artery-vein analysis in optical coherence tomography of the retina. BIOMEDICAL OPTICS EXPRESS 2021; 12:367-379. [PMID: 33520388 PMCID: PMC7818960 DOI: 10.1364/boe.413149] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Revised: 12/07/2020] [Accepted: 12/09/2020] [Indexed: 05/09/2023]
Abstract
Differential artery-vein (AV) analysis is essential for retinal study, disease detection, and treatment assessment. This study characterizes the vascular reflectance profiles and blood flow patterns of retinal artery and vein systems in optical coherence tomography (OCT) and OCT angiography (OCTA), and establishes them as robust signatures for objective AV classification. A custom-designed OCT system was employed for three-dimensional (3D) imaging of the mouse retina, and the corresponding OCTA was reconstructed. Radially resliced OCT B-scans revealed two (top and bottom) hyperreflective wall boundaries in retinal arteries, while these wall boundaries were absent in OCT of retinal veins. Additional OCTA analysis consistently displayed a layered speckle distribution in veins, which may indicate venous laminar flow. These OCT and OCTA differences offer unique signatures for objective AV classification in OCT and OCTA.
Collapse
Affiliation(s)
- Tae-Hoon Kim
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
| | - David Le
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
| | - Taeyoon Son
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
| | - Xincheng Yao
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
| |
Collapse
|
35
|
Topology-Aware Retinal Artery–Vein Classification via Deep Vascular Connectivity Prediction. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app11010320] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Retinal artery–vein (AV) classification is a prerequisite for quantitative analysis of retinal vessels, which provides a biomarker for neurologic, cardiac, and systemic diseases, as well as ocular diseases. Although convolutional neural networks have shown remarkable performance on AV classification, they often produce topological errors, such as abrupt class flipping along the same vessel segment or weak performance on thin vessels due to their indistinct appearance. In this paper, we present a new method for AV classification in which the underlying vessel topology is estimated to give consistent predictions along the actual vessel structure. We cast the vessel topology estimation as iterative vascular connectivity prediction, implemented as deep-learning-based pairwise classification. As a consequence, the whole vessel graph is separated into subtrees, and each subtree is classified as an artery or vein as a whole via a voting scheme. The effectiveness and efficiency of the proposed method are validated by conducting experiments on two retinal image datasets acquired with different imaging techniques, DRIVE and IOSTAR.
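A minimal sketch of the final voting step, assuming per-segment artery probabilities and a subtree assignment are already available from the connectivity prediction; the data structures and the probability-averaged vote are illustrative, not the paper's exact scheme.

```python
# Hedged sketch: label every subtree by a vote over its segments' artery probabilities.
from collections import defaultdict

# per-segment artery probability from an upstream classifier (illustrative values)
segment_artery_prob = {0: 0.9, 1: 0.8, 2: 0.4, 3: 0.2, 4: 0.1, 5: 0.3}
# subtree membership produced by connectivity prediction (illustrative)
subtree_of_segment = {0: "T1", 1: "T1", 2: "T1", 3: "T2", 4: "T2", 5: "T2"}

votes = defaultdict(list)
for seg, tree in subtree_of_segment.items():
    votes[tree].append(segment_artery_prob[seg])

subtree_label = {tree: ("artery" if sum(p) / len(p) > 0.5 else "vein")
                 for tree, p in votes.items()}
segment_label = {seg: subtree_label[tree] for seg, tree in subtree_of_segment.items()}
print(subtree_label)    # e.g. {'T1': 'artery', 'T2': 'vein'}
print(segment_label)    # every segment inherits its subtree's label
```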
Collapse
|
36
|
Mookiah MRK, Hogg S, MacGillivray TJ, Prathiba V, Pradeepa R, Mohan V, Anjana RM, Doney AS, Palmer CNA, Trucco E. A review of machine learning methods for retinal blood vessel segmentation and artery/vein classification. Med Image Anal 2020; 68:101905. [PMID: 33385700 DOI: 10.1016/j.media.2020.101905] [Citation(s) in RCA: 65] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Revised: 11/10/2020] [Accepted: 11/11/2020] [Indexed: 12/20/2022]
Abstract
The eye affords a unique opportunity to inspect a rich part of the human microvasculature non-invasively via retinal imaging. Retinal blood vessel segmentation and classification are prime steps for the diagnosis and risk assessment of microvascular and systemic diseases. A high volume of techniques based on deep learning have been published in recent years. In this context, we review 158 papers published between 2012 and 2020, focussing on methods based on machine and deep learning (DL) for automatic vessel segmentation and classification for fundus camera images. We divide the methods into various classes by task (segmentation or artery-vein classification), technique (supervised or unsupervised, deep and non-deep learning, hand-crafted methods) and more specific algorithms (e.g. multiscale, morphology). We discuss advantages and limitations, and include tables summarising results at-a-glance. Finally, we attempt to assess the quantitative merit of DL methods in terms of accuracy improvement compared to other methods. The results allow us to offer our views on the outlook for vessel segmentation and classification for fundus camera images.
Collapse
Affiliation(s)
| | - Stephen Hogg
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
| | - Tom J MacGillivray
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh EH16 4SB, UK
| | - Vijayaraghavan Prathiba
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
| | - Rajendra Pradeepa
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
| | - Viswanathan Mohan
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
| | - Ranjit Mohan Anjana
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
| | - Alexander S Doney
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
| | - Colin N A Palmer
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
| | - Emanuele Trucco
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
| |
Collapse
|
37
|
Kang H, Gao Y, Guo S, Xu X, Li T, Wang K. AVNet: A retinal artery/vein classification network with category-attention weighted fusion. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 195:105629. [PMID: 32634648 DOI: 10.1016/j.cmpb.2020.105629] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/24/2020] [Accepted: 06/21/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic artery/vein (A/V) classification in retinal images is of great importance in detecting vascular abnormalities, which may provide biomarkers for early diagnosis of many systemic diseases. It is intuitive to apply a popular deep semantic segmentation network to A/V classification. However, the model must provide powerful representation ability, since vessels are much more complex than general objects. Moreover, a deep network may produce inconsistent classification results along the same vessel due to the lack of a structured optimization objective. METHODS In this paper, we propose a novel segmentation network named AVNet, which effectively enhances the classification ability of the model by integrating a category-attention weighted fusion (CWF) module, significantly improving the pixel-level A/V classification results. Then, a graph-based vascular structure reconstruction (VSR) algorithm is employed to reduce segment-wise inconsistency, verifying the effect of the graph model on noisy vessel segmentation results. RESULTS The proposed method has been verified on three datasets, i.e. DRIVE, LES-AV and WIDE. AVNet achieves pixel-level accuracies of 90.62%, 90.34%, and 93.16%, respectively, and VSR further improves the performance by 0.19%, 1.85% and 0.64%, achieving state-of-the-art results on these three datasets. CONCLUSION The proposed method achieves competitive performance in the A/V classification task.
Collapse
Affiliation(s)
- Hong Kang
- College of Computer Science, Nankai University, Tianjin, China; Beijing Shanggong Medical Technology Co. Ltd., China
| | - Yingqi Gao
- College of Computer Science, Nankai University, Tianjin, China
| | - Song Guo
- College of Computer Science, Nankai University, Tianjin, China
| | - Xia Xu
- College of Computer Science, Nankai University, Tianjin, China
| | - Tao Li
- College of Computer Science, Nankai University, Tianjin, China; State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Science, Beijing 100190, China
| | - Kai Wang
- College of Computer Science, Nankai University, Tianjin, China; Key Laboratory for Medical Data Analysis and Statistical Research of Tianjin, China.
| |
Collapse
|
38
|
Wang Z, Jiang X, Liu J, Cheng KT, Yang X. Multi-Task Siamese Network for Retinal Artery/Vein Separation via Deep Convolution Along Vessel. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2904-2919. [PMID: 32167888 DOI: 10.1109/tmi.2020.2980117] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Vascular tree disentanglement and vessel type classification are two crucial steps of graph-based methods for retinal artery-vein (A/V) separation. Existing approaches treat them as two independent tasks and mostly rely on ad hoc rules (e.g. change of vessel directions) and hand-crafted features (e.g. color, thickness) to handle them respectively. However, we argue that the two tasks are highly correlated and should be handled jointly, since knowing the A/V type can unravel highly entangled vascular trees, which in turn helps to infer the types of connected vessels that are hard to classify based on appearance alone. Therefore, designing features and models in isolation for the two tasks often leads to a suboptimal solution for A/V separation. In view of this, this paper proposes a multi-task siamese network which learns the two tasks jointly and thus yields more robust deep features for accurate A/V separation. Specifically, we first introduce Convolution Along Vessel (CAV) to extract visual features by convolving a fundus image along vessel segments, and geometric features by tracking the directions of blood flow in vessels. The siamese network is then trained to learn multiple tasks: i) classifying the A/V types of vessel segments using visual features only, and ii) estimating the similarity of every two connected segments by comparing their visual and geometric features in order to disentangle the vasculature into individual vessel trees. Finally, the results of the two tasks mutually correct each other to accomplish the final A/V separation. Experimental results demonstrate that our method achieves accuracy values of 94.7%, 96.9%, and 94.5% on three major databases (DRIVE, INSPIRE, WIDE) respectively, outperforming recent state-of-the-art methods.
Collapse
|
39
|
Khanal A, Estrada R. Dynamic Deep Networks for Retinal Vessel Segmentation. FRONTIERS IN COMPUTER SCIENCE 2020. [DOI: 10.3389/fcomp.2020.00035] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|
40
|
Huang F, Tan T, Dashtbozorg B, Zhou Y, Romeny BMTH. From Local to Global: A Graph Framework for Retinal Artery/Vein Classification. IEEE Trans Nanobioscience 2020; 19:589-597. [PMID: 32746331 DOI: 10.1109/tnb.2020.3004481] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Fundus photography has been widely used for inspecting eye disorders by ophthalmologists or computer algorithms. Biomarkers related to retinal vessels play an essential role in the early detection of diabetes. To quantify vascular biomarkers or the corresponding changes, accurate artery and vein classification is necessary. In this work, we propose a new framework to boost local vessel classification with a global vascular network model using graph convolution. We compare our proposed method with two traditional state-of-the-art methods on a testing dataset of 750 images from the Maastricht Study. After incorporating global information, our model achieves the best accuracy of 86.45%, compared with 85.5% from convolutional neural networks (CNN) and 82.9% from handcrafted pixel feature classification (HPFC). Our model also obtains the best area under the receiver operating characteristic curve (AUC) of 0.95, compared with 0.93 from CNN and 0.90 from HPFC. The new classification framework has the advantage of easy deployment on top of local classification features: it corrects local classification errors by minimizing the global classification error and brings additional classification performance at no extra cost.
Collapse
|
41
|
Sun G, Liu X, Gong J, Gao L. Artery-venous classification in fluorescein angiograms based on region growing with sequential and structural features. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 190:105340. [PMID: 32023506 DOI: 10.1016/j.cmpb.2020.105340] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/02/2019] [Revised: 01/03/2020] [Accepted: 01/14/2020] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVES Fluorescein angiography (FA) is widely used in ophthalmology for examining retinal hemodynamics and vascular morphology. Artery-venous classification is an important step in FA image processing for the measurement of feature parameters, such as the arterio-venous passage time (AVP) and arterio-venous width ratio (AVR), which are proven useful in the clinical assessment of circulation disturbance and vessel abnormalities. However, manual artery-venous classification requires expertise and is rather time consuming, and little effort has been devoted to developing automatic classification methods. To solve this problem, we propose a novel artery-venous classification method using a region growing strategy with sequential and structural features (RGSS). METHODS The main procedures of our proposed RGSS method are: (i) registration of the FA image sequence by a mutual-information method; (ii) extraction of sequential features of the dye perfusion process from the registered FA images; (iii) extraction of vessel structural features from the vascular centerline map; (iv) based on the obtained features, generation of artery and vein seeds within the initial growing region (here, the optic disc), which are then propagated through the entire vessel network using the region growing strategy. The RGSS method was tested on our own dataset and the public Duke dataset, and its performance was evaluated quantitatively. RESULTS Tests show that the RGSS method is able to classify arteries and veins in the complicated vessel network of FA images, with high classification accuracies of 0.91 ± 0.04 on the Duke dataset and 0.92 ± 0.03 on our dataset. The employed sequential and structural features are demonstrated to be effective in classifying thin arteries and veins at vessel crossings. CONCLUSIONS Automatic artery-venous classification can be accomplished with high accuracy using our proposed RGSS method. The RGSS method not only frees ophthalmologists from the laborious manual marking of arteries and veins, but also helps in measuring important parameters (such as AVP and AVR) for clinical assessment of circulation disturbance and vessel abnormalities.
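The sketch below illustrates only the propagation part of such a strategy: seed labels placed near the optic disc are grown outwards through a toy vessel-segment graph by breadth-first traversal; the sequential and structural features that resolve ambiguous crossings in the actual method are not modelled, and the graph and seeds are illustrative assumptions.

```python
# Hedged sketch: propagate artery/vein seed labels through a vessel graph by region growing.
from collections import deque

# vessel-segment adjacency (graph of the vascular centreline map), illustrative
adjacency = {0: [1], 1: [0, 2, 3], 2: [1, 4], 3: [1], 4: [2]}
seeds = {0: "artery", 3: "vein"}       # labels assigned inside the optic disc region

labels = dict(seeds)
queue = deque(seeds)
while queue:
    node = queue.popleft()
    for nb in adjacency[node]:
        if nb not in labels:           # grow the current label into unlabelled neighbours
            labels[nb] = labels[node]
            queue.append(nb)
print(labels)
```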
Collapse
Affiliation(s)
- Gang Sun
- College of Electrical & Information Engineering, Hunan University, Changsha, Hunan Province, 410082, China; Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Changsha, Hunan Province, 410082, China; National Engineering Laboratory for Robot Visual Perception & Control Technology, Changsha, Hunan Province, 410082, China
| | - Xiaoyan Liu
- College of Electrical & Information Engineering, Hunan University, Changsha, Hunan Province, 410082, China; Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Changsha, Hunan Province, 410082, China; National Engineering Laboratory for Robot Visual Perception & Control Technology, Changsha, Hunan Province, 410082, China.
| | - Junhui Gong
- College of Electrical & Information Engineering, Hunan University, Changsha, Hunan Province, 410082, China
| | - Ling Gao
- Central South University, the Second Xiangya Hospital, Department of Ophthalmology, Changsha, Hunan Province, 410011, China.
| |
Collapse
|
42
|
Zhao Y, Xie J, Zhang H, Zheng Y, Zhao Y, Qi H, Zhao Y, Su P, Liu J, Liu Y. Retinal Vascular Network Topology Reconstruction and Artery/Vein Classification via Dominant Set Clustering. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:341-356. [PMID: 31283498 DOI: 10.1109/tmi.2019.2926492] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The estimation of vascular network topology in complex networks is important in understanding the relationship between vascular changes and a wide spectrum of diseases. Automatic classification of the retinal vascular trees into arteries and veins is of direct assistance to the ophthalmologist in terms of diagnosis and treatment of eye disease. However, it is challenging due to their projective ambiguity and subtle changes in appearance, contrast, and geometry in the imaging process. In this paper, we propose a novel method that is capable of making the artery/vein (A/V) distinction in retinal color fundus images based on vascular network topological properties. To this end, we adapt the concept of dominant set clustering and formalize the retinal blood vessel topology estimation and the A/V classification as a pairwise clustering problem. The graph is constructed through image segmentation, skeletonization, and identification of significant nodes. The edge weight is defined as the inverse Euclidean distance between its two end points in the feature space of intensity, orientation, curvature, diameter, and entropy. The reconstructed vascular network is classified into arteries and veins based on their intensity and morphology. The proposed approach has been applied to five public databases, namely INSPIRE, IOSTAR, VICAVR, DRIVE, and WIDE, and achieved high accuracies of 95.1%, 94.2%, 93.8%, 91.1%, and 91.0%, respectively. Furthermore, we have made manual annotations of the blood vessel topologies for INSPIRE, IOSTAR, VICAVR, and DRIVE datasets, and these annotations are released for public access so as to facilitate researchers in the community.
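As a small worked example of the edge-weighting rule described above, the following sketch computes inverse-Euclidean-distance weights between hypothetical node feature vectors (intensity, orientation, curvature, diameter, entropy); the feature values and the epsilon guard are illustrative assumptions.

```python
# Hedged sketch: edge weight = inverse Euclidean distance between node feature vectors.
import numpy as np

def edge_weight(feat_i, feat_j, eps=1e-6):
    return 1.0 / (np.linalg.norm(np.asarray(feat_i) - np.asarray(feat_j)) + eps)

# features: [intensity, orientation, curvature, diameter, entropy] (toy values)
node_a = [0.82, 0.10, 0.02, 5.0, 3.1]
node_b = [0.80, 0.12, 0.03, 4.8, 3.0]
node_c = [0.55, 0.90, 0.20, 2.0, 4.5]
print(edge_weight(node_a, node_b))   # similar nodes -> large weight
print(edge_weight(node_a, node_c))   # dissimilar nodes -> small weight
```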
Collapse
|
43
|
Yin XX, Irshad S, Zhang Y. Artery/vein classification of retinal vessels using classifiers fusion. Health Inf Sci Syst 2019; 7:26. [PMID: 31749960 DOI: 10.1007/s13755-019-0090-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2019] [Accepted: 10/28/2019] [Indexed: 11/28/2022] Open
Abstract
The morphological changes in retinal blood vessels indicate cardiovascular diseases, which in turn lead to ocular complications such as hypertensive retinopathy. One of the significant clinical findings related to this ocular abnormality is alteration of vessel width. The classification of retinal vessels into arteries and veins in eye fundus images is a relevant task for the automatic assessment of vascular changes. This paper presents an approach to this problem based on feature ranking strategies and a multiple-classifier decision-combination scheme specifically adapted for artery/vein classification. Three databases are used: a local dataset of 44 images and two publicly available databases, INSPIRE-AVR containing 40 images and VICAVR containing 58 images. The local database also contains images with pathological structures. The performance of the proposed system is assessed by comparing the experimental results with the gold standard estimations as well as with the results of previous methodologies, achieving promising classification performance, with overall accuracies of 90.45%, 93.90% and 87.82% in retinal blood vessel separation for the local, INSPIRE-AVR and VICAVR datasets, respectively.
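A hedged sketch of a decision-combination scheme in the same spirit: several standard classifiers vote, via averaged probabilities, on each vessel's label using scikit-learn's VotingClassifier; the toy features and the choice of base classifiers are assumptions and do not reproduce the paper's feature-ranking step.

```python
# Hedged sketch: soft-voting fusion of several classifiers for artery/vein labels.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.7, 0.1, size=(100, 4)),    # artery-like toy features
               rng.normal(0.4, 0.1, size=(100, 4))])   # vein-like toy features
y = np.array([1] * 100 + [0] * 100)                    # 1 = artery, 0 = vein

fusion = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="soft")                                     # combine predicted probabilities
fusion.fit(X, y)
print("training accuracy:", fusion.score(X, y))
```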
Collapse
Affiliation(s)
- Xiao-Xia Yin
- 1Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, 510006 China
| | - Samra Irshad
- 2Institute for Sustainable Industries and Liveable Cities, Victoria University, Melbourne, Australia
| | - Yanchun Zhang
- 2Institute for Sustainable Industries and Liveable Cities, Victoria University, Melbourne, Australia
| |
Collapse
|
44
|
Multiloss Function Based Deep Convolutional Neural Network for Segmentation of Retinal Vasculature into Arterioles and Venules. BIOMED RESEARCH INTERNATIONAL 2019; 2019:4747230. [PMID: 31111055 PMCID: PMC6487175 DOI: 10.1155/2019/4747230] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2018] [Revised: 02/20/2019] [Accepted: 03/20/2019] [Indexed: 02/02/2023]
Abstract
The arteriole and venule (AV) classification of the retinal vasculature is considered the first step in the development of an automated system for analysing the association of vasculature biomarkers with disease prognosis. Most existing AV classification methods depend on accurate segmentation of the retinal blood vessels. Moreover, the unavailability of large-scale annotated data is a major hindrance to the application of deep learning techniques for AV classification. This paper presents an encoder-decoder based fully convolutional neural network for classification of the retinal vasculature into arterioles and venules, without requiring the preliminary step of vessel segmentation. An optimized multiloss function is used to learn the pixel-wise and segment-wise retinal vessel labels. The proposed method is trained and evaluated on DRIVE, AVRDB, and a newly created AV classification dataset, and it attains 96%, 98%, and 97% accuracy, respectively. The new AV classification dataset comprises 700 annotated retinal images, which will offer researchers a benchmark to compare their AV classification results.
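The following sketch gives one plausible reading of combining a pixel-wise with a segment-wise objective: standard cross-entropy per pixel plus a penalty on the variance of the artery probability within each vessel segment; the segment ids, weighting, and the form of the segment term are assumptions, not the paper's exact multiloss.

```python
# Hedged sketch: pixel-wise cross-entropy plus a segment-wise consistency penalty.
import numpy as np

def pixel_ce(probs, labels, eps=1e-7):
    """probs: (N, C) softmax outputs, labels: (N,) integer classes."""
    return -np.log(np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)).mean()

def segment_consistency(probs, segment_ids):
    """Penalise within-segment variance of the artery probability (class 1)."""
    penalties = [probs[segment_ids == s, 1].var() for s in np.unique(segment_ids)]
    return float(np.mean(penalties))

rng = np.random.default_rng(0)
probs = rng.dirichlet([1, 1], size=20)        # toy softmax outputs for 20 pixels
labels = rng.integers(0, 2, size=20)          # 0 = venule, 1 = arteriole
segment_ids = np.repeat([0, 1, 2, 3], 5)      # 4 vessel segments of 5 pixels each
loss = pixel_ce(probs, labels) + 0.5 * segment_consistency(probs, segment_ids)
print(loss)
```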
Collapse
|
45
|
Bhuiyan A, Hussain MA, Wong TY, Klein R. Retinal Artery and Vein Classification for Automatic Vessel Caliber Grading. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:870-873. [PMID: 30440529 DOI: 10.1109/embc.2018.8512287] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Automated retinal artery and vein identification is a necessity for measuring vessel caliber automatically and for achieving high efficiency and repeatability over large numbers of images. In this paper, a novel framework for retinal artery and vein classification is provided. The proposed method utilizes vessel crossover and color intensity profiles, which are the most significant features for artery and vein classification. The method first extracts the retinal vascular network and then identifies individual blood vessels for further classification as artery or vein. We apply a deep learning based segmentation method to extract the retinal vascular network. We then identify each blood vessel to measure its caliber, which is used for computing the Central Retinal Artery Equivalent (CRAE) and Central Retinal Vein Equivalent (CRVE). We map the vessel network and use individual vessel crossover information, vessel color and intensity profiles to identify each vessel segment as artery or vein. We compared the automatically classified artery and vein results against a human grader, which showed an accuracy of 95%. We also compared our caliber grading results against an established semi-automated caliber grading system and protocol, which showed very high correlations of 0.85 and 0.92 for CRAE and CRVE, respectively.
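As a worked example of turning measured calibers into summary indices, the sketch below uses the widely cited revised Knudtson formulas (pairing the widest with the narrowest vessel and combining as 0.88·sqrt(wa²+wb²) for arterioles and 0.95·sqrt(wa²+wb²) for venules); this is the common literature convention, not necessarily the exact protocol used in the paper, and the caliber values are illustrative.

```python
# Hedged sketch: CRAE/CRVE via the revised Knudtson pairing procedure (assumed convention).
import math

def knudtson_equivalent(calibers, k):
    """Round-based pairing of narrowest with widest; an odd leftover carries over."""
    widths = sorted(calibers)
    while len(widths) > 1:
        next_round = []
        while len(widths) > 1:
            narrow, wide = widths.pop(0), widths.pop(-1)
            next_round.append(k * math.sqrt(narrow ** 2 + wide ** 2))
        next_round.extend(widths)          # carry over an unpaired middle vessel
        widths = sorted(next_round)
    return widths[0]

artery_calibers = [110.0, 118.0, 124.0, 131.0, 140.0, 152.0]   # six widest arterioles (toy, um)
vein_calibers = [150.0, 158.0, 165.0, 172.0, 184.0, 196.0]     # six widest venules (toy, um)
crae = knudtson_equivalent(artery_calibers, k=0.88)
crve = knudtson_equivalent(vein_calibers, k=0.95)
print("CRAE:", round(crae, 1), "CRVE:", round(crve, 1), "AVR:", round(crae / crve, 2))
```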
Collapse
|
46
|
Amil P, Reyes-Manzano CF, Guzmán-Vargas L, Sendiña-Nadal I, Masoller C. Network-based features for retinal fundus vessel structure analysis. PLoS One 2019; 14:e0220132. [PMID: 31344132 PMCID: PMC6658152 DOI: 10.1371/journal.pone.0220132] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2019] [Accepted: 07/09/2019] [Indexed: 12/03/2022] Open
Abstract
Retinal fundus imaging is a non-invasive method that allows visualizing the structure of the blood vessels in the retina whose features may indicate the presence of diseases such as diabetic retinopathy (DR) and glaucoma. Here we present a novel method to analyze and quantify changes in the retinal blood vessel structure in patients diagnosed with glaucoma or with DR. First, we use an automatic unsupervised segmentation algorithm to extract a tree-like graph from the retina blood vessel structure. The nodes of the graph represent branching (bifurcation) points and endpoints, while the links represent vessel segments that connect the nodes. Then, we quantify structural differences between the graphs extracted from the groups of healthy and non-healthy patients. We also use fractal analysis to characterize the extracted graphs. Applying these techniques to three retina fundus image databases we find significant differences between the healthy and non-healthy groups (p-values lower than 0.005 or 0.001 depending on the method and on the database). The results are sensitive to the segmentation method (manual or automatic) and to the resolution of the images.
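To illustrate one of the quantitative tools mentioned, the sketch below estimates a box-counting fractal dimension for a binary vessel (or skeleton) mask; the graph-extraction step itself is not reproduced, and the toy mask and box sizes are illustrative assumptions.

```python
# Hedged sketch: box-counting estimate of the fractal dimension of a binary vessel mask.
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in box_sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # boxes containing any vessel pixel
    # slope of log(count) vs log(1/size) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

mask = np.zeros((128, 128), dtype=bool)
mask[64, :] = True                 # a toy "vessel": one horizontal line
mask[:, 64] = True                 # and one vertical line (dimension should be ~1)
print("estimated fractal dimension:", round(box_counting_dimension(mask), 2))
```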
Collapse
Affiliation(s)
- Pablo Amil
- Nonlinear Dynamics, Nonlinear Optics and Lasers, Universitat Politècnica de Catalunya, Terrassa, Spain
- * E-mail:
| | - Cesar F. Reyes-Manzano
- Unidad Profesional Interdisciplinaria en Ingeniería y Tecnologías Avanzadas, Instituto Politécnico Nacional, Gustavo A. Madero, Ciudad de México, México
| | - Lev Guzmán-Vargas
- Unidad Profesional Interdisciplinaria en Ingeniería y Tecnologías Avanzadas, Instituto Politécnico Nacional, Gustavo A. Madero, Ciudad de México, México
| | - Irene Sendiña-Nadal
- Complex Systems Group & GISC, Universidad Rey Juan Carlos, Madrid, Spain
- Center for Biomedical Technology, Universidad Politécnica de Madrid, Madrid, Spain
| | - Cristina Masoller
- Nonlinear Dynamics, Nonlinear Optics and Lasers, Universitat Politècnica de Catalunya, Terrassa, Spain
| |
Collapse
|
47
|
Hemelings R, Elen B, Stalmans I, Van Keer K, De Boever P, Blaschko MB. Artery-vein segmentation in fundus images using a fully convolutional network. Comput Med Imaging Graph 2019; 76:101636. [PMID: 31288217 DOI: 10.1016/j.compmedimag.2019.05.004] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2018] [Revised: 05/18/2019] [Accepted: 05/24/2019] [Indexed: 10/26/2022]
Abstract
Epidemiological studies demonstrate that dimensions of retinal vessels change with ocular diseases, coronary heart disease and stroke. Different metrics have been described to quantify these changes in fundus images, with arteriolar and venular calibers among the most widely used. The analysis often includes a manual procedure during which a trained grader differentiates between arterioles and venules. This step can be time-consuming and can introduce variability, especially when large volumes of images need to be analyzed. In light of the recent successes of fully convolutional networks (FCNs) applied to biomedical image segmentation, we assess their potential in the context of retinal artery-vein (A/V) discrimination. To the best of our knowledge, a deep learning (DL) architecture for simultaneous vessel extraction and A/V discrimination has not been previously employed. With the aim of improving the automation of vessel analysis, a novel application of the U-Net semantic segmentation architecture (based on FCNs) to the discrimination of arteries and veins in fundus images is presented. By utilizing DL, results are obtained that exceed accuracies reported in the literature. Our model was trained and tested on the public DRIVE and HRF datasets. For DRIVE, measuring performance on vessels wider than two pixels, the FCN achieved accuracies of 94.42% and 94.11% on arteries and veins, respectively. This represents a 25% decrease in error over the previous state of the art reported by Xu et al. (2017). Additionally, we introduce the HRF A/V ground truth, on which our model achieves 96.98% accuracy on all discovered centerline pixels. The HRF A/V ground truth validated by an ophthalmologist, predicted A/V annotations, and evaluation code are available at https://github.com/rubenhx/av-segmentation.
Collapse
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, KU Leuven, Kapucijnenvoer 33, 3000 Leuven, Belgium; ESAT-PSI, KU Leuven, Kasteelpark Arenberg 10, 3001 Leuven, Belgium; VITO NV, Boeretang 200, 2400 Mol, Belgium
| | - Bart Elen
- VITO NV, Boeretang 200, 2400 Mol, Belgium
| | - Ingeborg Stalmans
- Research Group Ophthalmology, KU Leuven, Kapucijnenvoer 33, 3000 Leuven, Belgium
| | - Karel Van Keer
- Research Group Ophthalmology, KU Leuven, Kapucijnenvoer 33, 3000 Leuven, Belgium
| | - Patrick De Boever
- Hasselt University, Agoralaan building D, 3590 Diepenbeek, Belgium; VITO NV, Boeretang 200, 2400 Mol, Belgium.
| | | |
Collapse
|
48
|
Girard F, Kavalec C, Cheriet F. Joint segmentation and classification of retinal arteries/veins from fundus images. Artif Intell Med 2019; 94:96-109. [DOI: 10.1016/j.artmed.2019.02.004] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2017] [Revised: 08/09/2018] [Accepted: 02/17/2019] [Indexed: 11/17/2022]
|
49
|
Srinidhi CL, P A, Rajan J. Automated Method for Retinal Artery/Vein Separation via Graph Search Metaheuristic Approach. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 28:2705-2718. [PMID: 30605099 DOI: 10.1109/tip.2018.2889534] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Separation of the vascular tree into arteries and veins is a fundamental prerequisite in the automatic diagnosis of retinal biomarkers associated with systemic and neurodegenerative diseases. In this paper, we present a novel graph search metaheuristic approach for automatic separation of arteries/veins (A/V) in color fundus images. Our method exploits local information to disentangle the complex vascular tree into multiple subtrees, and global information to label these vessel subtrees as arteries or veins. Given a binary vessel map, a graph representation of the vascular network is constructed, representing the topological and spatial connectivity of the vascular structures. Based on the anatomical uniqueness at vessel crossing and branching points, the vascular tree is split into multiple subtrees containing arteries and veins. Finally, the identified vessel subtrees are labeled as artery or vein based on a set of handcrafted features trained with a random forest classifier. The proposed method has been tested on four different publicly available retinal datasets, with average accuracies of 94.7%, 93.2%, 96.8% and 90.2% on the AV-DRIVE, CT-DRIVE, INSPIRE-AVR and WIDE datasets, respectively. These results demonstrate the superiority of our proposed approach over state-of-the-art methods for A/V separation.
Collapse
|
50
|
Xu X, Tan T, Xu F. An Improved U-Net Architecture for Simultaneous Arteriole and Venule Segmentation in Fundus Image. ACTA ACUST UNITED AC 2018. [DOI: 10.1007/978-3-319-95921-4_31] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/29/2023]
|