1
Chen Q, Peng J, Zhao S, Liu W. Automatic artery/vein classification methods for retinal blood vessel: A review. Comput Med Imaging Graph 2024; 113:102355. [PMID: 38377630] [DOI: 10.1016/j.compmedimag.2024.102355]
Abstract
Automatic retinal arteriovenous classification can assist ophthalmologists in the early diagnosis of disease. Deep learning-based methods and topological graph-based methods have become the main solutions for retinal arteriovenous classification in recent years. This paper reviews automatic retinal arteriovenous classification methods from 2003 to 2022. Firstly, we compare different methods and provide summary comparison tables. Secondly, we categorise the public arteriovenous classification datasets and provide tables tracing the development of their annotations. Finally, we sort out the challenges of evaluation methods and provide a comprehensive evaluation system. Quantitative and qualitative analyses reveal the evolution of research hotspots over time, highlighting the significance of exploring the integration of deep learning with topological information in future research.
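As a side note on the evaluation systems surveyed here, A/V classification results are most often reported as accuracy, sensitivity, and specificity over annotated vessel pixels, with arteries treated as the positive class. A minimal NumPy sketch of these metrics (array names and shapes are illustrative assumptions, not taken from the review):

```python
import numpy as np

def av_classification_metrics(pred, label, vessel_mask):
    """Accuracy, sensitivity, and specificity over annotated vessel pixels.

    pred, label: integer arrays with 1 = artery, 0 = vein.
    vessel_mask: boolean array marking pixels that carry an A/V annotation.
    Arteries are treated as the positive class.
    """
    p = pred[vessel_mask].astype(bool)
    t = label[vessel_mask].astype(bool)
    tp = np.sum(p & t)      # arteries predicted as arteries
    tn = np.sum(~p & ~t)    # veins predicted as veins
    fp = np.sum(p & ~t)     # veins predicted as arteries
    fn = np.sum(~p & t)     # arteries predicted as veins
    accuracy = (tp + tn) / max(tp + tn + fp + fn, 1)
    sensitivity = tp / max(tp + fn, 1)   # artery recall
    specificity = tn / max(tn + fp, 1)   # vein recall
    return accuracy, sensitivity, specificity

# toy example
pred = np.array([[1, 0], [1, 1]])
label = np.array([[1, 0], [0, 1]])
mask = np.ones_like(pred, dtype=bool)
print(av_classification_metrics(pred, label, mask))
```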
Affiliation(s)
- Qihan Chen
- School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Jianqing Peng
- School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China; Guangdong Provincial Key Laboratory of Fire Science and Technology, Guangzhou 510006, China
- Shen Zhao
- School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Wanquan Liu
- School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
2
Zhou Y, Xu M, Hu Y, Blumberg SB, Zhao A, Wagner SK, Keane PA, Alexander DC. CF-Loss: Clinically-relevant feature optimised loss function for retinal multi-class vessel segmentation and vascular feature measurement. Med Image Anal 2024; 93:103098. [PMID: 38320370] [DOI: 10.1016/j.media.2024.103098]
Abstract
Characterising clinically-relevant vascular features, such as vessel density and fractal dimension, can benefit biomarker discovery and disease diagnosis for both ophthalmic and systemic diseases. In this work, we explicitly encode vascular features into an end-to-end loss function for multi-class vessel segmentation, categorising pixels into artery, vein, uncertain pixels, and background. This clinically-relevant feature optimised loss function (CF-Loss) regulates networks to segment accurate multi-class vessel maps that produce precise vascular features. Our experiments first verify that CF-Loss significantly improves both multi-class vessel segmentation and vascular feature estimation, with two standard segmentation networks, on three publicly available datasets. We reveal that pixel-based segmentation performance is not always positively correlated with accuracy of vascular features, thus highlighting the importance of optimising vascular features directly via CF-Loss. Finally, we show that improved vascular features from CF-Loss, as biomarkers, can yield quantitative improvements in the prediction of ischaemic stroke, a real-world clinical downstream task. The code is available at https://github.com/rmaphoh/feature-loss.
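The repository linked above contains the authors' exact CF-Loss; purely as an illustration of the general idea of penalising a clinically relevant feature (here vessel density) alongside a pixel-wise term, a hypothetical PyTorch sketch might look as follows (the weighting `alpha` and the soft density estimate are assumptions, not the paper's formulation):

```python
import torch
import torch.nn.functional as F

def feature_aware_loss(logits, target, alpha=0.5):
    """Pixel-wise CE plus a vessel-density discrepancy term (illustrative only).

    logits: (B, C, H, W) raw scores for classes [background, artery, vein, uncertain].
    target: (B, H, W) integer labels in the same class order.
    """
    ce = F.cross_entropy(logits, target)

    probs = torch.softmax(logits, dim=1)
    # soft vessel density: mean predicted probability of any non-background class
    pred_density = probs[:, 1:].sum(dim=1).mean(dim=(1, 2))
    true_density = (target > 0).float().mean(dim=(1, 2))
    density_err = F.l1_loss(pred_density, true_density)

    return ce + alpha * density_err

# toy usage
logits = torch.randn(2, 4, 64, 64, requires_grad=True)
target = torch.randint(0, 4, (2, 64, 64))
loss = feature_aware_loss(logits, target)
loss.backward()
```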
Affiliation(s)
- Yukun Zhou
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK; Institute of Ophthalmology, University College London, London EC1V 9EL, UK.
- MouCheng Xu
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- Yipeng Hu
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, UK
- Stefano B Blumberg
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Computer Science, University College London, London WC1E 6BT, UK
- An Zhao
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Computer Science, University College London, London WC1E 6BT, UK
- Siegfried K Wagner
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK; Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- Pearse A Keane
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK; Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- Daniel C Alexander
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Computer Science, University College London, London WC1E 6BT, UK
3
Hu J, Qiu L, Wang H, Zhang J. Semi-supervised point consistency network for retinal artery/vein classification. Comput Biol Med 2024; 168:107633. [PMID: 37992471] [DOI: 10.1016/j.compbiomed.2023.107633]
Abstract
Recent deep learning methods with convolutional neural networks (CNNs) have advanced medical image analysis and expedited automatic retinal artery/vein (A/V) classification. However, these CNN-based approaches face two challenges: (1) specific tubular structures and subtle variations in appearance, contrast, and geometry, which tend to be ignored by CNNs as network depth increases; and (2) limited well-labeled data for supervised segmentation of retinal vessels, which may hinder the effectiveness of deep learning methods. To address these issues, we propose a novel semi-supervised point consistency network (SPC-Net) for retinal A/V classification. SPC-Net consists of an A/V classification (AVC) module and a multi-class point consistency (MPC) module. The AVC module adopts an encoder-decoder segmentation network to generate the A/V prediction probability map for supervised learning. The MPC module introduces point set representations to adaptively generate point set classification maps of the arteriovenous skeleton, whose prediction flexibility and consistency (i.e., point consistency) effectively alleviate arteriovenous confusion. In addition, we propose a consistency regularization between the predicted A/V classification probability maps and the point set representation maps for unlabeled data, exploiting the inherent segmentation perturbation of the point consistency to reduce the need for annotated data. We validate our method on two typical public datasets (DRIVE, HRF) and a private dataset (TR280) with different resolutions. Extensive qualitative and quantitative experimental results demonstrate the effectiveness of our proposed method for supervised and semi-supervised learning.
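The point-set branch is specific to SPC-Net, but the consistency-regularization idea for unlabeled images can be sketched generically: keep a supervised loss on labeled images and penalise disagreement between two prediction branches on unlabeled ones. A minimal PyTorch sketch under those assumptions (the two-head setup and the MSE consistency term are illustrative choices):

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(main_logits_l, aux_logits_l, labels,
                         main_logits_u, aux_logits_u, lam=0.1):
    """Supervised CE on labeled data plus consistency between two heads on unlabeled data.

    *_l: logits for the labeled batch, shape (B, C, H, W); labels: (B, H, W).
    *_u: logits for the unlabeled batch.
    """
    supervised = (F.cross_entropy(main_logits_l, labels) +
                  F.cross_entropy(aux_logits_l, labels))

    # consistency: make the two heads' probability maps agree on unlabeled images
    p_main = torch.softmax(main_logits_u, dim=1)
    p_aux = torch.softmax(aux_logits_u, dim=1)
    consistency = F.mse_loss(p_main, p_aux)

    return supervised + lam * consistency

# toy usage with random logits
B, C, H, W = 2, 4, 32, 32
labels = torch.randint(0, C, (B, H, W))
loss = semi_supervised_loss(torch.randn(B, C, H, W), torch.randn(B, C, H, W), labels,
                            torch.randn(B, C, H, W), torch.randn(B, C, H, W))
```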
Affiliation(s)
- Jingfei Hu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China
- Linwei Qiu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China
- Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, 100083, China
4
Suman S, Tiwari AK, Singh K. Computer-aided diagnostic system for hypertensive retinopathy: A review. Comput Methods Programs Biomed 2023; 240:107627. [PMID: 37320942] [DOI: 10.1016/j.cmpb.2023.107627]
Abstract
Hypertensive Retinopathy (HR) is a retinal disease caused by elevated blood pressure sustained over a prolonged period. There are no obvious signs in the early stages of high blood pressure, but it affects various body parts over time, including the eyes. HR is a biomarker for several illnesses, including retinal diseases, atherosclerosis, stroke, kidney disease, and cardiovascular risk. Early microcirculation abnormalities in chronic diseases can be diagnosed through retinal examination prior to the onset of major clinical consequences. Computer-aided diagnosis (CAD) plays a vital role in the early identification of HR with improved diagnostic accuracy, and it is time-efficient and demands fewer resources. Recently, numerous studies have been reported on the automatic identification of HR. This paper provides a comprehensive review of the automated tasks of Artery-Vein (A/V) classification, Arteriovenous ratio (AVR) computation, HR detection (binary classification), and HR severity grading. The review is conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. The paper discusses the clinical features of HR, the availability of datasets, existing methods used for A/V classification, AVR computation, HR detection, and severity grading, and performance evaluation metrics. The reviewed articles are summarized with classifier details, the methodologies adopted, performance comparisons, dataset details, their pros and cons, and the computational platforms used. For each task, a summary and critical in-depth analysis are provided, as well as common research issues and challenges in the existing studies. Finally, the paper proposes future research directions to overcome challenges associated with dataset availability, HR detection, and severity grading.
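The AVR mentioned above is conventionally computed from the calibers of the largest arterioles and venules as CRAE/CRVE; the sketch below uses the widely cited revised Knudtson pairing coefficients (0.88 for arterioles, 0.95 for venules) but should be read as an illustration rather than a reference implementation:

```python
import numpy as np

def _pairing_round(widths, k):
    """One pairing round: combine widest with narrowest as k * sqrt(w1^2 + w2^2).

    An odd middle vessel is carried unchanged to the next round.
    """
    w = sorted(widths)
    nxt = []
    while len(w) > 1:
        nxt.append(k * np.hypot(w.pop(0), w.pop(-1)))
    nxt.extend(w)
    return nxt

def central_equivalent(widths, k):
    """Iterate pairing rounds until a single equivalent caliber remains."""
    w = list(widths)
    while len(w) > 1:
        w = _pairing_round(w, k)
    return w[0]

def arteriovenous_ratio(artery_widths, vein_widths):
    """AVR = CRAE / CRVE from the six largest arteriole and venule calibers (same units)."""
    crae = central_equivalent(sorted(artery_widths)[-6:], 0.88)  # revised coefficient, arterioles
    crve = central_equivalent(sorted(vein_widths)[-6:], 0.95)    # revised coefficient, venules
    return crae / crve

# toy example with calibers in micrometres
print(arteriovenous_ratio([110, 105, 98, 95, 90, 88], [150, 145, 140, 132, 128, 120]))
```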
Affiliation(s)
- Supriya Suman
- Interdisciplinary Research Platform (IDRP): Smart Healthcare, Indian Institute of Technology, N.H. 62, Nagaur Road, Karwar, Jodhpur, Rajasthan 342030, India.
- Anil Kumar Tiwari
- Department of Electrical Engineering, Indian Institute of Technology, N.H. 62, Nagaur Road, Karwar, Jodhpur, Rajasthan 342030, India
- Kuldeep Singh
- Department of Pediatrics, All India Institute of Medical Sciences, Basni Industrial Area Phase-2, Jodhpur, Rajasthan 342005, India
5
Kv R, Prasad K, Peralam Yegneswaran P. Segmentation and Classification Approaches of Clinically Relevant Curvilinear Structures: A Review. J Med Syst 2023; 47:40. [PMID: 36971852] [PMCID: PMC10042761] [DOI: 10.1007/s10916-023-01927-2]
Abstract
Detection of curvilinear structures from microscopic images, which helps clinicians make an unambiguous diagnosis, has assumed paramount importance in recent clinical practice. The appearance and size of dermatophytic hyphae, keratitic fungi, and corneal and retinal vessels vary widely, making their automated detection cumbersome. Automated deep learning methods, endowed with superior self-learning capacity, have superseded traditional machine learning methods, especially in complex images with challenging backgrounds. Their ability to learn features automatically from large input data, with better generalization and recognition capability and without human interference or excessive pre-processing, is highly beneficial in this context. Researchers have made varied attempts to overcome challenges such as thin vessels, bifurcations, and obstructive lesions in retinal vessel detection, as revealed in several of the publications reviewed here. Diabetic neuropathic complications such as tortuosity and changes in the density and angles of corneal fibers have likewise been successfully addressed in many of the publications reviewed. Since artifacts complicate the images and affect the quality of analysis, methods addressing these challenges are also described. Traditional and deep learning methods adapted and published between 2015 and 2021, covering retinal vessels, corneal nerves, and filamentous fungi, are summarized in this review. We find several novel and meritorious ideas and techniques being put to use for retinal vessel segmentation and classification which, by way of cross-domain adaptation, could also be utilized for corneal nerves and filamentous fungi, with suitable adaptations to the challenges to be addressed.
Affiliation(s)
- Rajitha Kv
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
- Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
- Prakash Peralam Yegneswaran
- Department of Microbiology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
6
End-to-End Automatic Classification of Retinal Vessel Based on Generative Adversarial Networks with Improved U-Net. Diagnostics (Basel) 2023; 13:1148. [PMID: 36980456] [PMCID: PMC10047448] [DOI: 10.3390/diagnostics13061148]
Abstract
The retinal vessels are the only blood vessels in the human body that can be observed directly by non-invasive imaging techniques. Retinal vessel morphology and structure are important objects of concern for physicians in the early diagnosis and treatment of related diseases, and the classification of retinal vessels has important guiding significance at the basic stage of diagnosis and treatment. This paper proposes a novel method based on generative adversarial networks with an improved U-Net, which achieves synchronous automatic segmentation and classification of blood vessels in an end-to-end network. The proposed method avoids the dependence of classification on a separately produced segmentation result. Moreover, it provides accurate classification of arteries and veins while also classifying arteriovenous crossings. The validity of the proposed method is evaluated on the RITE dataset: the overall image classification accuracy reaches 96.87%, and the sensitivity and specificity of arteriovenous classification reach 91.78% and 97.25%, respectively. The results verify the effectiveness of the proposed method and show competitive classification performance.
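The paper's network is not reproduced here; as a generic illustration of training a U-Net-style generator against a discriminator on A/V label maps, a hypothetical training step (the module interfaces and the `num_classes` attribute are assumptions) could be written as:

```python
import torch
import torch.nn.functional as F

def gan_segmentation_step(generator, discriminator, g_opt, d_opt, image, av_label, lam=1.0):
    """One adversarial training step for multi-class vessel segmentation (illustrative).

    generator: maps image -> (B, C, H, W) class logits; exposes .num_classes.
    discriminator: maps (image, probability map) -> real/fake logit.
    av_label: (B, H, W) integer A/V labels, converted to one-hot for the discriminator.
    """
    one_hot = F.one_hot(av_label, num_classes=generator.num_classes)
    one_hot = one_hot.permute(0, 3, 1, 2).float()

    # discriminator step: real annotated maps vs. generated maps
    with torch.no_grad():
        fake = torch.softmax(generator(image), dim=1)
    d_real = discriminator(image, one_hot)
    d_fake = discriminator(image, fake)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # generator step: segmentation loss plus fooling the discriminator
    logits = generator(image)
    probs = torch.softmax(logits, dim=1)
    g_adv = discriminator(image, probs)
    g_loss = (F.cross_entropy(logits, av_label) +
              lam * F.binary_cross_entropy_with_logits(g_adv, torch.ones_like(g_adv)))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```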
7
Xu X, Yang P, Wang H, Xiao Z, Xing G, Zhang X, Wang W, Xu F, Zhang J, Lei J. AV-casNet: Fully Automatic Arteriole-Venule Segmentation and Differentiation in OCT Angiography. IEEE Trans Med Imaging 2023; 42:481-492. [PMID: 36227826] [DOI: 10.1109/tmi.2022.3214291]
Abstract
Automatic segmentation and differentiation of retinal arterioles and venules (AV), defined as the small blood vessels directly before and after the capillary plexus, are of great importance for the diagnosis of various eye diseases and systemic diseases, such as diabetic retinopathy, hypertension, and cardiovascular diseases. Optical coherence tomography angiography (OCTA) is a recent imaging modality that provides capillary-level blood flow information. However, OCTA does not exhibit the colorimetric and geometric differences between arterioles and venules that fundus photography does. Various methods have been proposed to differentiate AV in OCTA, but they typically require the guidance of other imaging modalities. In this study, we propose a cascaded neural network to automatically segment and differentiate AV based solely on OCTA. A convolutional neural network (CNN) module is first applied to generate an initial segmentation, followed by a graph neural network (GNN) to improve the connectivity of the initial segmentation. Various CNN and GNN architectures are employed and compared. The proposed method is evaluated on multi-center clinical datasets, including 3×3 mm² and 6×6 mm² OCTA. The proposed method holds the potential to enrich OCTA image information for the diagnosis of various diseases.
8
Toptaş B, Hanbay D. Separation of arteries and veins in retinal fundus images with a new CNN architecture. Comput Methods Biomech Biomed Eng Imaging Vis 2022. [DOI: 10.1080/21681163.2022.2151066]
Affiliation(s)
- Buket Toptaş
- Computer Engineering Department, Engineering and Natural Science Faculty, Bandırma Onyedi Eylül University, Balıkesir, Turkey
- Davut Hanbay
- Computer Engineering Department, Engineering Faculty, Inonu University, Malatya, Turkey
9
Hu J, Wang H, Wu G, Cao Z, Mou L, Zhao Y, Zhang J. Multi-scale Interactive Network with Artery/Vein Discriminator for Retinal Vessel Classification. IEEE J Biomed Health Inform 2022; 26:3896-3905. [PMID: 35394918] [DOI: 10.1109/jbhi.2022.3165867]
Abstract
Automatic classification of retinal arteries and veins plays an important role in assisting clinicians to diagnose cardiovascular and eye-related diseases. However, owing to the high degree of anatomical variation across the population and the inconsistent labels introduced by the subjective judgment of annotators in available training data, most existing methods suffer from blood vessel discontinuity and arteriovenous confusion, and the artery/vein (A/V) classification task still faces great challenges. In this work, we propose a multi-scale interactive network with an A/V discriminator for retinal artery and vein recognition, which can reduce arteriovenous confusion and alleviate the disturbance of noisy labels. A multi-scale interaction (MI) module is designed in the encoder to realize cross-space multi-scale feature interaction in fundus images, effectively integrating high-level and low-level context information. In particular, we design an ingenious A/V discriminator (AVD) that utilizes the independent and shared information between arteries and veins and is combined with a topology loss to further strengthen the ability of the model to resolve arteriovenous confusion. In addition, we adopt a sample re-weighting (SW) strategy to effectively alleviate the disturbance from data labeling errors. The proposed model is verified on three publicly available fundus image datasets (AV-DRIVE, HRF, LES-AV) and a private dataset. We achieve accuracies of 97.47%, 96.91%, 97.79%, and 98.18%, respectively, on these four datasets. Extensive experimental results demonstrate that our method achieves competitive performance compared with state-of-the-art methods for A/V classification. To address the problem of training data scarcity, we publicly release 100 fundus images with A/V annotations to promote relevant research in the community.
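The abstract does not spell out the sample re-weighting (SW) strategy; one common generic variant simply down-weights the highest-loss pixels, which are the most likely to carry noisy labels. A hypothetical PyTorch sketch of that variant (not the paper's exact scheme):

```python
import torch
import torch.nn.functional as F

def reweighted_cross_entropy(logits, labels, keep_ratio=0.9):
    """Down-weight the highest-loss pixels, which are more likely to carry noisy labels.

    logits: (B, C, H, W); labels: (B, H, W). The lowest-loss keep_ratio of pixels get
    full weight; the rest are ignored in this simple variant (illustrative only).
    """
    per_pixel = F.cross_entropy(logits, labels, reduction="none")   # (B, H, W)
    flat = per_pixel.flatten()
    k = max(int(keep_ratio * flat.numel()), 1)
    threshold = torch.kthvalue(flat, k).values   # loss value below which pixels are kept
    weights = (per_pixel <= threshold).float()
    return (per_pixel * weights).sum() / weights.sum().clamp(min=1)

# toy usage
logits = torch.randn(2, 3, 64, 64, requires_grad=True)
labels = torch.randint(0, 3, (2, 64, 64))
reweighted_cross_entropy(logits, labels).backward()
```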
10
TW-GAN: Topology and width aware GAN for retinal artery/vein classification. Med Image Anal 2022; 77:102340. [DOI: 10.1016/j.media.2021.102340]
11
Binh NT, Hien NM, Tin DT. Improving U-Net architecture and graph cuts optimization to classify arterioles and venules in retina fundus images. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-212259]
Abstract
The central retinal artery and its branches supply blood to the inner retina. Vascular manifestations in the retina indirectly reflect vascular changes and damage in organs such as the heart, kidneys, and brain because of the similar vascular structure of these organs. Diabetic retinopathy and the risk of stroke are associated with increased venular caliber, and the severity of these diseases depends on the changes in arterioles and venules. The ratio between the calibers of arterioles and venules (AVR) therefore varies, and AVR is considered a useful diagnostic indicator of various associated health problems. However, the task is not easy because of the limited information in the features used to classify retinal vessels as arterioles and venules. This paper proposes a method to classify retinal vessels into arterioles and venules based on an improved U-Net architecture and graph cuts. The accuracy of the proposed method is about 97.6%. The results of the proposed method are better than those of other methods on the RITE and AVRDB datasets.
Affiliation(s)
- Nguyen Thanh Binh
- Department of Information Systems, Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Ho Chi Minh City, Vietnam
- Vietnam National University Ho Chi Minh City, Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam
- Nguyen Mong Hien
- Department of Information Systems, Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Ho Chi Minh City, Vietnam
- Tra Vinh University, Vietnam
- Dang Thanh Tin
- Vietnam National University Ho Chi Minh City, Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam
- Information Systems Engineering Laboratory, Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology (HCMUT), Ho Chi Minh City, Vietnam
12
Networks behind the morphology and structural design of living systems. Phys Life Rev 2022; 41:1-21. [DOI: 10.1016/j.plrev.2022.03.001]
13
Review of Machine Learning Applications Using Retinal Fundus Images. Diagnostics (Basel) 2022; 12:134. [PMID: 35054301] [PMCID: PMC8774893] [DOI: 10.3390/diagnostics12010134]
Abstract
Automating screening and diagnosis in the medical field saves time and reduces the chances of misdiagnosis while saving labor and cost for physicians. With the feasibility and development of deep learning methods, machines are now able to interpret complex features in medical data, which has led to rapid advancements in automation. Such efforts have been made in ophthalmology to analyze retinal images and to build analysis-based frameworks for the identification of retinopathy and the assessment of its severity. This paper reviews recent state-of-the-art works utilizing color fundus images, one of the imaging modalities used in ophthalmology. Specifically, the deep learning methods for automated screening and diagnosis of diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucoma are investigated. In addition, the machine learning techniques applied to retinal vasculature extraction from the fundus image are covered. The challenges in developing these systems are also discussed.
14
Mishra S, Wang YX, Wei CC, Chen DZ, Hu XS. VTG-Net: A CNN Based Vessel Topology Graph Network for Retinal Artery/Vein Classification. Front Med (Lausanne) 2021; 8:750396. [PMID: 34820394] [PMCID: PMC8606556] [DOI: 10.3389/fmed.2021.750396]
Abstract
From diagnosing cardiovascular diseases to analyzing the progression of diabetic retinopathy, accurate retinal artery/vein (A/V) classification is critical. Promising approaches for A/V classification, ranging from conventional graph-based methods to recent convolutional neural network (CNN) based models, have been proposed. However, the inability of traditional graph-based methods to utilize the deep hierarchical features extracted by CNNs, and the limitations of current CNN-based methods in incorporating vessel topology information, hinder their effectiveness. In this paper, we propose a new CNN-based framework, VTG-Net (vessel topology graph network), for retinal A/V classification that incorporates vessel topology information. VTG-Net exploits retinal vessel topology along with CNN features to improve A/V classification accuracy. Specifically, we transform vessel features extracted by a CNN in the image domain into a graph representation that preserves the vessel topology. Then, by exploiting a graph convolutional network (GCN), we enable our model to learn both CNN features and vessel topological features simultaneously. The final prediction is attained by fusing the CNN and GCN outputs. Using the publicly available AV-DRIVE dataset and an in-house dataset, we verify the high performance of our VTG-Net for retinal A/V classification over state-of-the-art methods (with ~2% improvement in accuracy on the AV-DRIVE dataset).
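As a rough illustration of coupling CNN features with a vessel-topology graph, the sketch below samples a CNN feature map at skeleton points, treats those points as graph nodes, and classifies them with a single graph-convolution layer; the node construction, adjacency, and layer sizes are assumptions for illustration, not VTG-Net's actual design:

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution layer: H' = relu(D^-1 (A + I) H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0), device=adj.device)   # add self-loops
        a = a / a.sum(dim=1, keepdim=True)                     # row-normalise
        return torch.relu(self.lin(a @ x))

def classify_skeleton_nodes(cnn_features, skeleton_coords, adj, n_classes=2):
    """Sample CNN features at skeleton points and classify each node with a tiny GCN.

    cnn_features: (C, H, W) feature map from a segmentation CNN.
    skeleton_coords: (N, 2) integer (row, col) positions of skeleton points (graph nodes).
    adj: (N, N) adjacency built from skeleton connectivity.
    """
    rows, cols = skeleton_coords[:, 0], skeleton_coords[:, 1]
    node_feats = cnn_features[:, rows, cols].t()               # (N, C) node features
    layer = SimpleGCNLayer(node_feats.size(1), 64)
    head = nn.Linear(64, n_classes)
    return head(layer(node_feats, adj))                        # (N, n_classes) A/V logits

# toy usage
feats = torch.randn(32, 128, 128)
coords = torch.randint(0, 128, (10, 2))
adj = (torch.rand(10, 10) > 0.7).float()
print(classify_skeleton_nodes(feats, coords, adj).shape)
```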
Affiliation(s)
- Suraj Mishra
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- Ya Xing Wang
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Chuan Chuan Wei
- Department of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Danny Z. Chen
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- X. Sharon Hu
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
15
Jiang Y, Chen W, Liu M, Wang Y, Meijering E. DeepRayburst for Automatic Shape Analysis of Tree-Like Structures in Biomedical Images. IEEE J Biomed Health Inform 2021; 26:2204-2215. [PMID: 34727041] [DOI: 10.1109/jbhi.2021.3124514]
Abstract
Precise quantification of tree-like structures from biomedical images, such as neuronal shape reconstruction and retinal blood vessel caliber estimation, is increasingly important for understanding normal function and pathologic processes in biology. Some handcrafted methods have been proposed for this purpose in recent years; however, each is designed only for a specific application. In this paper, we propose a shape analysis algorithm, DeepRayburst, that can be applied to many different applications based on Multi-Feature Rayburst Sampling (MFRS) and a Dual Channel Temporal Convolutional Network (DC-TCN). Specifically, we first generate a Rayburst Sampling (RS) core containing a set of multidirectional rays. The MFRS is then designed by extending each ray of the RS to multiple parallel rays from which a set of feature sequences is extracted. A Gaussian kernel is then used to fuse these feature sequences into a single feature sequence. Furthermore, we design a DC-TCN to make the rays terminate on the surface of tree-like structures according to the fused feature sequence. Finally, by analyzing the distribution patterns of the terminated rays, the algorithm can serve multiple shape analysis applications of tree-like structures. Experiments on three different applications, including soma shape reconstruction, neuronal shape reconstruction, and vessel caliber estimation, confirm that the proposed method outperforms other state-of-the-art shape analysis methods, demonstrating its flexibility and robustness.
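Classical Rayburst sampling, which DeepRayburst builds on, estimates local caliber by casting rays from a point until they leave the structure. A minimal NumPy sketch on a binary mask (not the learned variant described in the paper):

```python
import numpy as np

def rayburst_caliber(mask, center, n_rays=32, max_len=100.0, step=0.5):
    """Estimate local caliber by casting rays from `center` inside a binary mask.

    mask: 2-D boolean array (True = inside the structure).
    center: (row, col) point assumed to lie on the structure.
    Returns the shortest summed length of opposite ray pairs, a rough diameter.
    """
    h, w = mask.shape
    lengths = []
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        dr, dc = np.sin(theta), np.cos(theta)
        dist, r, c = 0.0, float(center[0]), float(center[1])
        while (0 <= round(r) < h and 0 <= round(c) < w
               and mask[int(round(r)), int(round(c))] and dist < max_len):
            r, c, dist = r + dr * step, c + dc * step, dist + step
        lengths.append(dist)
    half = n_rays // 2
    return min(lengths[i] + lengths[i + half] for i in range(half))

# toy example: a horizontal "vessel" 5 pixels thick
m = np.zeros((50, 50), dtype=bool)
m[22:27, :] = True
print(rayburst_caliber(m, (24, 25)))   # roughly the vessel thickness
```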
16
Hu J, Wang H, Cao Z, Wu G, Jonas JB, Wang YX, Zhang J. Automatic Artery/Vein Classification Using a Vessel-Constraint Network for Multicenter Fundus Images. Front Cell Dev Biol 2021; 9:659941. [PMID: 34178986] [PMCID: PMC8226261] [DOI: 10.3389/fcell.2021.659941]
Abstract
Retinal blood vessel morphological abnormalities are generally associated with cardiovascular, cerebrovascular, and systemic diseases, so automatic artery/vein (A/V) classification is particularly important for medical image analysis and clinical decision making. However, current methods still have limitations in A/V classification, especially errors at vessel edges and endpoints caused by single-scale processing and the blurred boundaries between arteries and veins. To alleviate these problems, in this work we propose a vessel-constraint network (VC-Net) that utilizes information on vessel distribution and edges to enhance A/V classification; it is a high-precision A/V classification model based on data fusion. In particular, the VC-Net introduces a vessel-constraint (VC) module that combines local and global vessel information to generate a weight map that constrains the A/V features, suppressing background-prone features and enhancing the edge and end features of blood vessels. In addition, the VC-Net employs a multiscale feature (MSF) module to extract blood vessel information at different scales, improving the feature extraction capability and robustness of the model. The VC-Net also produces vessel segmentation results simultaneously. The proposed method is tested on publicly available fundus image datasets of different scales, namely DRIVE, LES, and HRF, and validated on two newly created multicenter datasets, Tongren and Kailuan. We achieve a balanced accuracy of 0.9554 and F1 scores of 0.7616 and 0.7971 for arteries and veins, respectively, on the DRIVE dataset. The experimental results show that the proposed model achieves competitive performance in A/V classification and vessel segmentation tasks compared with state-of-the-art methods. Finally, we test the Kailuan dataset with models trained on the other fused datasets, and the results also show good robustness. To promote research in this area, the Tongren dataset and source code will be made publicly available at https://github.com/huawang123/VC-Net.
Affiliation(s)
- Jingfei Hu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Zhaohui Cao
- Hefei Innovation Research Institute, Beihang University, Hefei, China
- Guang Wu
- Hefei Innovation Research Institute, Beihang University, Hefei, China
- Jost B Jonas
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China; Department of Ophthalmology, Medical Faculty Mannheim of the Ruprecht-Karls-University Heidelberg, Mannheim, Germany
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China
17
Guan C, Yi M, Du Q, Xiong H, Tan H, Wang M, Zeng Y. Full-field optical multi-functional angiography based on endogenous hemodynamic characteristics. J Biophotonics 2021; 14:e202000411. [PMID: 33449425] [DOI: 10.1002/jbio.202000411]
Abstract
Blood flow functional imaging is widely applied in biological research to provide vascular morphological and statistical parameters. It relies on absorption differences and is therefore easily affected by complex biological structures, and it cannot accommodate abundant functional information. We propose a full-field multi-functional angiography method to classify arteriovenous vessels and to display flow velocity and vascular diameter distributions simultaneously. Unlike previous methods, an under-sampled laser Doppler acquisition mode is used to record the low-coherence speckle, and multi-functional angiography is achieved by modulating the endogenous hemodynamic characteristics extracted from the low-coherence speckle. To demonstrate the combination of classified angiography, blood flow velocity measurement, and vascular diameter measurement realized by our method, we performed experiments on a flow phantom and on living chicken embryos and generated multi-functional angiograms. The proposed method can be used as a label-free multi-functional angiography technique in which red blood cells provide a strong endogenous source of natural hemodynamic characteristics.
Affiliation(s)
- Caizhong Guan
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Min Yi
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Qianyi Du
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Honglian Xiong
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Haishu Tan
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Mingyi Wang
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Yaguang Zeng
- School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
18
Zhou Y, Chen Z, Shen H, Zheng X, Zhao R, Duan X. A refined equilibrium generative adversarial network for retinal vessel segmentation. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.06.143]
19
Li X, Jiang Y, Li M, Yin S. Lightweight Attention Convolutional Neural Network for Retinal Vessel Image Segmentation. IEEE Trans Industr Inform 2021; 17:1958-1967. [DOI: 10.1109/tii.2020.2993842]
20
Mookiah MRK, Hogg S, MacGillivray TJ, Prathiba V, Pradeepa R, Mohan V, Anjana RM, Doney AS, Palmer CNA, Trucco E. A review of machine learning methods for retinal blood vessel segmentation and artery/vein classification. Med Image Anal 2020; 68:101905. [PMID: 33385700] [DOI: 10.1016/j.media.2020.101905]
Abstract
The eye affords a unique opportunity to inspect a rich part of the human microvasculature non-invasively via retinal imaging. Retinal blood vessel segmentation and classification are prime steps for the diagnosis and risk assessment of microvascular and systemic diseases. A high volume of techniques based on deep learning has been published in recent years. In this context, we review 158 papers published between 2012 and 2020, focussing on methods based on machine and deep learning (DL) for automatic vessel segmentation and classification in fundus camera images. We divide the methods into various classes by task (segmentation or artery-vein classification), technique (supervised or unsupervised, deep and non-deep learning, hand-crafted methods) and more specific algorithms (e.g. multiscale, morphology). We discuss advantages and limitations, and include tables summarising results at a glance. Finally, we attempt to assess the quantitative merit of DL methods in terms of accuracy improvement compared to other methods. The results allow us to offer our views on the outlook for vessel segmentation and classification for fundus camera images.
Affiliation(s)
- Stephen Hogg
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
- Tom J MacGillivray
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh EH16 4SB, UK
- Vijayaraghavan Prathiba
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Rajendra Pradeepa
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Viswanathan Mohan
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Ranjit Mohan Anjana
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Alexander S Doney
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
- Colin N A Palmer
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
- Emanuele Trucco
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
21
Li H, Wang Y, Wan C, Shen J, Chen Z, Ye H, Yu Q. MAU-Net: A Retinal Vessels Segmentation Method. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1958-1961. [PMID: 33018386] [DOI: 10.1109/embc44109.2020.9176093]
Abstract
Detailed extraction of retinal vessel morphology is of great significance in many clinical applications. In this paper, we propose a retinal image segmentation method, called MAU-Net, which is based on the U-Net structure and takes advantage of both modulated deformable convolution and dual attention modules to realize vessel segmentation. Specifically, based on the classic U-shaped architecture, our network introduces the Modulated Deformable Convolution (MDC) block as the encoding and decoding unit to model vessels with various shapes and deformations. In addition, in order to obtain better feature representations, we aggregate the outputs of two attention modules: the position attention module (PAM) and the channel attention module (CAM). On three publicly available datasets, DRIVE, STARE and CHASEDB1, we achieve performance superior to other algorithms. Quantitative and qualitative experimental results show that our MAU-Net can effectively and accurately accomplish the retinal vessel segmentation task.
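The dual attention modules mentioned above follow the familiar position/channel attention design; a compact PyTorch sketch of a channel attention branch is given below (the softmax-normalised affinity and residual blending are a simplified illustration, not necessarily MAU-Net's exact module):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: re-weight channels by their pairwise affinities (residual form)."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))   # learned blending weight

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, h, w = x.shape
        flat = x.view(b, c, -1)                      # (B, C, H*W)
        affinity = torch.bmm(flat, flat.transpose(1, 2))   # (B, C, C) channel affinities
        attn = torch.softmax(affinity, dim=-1)
        out = torch.bmm(attn, flat).view(b, c, h, w)
        return self.gamma * out + x                  # residual connection

# toy usage
cam = ChannelAttention()
print(cam(torch.randn(2, 16, 32, 32)).shape)
```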
22
Kang H, Gao Y, Guo S, Xu X, Li T, Wang K. AVNet: A retinal artery/vein classification network with category-attention weighted fusion. Comput Methods Programs Biomed 2020; 195:105629. [PMID: 32634648] [DOI: 10.1016/j.cmpb.2020.105629]
Abstract
BACKGROUND AND OBJECTIVE: Automatic artery/vein (A/V) classification in retinal images is of great importance in detecting vascular abnormalities, which may provide biomarkers for early diagnosis of many systemic diseases. It is intuitive to apply popular deep semantic segmentation networks to A/V classification. However, the model is required to provide powerful representation ability, since vessels are much more complex than general objects. Moreover, a deep network may produce inconsistent classification results for the same vessel due to the lack of a structured optimization objective.
METHODS: In this paper, we propose a novel segmentation network named AVNet, which effectively enhances the classification ability of the model by integrating a category-attention weighted fusion (CWF) module, significantly improving the pixel-level A/V classification results. Then, a graph-based vascular structure reconstruction (VSR) algorithm is employed to reduce the segment-wise inconsistency, verifying the effect of the graph model on noisy vessel segmentation results.
RESULTS: The proposed method has been verified on three datasets, i.e. DRIVE, LES-AV and WIDE. AVNet achieves pixel-level accuracies of 90.62%, 90.34%, and 93.16%, respectively, and VSR further improves the performance by 0.19%, 1.85% and 0.64%, achieving state-of-the-art results on these three datasets.
CONCLUSION: The proposed method achieves competitive performance in the A/V classification task.
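The graph construction used by VSR is not detailed in the abstract; the underlying goal of removing segment-wise inconsistency can be illustrated by a simple majority vote of pixel labels within each vessel segment (segment ids are assumed to come from a prior skeletonisation or connected-component step):

```python
import numpy as np

def enforce_segment_consistency(pixel_labels, segment_ids):
    """Give every pixel in a vessel segment the segment's majority A/V label.

    pixel_labels: 2-D array, 1 = artery, 2 = vein, 0 = background.
    segment_ids: 2-D array of the same shape, 0 = background, k > 0 = segment id.
    """
    out = pixel_labels.copy()
    for seg in np.unique(segment_ids):
        if seg == 0:
            continue
        mask = segment_ids == seg
        votes = pixel_labels[mask]
        votes = votes[votes > 0]
        if votes.size:
            out[mask] = np.bincount(votes).argmax()   # majority label for the segment
    return out

# toy example: one segment with mixed predictions becomes all-artery
labels = np.array([[1, 1, 2, 0], [1, 2, 1, 0]])
segs = np.array([[1, 1, 1, 0], [1, 1, 1, 0]])
print(enforce_segment_consistency(labels, segs))
```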
Affiliation(s)
- Hong Kang
- College of Computer Science, Nankai University, Tianjin, China; Beijing Shanggong Medical Technology Co. Ltd., China
- Yingqi Gao
- College of Computer Science, Nankai University, Tianjin, China
- Song Guo
- College of Computer Science, Nankai University, Tianjin, China
- Xia Xu
- College of Computer Science, Nankai University, Tianjin, China
- Tao Li
- College of Computer Science, Nankai University, Tianjin, China; State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Science, Beijing 100190, China
- Kai Wang
- College of Computer Science, Nankai University, Tianjin, China; Key Laboratory for Medical Data Analysis and Statistical Research of Tianjin, China
23
Wang Z, Jiang X, Liu J, Cheng KT, Yang X. Multi-Task Siamese Network for Retinal Artery/Vein Separation via Deep Convolution Along Vessel. IEEE Trans Med Imaging 2020; 39:2904-2919. [PMID: 32167888] [DOI: 10.1109/tmi.2020.2980117]
Abstract
Vascular tree disentanglement and vessel type classification are two crucial steps of graph-based methods for retinal artery-vein (A/V) separation. Existing approaches treat them as two independent tasks and mostly rely on ad hoc rules (e.g. changes in vessel direction) and hand-crafted features (e.g. color, thickness) to handle them respectively. However, we argue that the two tasks are highly correlated and should be handled jointly, since knowing the A/V type can unravel highly entangled vascular trees, which in turn helps to infer the types of connected vessels that are hard to classify based on appearance alone. Therefore, designing features and models for the two tasks in isolation often leads to a suboptimal solution for A/V separation. In view of this, this paper proposes a multi-task siamese network which aims to learn the two tasks jointly and thus yields more robust deep features for accurate A/V separation. Specifically, we first introduce Convolution Along Vessel (CAV) to extract visual features by convolving a fundus image along vessel segments, and geometric features by tracking the directions of blood flow in vessels. The siamese network is then trained to learn multiple tasks: (i) classifying the A/V types of vessel segments using visual features only, and (ii) estimating the similarity of every two connected segments by comparing their visual and geometric features in order to disentangle the vasculature into individual vessel trees. Finally, the results of the two tasks mutually correct each other to accomplish the final A/V separation. Experimental results demonstrate that our method achieves accuracy values of 94.7%, 96.9%, and 94.5% on three major databases (DRIVE, INSPIRE, WIDE), respectively, outperforming recent state-of-the-art methods.
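The Convolution Along Vessel extractor is specific to this paper; the segment-pair similarity task can be sketched with a shared encoder applied to the pooled visual-plus-geometric feature vector of each of two connected segments, scored by cosine similarity (dimensions and architecture below are assumptions):

```python
import torch
import torch.nn as nn

class SegmentSimilarity(nn.Module):
    """Siamese head: a shared MLP encodes each segment's features; cosine similarity scores the pair."""
    def __init__(self, feat_dim=64, embed_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared weights for both branches
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, feats_a, feats_b):              # (B, feat_dim) each
        za = self.encoder(feats_a)
        zb = self.encoder(feats_b)
        return torch.cosine_similarity(za, zb, dim=1) # high score => likely the same vessel tree

# toy usage: pooled visual + geometric features of two connected segments
sim = SegmentSimilarity()
print(sim(torch.randn(4, 64), torch.randn(4, 64)))
```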