101. Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022; 12:973. [PMID: 35888063] [PMCID: PMC9321111] [DOI: 10.3390/life12070973]
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images carry the information of three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing literature is surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, and for segmenting three retinal landmarks. The survey makes clear that neural network-based systems typically use all channels together, whereas non-neural network-based systems most commonly use the green channel. However, no conclusion about the relative importance of the channels can be drawn from previous work. Therefore, systematic experiments are conducted to analyse this: a well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
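The channel comparison above starts from a simple operation: splitting an RGB fundus photograph into its three channels. A minimal sketch (illustrative only, not the paper's code; the nested-list pixel layout is an assumption):

```python
# Split an RGB image (nested lists of (R, G, B) tuples) into its three
# channels; the green channel typically shows the highest vessel/background
# contrast, which is why non-neural pipelines often favor it.
def split_channels(rgb_image):
    """Return (red, green, blue) 2-D intensity grids."""
    red   = [[px[0] for px in row] for row in rgb_image]
    green = [[px[1] for px in row] for row in rgb_image]
    blue  = [[px[2] for px in row] for row in rgb_image]
    return red, green, blue

# Tiny 2x2 "image": each pixel is (R, G, B)
img = [[(120, 60, 30), (110, 55, 25)],
       [(100, 80, 40), ( 90, 70, 35)]]
r, g, b = split_channels(img)
```

A single-channel experiment, as in the paper, would then feed only `g` (or `r`, or `b`) to the segmentation network instead of the stacked three-channel input.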
Affiliation(s)
- Sangeeta Biswas
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Corresponding author
- Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
102. Ye Y, Pan C, Wu Y, Wang S, Xia Y. MFI-Net: Multiscale Feature Interaction Network for Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2022; 26:4551-4562. [PMID: 35696471] [DOI: 10.1109/jbhi.2022.3182471]
Abstract
Segmentation of retinal vessels on fundus images plays a critical role in the diagnosis of micro-vascular and ophthalmological diseases. Although extensively studied, this task remains challenging due to many factors, including highly variable vessel width and poor vessel-background contrast. In this paper, we propose a multiscale feature interaction network (MFI-Net) for retinal vessel segmentation, which is a U-shaped convolutional neural network equipped with a pyramid squeeze-and-excitation (PSE) module, a coarse-to-fine (C2F) module, deep supervision, and feature fusion. We extend the SE operator to multiscale features, resulting in the PSE module, which uses the channel attention learned at multiple scales to enhance multiscale features and enables the network to handle vessels of variable width. We further design the C2F module to generate and re-process the residual feature maps, aiming to preserve more vessel details during the decoding process. The proposed MFI-Net has been evaluated against several public models on the DRIVE, STARE, CHASE_DB1, and HRF datasets. Our results suggest that both the PSE and C2F modules are effective in improving the accuracy of MFI-Net, and also indicate that our model has superior segmentation performance and generalization ability over existing models on the four public datasets.
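The PSE module builds on the standard squeeze-and-excitation (SE) operator. A toy single-scale SE sketch in pure Python (hand-picked weights for illustration only; the paper applies the operator at multiple scales and learns the weights):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation over a list of 2-D feature maps (one per channel).

    w1: reduction weights, shape [mid][C]; w2: expansion weights, shape [C][mid].
    Returns (rescaled feature maps, per-channel gates).
    """
    # Squeeze: global average pooling per channel
    squeezed = [sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
                for fmap in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid
    hidden = [max(0.0, sum(w * s for w, s in zip(ws, squeezed))) for ws in w1]
    gates = [sigmoid(sum(w * h for w, h in zip(ws, hidden))) for ws in w2]
    # Scale: reweight each channel by its learned gate
    return ([[[v * g for v in row] for row in fmap]
             for fmap, g in zip(feature_maps, gates)], gates)

# Two 2x2 channels; the excitation weights boost channel 0 and suppress channel 1
fmaps = [[[1.0, 1.0], [1.0, 1.0]],
         [[3.0, 3.0], [3.0, 3.0]]]
scaled, gates = se_attention(fmaps, w1=[[0.5, 0.5]], w2=[[1.0], [-1.0]])
```

A pyramid variant would compute such gates from pooled copies of the features at several scales and merge them, which is the extension the paper describes.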
103. Mishra S, Zhang Y, Chen DZ, Hu XS. Data-Driven Deep Supervision for Medical Image Segmentation. IEEE Trans Med Imaging 2022; 41:1560-1574. [PMID: 35030076] [DOI: 10.1109/tmi.2022.3143371]
Abstract
Medical image segmentation plays a vital role in disease diagnosis and analysis. However, data-dependent difficulties such as low image contrast, noisy background, and complicated objects of interest render the segmentation problem challenging. These difficulties diminish dense prediction and make it tough for known approaches to explore data-specific attributes for robust feature extraction. In this paper, we study medical image segmentation by focusing on robust data-specific feature extraction to achieve improved dense prediction. We propose a new deep convolutional neural network (CNN), which exploits specific attributes of input datasets to utilize deep supervision for enhanced feature extraction. In particular, we strategically locate and deploy auxiliary supervision, by matching the object perceptive field (OPF) (which we define and compute) with the layer-wise effective receptive fields (LERF) of the network. This helps the model pay close attention to some distinct input data dependent features, which the network might otherwise 'ignore' during training. Further, to achieve better target localization and refined dense prediction, we propose the densely decoded networks (DDN), by selectively introducing additional network connections (the 'crutch' connections). Using five public datasets (two retinal vessel, melanoma, optic disc/cup, and spleen segmentation) and two in-house datasets (lymph node and fungus segmentation), we verify the effectiveness of our proposed approach in 2D and 3D segmentation.
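Deep supervision in general attaches auxiliary losses to intermediate layers; the paper's contribution is *where* to attach them (matching the OPF with the LERF). A generic sketch of the combined loss (the 0.4 auxiliary weight is an arbitrary assumption, not the paper's value):

```python
import math

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy over flat lists of probabilities and 0/1 labels."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def deeply_supervised_loss(main_pred, aux_preds, target, aux_weight=0.4):
    """Main loss plus down-weighted auxiliary losses from intermediate layers."""
    loss = bce(main_pred, target)
    for aux in aux_preds:
        loss += aux_weight * bce(aux, target)
    return loss

main_pred = [0.9, 0.1]      # final-layer probabilities
aux_preds = [[0.7, 0.3]]    # one intermediate (auxiliary) head
target = [1, 0]
total = deeply_supervised_loss(main_pred, aux_preds, target)
```

The auxiliary head forces intermediate features to stay predictive; here that adds a penalty because the intermediate prediction is less confident than the final one.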
104. Computational Approach for Detection of Diabetes from Ocular Scans. Comput Intell Neurosci 2022; 2022:5066147. [PMID: 35607469] [PMCID: PMC9124089] [DOI: 10.1155/2022/5066147]
Abstract
An estimated 30 million children and adults worldwide suffer from diabetes. A person with diabetes may notice several symptoms, and the disease can also be detected from retinal images, since diabetes affects the human eye. A doctor can usually detect retinal changes quickly and help prevent vision loss, so regular eye examinations are very important. Diabetes is a chronic disease that affects various parts of the human body, including the retina, and is a major cause of blindness in developed countries. This paper deals with the classification of retinal images as diabetic or non-diabetic with the help of deep learning algorithms and architectures; deep learning is well suited to the classification of medical images, particularly images as complex as the human retina. A large image dataset is used throughout the project, on which classification is performed with a binary classifier. With the chosen deep learning algorithms, the model achieves a training accuracy of 96.68% and a validation accuracy of 66.82%. Detecting diabetic retinopathy can thus be considered an effective and efficient method for diabetes detection.
105. Hussain S, Guo F, Li W, Shen Z. DilUnet: A U-net based architecture for blood vessels segmentation. Comput Methods Programs Biomed 2022; 218:106732. [PMID: 35279601] [DOI: 10.1016/j.cmpb.2022.106732]
Abstract
BACKGROUND AND OBJECTIVE Retinal image segmentation can help clinicians detect pathological disorders by studying changes in retinal blood vessels. This early detection can help prevent blindness and many other vision impairments. So far, several supervised and unsupervised methods have been proposed for the task of automatic blood vessel segmentation. However, the sensitivity and robustness of these methods can be improved by correctly classifying more vessel pixels. METHOD We propose an automatic retinal blood vessel segmentation method based on the U-net architecture. This end-to-end framework utilizes a preprocessing and data augmentation pipeline for training. The architecture utilizes multiscale input and multioutput modules with improved skip connections and the careful use of dilated convolutions for effective feature extraction. In the multiscale input module, the input image is scaled down and concatenated with the output of convolutional blocks at different points in the encoder path to ensure the feature transfer of the original image. The multioutput module obtains upsampled outputs from each decoder block that are combined to obtain the final output. Skip paths connect each encoder block with the corresponding decoder block, and the whole architecture utilizes different dilation rates to improve the overall feature extraction. RESULTS The proposed method achieved accuracies of 0.9680, 0.9694, and 0.9701; sensitivities of 0.8837, 0.8263, and 0.8713; and Intersection over Union (IoU) scores of 0.8698, 0.7951, and 0.8184 on three publicly available datasets: DRIVE, STARE, and CHASE, respectively. An ablation study shows the contribution of each proposed module and technique. CONCLUSION The evaluation metrics revealed that the proposed method outperforms the original U-net, other U-net-based architectures, and many other state-of-the-art segmentation techniques, and that it is robust to noise.
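A key ingredient above is the dilated convolution, which enlarges the receptive field without adding weights. A 1-D pure-Python sketch (illustrative; DilUnet uses 2-D dilated convolutions inside the network):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """1-D 'valid' convolution with a dilation rate: kernel taps are spaced
    `dilation` samples apart, enlarging the receptive field without extra
    weights."""
    span = (len(kernel) - 1) * dilation  # receptive field minus one
    return [sum(kernel[j] * signal[i + j * dilation] for j in range(len(kernel)))
            for i in range(len(signal) - span)]
```

With the same 3-tap edge-detecting kernel, dilation 1 spans 3 samples while dilation 2 spans 5, so the filter "sees" wider structures at no extra parameter cost.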
Affiliation(s)
- Snawar Hussain
- School of Automation, Central South University, Changsha, Hunan 410083, China
- Fan Guo
- School of Automation, Central South University, Changsha, Hunan 410083, China
- Weiqing Li
- School of Automation, Central South University, Changsha, Hunan 410083, China
- Ziqi Shen
- School of Automation, Central South University, Changsha, Hunan 410083, China
106. Hofer D, Schmidt-Erfurth U, Orlando JI, Goldbach F, Gerendas BS, Seeböck P. Improving foveal avascular zone segmentation in fluorescein angiograms by leveraging manual vessel labels from public color fundus pictures. Biomed Opt Express 2022; 13:2566-2580. [PMID: 35774310] [PMCID: PMC9203117] [DOI: 10.1364/boe.452873]
Abstract
In clinical routine, ophthalmologists frequently analyze the shape and size of the foveal avascular zone (FAZ) to detect and monitor retinal diseases. In order to extract those parameters, the contours of the FAZ need to be segmented, which is normally achieved by analyzing the retinal vasculature (RV) around the macula in fluorescein angiograms (FA). Computer-aided segmentation methods based on deep learning (DL) can automate this task. However, current approaches for segmenting the FAZ are often tailored to a specific dataset or require manual initialization. Furthermore, they do not take the variability and challenges of clinical FA into account, which are often of low quality and difficult to analyze. In this paper we propose a DL-based framework to automatically segment the FAZ in challenging FA scans from clinical routine. Our approach mimics the workflow of retinal experts by using additional RV labels as a guidance during training. Hence, our model is able to produce RV segmentations simultaneously. We minimize the annotation work by using a multi-modal approach that leverages already available public datasets of color fundus pictures (CFPs) and their respective manual RV labels. Our experimental evaluation on two datasets with FA from 1) clinical routine and 2) large multicenter clinical trials shows that the addition of weak RV labels as a guidance during training improves the FAZ segmentation significantly with respect to using only manual FAZ annotations.
Affiliation(s)
- Dominik Hofer
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Ursula Schmidt-Erfurth
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- José Ignacio Orlando
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Yatiris Group, PLADEMA Institute, CONICET, Universidad Nacional del Centro de la Provincia de Buenos Aires, Gral. Pinto 399, Tandil, Buenos Aires, Argentina
- Felix Goldbach
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Bianca S. Gerendas
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Philipp Seeböck
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
107. State-of-the-art retinal vessel segmentation with minimalistic models. Sci Rep 2022; 12:6174. [PMID: 35418576] [PMCID: PMC9007957] [DOI: 10.1038/s41598-022-09675-y]
Abstract
The segmentation of retinal vasculature from eye fundus images is a fundamental task in retinal image analysis. Over recent years, increasingly complex approaches based on sophisticated Convolutional Neural Network architectures have been pushing performance on well-established benchmark datasets. In this paper, we take a step back and analyze the real need for such complexity. We first compile and review the performance of 20 different techniques on some popular databases, and we demonstrate that a minimalistic version of a standard U-Net with several orders of magnitude fewer parameters, carefully trained and rigorously evaluated, closely approximates the performance of the current best techniques. We then show that a cascaded extension (W-Net) reaches outstanding performance on several popular datasets, still using orders of magnitude fewer learnable weights than any previously published work. Furthermore, we provide the most comprehensive cross-dataset performance analysis to date, involving up to 10 different databases. Our analysis demonstrates that retinal vessel segmentation is far from solved when considering test images that differ substantially from the training data, and that this task represents an ideal scenario for the exploration of domain adaptation techniques. In this context, we experiment with a simple self-labeling strategy that enables a moderate enhancement of cross-dataset performance, indicating that there is still much room for improvement in this area. Finally, we test our approach on Artery/Vein segmentation and on vessel segmentation from OCTA imaging, where we again achieve results well-aligned with the state of the art at a fraction of the model complexity found in the recent literature. Code to reproduce the results in this paper is released.
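The parameter savings of such minimalistic models are easy to see from first principles. A back-of-the-envelope sketch (the double-3x3-conv-per-level, width-doubling layout is a common U-Net convention, not necessarily this paper's exact configuration):

```python
def conv2d_params(in_ch, out_ch, k=3):
    """Weights plus biases of one k x k convolution layer."""
    return k * k * in_ch * out_ch + out_ch

def unet_encoder_params(base_width, depth, in_ch=3):
    """Rough parameter count of a U-Net encoder whose channel width doubles
    per level, with two 3x3 convolutions per level."""
    total, ch_in, width = 0, in_ch, base_width
    for _ in range(depth):
        total += conv2d_params(ch_in, width) + conv2d_params(width, width)
        ch_in, width = width, width * 2
    return total

small = unet_encoder_params(base_width=8, depth=4)    # "minimalistic" width
big = unet_encoder_params(base_width=64, depth=4)     # classic U-Net width
```

Because parameters grow quadratically with channel width, shrinking the base width from 64 to 8 cuts the encoder's parameter count by a factor of roughly 60.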
108. Shen X, Xu J, Jia H, Fan P, Dong F, Yu B, Ren S. Self-attentional microvessel segmentation via squeeze-excitation transformer Unet. Comput Med Imaging Graph 2022; 97:102055. [DOI: 10.1016/j.compmedimag.2022.102055]
109. Xu Y, Fan Y. Dual-channel asymmetric convolutional neural network for an efficient retinal blood vessel segmentation in eye fundus images. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.05.003]
110. Hatamizadeh A, Hosseini H, Patel N, Choi J, Pole CC, Hoeferlin CM, Schwartz SD, Terzopoulos D. RAVIR: A Dataset and Methodology for the Semantic Segmentation and Quantitative Analysis of Retinal Arteries and Veins in Infrared Reflectance Imaging. IEEE J Biomed Health Inform 2022; 26:3272-3283. [PMID: 35349464] [DOI: 10.1109/jbhi.2022.3163352]
Abstract
The retinal vasculature provides important clues in the diagnosis and monitoring of systemic diseases including hypertension and diabetes. The microvascular system is of primary involvement in such conditions, and the retina is the only anatomical site where the microvasculature can be directly observed. The objective assessment of retinal vessels has long been considered a surrogate biomarker for systemic vascular diseases, and with recent advancements in retinal imaging and computer vision technologies, this topic has become the subject of renewed attention. In this paper, we present a novel dataset, dubbed RAVIR, for the semantic segmentation of Retinal Arteries and Veins in Infrared Reflectance (IR) imaging. It enables the creation of deep learning-based models that distinguish extracted vessel type without extensive post-processing. We propose a novel deep learning-based methodology, denoted as SegRAVIR, for the semantic segmentation of retinal arteries and veins and the quantitative measurement of the widths of segmented vessels. Our extensive experiments validate the effectiveness of SegRAVIR and demonstrate its superior performance in comparison to state-of-the-art models. Additionally, we propose a knowledge distillation framework for the domain adaptation of RAVIR pretrained networks on color images. We demonstrate that our pretraining procedure yields new state-of-the-art benchmarks on the DRIVE, STARE, and CHASE_DB1 datasets. Dataset link: https://ravirdataset.github.io/data.
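Quantitative width measurement, as in SegRAVIR's second stage, can be approximated with a distance transform: the distance from a centerline pixel to the nearest background pixel is half the local vessel width. A small BFS-based sketch (4-connected city-block distance; the paper's exact measurement procedure may differ):

```python
from collections import deque

def distance_to_background(mask):
    """Multi-source BFS distance (4-connectivity) from each vessel pixel to
    the nearest background pixel; background pixels get distance 0."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 0:
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def centerline_width(mask):
    """Approximate vessel width as 2*d - 1 at the deepest (centerline) pixel."""
    dist = distance_to_background(mask)
    return 2 * max(max(row) for row in dist) - 1

# A vertical vessel, 3 pixels wide, in a 5x7 grid
mask = [[1 if 2 <= x <= 4 else 0 for x in range(7)] for _ in range(5)]
```

Running the full distance transform along a skeletonized centerline would give a per-pixel width profile rather than a single value.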
111. Shi D, Lin Z, Wang W, Tan Z, Shang X, Zhang X, Meng W, Ge Z, He M. A Deep Learning System for Fully Automated Retinal Vessel Measurement in High Throughput Image Analysis. Front Cardiovasc Med 2022; 9:823436. [PMID: 35391847] [PMCID: PMC8980780] [DOI: 10.3389/fcvm.2022.823436]
Abstract
Motivation: The retinal microvasculature is a unique window for predicting and monitoring major cardiovascular diseases, but high-throughput deep learning tools for in-detail retinal vessel analysis are lacking. We therefore aim to develop and validate an artificial intelligence system (Retina-based Microvascular Health Assessment System, RMHAS) for fully automated vessel segmentation and quantification of the retinal microvasculature. Results: RMHAS achieved good segmentation accuracy across datasets with diverse eye conditions and image resolutions, with AUCs of 0.91, 0.88, 0.95, 0.93, 0.97, 0.95, and 0.94 for artery segmentation and 0.92, 0.90, 0.96, 0.95, 0.97, 0.95, and 0.96 for vein segmentation on the AV-WIDE, AVRDB, HRF, IOSTAR, LES-AV, RITE, and our internal datasets, respectively. Agreement and repeatability analysis supported the robustness of the algorithm. All required quantitative vessel analysis completed in less than 2 s.
Affiliation(s)
- Danli Shi
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhihong Lin
- Faculty of Engineering, Monash University, Melbourne, VIC, Australia
- Wei Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zachary Tan
- Centre for Eye Research Australia, East Melbourne, VIC, Australia
- Xianwen Shang
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Xueli Zhang
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Wei Meng
- Guangzhou Vision Tech Medical Technology Co., Ltd., Guangzhou, China
- Zongyuan Ge
- Research Center and Faculty of Engineering, Monash University, Melbourne, VIC, Australia
- Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Centre for Eye Research Australia, East Melbourne, VIC, Australia
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Correspondence: Mingguang He
112. Shi T, Boutry N, Xu Y, Geraud T. Local Intensity Order Transformation for Robust Curvilinear Object Segmentation. IEEE Trans Image Process 2022; 31:2557-2569. [PMID: 35275816] [DOI: 10.1109/tip.2022.3155954]
Abstract
Segmentation of curvilinear structures is important in many applications, such as retinal blood vessel segmentation for early detection of vessel diseases and pavement crack segmentation for road condition evaluation and maintenance. Currently, deep learning-based methods have achieved impressive performance on these tasks. Yet, most of them mainly focus on finding powerful deep architectures and ignore the inherent curvilinear structure feature (e.g., the curvilinear structure is darker than the context), which would allow a more robust representation. In consequence, performance usually drops considerably across datasets, which poses great challenges in practice. In this paper, we aim to improve generalizability by introducing a novel local intensity order transformation (LIOT). Specifically, we transform a gray-scale image into a contrast-invariant four-channel image based on the intensity order between each pixel and its nearby pixels along the four (horizontal and vertical) directions. This results in a representation that preserves the inherent characteristic of the curvilinear structure while being robust to contrast changes. Cross-dataset evaluation on three retinal blood vessel segmentation datasets demonstrates that LIOT improves the generalizability of some state-of-the-art methods. Additionally, cross-dataset evaluation between retinal blood vessel segmentation and pavement crack segmentation shows that LIOT is able to preserve the inherent characteristic of curvilinear structure across large appearance gaps. An implementation of the proposed method is available at https://github.com/TY-Shi/LIOT.
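The core idea of LIOT can be sketched in a few lines. The simplified version below (an illustration, not the paper's exact per-direction binary encoding) counts, for each pixel, how many of its k neighbors along each of the four directions are strictly darker; because only the intensity *order* matters, the output is invariant to any strictly increasing contrast change:

```python
def liot(gray, k=2):
    """Simplified local intensity order transform: per pixel, one channel per
    direction (right, left, down, up), each counting how many of the k
    neighbors in that direction are strictly darker than the pixel."""
    h, w = len(gray), len(gray[0])
    dirs = ((0, 1), (0, -1), (1, 0), (-1, 0))  # right, left, down, up
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            channels = []
            for dy, dx in dirs:
                count = 0
                for step in range(1, k + 1):
                    ny, nx = y + dy * step, x + dx * step
                    if 0 <= ny < h and 0 <= nx < w and gray[ny][nx] < gray[y][x]:
                        count += 1
                channels.append(count)
            row.append(tuple(channels))
        out.append(row)
    return out

gray = [[10, 30, 20],
        [40, 50, 5],
        [25, 35, 45]]
stretched = [[2 * v + 7 for v in row] for row in gray]  # monotone contrast change
```

Since `v -> 2*v + 7` preserves every pairwise intensity order, the transformed image produces exactly the same LIOT representation as the original.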
113. Li X, Ding J, Tang J, Guo F. Res2Unet: A multi-scale channel attention network for retinal vessel segmentation. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07086-8]
114. Xu J, Shen J, Wan C, Jiang Q, Yan Z, Yang W. A Few-Shot Learning-Based Retinal Vessel Segmentation Method for Assisting in the Central Serous Chorioretinopathy Laser Surgery. Front Med (Lausanne) 2022; 9:821565. [PMID: 35308538] [PMCID: PMC8927682] [DOI: 10.3389/fmed.2022.821565]
Abstract
BACKGROUND Locating the retinal vessels is an important prerequisite for Central Serous Chorioretinopathy (CSC) laser surgery: it not only assists the ophthalmologist in marking the location of the leakage point (LP) on the color fundus image, but also avoids laser-spot damage to vessel tissue and the loss of surgical efficiency caused by the absorption of laser energy by retinal vessels. To acquire good intra- and cross-domain adaptability, existing deep learning (DL)-based vessel segmentation schemes must be driven by big data, which makes the dense annotation work tedious and costly. METHODS This paper explores a vessel segmentation method that needs only a few samples and annotations to alleviate the above problems. Firstly, a key solution is presented to transform the vessel segmentation scene into a few-shot learning task, which lays the foundation for vessel segmentation with few samples and annotations. Then, we adapt an existing few-shot learning framework to the vessel segmentation scenario as our baseline model. Next, the baseline model is upgraded in three aspects: (1) a multi-scale class prototype extraction technique is designed to obtain more sufficient vessel features and better utilize the information in the support images; (2) the multi-scale vessel features of the query images, inferred with the support-image class prototype information, are gradually fused to provide more effective guidance for the vessel extraction task; and (3) a multi-scale attention module is proposed to promote the consideration of global information in the upgraded model and assist vessel localization. Concurrently, an integrated framework is further conceived to alleviate the low performance of a single model in the cross-domain vessel segmentation scene, boosting the domain adaptability of both the baseline and the upgraded models.
RESULTS Extensive experiments showed that the upgraded operations further improve vessel segmentation performance significantly. Compared with the listed methods, both the baseline and the upgraded models achieved competitive results on three public retinal image datasets (CHASE_DB, DRIVE, and STARE). In the practical application to private CSC datasets, the integrated scheme partially enhanced the domain adaptability of the two proposed models.
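The "class prototype extraction" step above is, at its simplest, masked average pooling over support-image features followed by nearest-prototype matching for query pixels. A toy sketch with hand-crafted 2-D features (the paper's multi-scale, attention-augmented version is much richer):

```python
def masked_average(features, mask):
    """Class prototype: mean of feature vectors at mask==1 positions
    (masked average pooling over a support image)."""
    vecs = [features[y][x]
            for y in range(len(mask)) for x in range(len(mask[0])) if mask[y][x]]
    dim = len(vecs[0])
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]

def nearest_prototype(feature, prototypes):
    """Assign a query feature vector to the closest class prototype
    (squared Euclidean distance)."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(prototypes)), key=lambda c: d2(feature, prototypes[c]))

# 2x2 support "feature map": vessel pixels near (1, 0), background near (0, 1)
features = [[(1.0, 0.0), (0.9, 0.1)],
            [(0.1, 0.9), (0.0, 1.0)]]
vessel_mask = [[1, 1], [0, 0]]
background_mask = [[0, 0], [1, 1]]
protos = [masked_average(features, background_mask),   # class 0: background
          masked_average(features, vessel_mask)]       # class 1: vessel
```

A query pixel is then labeled by whichever prototype its feature vector lies closest to, which is what makes the scheme work from a handful of annotated support images.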
Affiliation(s)
- Jianguo Xu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jianxin Shen
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Cheng Wan
- College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Qin Jiang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Zhipeng Yan
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Weihua Yang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
115. Huang J, Lin Z, Chen Y, Zhang X, Zhao W, Zhang J, Li Y, He X, Zhan M, Lu L, Jiang X, Peng Y. DBFU-Net: Double branch fusion U-Net with hard example weighting train strategy to segment retinal vessel. PeerJ Comput Sci 2022; 8:e871. [PMID: 35494791] [PMCID: PMC9044242] [DOI: 10.7717/peerj-cs.871]
Abstract
BACKGROUND Many fundus imaging modalities measure ocular changes. Automatic retinal vessel segmentation (RVS) is a significant fundus image-based method for the diagnosis of ophthalmologic diseases. However, precise vessel segmentation is a challenging task when detecting micro-changes in fundus images, e.g., tiny vessels, vessel edges, vessel lesions and optic disc edges. METHODS In this paper, we introduce a novel double branch fusion U-Net model in which one of the branches is trained with a weighting scheme that emphasizes harder examples, improving the overall segmentation performance. This weighting strategy requires a new mask, which we call the hard example mask, that differs from the masks used by other methods. Our method extracts the hard example mask by morphology, so no rough pre-segmentation model is needed. To alleviate overfitting, we propose a random channel attention (RCA) mechanism that works better than the drop-out or L2-regularization methods in RVS. RESULTS We have verified the proposed approach on the DRIVE, STARE and CHASE datasets to quantify the performance metrics. Compared to other existing approaches on those datasets, the proposed approach has competitive performance metrics (DRIVE: F1-Score = 0.8289, G-Mean = 0.8995, AUC = 0.9811; STARE: F1-Score = 0.8501, G-Mean = 0.9198, AUC = 0.9892; CHASE: F1-Score = 0.8375, G-Mean = 0.9138, AUC = 0.9879). DISCUSSION The segmentation results show that DBFU-Net with RCA achieves competitive performance on the three RVS datasets. Additionally, the proposed morphology-based extraction of hard examples reduces the computational cost. Finally, the random channel attention mechanism has proven more effective than other regularization methods in the RVS task.
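The morphology-based hard-example extraction can be illustrated with a morphological gradient: dilation minus erosion of the ground-truth vessel mask yields a thin band around vessel boundaries, exactly where segmentation errors concentrate. A pure-Python sketch (3x3 structuring element; an illustration of the idea, not the paper's exact recipe):

```python
def _neighborhood(mask, y, x):
    # Values in the in-bounds 3x3 neighborhood (including the center)
    h, w = len(mask), len(mask[0])
    return [mask[y + dy][x + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if 0 <= y + dy < h and 0 <= x + dx < w]

def dilate(mask):
    """Binary dilation with a 3x3 structuring element."""
    return [[1 if any(_neighborhood(mask, y, x)) else 0
             for x in range(len(mask[0]))] for y in range(len(mask))]

def erode(mask):
    """Binary erosion with a 3x3 structuring element."""
    return [[1 if all(_neighborhood(mask, y, x)) else 0
             for x in range(len(mask[0]))] for y in range(len(mask))]

def hard_example_mask(vessel_mask):
    """Morphological gradient (dilation minus erosion): a thin band around
    vessel boundaries, where segmentation errors concentrate."""
    d, e = dilate(vessel_mask), erode(vessel_mask)
    return [[d[y][x] - e[y][x] for x in range(len(vessel_mask[0]))]
            for y in range(len(vessel_mask))]

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
band = hard_example_mask(mask)
```

For the 3x3 blob above, only its deep interior pixel drops out of the band, so the extracted mask concentrates the loss weighting on the boundary region.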
Affiliation(s)
- Jianping Huang
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
| | - Zefang Lin
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
| | - Yingyin Chen
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
| | - Xiao Zhang
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
| | - Wei Zhao
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
| | - Jie Zhang
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Department of Nuclear Medicine, Zhuhai, China
| | - Yong Li
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
| | - Xu He
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
| | - Meixiao Zhan
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
| | - Ligong Lu
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
| | - Xiaofei Jiang
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Department of Cardiology, Zhuhai, China
| | - Yongjun Peng
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Department of Nuclear Medicine, Zhuhai, China
| |
|
116
|
Li X, Bala R, Monga V. Robust Deep 3D Blood Vessel Segmentation Using Structural Priors. IEEE Trans Image Process 2022; 31:1271-1284. [PMID: 34990361 DOI: 10.1109/tip.2021.3139241] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 06/14/2023]
Abstract
Deep learning has enabled significant improvements in the accuracy of 3D blood vessel segmentation. Open challenges remain in scenarios where labeled 3D segmentation maps for training are severely limited, as is often the case in practice, and in ensuring robustness to noise. Inspired by the observation that 3D vessel structures project onto 2D image slices with informative and unique edge profiles, we propose a novel deep 3D vessel segmentation network guided by edge profiles. Our network architecture comprises a shared encoder and two decoders that learn segmentation maps and edge profiles jointly. 3D context is mined in both the segmentation and edge prediction branches by employing bidirectional convolutional long short-term memory (BCLSTM) modules. 3D features from the two branches are concatenated to facilitate learning of the segmentation map. As a key contribution, we introduce new regularization terms that: a) capture the local homogeneity of 3D blood vessel volumes in the presence of biomarkers; and b) ensure robustness to domain-specific noise by suppressing false positive responses. Experiments on benchmark datasets with ground truth labels reveal that the proposed approach outperforms state-of-the-art techniques on standard measures such as Dice overlap and mean Intersection-over-Union. The performance gains of our method are even more pronounced when training data are limited. Furthermore, the inference cost of our network is among the lowest of the compared state-of-the-art methods.
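For reference, the Dice overlap and mean Intersection-over-Union used in such evaluations are standard; a minimal numpy version (not the authors' evaluation code) is:

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap: 2|P∩G| / (|P| + |G|), for boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def mean_iou(pred, gt):
    """Mean IoU over the two classes (vessel / background)."""
    ious = []
    for cls in (True, False):
        p, g = (pred == cls), (gt == cls)
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        ious.append(inter / union)
    return sum(ious) / 2.0
```

Both accept boolean arrays of any shape, so the same code applies to 2D slices and 3D volumes.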
|
117
|
Gao Z, Wang L, Soroushmehr R, Wood A, Gryak J, Nallamothu B, Najarian K. Vessel segmentation for X-ray coronary angiography using ensemble methods with deep learning and filter-based features. BMC Med Imaging 2022; 22:10. [PMID: 35045816 PMCID: PMC8767756 DOI: 10.1186/s12880-022-00734-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 06/15/2021] [Accepted: 01/04/2022] [Indexed: 11/25/2022] Open
Abstract
BACKGROUND Automated segmentation of coronary arteries is a crucial step for computer-aided coronary artery disease (CAD) diagnosis and treatment planning. Correct delineation of the coronary artery is challenging in X-ray coronary angiography (XCA) due to the low signal-to-noise ratio and confounding background structures. METHODS A novel ensemble framework for coronary artery segmentation in XCA images is proposed, which utilizes deep learning and filter-based features to construct models using the gradient boosting decision tree (GBDT) and deep forest classifiers. The proposed method was trained and tested on 130 XCA images. For each pixel of interest in the XCA images, a 37-dimensional feature vector was constructed based on (1) the statistics of multi-scale filtering responses in the morphological, spatial, and frequency domains; and (2) the feature maps obtained from trained deep neural networks. The performance of these models was compared with those of common deep neural networks on metrics including precision, sensitivity, specificity, F1 score, AUROC (the area under the receiver operating characteristic curve), and IoU (intersection over union). RESULTS With hybrid under-sampling methods, the best performing GBDT model achieved a mean F1 score of 0.874, AUROC of 0.947, sensitivity of 0.902, and specificity of 0.992; while the best performing deep forest model obtained a mean F1 score of 0.867, AUROC of 0.95, sensitivity of 0.867, and specificity of 0.993. Compared with the evaluated deep neural networks, both models had better or comparable performance for all evaluated metrics with lower standard deviations over the test images. CONCLUSIONS The proposed feature-based ensemble method outperformed common deep convolutional neural networks in most performance metrics while yielding more consistent results. Such a method can be used to facilitate the assessment of stenosis and improve the quality of care in patients with CAD.
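As an illustration only of the general recipe (per-pixel multi-scale filter responses fed to a classifier such as a GBDT), here is a toy feature constructor built from integral-image box filters; the paper's actual 37-D vector combines morphological, spatial and frequency-domain statistics with deep feature maps, and the names and radii below are our own:

```python
import numpy as np

def box_mean(img, r):
    """Local mean over a (2r+1)^2 window via an integral image (edge-padded)."""
    H, W = img.shape
    pad = np.pad(img.astype(float), r, mode="edge")
    ii = np.zeros((H + 2 * r + 1, W + 2 * r + 1))
    ii[1:, 1:] = pad.cumsum(0).cumsum(1)
    a = 2 * r + 1
    s = ii[a:, a:] - ii[:-a, a:] - ii[a:, :-a] + ii[:-a, :-a]
    return s / a**2

def pixel_features(img, radii=(1, 2, 4)):
    """Per-pixel feature stack: raw intensity plus multi-scale local brightness
    and local contrast. A hypothetical stand-in for the paper's 37-D vector."""
    feats = [img.astype(float)]
    for r in radii:
        m = box_mean(img, r)
        feats.append(m)         # local brightness at scale r
        feats.append(img - m)   # local contrast at scale r
    return np.stack(feats, axis=-1)  # shape (H, W, 1 + 2 * len(radii))
```

The resulting (H, W, D) array can be reshaped to (H*W, D) and passed to any off-the-shelf classifier trained on labeled vessel/background pixels.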
Affiliation(s)
- Zijun Gao
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, USA.
| | - Lu Wang
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, USA
| | - Reza Soroushmehr
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, USA
- Michigan Institute for Data Science (MIDAS), University of Michigan, Ann Arbor, USA
- Michigan Center for Integrative Research in Critical Care (MCIRCC), University of Michigan, Ann Arbor, USA
| | - Alexander Wood
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, USA
| | - Jonathan Gryak
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, USA
- Michigan Institute for Data Science (MIDAS), University of Michigan, Ann Arbor, USA
| | - Brahmajee Nallamothu
- Department of Internal Medicine, University of Michigan, Ann Arbor, USA
- Division of Cardiovascular Diseases, University of Michigan, Ann Arbor, USA
| | - Kayvan Najarian
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, USA
- Michigan Institute for Data Science (MIDAS), University of Michigan, Ann Arbor, USA
- Department of Emergency Medicine, University of Michigan, Ann Arbor, USA
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, USA
- Michigan Center for Integrative Research in Critical Care (MCIRCC), University of Michigan, Ann Arbor, USA
| |
|
118
|
Wan C, Zhou X, You Q, Sun J, Shen J, Zhu S, Jiang Q, Yang W. Retinal Image Enhancement Using Cycle-Constraint Adversarial Network. Front Med (Lausanne) 2022; 8:793726. [PMID: 35096883 PMCID: PMC8789669 DOI: 10.3389/fmed.2021.793726] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 10/12/2021] [Accepted: 12/14/2021] [Indexed: 11/25/2022] Open
Abstract
Retinal images are the most intuitive medical images for the diagnosis of fundus diseases. Low-quality retinal images cause difficulties both for computer-aided diagnosis systems and for the clinical diagnosis of ophthalmologists; high-quality retinal images are therefore an important basis for precision medicine in ophthalmology. In this study, we propose a deep learning-based retinal image enhancement method that handles multiple types of low-quality retinal images. A generative adversarial network is employed to build a symmetrical network, and a convolutional block attention module is introduced to improve the feature extraction capability. The retinal images in our dataset are sorted into two sets according to their quality: low and high. Generators and discriminators alternately learn the features of low- and high-quality retinal images without the need for paired images. We analyze the proposed method both qualitatively and quantitatively on public datasets and a private dataset. The results demonstrate that the proposed method is superior to other advanced algorithms, especially in enhancing color-distorted retinal images, and it also performs well in the task of retinal vessel segmentation. The proposed network effectively enhances low-quality retinal images, aiding ophthalmologists and enabling computer-aided diagnosis in pathological analysis.
Affiliation(s)
- Cheng Wan
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
| | - Xueting Zhou
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
| | - Qijing You
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
| | - Jing Sun
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
| | - Jianxin Shen
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
| | - Shaojun Zhu
- School of Information Engineering, Huzhou University, Huzhou, China
| | - Qin Jiang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Weihua Yang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| |
|
119
|
Zekavat SM, Raghu VK, Trinder M, Ye Y, Koyama S, Honigberg MC, Yu Z, Pampana A, Urbut S, Haidermota S, O’Regan DP, Zhao H, Ellinor PT, Segrè AV, Elze T, Wiggs JL, Martone J, Adelman RA, Zebardast N, Del Priore L, Wang JC, Natarajan P. Deep Learning of the Retina Enables Phenome- and Genome-Wide Analyses of the Microvasculature. Circulation 2022; 145:134-150. [PMID: 34743558 PMCID: PMC8746912 DOI: 10.1161/circulationaha.121.057709] [Citation(s) in RCA: 76] [Impact Index Per Article: 25.3] [Received: 09/27/2021] [Accepted: 11/03/2021] [Indexed: 12/15/2022]
Abstract
BACKGROUND The microvasculature, the smallest blood vessels in the body, has key roles in maintenance of organ health and tumorigenesis. The retinal fundus is a window for human in vivo noninvasive assessment of the microvasculature. Large-scale complementary machine learning-based assessment of the retinal vasculature with phenome-wide and genome-wide analyses may yield new insights into human health and disease. METHODS We used 97 895 retinal fundus images from 54 813 UK Biobank participants. Using convolutional neural networks to segment the retinal microvasculature, we calculated vascular density and fractal dimension as a measure of vascular branching complexity. We associated these indices with 1866 incident International Classification of Diseases-based conditions (median 10-year follow-up) and 88 quantitative traits, adjusting for age, sex, smoking status, and ethnicity. RESULTS Low retinal vascular fractal dimension and density were significantly associated with higher risks for incident mortality, hypertension, congestive heart failure, renal failure, type 2 diabetes, sleep apnea, anemia, and multiple ocular conditions, as well as corresponding quantitative traits. Genome-wide association of vascular fractal dimension and density identified 7 and 13 novel loci, respectively, that were enriched for pathways linked to angiogenesis (eg, vascular endothelial growth factor, platelet-derived growth factor receptor, angiopoietin, and WNT signaling pathways) and inflammation (eg, interleukin, cytokine signaling). CONCLUSIONS Our results indicate that the retinal vasculature may serve as a biomarker for future cardiometabolic and ocular disease and provide insights into genes and biological pathways influencing microvascular indices. Moreover, such a framework highlights how deep learning of images can quantify an interpretable phenotype for integration with electronic health record, biomarker, and genetic data to inform risk prediction and risk modification.
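For concreteness, both retinal indices named in the abstract can be estimated from a binary vessel segmentation with a few lines of numpy; this generic box-counting sketch is not the study's pipeline:

```python
import numpy as np

def vascular_density(mask):
    """Fraction of pixels labeled as vessel."""
    return np.asarray(mask, dtype=bool).mean()

def fractal_dimension(mask, sizes=(1, 2, 4, 8)):
    """Box-counting estimate of fractal dimension: the slope of
    log N(s) versus log(1/s), where N(s) counts occupied s-by-s boxes."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    x = np.log(1.0 / np.asarray(sizes, dtype=float))
    y = np.log(np.asarray(counts, dtype=float))
    return np.polyfit(x, y, 1)[0]
```

A filled region scores close to 2, a straight vessel close to 1; real vascular trees fall in between, and lower values correspond to the reduced branching complexity the study associates with disease.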
Affiliation(s)
- Seyedeh Maryam Zekavat
- Department of Ophthalmology and Visual Science, Yale School of Medicine, New Haven, CT (S.M.Z., J.M., R.A.A., L.D.P., J.C.W.)
- Computational Biology & Bioinformatics Program (S.M.Z., Y.Y., H.Z.), Yale University, New Haven, CT
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
| | - Vineet K. Raghu
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Cardiovascular Research Center (S.M.Z., V.K.R., M.C.H., S.U., S.H., P.T.E., P.N.), Massachusetts General Hospital, Harvard Medical School, Boston
- Cardiovascular Imaging Research Center (V.K.R.), Massachusetts General Hospital, Harvard Medical School, Boston
| | - Mark Trinder
- Centre for Heart Lung Innovation, University of British Columbia, Vancouver, Canada (M.T.)
| | - Yixuan Ye
- Computational Biology & Bioinformatics Program (S.M.Z., Y.Y., H.Z.), Yale University, New Haven, CT
| | - Satoshi Koyama
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
| | - Michael C. Honigberg
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Cardiovascular Research Center (S.M.Z., V.K.R., M.C.H., S.U., S.H., P.T.E., P.N.), Massachusetts General Hospital, Harvard Medical School, Boston
| | - Zhi Yu
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
| | - Akhil Pampana
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
| | - Sarah Urbut
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Cardiovascular Research Center (S.M.Z., V.K.R., M.C.H., S.U., S.H., P.T.E., P.N.), Massachusetts General Hospital, Harvard Medical School, Boston
| | - Sara Haidermota
- Cardiovascular Research Center (S.M.Z., V.K.R., M.C.H., S.U., S.H., P.T.E., P.N.), Massachusetts General Hospital, Harvard Medical School, Boston
| | - Declan P. O’Regan
- MRC London Institute of Medical Sciences, Imperial College London, UK (D.P.O.)
| | - Hongyu Zhao
- Computational Biology & Bioinformatics Program (S.M.Z., Y.Y., H.Z.), Yale University, New Haven, CT
- School of Public Health (H.Z.), Yale University, New Haven, CT
| | - Patrick T. Ellinor
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Cardiovascular Research Center (S.M.Z., V.K.R., M.C.H., S.U., S.H., P.T.E., P.N.), Massachusetts General Hospital, Harvard Medical School, Boston
| | - Ayellet V. Segrè
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston (A.V.S., T.E., J.L.W., N.Z.)
| | - Tobias Elze
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston (A.V.S., T.E., J.L.W., N.Z.)
| | - Janey L. Wiggs
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston (A.V.S., T.E., J.L.W., N.Z.)
| | - James Martone
- Department of Ophthalmology and Visual Science, Yale School of Medicine, New Haven, CT (S.M.Z., J.M., R.A.A., L.D.P., J.C.W.)
| | - Ron A. Adelman
- Department of Ophthalmology and Visual Science, Yale School of Medicine, New Haven, CT (S.M.Z., J.M., R.A.A., L.D.P., J.C.W.)
| | - Nazlee Zebardast
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston (A.V.S., T.E., J.L.W., N.Z.)
| | - Lucian Del Priore
- Department of Ophthalmology and Visual Science, Yale School of Medicine, New Haven, CT (S.M.Z., J.M., R.A.A., L.D.P., J.C.W.)
| | - Jay C. Wang
- Department of Ophthalmology and Visual Science, Yale School of Medicine, New Haven, CT (S.M.Z., J.M., R.A.A., L.D.P., J.C.W.)
| | - Pradeep Natarajan
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Cardiovascular Research Center (S.M.Z., V.K.R., M.C.H., S.U., S.H., P.T.E., P.N.), Massachusetts General Hospital, Harvard Medical School, Boston
| |
|
120
|
MSC-Net: Multitask Learning Network for Retinal Vessel Segmentation and Centerline Extraction. Appl Sci (Basel) 2021. [DOI: 10.3390/app12010403] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Indexed: 11/17/2022]
Abstract
Automatic segmentation and centerline extraction of blood vessels from retinal fundus images are essential steps for measuring the state of retinal blood vessels and achieving the goal of auxiliary diagnosis. Combining the information of vessel segments and the centerline can help improve the continuity of the results and the performance. However, previous studies have usually treated these two tasks as separate research topics. Therefore, we propose a novel multitask learning network (MSC-Net) for retinal vessel segmentation and centerline extraction. The network uses a multibranch design to combine information between the two tasks. A channel and atrous spatial fusion block (CAS-FB) is designed to fuse and correct the features of different branches and different scales. The clDice loss function is also used to constrain the topological continuity of the vessel segments and the centerline. Experimental results on different fundus blood vessel datasets (DRIVE, STARE, and CHASE) show that our method obtains better segmentation and centerline extraction results at different scales and has better topological continuity than state-of-the-art methods.
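The clDice measure mentioned above scores topological agreement via centerline skeletons. A minimal sketch of the metric given precomputed skeletons (the training loss is typically 1 - clDice computed on soft skeletons, which this sketch does not implement):

```python
import numpy as np

def cl_dice(pred, gt, skel_pred, skel_gt):
    """clDice from binary masks and precomputed centerline skeletons:
    harmonic mean of topology precision and topology sensitivity."""
    tprec = (skel_pred & gt).sum() / max(skel_pred.sum(), 1)  # skeleton of pred inside gt
    tsens = (skel_gt & pred).sum() / max(skel_gt.sum(), 1)    # skeleton of gt inside pred
    if tprec + tsens == 0:
        return 0.0
    return 2 * tprec * tsens / (tprec + tsens)
```

Unlike plain Dice, a prediction that breaks a vessel into disconnected pieces is penalized even when the overlapping area is large, which is why the metric is used to enforce continuity.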
|
121
|
Arsalan M, Haider A, Choi J, Park KR. Diabetic and Hypertensive Retinopathy Screening in Fundus Images Using Artificially Intelligent Shallow Architectures. J Pers Med 2021; 12:jpm12010007. [PMID: 35055322 PMCID: PMC8777982 DOI: 10.3390/jpm12010007] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 12/01/2021] [Revised: 12/20/2021] [Accepted: 12/20/2021] [Indexed: 12/25/2022] Open
Abstract
Retinal blood vessels are considered valuable biomarkers for the detection of diabetic retinopathy, hypertensive retinopathy, and other retinal disorders. Ophthalmologists analyze the retinal vasculature by manual segmentation, which is a tedious task. Numerous studies have focused on automatic retinal vasculature segmentation using different methods for ophthalmic disease analysis. However, most of these methods are computationally expensive and lack robustness. This paper proposes two new shallow deep learning architectures, the dual-stream fusion network (DSF-Net) and the dual-stream aggregation network (DSA-Net), to accurately detect the retinal vasculature. The proposed method uses semantic segmentation of raw color fundus images for the screening of diabetic and hypertensive retinopathies. Its performance is assessed using three publicly available fundus image datasets: Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE), and Children Heart Health Study in England Database (CHASE-DB1). The experimental results revealed that the proposed method provided superior segmentation performance, with accuracy (Acc), sensitivity (SE), specificity (SP), and area under the curve (AUC) of 96.93%, 82.68%, 98.30%, and 98.42% for DRIVE; 97.25%, 82.22%, 98.38%, and 98.15% for CHASE-DB1; and 97.00%, 86.07%, 98.00%, and 98.65% for STARE, respectively. The results also show that the proposed DSA-Net provides higher SE than existing approaches, meaning that it detects the minor vessels and yields the fewest false negatives, which is extremely important for diagnosis. The proposed method provides an automatic and accurate segmentation mask that can be used to highlight the vessel pixels. This detected vasculature can be utilized to compute the ratio between vessel and non-vessel pixels and to distinguish between diabetic and hypertensive retinopathies, and its morphology can be analyzed for related retinal disorders.
|
122
|
Zhang J, Zhang Y, Qiu H, Xie W, Yao Z, Yuan H, Jia Q, Wang T, Shi Y, Huang M, Zhuang J, Xu X. Pyramid-Net: Intra-layer Pyramid-Scale Feature Aggregation Network for Retinal Vessel Segmentation. Front Med (Lausanne) 2021; 8:761050. [PMID: 34950679 PMCID: PMC8688400 DOI: 10.3389/fmed.2021.761050] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/19/2021] [Accepted: 11/05/2021] [Indexed: 11/18/2022] Open
Abstract
Retinal vessel segmentation plays an important role in the diagnosis of eye-related diseases and biomarkers discovery. Existing works perform multi-scale feature aggregation in an inter-layer manner, namely inter-layer feature aggregation. However, such an approach only fuses features at either a lower scale or a higher scale, which may result in a limited segmentation performance, especially on thin vessels. This discovery motivates us to fuse multi-scale features in each layer, intra-layer feature aggregation, to mitigate the problem. Therefore, in this paper, we propose Pyramid-Net for accurate retinal vessel segmentation, which features intra-layer pyramid-scale aggregation blocks (IPABs). At each layer, IPABs generate two associated branches at a higher scale and a lower scale, respectively, and the two with the main branch at the current scale operate in a pyramid-scale manner. Three further enhancements including pyramid inputs enhancement, deep pyramid supervision, and pyramid skip connections are proposed to boost the performance. We have evaluated Pyramid-Net on three public retinal fundus photography datasets (DRIVE, STARE, and CHASE-DB1). The experimental results show that Pyramid-Net can effectively improve the segmentation performance especially on thin vessels, and outperforms the current state-of-the-art methods on all the adopted three datasets. In addition, our method is more efficient than existing methods with a large reduction in computational cost. We have released the source code at https://github.com/JerRuy/Pyramid-Net.
Affiliation(s)
- Jiawei Zhang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Shanghai key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China
| | - Yanchun Zhang
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, China
- College of Engineering and Science, Victoria University, Melbourne, VIC, Australia
| | - Hailong Qiu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Wen Xie
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Zeyang Yao
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Haiyun Yuan
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Qianjun Jia
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Tianchen Wang
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
| | - Yiyu Shi
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
| | - Meiping Huang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Jian Zhuang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Xiaowei Xu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| |
|
123
|
Fundus Image Registration Technique Based on Local Feature of Retinal Vessels. Appl Sci (Basel) 2021. [DOI: 10.3390/app112311201] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Indexed: 11/16/2022]
Abstract
Feature-based retinal fundus image registration (RIR) techniques align fundus images according to geometric transformations estimated between feature point correspondences. To ensure accurate registration, the extracted feature points must lie on the retinal vessels and be spread throughout the image. However, noise in the fundus image may resemble retinal vessels in local patches. Therefore, this paper introduces a feature extraction method based on a local feature of retinal vessels (CURVE) that incorporates the characteristics of retinal vessels and noise to accurately extract feature points on retinal vessels and throughout the fundus image. CURVE's performance is tested on the CHASE, DRIVE, HRF and STARE datasets and compared with six feature extraction methods used in existing feature-based RIR techniques. In the experiments, the feature extraction accuracy of CURVE (86.021%) significantly outperformed the existing feature extraction methods (p ≤ 0.001*). Then, CURVE is paired with a scale-invariant feature transform (SIFT) descriptor to test its registration capability on the fundus image registration (FIRE) dataset. Overall, CURVE-SIFT successfully registered 44.030% of the image pairs, while the existing feature-based RIR techniques (GDB-ICP, Harris-PIIFD, Ghassabi’s-SIFT, H-M 16, H-M 17 and D-Saddle-HOG) registered less than 27.612% of the image pairs. One-way ANOVA showed that CURVE-SIFT significantly outperformed GDB-ICP (p = 0.007*) as well as Harris-PIIFD, Ghassabi’s-SIFT, H-M 16, H-M 17 and D-Saddle-HOG (p ≤ 0.001*).
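In such feature-based RIR pipelines, the geometric transformation is estimated from the matched point pairs. A generic least-squares affine fit sketches the idea; the techniques above use more elaborate and robust estimators (e.g., RANSAC-style schemes), so treat this as illustrative only:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst correspondences.
    src, dst: (N, 2) arrays with N >= 3 non-collinear points."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src; A[0::2, 2] = 1.0   # rows producing x'
    A[1::2, 3:5] = src; A[1::2, 5] = 1.0   # rows producing y'
    b = dst.ravel()
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[p[0], p[1], p[2]],
                     [p[3], p[4], p[5]]])

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to (N, 2) points."""
    return pts @ M[:, :2].T + M[:, 2]
```

With noise-free correspondences the fit recovers the transform exactly; with outliers (the mismatches caused by vessel-like noise that CURVE is designed to avoid) a robust wrapper around this solver is required.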
|
124
|
Kovács G, Fazekas A. A new baseline for retinal vessel segmentation: Numerical identification and correction of methodological inconsistencies affecting 100+ papers. Med Image Anal 2021; 75:102300. [PMID: 34814057 DOI: 10.1016/j.media.2021.102300] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 03/21/2021] [Revised: 09/20/2021] [Accepted: 11/04/2021] [Indexed: 12/18/2022]
Abstract
In the last 15 years, the segmentation of vessels in retinal images has become an intensively researched problem in medical imaging, with hundreds of algorithms published. One of the de facto benchmarking data sets of vessel segmentation techniques is the DRIVE data set. Since DRIVE contains a predefined split of training and test images, the published performance results of the various segmentation techniques should provide a reliable ranking of the algorithms. Including more than 100 papers in the study, we performed a detailed numerical analysis of the coherence of the published performance scores. We found inconsistencies in the reported scores related to the use of the field of view (FoV), which has a significant impact on the performance scores. We attempted to eliminate the biases using numerical techniques to provide a more realistic picture of the state of the art. Based on the results, we have formulated several findings, most notably: despite the well-defined test set of DRIVE, most rankings in published papers are based on non-comparable figures; in contrast to the near-perfect accuracy scores reported in the literature, the highest accuracy score achieved to date is 0.9582 in the FoV region, which is 1% higher than that of human annotators. The methods we have developed for identifying and eliminating the evaluation biases can be easily applied to other domains where similar problems may arise.
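The FoV effect the authors quantify is easy to reproduce: pixels outside the field of view are trivially correct background, so scoring them inflates accuracy. A small synthetic illustration (our own numbers, not the paper's):

```python
import numpy as np

def accuracy(pred, gt, region=None):
    """Pixel accuracy, optionally restricted to a region mask (e.g. the FoV)."""
    if region is not None:
        pred, gt = pred[region], gt[region]
    return (pred == gt).mean()

# Synthetic 10x10 image: the FoV is a central 6x6 window.
gt = np.zeros((10, 10), dtype=bool)
pred = np.zeros((10, 10), dtype=bool)
fov = np.zeros((10, 10), dtype=bool)
fov[2:8, 2:8] = True
gt[3:6, 3:6] = True   # 9 vessel pixels, all inside the FoV

# A predictor that misses every vessel pixel still looks accurate overall,
# because the 64 pixels outside the FoV are counted as correct background.
acc_full = accuracy(pred, gt)       # 91/100 = 0.91
acc_fov = accuracy(pred, gt, fov)   # 27/36  = 0.75
```

Whether a paper reports the first or the second number changes its ranking, which is exactly the inconsistency this study identifies across the DRIVE literature.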
Affiliation(s)
- György Kovács
- Analytical Minds Ltd., Árpád street 5, Beregsurány 4933, Hungary.
| | - Attila Fazekas
- University of Debrecen, Faculty of Informatics, P.O.BOX 400, Debrecen 4002, Hungary.
| |
|
125
|
Coronado I, Abdelkhaleq R, Yan J, Marioni SS, Jagolino-Cole A, Channa R, Pachade S, Sheth SA, Giancardo L. Towards Stroke Biomarkers on Fundus Retinal Imaging: A Comparison Between Vasculature Embeddings and General Purpose Convolutional Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3873-3876. [PMID: 34892078 PMCID: PMC8981508 DOI: 10.1109/embc46164.2021.9629856] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Indexed: 06/14/2023]
Abstract
Fundus retinal imaging is an easy-to-acquire modality typically used for monitoring eye health. Current evidence indicates that the retina, and its vasculature in particular, is associated with other disease processes, making it an ideal candidate for biomarker discovery. The development of these biomarkers has typically relied on predefined measurements, which makes the development process slow. Recently, representation learning algorithms such as general-purpose convolutional neural networks or vasculature embeddings have been proposed as an approach to learn imaging biomarkers directly from the data, greatly speeding up their discovery. In this work, we compare and contrast different state-of-the-art retina biomarker discovery methods to identify signs of past stroke in the retinas of a curated patient cohort of 2,472 subjects from the UK Biobank dataset. We investigate two convolutional neural networks previously used in retina biomarker discovery and directly trained on the stroke outcome, and an extension of the vasculature embedding approach which infers its feature representation from the vasculature and combines the information of retinal images from both eyes. In our experiments, we show that the pipeline based on vasculature embeddings has comparable or better performance than the other methods, with a much more compact feature representation and ease of training. Clinical Relevance: This study compares and contrasts three retinal biomarker discovery strategies, using a curated dataset of subject evidence, for the analysis of the retina as a proxy in the assessment of clinical outcomes, such as stroke risk.
Collapse
|
126
|
Zhao C, Basu A. Pixel Distribution Learning for Vessel Segmentation under Multiple Scales. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2717-2721. [PMID: 34891812 DOI: 10.1109/embc46164.2021.9629614] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
In this work, we ask whether there is a better way to classify two distributions than using histograms, and whether a deep learning network can be made to learn and classify distributions automatically. Improvements here can have wide-ranging applications in computer vision and medical image processing. More specifically, we propose a new vessel segmentation method based on pixel distribution learning under multiple scales. In particular, a spatial distribution descriptor named Random Permutation of Spatial Pixels (RPoSP) is derived from vessel images and used as the input to a convolutional neural network for distribution learning. Based on our preliminary experiments, we currently believe that a wide network, rather than a deep one, is better suited to distribution learning. Our network has only one convolutional layer, one rectified linear layer, and one fully connected layer followed by a softmax loss. Furthermore, to improve the accuracy of the proposed approach, the RPoSP features are captured at multiple scales and combined to form the input of the network. Evaluations on standard benchmark datasets demonstrate that the proposed approach achieves promising results compared to the state of the art.
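The RPoSP descriptor can be sketched as follows. The abstract does not spell out the exact construction, so the patch sizes, the zero-padding at the image border, and the per-pixel multi-scale sampling below are illustrative assumptions rather than the paper's definition; the key idea shown is that randomly permuting a patch's pixels keeps its intensity distribution while discarding spatial layout.

```python
import numpy as np

def rposp_descriptor(patch, rng=None):
    """Randomly permute the spatial positions of the pixels in a patch.

    The permuted vector keeps the patch's intensity distribution but
    discards its spatial arrangement, so a classifier fed with it can
    only learn from the distribution itself.
    """
    rng = np.random.default_rng() if rng is None else rng
    flat = patch.reshape(-1).astype(np.float32)
    return rng.permutation(flat)

def multiscale_rposp(image, center, scales=(5, 9, 13), rng=None):
    """Concatenate RPoSP descriptors of patches captured at several
    scales around one pixel, zero-padding the image borders.
    The scale set (5, 9, 13) is an illustrative choice."""
    rng = np.random.default_rng(0) if rng is None else rng
    r, c = center
    pad = max(scales) // 2
    padded = np.pad(image, pad)
    feats = []
    for s in scales:
        h = s // 2
        patch = padded[pad + r - h: pad + r + h + 1,
                       pad + c - h: pad + c + h + 1]
        feats.append(rposp_descriptor(patch, rng))
    return np.concatenate(feats)
```

Each per-pixel feature vector would then be fed to the wide, shallow network described in the abstract.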
Collapse
|
127
|
Owler J, Rockett P. Influence of background preprocessing on the performance of deep learning retinal vessel detection. J Med Imaging (Bellingham) 2021; 8:064001. [PMID: 34746333 PMCID: PMC8562352 DOI: 10.1117/1.jmi.8.6.064001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Accepted: 10/18/2021] [Indexed: 11/14/2022] Open
Abstract
Purpose: Segmentation of the vessel tree from retinal fundus images can be used to track changes in the retina and is an important first step in a diagnosis. Manual segmentation is a time-consuming process that is prone to error; effective and reliable automation can alleviate these problems, but one of the difficulties is uneven image background, which may affect segmentation performance. Approach: We present a patch-based deep learning framework, based on a modified U-Net architecture, that automatically segments the retinal blood vessels from fundus images. In particular, we evaluate how various pre-processing options (no processing, N4 bias field correction, contrast limited adaptive histogram equalization (CLAHE), or a combination of N4 and CLAHE) compensate for uneven image background and affect final segmentation performance. Results: We achieved competitive results on three publicly available datasets as a benchmark for our comparisons of pre-processing techniques. In addition, we introduce Bayesian statistical testing, which indicates little practical difference (Pr > 0.99) between pre-processing methods apart from the sensitivity metric. In terms of sensitivity, the combination of N4 correction and CLAHE performs better than no processing and N4 alone (Pr > 0.87), but compared to CLAHE alone the differences are not significant (Pr ≈ 0.38 to 0.88). Conclusions: We conclude that deep learning is an effective method for retinal vessel segmentation and that CLAHE pre-processing has the greatest positive impact on segmentation performance, with N4 correction helping only in images with extremely inhomogeneous background illumination.
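CLAHE, the pre-processing step this study finds most beneficial, can be sketched in a few lines. This is a minimal tile-based version: it clips each tile's histogram and redistributes the excess, but omits the bilinear interpolation between neighbouring tile mappings that full implementations (e.g. OpenCV's `createCLAHE`) apply, so tile seams may be visible. The tile size and clip limit are illustrative values, not the paper's settings.

```python
import numpy as np

def clahe_tiles(img, tile=8, clip=0.02, nbins=256):
    """Minimal CLAHE sketch: per-tile histogram equalization with
    histogram clipping and uniform redistribution of the clipped mass.
    img: 2-D float array with values in [0, 1]."""
    out = np.empty_like(img, dtype=np.float64)
    h, w = img.shape
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            block = img[y0:y0 + tile, x0:x0 + tile]
            hist, _ = np.histogram(block, bins=nbins, range=(0.0, 1.0))
            hist = hist.astype(np.float64) / block.size
            # clip the histogram and spread the excess uniformly,
            # which limits the contrast amplification per tile
            excess = np.maximum(hist - clip, 0.0)
            hist = np.minimum(hist, clip) + excess.sum() / nbins
            cdf = np.cumsum(hist)
            idx = np.clip((block * (nbins - 1)).astype(int), 0, nbins - 1)
            out[y0:y0 + tile, x0:x0 + tile] = cdf[idx]
    return out
```

In a fundus pipeline this would typically be applied to the green channel or to the luminance of the colour image before patch extraction.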
Collapse
Affiliation(s)
- James Owler
- University of Sheffield, Bioengineering—Interdisciplinary Programmes Engineering, United Kingdom
| | - Peter Rockett
- University of Sheffield, Department of Electronic and Electrical Engineering, Sheffield, United Kingdom
| |
Collapse
|
128
|
Zou B, Dai Y, He Q, Zhu C, Liu G, Su Y, Tang R. Multi-Label Classification Scheme Based on Local Regression for Retinal Vessel Segmentation. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:2586-2597. [PMID: 32175869 DOI: 10.1109/tcbb.2020.2980233] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Segmenting small retinal vessels with a width of less than 2 pixels in fundus images is a challenging task. In this paper, to effectively segment the vessels, especially the narrow parts, we propose a local regression scheme to enhance the narrow parts, along with a novel multi-label classification method based on this scheme. We consider five labels for blood vessels and background: the center of big vessels, the edge of big vessels, the center and edge of small vessels, the center of the background, and the edge of the background. We first determine the multi-label ground truth with a local de-regression model according to the vessel pattern in the ground-truth images. Then, we train a convolutional neural network (CNN) for multi-label classification. Next, we apply a local regression method to transform the multi-label output into a binary label, to better locate small vessels and generate a complete retinal vessel image. Our method is evaluated on two publicly available datasets and compared with several state-of-the-art studies. The experimental results demonstrate the effectiveness of our method in segmenting retinal vessels.
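The five-label scheme can be illustrated with a toy label-derivation routine over a binary vessel map. The paper derives its labels with a local de-regression model, so the erosion-count thickness proxy used here for the big/small split is purely an assumption for illustration; it mislabels some corner pixels of wide vessels, which the paper's scheme presumably handles.

```python
import numpy as np

def _shift_or(mask):
    """True where a pixel or any of its 4-neighbours is True."""
    d = mask.copy()
    d[1:, :] |= mask[:-1, :]
    d[:-1, :] |= mask[1:, :]
    d[:, 1:] |= mask[:, :-1]
    d[:, :-1] |= mask[:, 1:]
    return d

def five_labels(vessel):
    """Assign the paper's five classes from a binary vessel map:
    0 center of big vessels, 1 edge of big vessels,
    2 small vessels (center and edge), 3 center of background,
    4 edge of background."""
    vessel = vessel.astype(bool)
    bg = ~vessel
    # thickness proxy: number of 4-connected erosions a pixel survives
    depth = np.zeros(vessel.shape, dtype=int)
    cur = vessel.copy()
    for _ in range(max(vessel.shape)):
        if not cur.any():
            break
        depth += cur
        cur = cur & ~_shift_or(~cur)      # one erosion step
    center_big = depth >= 2
    edge_big = (depth == 1) & _shift_or(center_big)
    small = vessel & ~center_big & ~edge_big
    bg_edge = bg & _shift_or(vessel)
    labels = np.full(vessel.shape, 3, dtype=int)  # center of background
    labels[bg_edge] = 4
    labels[small] = 2
    labels[edge_big] = 1
    labels[center_big] = 0
    return labels
```

A one-pixel-wide line comes out entirely as class 2, while a three-pixel-wide bar splits into a class-0 centerline with class-1 edges, matching the intent of the labelling.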
Collapse
|
129
|
Ding L, Kuriyan AE, Ramchandran RS, Wykoff CC, Sharma G. Weakly-Supervised Vessel Detection in Ultra-Widefield Fundus Photography via Iterative Multi-Modal Registration and Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2748-2758. [PMID: 32991281 PMCID: PMC8513803 DOI: 10.1109/tmi.2020.3027665] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
We propose a deep-learning based annotation-efficient framework for vessel detection in ultra-widefield (UWF) fundus photography (FP) that does not require de novo labeled UWF FP vessel maps. Our approach utilizes concurrently captured UWF fluorescein angiography (FA) images, for which effective deep learning approaches have recently become available, and iterates between a multi-modal registration step and a weakly-supervised learning step. In the registration step, the UWF FA vessel maps detected with a pre-trained deep neural network (DNN) are registered with the UWF FP via parametric chamfer alignment. The warped vessel maps can be used as the tentative training data but inevitably contain incorrect (noisy) labels due to the differences between FA and FP modalities and the errors in the registration. In the learning step, a robust learning method is proposed to train DNNs with noisy labels. The detected FP vessel maps are used for the registration in the following iteration. The registration and the vessel detection benefit from each other and are progressively improved. Once trained, the UWF FP vessel detection DNN from the proposed approach allows FP vessel detection without requiring concurrently captured UWF FA images. We validate the proposed framework on a new UWF FP dataset, PRIME-FP20, and on existing narrow-field FP datasets. Experimental evaluation, using both pixel-wise metrics and the CAL metrics designed to provide better agreement with human assessment, shows that the proposed approach provides accurate vessel detection, without requiring manually labeled UWF FP training data.
Collapse
|
130
|
Ding J, Zhang Z, Tang J, Guo F. A Multichannel Deep Neural Network for Retina Vessel Segmentation via a Fusion Mechanism. Front Bioeng Biotechnol 2021; 9:697915. [PMID: 34490220 PMCID: PMC8417313 DOI: 10.3389/fbioe.2021.697915] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 07/06/2021] [Indexed: 11/17/2022] Open
Abstract
Changes in fundus blood vessels reflect the occurrence of eye diseases, and from them we can also explore other physical diseases that cause fundus lesions, such as diabetes and hypertension complications. However, existing computational methods lack highly efficient and precise segmentation of vessel ends and thin retinal vessels. It is important to construct a reliable and quantitative automatic diagnostic method to improve diagnostic efficiency. In this study, we propose a multichannel deep neural network for retina vessel segmentation. First, we apply U-Net on original and thin (or thick) vessels for multi-objective optimization, purposively training on thick and thin vessels. Then, we design a specific fusion mechanism for combining three kinds of prediction probability maps into a final binary segmentation map. Experiments show that our method can effectively improve the segmentation performance on thin blood vessels and vessel ends. It outperforms many current excellent vessel segmentation methods on three public datasets. In particular, we achieve the best F1-score of 0.8247 on the DRIVE dataset and 0.8239 on the STARE dataset. The findings of this study have potential for application in automated retinal image analysis, and may provide a new, general, and high-performance computing framework for image segmentation.
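The fusion step can be sketched as follows. The abstract does not give the paper's specific fusion mechanism, so the pixelwise-maximum rule and the threshold below are assumptions; the point of the sketch is that a confident thin-vessel prediction can survive the fusion even when the generic whole-vessel map is weak at that pixel.

```python
import numpy as np

def fuse_predictions(p_orig, p_thin, p_thick, thr=0.5):
    """Combine three probability maps (whole-vessel, thin-vessel and
    thick-vessel network outputs) into one binary segmentation by
    taking the pixelwise maximum and thresholding it."""
    fused = np.maximum.reduce([p_orig, p_thin, p_thick])
    return (fused >= thr).astype(np.uint8)
```

Other plausible rules (averaging, learned per-pixel weighting) fit the same interface; the maximum favours recall on thin structures at some cost in false positives.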
Collapse
Affiliation(s)
- Jiaqi Ding
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
| | - Zehua Zhang
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
| | - Jijun Tang
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
| | - Fei Guo
- School of Computer Science and Engineering, Central South University, Changsha, China
| |
Collapse
|
131
|
Hu X, Wang L, Cheng S, Li Y. HDC-Net: A hierarchical dilation convolutional network for retinal vessel segmentation. PLoS One 2021; 16:e0257013. [PMID: 34492064 PMCID: PMC8423235 DOI: 10.1371/journal.pone.0257013] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Accepted: 08/23/2021] [Indexed: 11/18/2022] Open
Abstract
The cardinal symptoms of some ophthalmic diseases, such as retinal vein occlusion and diabetic retinopathy, are observed through abnormal retinal blood vessels. Advanced deep learning models that automatically obtain morphological and structural information about blood vessels are conducive to the early treatment and proactive prevention of ophthalmic diseases. In our work, we propose a hierarchical dilation convolutional network (HDC-Net) to extract retinal vessels in a pixel-to-pixel manner. It utilizes the hierarchical dilation convolution (HDC) module to capture the fragile retinal blood vessels usually neglected by other methods. An improved residual dual efficient channel attention (RDECA) module infers more delicate channel information to reinforce the discriminative capability of the model. The structured DropBlock helps our HDC-Net model solve network overfitting effectively. The segmentation results obtained by HDC-Net are superior to those of other deep learning methods on three acknowledged datasets: the sensitivity, specificity, accuracy, F1-score, and AUC are {0.8252, 0.9829, 0.9692, 0.8239, 0.9871} on DRIVE, {0.8227, 0.9853, 0.9745, 0.8113, 0.9884} on CHASE-DB1, and {0.8369, 0.9866, 0.9751, 0.8385, 0.9913} on STARE, respectively. HDC-Net surpasses most other advanced retinal vessel segmentation models. Qualitative and quantitative analysis demonstrates that HDC-Net can fulfill the task of retinal vessel segmentation efficiently and accurately.
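The dilation convolution on which the HDC module builds can be sketched in NumPy. The block below runs the same kernel at several dilation rates (cross-correlation, as in deep-learning "convolution") and sums the responses; this is only an illustrative stand-in for the paper's hierarchical module, and the rate set (1, 2, 4) is an assumption.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded 2-D cross-correlation with a dilation rate:
    kernel taps are spaced `rate` pixels apart, enlarging the
    receptive field without extra parameters. kernel must be odd-sized."""
    kh, kw = kernel.shape
    pad_h, pad_w = rate * (kh // 2), rate * (kw // 2)
    xp = np.pad(x, ((pad_h, pad_h), (pad_w, pad_w)))
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * xp[i * rate:i * rate + x.shape[0],
                                     j * rate:j * rate + x.shape[1]]
    return out

def hdc_block(x, kernel, rates=(1, 2, 4)):
    """Hierarchical-dilation sketch: apply the same kernel at several
    dilation rates and sum the responses, mixing fine and coarse
    context in one output map."""
    return sum(dilated_conv2d(x, kernel, r) for r in rates)
```

Larger rates let the same 3x3 kernel respond to wider vessel context, which is the mechanism for catching faint vessels that a single-rate convolution misses.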
Collapse
Affiliation(s)
- Xiaolong Hu
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
| | - Liejun Wang
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
| | - Shuli Cheng
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
| | - Yongming Li
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
| |
Collapse
|
132
|
Toptaş B, Hanbay D. Retinal blood vessel segmentation using pixel-based feature vector. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103053] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
|
133
|
Shi Z, Wang T, Huang Z, Xie F, Liu Z, Wang B, Xu J. MD-Net: A multi-scale dense network for retinal vessel segmentation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102977] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
134
|
Lin Z, Huang J, Chen Y, Zhang X, Zhao W, Li Y, Lu L, Zhan M, Jiang X, Liang X. A high resolution representation network with multi-path scale for retinal vessel segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106206. [PMID: 34146772 DOI: 10.1016/j.cmpb.2021.106206] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Accepted: 05/23/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVES Automatic retinal vessel segmentation (RVS) in fundus images is expected to be a vital step in the early image-based diagnosis of ophthalmologic diseases. However, it is challenging to detect the retinal vessels accurately, mainly due to vascular intricacies, lesion areas, and optic disc edges in retinal fundus images. METHODS In this paper, we propose a high resolution representation network with multi-path scale (MPS-Net) for RVS, aiming to improve the performance of extracting the retinal blood vessels. In the MPS-Net, there are one high-resolution main road and two lower-resolution branch roads, in which the proposed multi-path scale modules are embedded to enhance the representation ability of the network. Besides, to guide the network to focus on learning the features of hard examples in retinal images, we design a hard-focused cross-entropy loss function. RESULTS We evaluate our network structure on DRIVE, STARE, CHASE and synthetic images, and present quantitative comparisons with existing methods. The experimental results show that our approach is superior to most methods in terms of F1-score, sensitivity, G-mean and Matthews correlation coefficient. CONCLUSIONS The promising segmentation performance reveals that our method has potential in real-world applications and can be exploited for other medical images with further analysis.
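A hard-focused cross-entropy can be sketched in the spirit of focal weighting: well-classified pixels are down-weighted so the gradient concentrates on hard examples. The exact weighting used by MPS-Net is not given in the abstract, so the (1 - p_t)^gamma factor and the gamma value below are assumptions.

```python
import numpy as np

def hard_focused_bce(p, y, gamma=2.0, eps=1e-7):
    """Binary cross-entropy that focuses on hard pixels.
    p: predicted foreground probabilities, y: binary ground truth.
    The (1 - p_t)^gamma factor shrinks the loss of pixels the model
    already classifies confidently."""
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)   # probability of the true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))
```

With gamma = 0 this reduces to ordinary cross-entropy; raising gamma shifts more of the training signal onto thin, ambiguous vessel pixels.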
Collapse
Affiliation(s)
- Zefang Lin
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China.
| | - Jianping Huang
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China.
| | - Yingyin Chen
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China.
| | - Xiao Zhang
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China
| | - Wei Zhao
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China
| | - Yong Li
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China
| | - Ligong Lu
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China
| | - Meixiao Zhan
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China.
| | - Xiaofei Jiang
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China; Department of Cardiology, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China.
| | - Xiong Liang
- Department of Obstetrics, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China.
| |
Collapse
|
135
|
Martinez-Murcia FJ, Ortiz A, Ramírez J, Górriz JM, Cruz R. Deep residual transfer learning for automatic diagnosis and grading of diabetic retinopathy. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.04.148] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
|
136
|
Du XF, Wang JS, Sun WZ. UNet retinal blood vessel segmentation algorithm based on improved pyramid pooling method and attention mechanism. Phys Med Biol 2021; 66. [PMID: 34375955 DOI: 10.1088/1361-6560/ac1c4c] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 08/10/2021] [Indexed: 11/12/2022]
Abstract
The segmentation results of retinal vessels have a significant impact on the automatic diagnosis of retinal diabetes, hypertension, cardiovascular and cerebrovascular diseases, and other ophthalmic diseases. To improve vessel segmentation performance, a pyramid scene parsing U-Net (PSP-UNet) segmentation algorithm based on an attention mechanism is proposed. A modified PSP-Net pyramid pooling module is introduced on the basis of the U-Net network, which aggregates the context information of different regions and thereby improves the ability to obtain global information. At the same time, an attention mechanism is introduced in the skip-connection part of the U-Net network, which makes the integration of low-level features and high-level semantic features more efficient and reduces the loss of feature information through a nonlinear connection mode. On the DRIVE and CHASE_DB1 datasets, the sensitivity, specificity, accuracy, and AUC are 0.7814, 0.9810, 0.9556, and 0.9780, and 0.8195, 0.9727, 0.9590, and 0.9784, respectively. Experimental results show that the PSP-UNet segmentation algorithm based on the attention mechanism enhances the detection of vessel pixels, suppresses the interference of irrelevant information, and improves network segmentation performance; it is superior to the U-Net algorithm and several current mainstream retinal vessel segmentation algorithms.
Collapse
Affiliation(s)
- Xin-Feng Du
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan 114051, People's Republic of China
| | - Jie-Sheng Wang
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan 114051, People's Republic of China
| | - Wei-Zhen Sun
- School of Biological Science and Medical Engineering , Southeast University, Jiangsu, Nanjing 210000, People's Republic of China
| |
Collapse
|
137
|
SERR-U-Net: Squeeze-and-Excitation Residual and Recurrent Block-Based U-Net for Automatic Vessel Segmentation in Retinal Image. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:5976097. [PMID: 34422093 PMCID: PMC8371614 DOI: 10.1155/2021/5976097] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/02/2021] [Revised: 07/03/2021] [Accepted: 07/24/2021] [Indexed: 11/23/2022]
Abstract
Methods A new SERR-U-Net framework for retinal vessel segmentation is proposed, which leverages Squeeze-and-Excitation (SE), residual modules, and recurrent blocks. First, the convolution layers of the encoder and decoder are modified on the basis of U-Net, and the recurrent block is used to increase the network depth. Second, the residual module is utilized to alleviate the vanishing gradient problem. Finally, to derive more specific vascular features, we employ the SE structure to introduce an attention mechanism into the U-shaped network. In addition, enhanced super-resolution generative adversarial networks (ESRGANs) are deployed to remove noise from the retinal images. Results The effectiveness of this method was tested on two public datasets, DRIVE and STARE. On the DRIVE dataset, the accuracy and AUC (area under the curve) of our method were 0.9552 and 0.9784, respectively, and on the STARE dataset, 0.9796 and 0.9859 were achieved, demonstrating high accuracy and a promising prospect for clinical assistance. Conclusion An improved U-Net combining SE, ResNet, and recurrent technologies is developed for automatic vessel segmentation from retinal images. This model improves accuracy compared to other learning-based methods, and its robustness in challenging cases, such as small blood vessels and vessel intersections, is also demonstrated and validated.
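The SE structure used here to inject channel attention is simple to sketch. The block below is a generic Squeeze-and-Excitation operation on a (C, H, W) feature map; the weight shapes (and hence the reduction ratio) are illustrative and would be learned parameters in the actual network.

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Squeeze-and-Excitation sketch for a feature map x of shape
    (C, H, W): global-average-pool each channel (squeeze), pass the
    pooled vector through two small dense layers with ReLU and sigmoid
    (excite), then rescale the channels by the resulting weights.
    w1: (C_mid, C) reduction weights, w2: (C, C_mid) expansion weights."""
    z = x.mean(axis=(1, 2))                    # squeeze: (C,)
    h = np.maximum(w1 @ z, 0.0)                # reduce + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))        # expand + sigmoid gates
    return x * s[:, None, None]                # channel-wise rescale
```

Because the gates depend on the pooled content of the whole map, the network can amplify vessel-sensitive channels and suppress background-dominated ones per image.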
Collapse
|
138
|
Xia H, Lan Y, Song S, Li H. A multi-scale segmentation-to-classification network for tiny microaneurysm detection in fundus images. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107140] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
139
|
Abdul Rahman A, Biswal B, P GP, Hasan S, Sairam M. Robust segmentation of vascular network using deeply cascaded AReN-UNet. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102953] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
140
|
Xu R, Liu T, Ye X, Liu F, Lin L, Li L, Tanaka S, Chen YW. Joint Extraction of Retinal Vessels and Centerlines Based on Deep Semantics and Multi-Scaled Cross-Task Aggregation. IEEE J Biomed Health Inform 2021; 25:2722-2732. [PMID: 33320815 DOI: 10.1109/jbhi.2020.3044957] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Retinal vessel segmentation and centerline extraction are crucial steps in building a computer-aided diagnosis system on retinal images. Previous works treat them as two isolated tasks, while ignoring their tight association. In this paper, we propose a deep semantics and multi-scaled cross-task aggregation network that takes advantage of the association to jointly improve their performances. Our network is featured by two sub-networks. The forepart is a deep semantics aggregation sub-network that aggregates strong semantic information to produce more powerful features for both tasks, and the tail is a multi-scaled cross-task aggregation sub-network that explores complementary information to refine the results. We evaluate the proposed method on three public databases, which are DRIVE, STARE and CHASE_DB1. Experimental results show that our method can not only simultaneously extract retinal vessels and their centerlines but also achieve the state-of-the-art performances on both tasks.
Collapse
|
141
|
V S, G I, A SR. Parallel Architecture of Fully Convolved Neural Network for Retinal Vessel Segmentation. J Digit Imaging 2021; 33:168-180. [PMID: 31342298 DOI: 10.1007/s10278-019-00250-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
Retinal blood vessel extraction is an indispensable step in the diagnosis of many retinal diseases. In this work, a parallel fully convolved neural network-based architecture is proposed for retinal blood vessel segmentation, and the improvement in network performance from applying different levels of pre-processing to the input images is studied. The proposed method is evaluated on DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the Retina), the widely accepted public databases in this research area. The proposed work attains high accuracy, sensitivity, and specificity of about 96.37%, 86.53%, and 98.18%, respectively. Data independence is also demonstrated by testing abnormal STARE images with the DRIVE-trained model. The proposed architecture shows better results in vessel extraction irrespective of vessel thickness. The obtained results show that the proposed work outperforms most existing segmentation methodologies, and it can be implemented as a real-time application tool since the entire work is carried out on a CPU. The proposed work executes with low-cost computation and takes less than 2 s per image for vessel extraction.
Collapse
Affiliation(s)
- Sathananthavathi V
- Department of ECE, Mepco Schlenk Engineering College, Sivakasi, Tamilnadu, 626005, India.
| | - Indumathi G
- Department of ECE, Mepco Schlenk Engineering College, Sivakasi, Tamilnadu, 626005, India
| | - Swetha Ranjani A
- Department of ECE, Mepco Schlenk Engineering College, Sivakasi, Tamilnadu, 626005, India
| |
Collapse
|
142
|
Ahmedt-Aristizabal D, Armin MA, Denman S, Fookes C, Petersson L. Graph-Based Deep Learning for Medical Diagnosis and Analysis: Past, Present and Future. SENSORS (BASEL, SWITZERLAND) 2021; 21:4758. [PMID: 34300498 PMCID: PMC8309939 DOI: 10.3390/s21144758] [Citation(s) in RCA: 58] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 07/05/2021] [Accepted: 07/07/2021] [Indexed: 01/17/2023]
Abstract
With the advances of data-driven machine learning research, a wide variety of prediction problems have been tackled. It has become critical to explore how machine learning, and specifically deep learning, methods can be exploited to analyse healthcare data. A major limitation of existing methods has been the focus on grid-like data; however, the structure of physiological recordings is often irregular and unordered, which makes it difficult to conceptualise them as a matrix. As such, graph neural networks have attracted significant attention by exploiting implicit information that resides in a biological system, with interacting nodes connected by edges whose weights can be determined by either temporal associations or anatomical junctions. In this survey, we thoroughly review the different types of graph architectures and their applications in healthcare. We provide an overview of these methods in a systematic manner, organized by their domain of application, including functional connectivity, anatomical structure, and electrical-based analysis. We also outline the limitations of existing techniques and discuss potential directions for future research.
Collapse
Affiliation(s)
- David Ahmedt-Aristizabal
- Imaging and Computer Vision Group, CSIRO Data61, Canberra 2601, Australia; (M.A.A.); (L.P.)
- Signal Processing, Artificial Intelligence and Vision Technologies (SAIVT) Research Program, Queensland University of Technology, Brisbane 4000, Australia; (S.D.); (C.F.)
| | - Mohammad Ali Armin
- Imaging and Computer Vision Group, CSIRO Data61, Canberra 2601, Australia; (M.A.A.); (L.P.)
| | - Simon Denman
- Signal Processing, Artificial Intelligence and Vision Technologies (SAIVT) Research Program, Queensland University of Technology, Brisbane 4000, Australia; (S.D.); (C.F.)
| | - Clinton Fookes
- Signal Processing, Artificial Intelligence and Vision Technologies (SAIVT) Research Program, Queensland University of Technology, Brisbane 4000, Australia; (S.D.); (C.F.)
| | - Lars Petersson
- Imaging and Computer Vision Group, CSIRO Data61, Canberra 2601, Australia; (M.A.A.); (L.P.)
| |
Collapse
|
143
|
Garg M, Gupta S, Nayak SR, Nayak J, Pelusi D. Modified pixel level snake using bottom hat transformation for evolution of retinal vasculature map. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2021; 18:5737-5757. [PMID: 34517510 DOI: 10.3934/mbe.2021290] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Small changes in retinal blood vessels may reflect different pathological disorders, which may further cause blindness; accurate extraction of the vasculature map from a retinal fundus image has therefore become a challenging task for the analysis of different pathologies. The present study offers an unsupervised method for extracting the vasculature map from retinal fundus images, presenting a methodology for the evolution of vessels using the Modified Pixel Level Snake (MPLS) algorithm based on the Black Top-Hat (BTH) transformation. In the proposed method, bimodal masking is first used to extract the mask of the retinal fundus image. Then, adaptive segmentation and global thresholding are applied to the masked image to find the initial contour image. Finally, MPLS is used for the evolution of the contour in all four cardinal directions using external, internal, and balloon potentials. The proposed work is implemented in MATLAB, and the DRIVE and STARE databases are used to check the performance of the system through metrics such as sensitivity, specificity, and accuracy. An average sensitivity of 76.96%, average specificity of 98.34%, and average accuracy of 96.30% are achieved on the DRIVE database. The technique can also segment the vessels of pathological images accurately, reaching an average sensitivity of 70.80%, average specificity of 96.40%, and average accuracy of 94.41%. The present study provides a simple and accurate method for detecting the vasculature map in normal as well as pathological fundus images. It can be helpful for the assessment of various retinal vascular attributes such as length, diameter, width, tortuosity, and branching angle.
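The Black Top-Hat transformation at the heart of the method is standard grayscale morphology and can be sketched directly: morphological closing minus the image, which makes dark, thin structures such as vessels come out bright. The square structuring element and its size below are illustrative choices, not the paper's setting.

```python
import numpy as np

def _grey_dilate(img, k):
    """Grayscale dilation with a k x k square element (max filter)."""
    h = k // 2
    p = np.pad(img, h, mode='edge')
    views = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(k) for j in range(k)]
    return np.max(views, axis=0)

def _grey_erode(img, k):
    """Grayscale erosion with a k x k square element (min filter)."""
    h = k // 2
    p = np.pad(img, h, mode='edge')
    views = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(k) for j in range(k)]
    return np.min(views, axis=0)

def black_tophat(img, k=7):
    """Black Top-Hat: closing (dilation then erosion) minus the image.
    Dark structures narrower than the structuring element respond
    strongly, which is why BTH highlights vessels on the bright fundus."""
    closing = _grey_erode(_grey_dilate(img, k), k)
    return closing - img
```

The BTH response can then seed the initial contour that the MPLS evolution refines.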
Collapse
Affiliation(s)
- Meenu Garg
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
| | - Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
| | - Soumya Ranjan Nayak
- Amity School of Engineering and Technology, Amity University Uttar Pradesh, Noida, India
| | - Janmenjoy Nayak
- Aditya Institute of Technology and Management, Tekkali, K. Kotturu, Andhra Pradesh, India
| | - Danilo Pelusi
- Faculty of Communication Sciences, University of Teramo, Italy
| |
Collapse
|
144
|
Du X, Wang J, Sun W. Densely connected U-Net retinal vessel segmentation algorithm based on multi-scale feature convolution extraction. Med Phys 2021; 48:3827-3841. [PMID: 34028030 DOI: 10.1002/mp.14944] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2020] [Revised: 03/26/2021] [Accepted: 05/05/2021] [Indexed: 11/08/2022] Open
Abstract
PURPOSE The segmentation results of retinal blood vessels have a significant impact on the automatic diagnosis of various ophthalmic diseases. In order to further improve the segmentation accuracy of retinal vessels, we propose an improved algorithm based on multiscale vessel detection, which extracts features through densely connected networks and reuses them. METHODS Parallel fusion and serial embedding multiscale-feature dense-connection U-Net structures are designed. In the parallel fusion method, features of the input images are extracted by Inception multiscale convolution and dense-block convolution, respectively; the features are then fused and fed into the subsequent network. In the serial embedding mode, the Inception multiscale convolution structure is embedded in the dense-connection network module, and the dense-connection structure replaces the classical convolution block in the encoder part of the U-Net, so as to achieve multiscale feature extraction and efficient utilization of complex vessel structures and thereby improve segmentation performance. RESULTS Experimental analysis on the standard DRIVE and CHASE_DB1 databases shows that the sensitivity, specificity, accuracy, and AUC of the parallel fusion and serial embedding methods reach 0.7854, 0.9813, 0.9563, 0.9794; 0.7876, 0.9811, 0.9565, 0.9793 and 0.8110, 0.9737, 0.9547, 0.9667; 0.8113, 0.9717, 0.9574, 0.9750, respectively. CONCLUSIONS The experimental results show that multiscale feature detection and dense feature connection can effectively enhance the network model's ability to detect blood vessels and improve segmentation performance, outperforming the U-Net algorithm and several current mainstream retinal blood vessel segmentation algorithms.
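The parallel-fusion idea above can be illustrated with a toy sketch: run the same input through branches with different receptive-field sizes and stack the responses as channels. Here simple mean filters stand in for the learned Inception convolutions of the paper, and the window sizes (3, 5, 7) are illustrative assumptions:

```python
import numpy as np

def box_filter(img, k):
    # mean filter over a k x k window, built from wrap-around shifts
    r = k // 2
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (k * k)

def parallel_multiscale_fusion(img, sizes=(3, 5, 7)):
    # run several receptive-field sizes in parallel on the same input and
    # concatenate the responses as feature channels for the rest of the network
    return np.stack([box_filter(img, s) for s in sizes], axis=0)
```

Each branch sees the same input but summarizes a different neighborhood size, which is what lets thin and thick vessels both register in the fused features.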
Collapse
Affiliation(s)
- Xinfeng Du
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, 114051, China
| | - Jiesheng Wang
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, 114051, China
| | - Weizhen Sun
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, Jiangsu, 210000, China
| |
Collapse
|
145
|
Yuan Y, Zhang L, Wang L, Huang H. Multi-level Attention Network for Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2021; 26:312-323. [PMID: 34129508 DOI: 10.1109/jbhi.2021.3089201] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Automatic vessel segmentation in fundus images plays an important role in the screening, diagnosis, treatment, and evaluation of various cardiovascular and ophthalmologic diseases. However, due to the limited well-annotated data, varying size of vessels, and intricate vessel structures, retinal vessel segmentation has become a long-standing challenge. In this paper, a novel deep learning model called AACA-MLA-D-UNet is proposed to fully utilize the low-level detailed information and the complementary information encoded in different layers to accurately distinguish the vessels from the background with low model complexity. The architecture of the proposed model is based on U-Net, and the dropout dense block is proposed to preserve maximum vessel information between convolution layers and mitigate the over-fitting problem. The adaptive atrous channel attention module is embedded in the contracting path to sort the importance of each feature channel automatically. After that, the multi-level attention module is proposed to integrate the multi-level features extracted from the expanding path, and use them to refine the features at each individual layer via an attention mechanism. The proposed method has been validated on three publicly available databases, i.e., DRIVE, STARE, and CHASE_DB1. The experimental results demonstrate that the proposed method can achieve better or comparable performance on retinal vessel segmentation with lower model complexity. Furthermore, the proposed method can also deal with some challenging cases and has strong generalization ability.
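Channel attention modules of the kind mentioned above, which weight feature channels by importance, commonly follow a squeeze-and-excitation pattern. The sketch below is a generic NumPy illustration of that pattern, not the paper's adaptive atrous variant; the bottleneck weights `w1` and `w2` are hypothetical stand-ins for learned parameters:

```python
import numpy as np

def channel_attention(feats, w1, w2):
    # squeeze-and-excitation style channel attention on a (C, H, W) feature map:
    # global average pooling, a small bottleneck MLP, then sigmoid gating
    squeeze = feats.mean(axis=(1, 2))              # (C,) global context per channel
    hidden = np.maximum(0.0, w1 @ squeeze)         # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid weights in (0, 1)
    return feats * gates[:, None, None]            # reweight each channel
```

Because the gates lie strictly between 0 and 1, the module can only attenuate channels, which is how less informative feature maps are suppressed relative to vessel-relevant ones.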
Collapse
|
146
|
Hu J, Wang H, Cao Z, Wu G, Jonas JB, Wang YX, Zhang J. Automatic Artery/Vein Classification Using a Vessel-Constraint Network for Multicenter Fundus Images. Front Cell Dev Biol 2021; 9:659941. [PMID: 34178986 PMCID: PMC8226261 DOI: 10.3389/fcell.2021.659941] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Accepted: 04/19/2021] [Indexed: 11/24/2022] Open
Abstract
Retinal blood vessel morphological abnormalities are generally associated with cardiovascular, cerebrovascular, and systemic diseases; automatic artery/vein (A/V) classification is therefore particularly important for medical image analysis and clinical decision making. However, current methods still have some limitations in A/V classification, especially the vessel edge and end errors caused by single-scale processing and the blurred boundary of the A/V. To alleviate these problems, in this work we propose a vessel-constraint network (VC-Net), a high-precision A/V classification model based on data fusion that utilizes information about vessel distribution and edges to enhance A/V classification. In particular, the VC-Net introduces a vessel-constraint (VC) module that combines local and global vessel information to generate a weight map that constrains the A/V features, suppressing background-prone features and enhancing the edge and end features of blood vessels. In addition, the VC-Net employs a multiscale feature (MSF) module to extract blood vessel information at different scales, improving the feature extraction capability and robustness of the model. VC-Net also produces vessel segmentation results simultaneously. The proposed method is tested on publicly available fundus image datasets of different scales, namely DRIVE, LES, and HRF, and validated on two newly created multicenter datasets: Tongren and Kailuan. We achieve a balanced accuracy of 0.9554 and F1 scores of 0.7616 and 0.7971 for the arteries and veins, respectively, on the DRIVE dataset. The experimental results prove that the proposed model achieves competitive performance in A/V classification and vessel segmentation tasks compared with state-of-the-art methods. Finally, we test on the Kailuan dataset with models trained on the other, fused datasets; the results also show good robustness. To promote research in this area, the Tongren dataset and source code will be made publicly available.
The dataset and code will be made available at https://github.com/huawang123/VC-Net.
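The vessel-constraint idea, a weight map derived from vessel information damping background-prone activations, can be sketched as below. This is one possible interpretation, not the published VC module; the blend of the local probability map with its global mean, and the factor `alpha`, are assumptions for illustration:

```python
import numpy as np

def vessel_constrain(av_feats, vessel_prob, alpha=0.5):
    # form a weight map by blending the local vessel probability map with its
    # global mean (alpha is a hypothetical blend factor), then use it to damp
    # background-prone activations in the (C, H, W) artery/vein feature maps
    weight = alpha * vessel_prob + (1.0 - alpha) * vessel_prob.mean()
    return av_feats * weight[None, :, :]
```

Pixels where the vessel probability is high keep most of their feature magnitude, while background pixels are scaled toward zero before A/V classification.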
Collapse
Affiliation(s)
- Jingfei Hu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China.,Hefei Innovation Research Institute, Beihang University, Hefei, China.,Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China.,School of Biomedical Engineering, Anhui Medical University, Hefei, China
| | - Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China.,Hefei Innovation Research Institute, Beihang University, Hefei, China.,Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China.,School of Biomedical Engineering, Anhui Medical University, Hefei, China
| | - Zhaohui Cao
- Hefei Innovation Research Institute, Beihang University, Hefei, China
| | - Guang Wu
- Hefei Innovation Research Institute, Beihang University, Hefei, China
| | - Jost B Jonas
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China.,Department of Ophthalmology, Medical Faculty Mannheim of the Ruprecht-Karls-University Heidelberg, Mannheim, Germany
| | - Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
| | - Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China.,Hefei Innovation Research Institute, Beihang University, Hefei, China.,Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China.,School of Biomedical Engineering, Anhui Medical University, Hefei, China.,Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China
| |
Collapse
|
147
|
Li K, Qi X, Luo Y, Yao Z, Zhou X, Sun M. Accurate Retinal Vessel Segmentation in Color Fundus Images via Fully Attention-Based Networks. IEEE J Biomed Health Inform 2021; 25:2071-2081. [PMID: 33001809 DOI: 10.1109/jbhi.2020.3028180] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Automatic retinal vessel segmentation is important for the diagnosis and prevention of ophthalmic diseases. Existing deep learning retinal vessel segmentation models always treat each pixel equally. However, the multi-scale vessel structure is a vital factor affecting the segmentation results, especially for thin vessels. To address this crucial gap, we propose a novel Fully Attention-based Network (FANet) that uses attention mechanisms to adaptively learn rich feature representations and aggregate multi-scale information. Specifically, the framework consists of an image pre-processing procedure and the semantic segmentation networks. Green channel extraction (GE) and contrast limited adaptive histogram equalization (CLAHE) are employed as pre-processing to enhance the texture and contrast of retinal fundus images. In addition, the network combines two types of attention modules with the U-Net. We propose a lightweight dual-direction attention block to model global dependencies and reduce intra-class inconsistencies, in which the weights of feature maps are updated based on the semantic correlation between pixels. The dual-direction attention block utilizes horizontal and vertical pooling operations to produce the attention map. In this way, the network aggregates global contextual information from semantically closer regions or series of pixels belonging to the same object category. Meanwhile, we adopt the selective kernel (SK) unit to replace the standard convolution for obtaining multi-scale features of different receptive field sizes generated by soft attention. Furthermore, we demonstrate that the proposed model can effectively identify irregular, noisy, and multi-scale retinal vessels. Abundant experiments on the DRIVE, STARE, and CHASE_DB1 datasets show that our method achieves state-of-the-art performance.
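The pre-processing stage described above (green channel extraction followed by contrast enhancement) can be sketched as follows. Note this sketch uses plain global histogram equalization as a stand-in for CLAHE, which additionally tiles the image and clips the histogram to limit noise amplification:

```python
import numpy as np

def green_channel(rgb):
    # retinal vessels typically show the highest contrast in the green channel
    return rgb[..., 1].astype(float)

def hist_equalize(img, levels=256):
    # global histogram equalization; CLAHE would apply this per tile with a
    # clip limit and bilinear interpolation between tiles
    hist, edges = np.histogram(img, bins=levels, range=(0.0, 255.0))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)
```

In a real pipeline the equalized green channel (or a stack of enhanced channels) becomes the network input in place of the raw RGB image.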
Collapse
|
148
|
Tang X, Peng J, Zhong B, Li J, Yan Z. Introducing frequency representation into convolution neural networks for medical image segmentation via twin-Kernel Fourier convolution. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 205:106110. [PMID: 33910149 DOI: 10.1016/j.cmpb.2021.106110] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2020] [Accepted: 04/07/2021] [Indexed: 05/28/2023]
Abstract
BACKGROUND AND OBJECTIVE For medical image segmentation, deep learning-based methods have achieved state-of-the-art performance. However, the powerful spectral representations used in the field of image processing are rarely considered in these models. METHODS In this work, we propose to introduce frequency representation into convolution neural networks (CNNs) and design a novel model, tKFC-Net, to combine powerful feature representations in both the frequency and spatial domains. Through the Fast Fourier Transform (FFT) operation, frequency representation is employed for pooling, upsampling, and convolution without any adjustments to the network architecture. Furthermore, we replace the original convolution with twin-Kernel Fourier Convolution (t-KFC), a newly designed convolution layer, to specify the convolution kernels for particular functions and extract features from different frequency components. RESULTS We experimentally show that our method has an edge over other models in the task of medical image segmentation. Evaluated on four datasets, namely skin lesion segmentation (ISIC 2018), retinal blood vessel segmentation (DRIVE), lung segmentation (COVID-19-CT-Seg), and brain tumor segmentation (BraTS 2019), the proposed model achieves outstanding results: the F1-score is 0.878 for ISIC 2018, 0.8185 for DRIVE, 0.9830 for COVID-19-CT-Seg, and 0.8457 for BraTS 2019. CONCLUSION The introduction of spectral representation retains spectral features, which results in more accurate segmentation. The proposed method is orthogonal to other topology improvement methods and can be conveniently combined with them.
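The core mechanism, performing convolution in the frequency domain via the FFT, rests on the convolution theorem: pointwise multiplication of spectra equals circular convolution in the spatial domain. A minimal NumPy sketch (not the t-KFC layer itself, which uses learned twin kernels) with a direct reference implementation for comparison:

```python
import numpy as np

def fft_conv2d(img, kernel):
    # circular 2-D convolution via the convolution theorem:
    # IFFT( FFT(img) * FFT(kernel zero-padded to the image shape) )
    K = np.fft.fft2(kernel, s=img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

def circular_conv2d(img, kernel):
    # direct reference implementation: weighted sum of shifted image copies
    out = np.zeros(img.shape, dtype=float)
    for i in range(kernel.shape[0]):
        for j in range(kernel.shape[1]):
            out += kernel[i, j] * np.roll(np.roll(img, i, axis=0), j, axis=1)
    return out
```

For large kernels the FFT route costs O(N log N) instead of O(N k^2), which is one motivation for moving convolution into the spectral domain.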
Collapse
Affiliation(s)
- Xianlun Tang
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
| | - Jiangping Peng
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
| | - Bing Zhong
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
| | - Jie Li
- College of Mobile Telecommunications, Chongqing University of Posts and Telecom, Chongqing 401520, China
| | - Zhenfu Yan
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
| |
Collapse
|
149
|
Li D, Rahardja S. BSEResU-Net: An attention-based before-activation residual U-Net for retinal vessel segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 205:106070. [PMID: 33857703 DOI: 10.1016/j.cmpb.2021.106070] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2020] [Accepted: 03/22/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVES Retinal vessels are a major feature used by physicians to diagnose many diseases, such as cardiovascular disease and glaucoma. Therefore, the design of an auto-segmentation algorithm for retinal vessels draws great attention in the medical field. Recently, deep learning methods, especially convolutional neural networks (CNNs), have shown extraordinary potential for the task of vessel segmentation. However, most deep learning methods only take advantage of shallow networks with a traditional cross-entropy objective, which becomes the main obstacle to further improving performance on a task that is imbalanced. We therefore propose a new type of residual U-Net called Before-activation Squeeze-and-Excitation ResU-Net (BSEResU-Net) to tackle the aforementioned issues. METHODS Our BSEResU-Net can be viewed as an encoder/decoder framework constructed from Before-activation Squeeze-and-Excitation blocks (BSE blocks). In comparison to existing CNN structures, we utilize a new type of residual block, the BSE block, in which an attention mechanism is combined with a skip connection to boost performance. Moreover, the network consistently gains accuracy with increasing depth as more residual blocks are incorporated, owing to the dropblock mechanism used in BSE blocks. A joint loss function based on the dice and cross-entropy loss functions is also introduced to achieve more balanced segmentation between vessel and non-vessel pixels. RESULTS The proposed BSEResU-Net is evaluated on the publicly available DRIVE, STARE and HRF datasets. It achieves F1-scores of 0.8324, 0.8368 and 0.8237 on the DRIVE, STARE and HRF datasets, respectively. Experimental results show that the proposed BSEResU-Net outperforms current state-of-the-art algorithms. CONCLUSIONS The proposed algorithm utilizes a new type of residual block, the BSE residual block, for vessel segmentation. Together with a joint loss function, it shows outstanding performance on both low- and high-resolution fundus images.
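A joint dice plus cross-entropy loss of the kind described above can be sketched as follows. This is a generic NumPy illustration, not the paper's exact formulation; the equal weighting `alpha=0.5` and the smoothing term `eps` are assumptions:

```python
import numpy as np

def joint_dice_bce_loss(pred, target, alpha=0.5, eps=1e-7):
    # pred: predicted vessel probabilities in (0, 1); target: binary ground truth
    pred = np.clip(pred, eps, 1.0 - eps)
    # binary cross-entropy scores every pixel equally
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # the dice term measures overlap, so it is less dominated by the large
    # non-vessel background and counters class imbalance
    dice = (2.0 * np.sum(pred * target) + eps) / (np.sum(pred) + np.sum(target) + eps)
    return alpha * bce + (1.0 - alpha) * (1.0 - dice)
```

Because vessel pixels are a small minority of a fundus image, pure cross-entropy can be minimized by under-segmenting; the overlap-based dice term penalizes exactly that failure mode.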
Collapse
Affiliation(s)
- Di Li
- Centre of Intelligent Acoustics and Immersive Communications, School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, P.R. China.
| | - Susanto Rahardja
- Centre of Intelligent Acoustics and Immersive Communications, School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, P.R. China.
| |
Collapse
|
150
|
Gegundez-Arias ME, Marin-Santos D, Perez-Borrero I, Vasallo-Vazquez MJ. A new deep learning method for blood vessel segmentation in retinal images based on convolutional kernels and modified U-Net model. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 205:106081. [PMID: 33882418 DOI: 10.1016/j.cmpb.2021.106081] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/08/2020] [Accepted: 03/28/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic monitoring of retinal blood vessels proves very useful for the clinical assessment of ocular vascular anomalies or retinopathies. This paper presents an efficient and accurate deep learning-based method for vessel segmentation in eye fundus images. METHODS The approach consists of a convolutional neural network based on a simplified version of the U-Net architecture that combines residual blocks and batch normalization in the up- and downscaling phases. The network receives patches extracted from the original image as input and is trained with a novel loss function that considers the distance of each pixel to the vascular tree. At its output, it generates the probability of each pixel of the input patch belonging to the vascular structure. Applying the network to the patches into which a retinal image can be divided yields the pixel-wise probability map of the complete image. This probability map is then binarized with a certain threshold to generate the blood vessel segmentation provided by the method. RESULTS The method has been developed and evaluated on the DRIVE, STARE and CHASE_DB1 databases, which provide a manual segmentation of the vascular tree for each of their images. Using this set of images as ground truth, the accuracy of the vessel segmentations obtained for a proposed operating point (established by a single threshold value for each database) was quantified. The overall performance was measured using the area under the receiver operating characteristic curve. The method demonstrated robustness to the variability of fundus images of diverse origin, and was capable of working with the highest level of accuracy over the entire set of possible operating points, compared with the most accurate methods found in the literature. CONCLUSIONS The analysis of the results concludes that the proposed method performs better than the other state-of-the-art methods and can be considered the most promising for integration into a real tool for vascular structure segmentation.
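The patch-wise inference pipeline described in the METHODS section (tile, predict per patch, stitch the probability map, then threshold) can be sketched as below. Here `predict_patch` is a hypothetical stand-in for the trained network, and the non-overlapping tiling and fixed threshold are simplifying assumptions:

```python
import numpy as np

def segment_vessels(image, predict_patch, patch=32, threshold=0.5):
    # tile the image, run the patch-level model on each tile, stitch the
    # pixel-wise probability map, then binarize at the chosen operating point
    h, w = image.shape
    prob = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = image[y:y + patch, x:x + patch]  # edge tiles may be smaller
            prob[y:y + tile.shape[0], x:x + tile.shape[1]] = predict_patch(tile)
    mask = (prob >= threshold).astype(np.uint8)
    return prob, mask
```

Choosing the threshold per database corresponds to selecting an operating point on the ROC curve, which is why the paper reports both a single-threshold accuracy and the area under the curve.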
Collapse
Affiliation(s)
- Manuel E Gegundez-Arias
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain.
| | - Diego Marin-Santos
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain.
| | - Isaac Perez-Borrero
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain.
| | - Manuel J Vasallo-Vazquez
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain.
| |
Collapse
|