51. Guo S. CSGNet: Cascade semantic guided net for retinal vessel segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103930]
52. Tan Y, Yang KF, Zhao SX, Li YJ. Retinal Vessel Segmentation With Skeletal Prior and Contrastive Loss. IEEE Trans Med Imaging 2022; 41:2238-2251. [PMID: 35320091] [DOI: 10.1109/tmi.2022.3161681]
Abstract
The morphology of retinal vessels is closely associated with many kinds of ophthalmic diseases. Although huge progress in retinal vessel segmentation has been achieved with the advancement of deep learning, some challenging issues remain. For example, vessels can be disturbed or covered by other components present in the retina (such as the optic disc or lesions). Moreover, some thin vessels are easily missed by current methods. In addition, existing fundus image datasets are generally small, owing to the difficulty of vessel labeling. In this work, a new network called SkelCon is proposed to deal with these problems by introducing a skeletal prior and a contrastive loss. A skeleton fitting module is developed to preserve the morphology of the vessels and to improve the completeness and continuity of thin vessels. A contrastive loss is employed to enhance the discrimination between vessels and background. In addition, a new data augmentation method is proposed to enrich the training samples and improve the robustness of the proposed model. Extensive validations were performed on several popular datasets (DRIVE, STARE, CHASE, and HRF), recently developed datasets (UoA-DR, IOSTAR, and RC-SLO), and some challenging clinical images (from the RFMiD and JSIEC39 datasets). In addition, several metrics designed specifically for vessel segmentation, including connectivity, overlapping area, consistency of vessel length, and revised sensitivity, specificity, and accuracy, were used for quantitative evaluation. The experimental results show that the proposed model achieves state-of-the-art performance and significantly outperforms the compared methods when extracting thin vessels in the regions of lesions or the optic disc. Source code is available at https://www.github.com/tyb311/SkelCon.
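The entry above gives no implementation detail for the contrastive component; purely as an illustration of how a pixel-level contrastive loss can be set up for vessel versus background embeddings, the following PyTorch sketch samples pixel embeddings from a feature map and applies a supervised InfoNCE-style loss. The function name, sample size, and temperature are assumptions, not values taken from SkelCon.

```python
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(features, labels, n_samples=256, temperature=0.1):
    """Supervised contrastive loss over sampled vessel/background pixel embeddings.

    features: (C, H, W) embedding map from the segmentation backbone.
    labels:   (H, W) binary ground-truth mask (1 = vessel, 0 = background).
    """
    c, h, w = features.shape
    feats = F.normalize(features.reshape(c, -1).t(), dim=1)      # (H*W, C), unit norm
    labs = labels.reshape(-1)

    # Randomly sample an equal number of vessel and background pixels.
    vessel_idx = torch.nonzero(labs == 1).squeeze(1)
    bg_idx = torch.nonzero(labs == 0).squeeze(1)
    k = min(n_samples, vessel_idx.numel(), bg_idx.numel())
    sel = torch.cat([
        vessel_idx[torch.randperm(vessel_idx.numel(), device=vessel_idx.device)[:k]],
        bg_idx[torch.randperm(bg_idx.numel(), device=bg_idx.device)[:k]],
    ])
    z, y = feats[sel], labs[sel]

    sim = z @ z.t() / temperature                                # pairwise similarities
    mask_pos = (y.unsqueeze(0) == y.unsqueeze(1)).float()
    mask_pos.fill_diagonal_(0)                                   # exclude self-pairs

    # Log-softmax over all other sampled pixels, averaged over positive pairs.
    logits = sim - torch.eye(sim.size(0), device=sim.device) * 1e9
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss = -(mask_pos * log_prob).sum(1) / mask_pos.sum(1).clamp(min=1)
    return loss.mean()
```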
53. Khandouzi A, Ariafar A, Mashayekhpour Z, Pazira M, Baleghi Y. Retinal Vessel Segmentation, a Review of Classic and Deep Methods. Ann Biomed Eng 2022; 50:1292-1314. [PMID: 36008569] [DOI: 10.1007/s10439-022-03058-0]
Abstract
Retinal illnesses such as diabetic retinopathy (DR) are among the main causes of vision loss. In the early recognition of eye diseases, the segmentation of blood vessels in retinal images plays an important role. Different symptoms of ocular diseases can be identified from the geometric features of the ocular arteries. However, due to the complex construction of the blood vessels and their varying thicknesses, segmenting the retinal image is a challenging task. A number of algorithms have been proposed to help detect retinal diseases. This paper presents an overview of papers from 2016 to 2022 that discuss machine learning and deep learning methods for automatic vessel segmentation. The methods are divided into two groups: deep learning-based methods and classic methods. The algorithms, classifiers, pre-processing steps, and specific techniques of each group are described comprehensively. The performances of recent works are compared in inclusive tables, based on the accuracy achieved on different datasets. A survey of the most popular datasets, such as DRIVE, STARE, HRF, and CHASE_DB1, is also given in this paper. Finally, a list of findings from this review is presented in the conclusion section.
Affiliation(s)
- Ali Khandouzi: Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Ali Ariafar: Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Zahra Mashayekhpour: Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Milad Pazira: Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Yasser Baleghi: Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
54. Su Y, Cheng J, Cao G, Liu H. How to design a deep neural network for retinal vessel segmentation: an empirical study. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103761]
55. Sun K, He M, He Z, Liu H, Pi X. EfficientNet embedded with spatial attention for recognition of multi-label fundus disease from color fundus photographs. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103768]
56. Lyu X, Cheng L, Zhang S. The RETA Benchmark for Retinal Vascular Tree Analysis. Sci Data 2022; 9:397. [PMID: 35817778] [PMCID: PMC9273761] [DOI: 10.1038/s41597-022-01507-y]
Abstract
Topological and geometrical analysis of retinal blood vessels could be a cost-effective way to detect various common diseases. Automated vessel segmentation and vascular tree analysis models require powerful generalization capability in clinical applications. In this work, we constructed a novel benchmark, RETA, with 81 labelled vessel masks, aiming to facilitate retinal vessel analysis. A semi-automated coarse-to-fine workflow was proposed for the vessel annotation task. During database construction, we strived to control inter-annotator and intra-annotator variability by means of multi-stage annotation and label disambiguation on self-developed dedicated software. In addition to binary vessel masks, we obtained other types of annotations, including artery/vein masks, vascular skeletons, bifurcations, trees, and abnormalities. Subjective and objective quality validation of the annotated vessel masks demonstrated significantly improved quality over existing open datasets. Our annotation software is also made publicly available for pixel-level vessel visualization. Researchers can develop vessel segmentation algorithms and evaluate segmentation performance using RETA. Moreover, it might promote the study of cross-modality tubular structure segmentation and analysis.
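RETA ships skeletons and bifurcations alongside the binary masks; as a rough, generic illustration of how such annotations can be derived from a binary vessel mask (not RETA's actual annotation pipeline), the sketch below uses scikit-image skeletonization and a neighbour count to flag candidate bifurcation points.

```python
import numpy as np
from skimage.morphology import skeletonize
from scipy.ndimage import convolve

def skeleton_and_bifurcations(vessel_mask):
    """Derive a vascular skeleton and candidate bifurcation points from a binary vessel mask."""
    skel = skeletonize(vessel_mask.astype(bool))
    # Count 8-connected skeleton neighbours of every skeleton pixel.
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(skel.astype(np.uint8), kernel, mode="constant")
    # A skeleton pixel with 3 or more skeleton neighbours is a candidate bifurcation.
    bifurcations = skel & (neighbours >= 3)
    return skel, np.argwhere(bifurcations)
```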
Affiliation(s)
- Xingzheng Lyu: College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Li Cheng: Department of Electrical and Computer Engineering, University of Alberta, Edmonton, T6G 1H9, Canada
- Sanyuan Zhang: College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
57. Zhang H, Zhong X, Li Z, Chen Y, Zhu Z, Lv J, Li C, Zhou Y, Li G. TiM-Net: Transformer in M-Net for Retinal Vessel Segmentation. J Healthc Eng 2022; 2022:9016401. [PMID: 35859930] [PMCID: PMC9293566] [DOI: 10.1155/2022/9016401]
Abstract
The retinal image is a crucial window for the clinical observation of cardiovascular, cerebrovascular, and other correlated diseases. Retinal vessel segmentation is of great benefit to clinical diagnosis. Recently, the convolutional neural network (CNN) has become a dominant method in the retinal vessel segmentation field, especially U-shaped CNN models. However, the conventional encoder in CNNs is vulnerable to noisy interference, and the long-range relationships in fundus images have not been fully utilized. In this paper, we propose a novel model called Transformer in M-Net (TiM-Net), based on M-Net, diverse attention mechanisms, and weighted side-output layers, to perform retinal vessel segmentation effectively. First, to alleviate the effects of noise, a dual-attention mechanism based on the channel and spatial dimensions is designed. Then the self-attention mechanism of the Transformer is introduced into the skip connections to re-encode features and model long-range relationships explicitly. Finally, a weighted SideOut layer is proposed for better utilization of the features from each side layer. Extensive experiments are conducted on three public datasets to show the effectiveness and robustness of our TiM-Net compared with state-of-the-art baselines. Both quantitative and qualitative results prove its clinical practicality. Moreover, variants of TiM-Net also achieve competitive performance, demonstrating its scalability and generalization ability. The code of our model is available at https://github.com/ZX-ECJTU/TiM-Net.
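The abstract describes re-encoding skip-connection features with self-attention but gives no code; the following PyTorch sketch shows one generic way to wrap a (low-resolution) encoder feature map in multi-head self-attention before it is passed across the skip connection. The module name, head count, and residual formulation are illustrative assumptions, not TiM-Net's exact design.

```python
import torch
import torch.nn as nn

class SelfAttentionSkip(nn.Module):
    """Re-encode an encoder feature map with self-attention before the skip connection.

    Intended for a low-resolution feature map: the spatial grid is flattened into
    tokens, multi-head self-attention is applied, and the result is reshaped back.
    """
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
        tokens = self.norm(tokens)
        out, _ = self.attn(tokens, tokens, tokens)
        out = out.transpose(1, 2).reshape(b, c, h, w)
        return x + out                         # residual connection keeps local detail
```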
Affiliation(s)
- Hongbin Zhang: School of Software, East China Jiaotong University, Nanchang, China
- Xiang Zhong: School of Software, East China Jiaotong University, Nanchang, China
- Zhijie Li: School of Software, East China Jiaotong University, Nanchang, China
- Yanan Chen: School of International, East China Jiaotong University, Nanchang, China
- Zhiliang Zhu: School of Software, East China Jiaotong University, Nanchang, China
- Jingqin Lv: School of Software, East China Jiaotong University, Nanchang, China
- Chuanxiu Li: School of Information Engineering, East China Jiaotong University, Nanchang, China
- Ying Zhou: Medical School, Nanchang University, Nanchang, China
- Guangli Li: School of Information Engineering, East China Jiaotong University, Nanchang, China
58. Zhao C, Tang H, McGonigle D, He Z, Zhang C, Wang YP, Deng HW, Bober R, Zhou W. Development of an approach to extracting coronary arteries and detecting stenosis in invasive coronary angiograms. J Med Imaging (Bellingham) 2022; 9:044002. [PMID: 35875389] [PMCID: PMC9295705] [DOI: 10.1117/1.jmi.9.4.044002]
Abstract
Purpose: In stable coronary artery disease (CAD), reduction in mortality and/or myocardial infarction with revascularization over medical therapy has not been reliably achieved. Coronary arteries are usually extracted to perform stenosis detection. As such, developing accurate segmentation of vascular structures and quantification of coronary arterial stenosis in invasive coronary angiograms (ICA) is necessary. Approach: A multi-input and multiscale (MIMS) U-Net with a two-stage recurrent training strategy was proposed for the automatic vessel segmentation. The proposed model generated a refined prediction map with the following two training stages: (i) stage I coarsely segmented the major coronary arteries from preprocessed single-channel ICAs and generated the probability map of arteries; and (ii) during the stage II, a three-channel image consisting of the original preprocessed image, a generated probability map, and an edge-enhanced image generated from the preprocessed image was fed to the proposed MIMS U-Net to produce the final segmentation result. After segmentation, an arterial stenosis detection algorithm was developed to extract vascular centerlines and calculate arterial diameters to evaluate stenotic level. Results: Experimental results demonstrated that the proposed method achieved an average Dice similarity coefficient of 0.8329, an average sensitivity of 0.8281, and an average specificity of 0.9979 in our dataset with 294 ICAs obtained from 73 patients. Moreover, our stenosis detection algorithm achieved a true positive rate of 0.6668 and a positive predictive value of 0.7043. Conclusions: Our proposed approach has great promise for clinical use and could help physicians improve diagnosis and therapeutic decisions for CAD.
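The stenosis-detection step described above (centerlines plus arterial diameters) can be prototyped with standard morphology tools; the sketch below, which is not the authors' implementation, estimates per-centerline diameters from a binary artery mask via a distance transform and flags positions whose diameter drops well below the local reference. The window size and drop ratio are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def diameter_profile(vessel_mask):
    """Per-centerline-pixel vessel diameters from a binary artery mask.
    Note: values are returned in raster order; a full pipeline would trace each
    branch so that the profile is ordered along the vessel."""
    mask = vessel_mask.astype(bool)
    centerline = skeletonize(mask)
    radius = distance_transform_edt(mask)        # distance to background ~ local radius
    return centerline, 2.0 * radius[centerline]

def flag_stenoses(ordered_diameters, window=15, drop_ratio=0.5):
    """Given a diameter profile ordered along one branch, flag positions whose
    diameter falls below drop_ratio of the local (windowed median) reference."""
    d = np.asarray(ordered_diameters, dtype=float)
    ref = np.array([np.median(d[max(0, i - window):i + window + 1])
                    for i in range(len(d))])
    return np.where(d < drop_ratio * ref)[0]
```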
Affiliation(s)
- Chen Zhao: Michigan Technological University, Department of Applied Computing, Houghton, Michigan, United States
- Haipeng Tang: University of Southern Mississippi, School of Computing Sciences and Computer Engineering, Hattiesburg, Mississippi, United States
- Daniel McGonigle: University of Southern Mississippi, School of Computing Sciences and Computer Engineering, Hattiesburg, Mississippi, United States
- Zhuo He: Michigan Technological University, Department of Applied Computing, Houghton, Michigan, United States
- Chaoyang Zhang: University of Southern Mississippi, School of Computing Sciences and Computer Engineering, Hattiesburg, Mississippi, United States
- Yu-Ping Wang: Tulane University School of Public Health and Tropical Medicine, Tulane Center of Bioinformatics and Genomics, New Orleans, Louisiana, United States
- Hong-Wen Deng: Tulane University School of Public Health and Tropical Medicine, Tulane Center of Bioinformatics and Genomics, New Orleans, Louisiana, United States
- Robert Bober: Ochsner Medical Center, Department of Cardiology, New Orleans, Louisiana, United States
- Weihua Zhou: Michigan Technological University, Department of Applied Computing, Houghton, Michigan, United States; Michigan Technological University, Institute of Computing and Cybersystems, and Health Research Institute, Center of Biocomputing and Digital Health, Houghton, Michigan, United States
59. Multifilters-Based Unsupervised Method for Retinal Blood Vessel Segmentation. Appl Sci (Basel) 2022. [DOI: 10.3390/app12136393]
Abstract
Fundus imaging is one of the crucial methods that help ophthalmologists diagnose various eye diseases in modern medicine. An accurate vessel segmentation method can be a convenient tool to foresee and analyze fatal diseases, including hypertension and diabetes, which damage the appearance of the retinal vessels. This work suggests an unsupervised approach for vessel segmentation from retinal images. The proposed method includes multiple steps. Firstly, the green channel is extracted from the colored retinal image and preprocessed using Contrast Limited Histogram Equalization as well as Fuzzy Histogram Based Equalization for contrast enhancement. To remove geometrical objects (macula, optic disc) and noise, top-hat morphological operations are used. A matched filter and a Gabor wavelet filter are applied to the resulting enhanced image, and the outputs of both are added to extract vessel pixels. The resulting image, with the blood vessels now noticeable, is binarized using the human visual system (HVS). A final segmented blood vessel image is obtained by applying post-processing. The suggested method was assessed on two public datasets (DRIVE and STARE) and showed comparable results with regard to sensitivity, specificity, and accuracy. The results achieved for sensitivity, specificity, and accuracy on the DRIVE database are 0.7271, 0.9798, and 0.9573, and on the STARE database they are 0.7164, 0.9760, and 0.9560, respectively, in less than 3.17 s on average per image.
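A pipeline of this kind can be assembled from standard OpenCV primitives; the sketch below covers the green-channel extraction, contrast-limited equalization, top-hat cleanup, and an oriented Gabor filter bank, with Otsu thresholding standing in for the HVS-based binarization and the matched-filter branch omitted for brevity. All kernel sizes and filter parameters are illustrative choices, not the paper's settings.

```python
import cv2
import numpy as np

def enhance_vessels(rgb_image):
    """Unsupervised-style vessel enhancement sketch: green channel, contrast-limited
    equalization, top-hat cleanup, then a small bank of oriented Gabor filters."""
    green = rgb_image[:, :, 1]

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)

    # Invert so vessels become bright; the top-hat then keeps thin bright structures
    # (vessels) and suppresses the large-scale background and optic disc region.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    inverted = cv2.bitwise_not(enhanced)
    tophat = cv2.morphologyEx(inverted, cv2.MORPH_TOPHAT, kernel)

    # Oriented Gabor filter bank; take the maximum response over orientations.
    response = np.zeros_like(tophat, dtype=np.float32)
    for theta in np.arange(0, np.pi, np.pi / 12):
        gabor = cv2.getGaborKernel((15, 15), sigma=3.0, theta=theta,
                                   lambd=8.0, gamma=0.5, psi=0)
        filtered = cv2.filter2D(tophat.astype(np.float32), cv2.CV_32F, gabor)
        response = np.maximum(response, filtered)

    # Simple global Otsu threshold as a stand-in for the HVS-based binarization step.
    norm = cv2.normalize(response, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(norm, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```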
60. Mishra S, Zhang Y, Chen DZ, Hu XS. Data-Driven Deep Supervision for Medical Image Segmentation. IEEE Trans Med Imaging 2022; 41:1560-1574. [PMID: 35030076] [DOI: 10.1109/tmi.2022.3143371]
Abstract
Medical image segmentation plays a vital role in disease diagnosis and analysis. However, data-dependent difficulties such as low image contrast, noisy background, and complicated objects of interest render the segmentation problem challenging. These difficulties diminish dense prediction and make it tough for known approaches to explore data-specific attributes for robust feature extraction. In this paper, we study medical image segmentation by focusing on robust data-specific feature extraction to achieve improved dense prediction. We propose a new deep convolutional neural network (CNN), which exploits specific attributes of input datasets to utilize deep supervision for enhanced feature extraction. In particular, we strategically locate and deploy auxiliary supervision, by matching the object perceptive field (OPF) (which we define and compute) with the layer-wise effective receptive fields (LERF) of the network. This helps the model pay close attention to some distinct input data dependent features, which the network might otherwise 'ignore' during training. Further, to achieve better target localization and refined dense prediction, we propose the densely decoded networks (DDN), by selectively introducing additional network connections (the 'crutch' connections). Using five public datasets (two retinal vessel, melanoma, optic disc/cup, and spleen segmentation) and two in-house datasets (lymph node and fungus segmentation), we verify the effectiveness of our proposed approach in 2D and 3D segmentation.
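The paper's contribution lies in where auxiliary supervision is placed (matching the object perceptive field to layer-wise effective receptive fields), which is not reproduced here; the sketch below only shows the generic mechanics of attaching an auxiliary head to an intermediate feature map and adding its loss to the main objective. The class names and the 0.4 weight are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class AuxiliaryHead(nn.Module):
    """1x1 conv head that turns an intermediate feature map into a full-size
    segmentation logit map for auxiliary (deep) supervision."""
    def __init__(self, in_channels, num_classes=1):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feat, out_size):
        logits = self.proj(feat)
        return F.interpolate(logits, size=out_size, mode="bilinear", align_corners=False)

def deeply_supervised_loss(main_logits, aux_logits_list, target, aux_weight=0.4):
    """Main BCE loss plus weighted auxiliary losses from intermediate layers."""
    loss = F.binary_cross_entropy_with_logits(main_logits, target)
    for aux_logits in aux_logits_list:
        loss = loss + aux_weight * F.binary_cross_entropy_with_logits(aux_logits, target)
    return loss
```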
61. Yang D, Zhao H, Han T. Learning feature-rich integrated comprehensive context networks for automated fundus retinal vessel analysis. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.03.061]
62. Liang L, Feng J, Zhou L, Yin J, Sheng X. U-shaped Retinal Vessel Segmentation Based on Adaptive Aggregation of Feature Information. Interdiscip Sci 2022; 14:623-637. [PMID: 35486313] [DOI: 10.1007/s12539-022-00519-x]
Abstract
Detection and analysis of retinal blood vessels contribute to the clinical diagnosis of many ophthalmic diseases. In this paper, aiming to achieve more accurate segmentation of retinal vessels and to enhance the ability of the algorithm to identify microvessels, we propose a U-shaped network based on adaptive aggregation of feature information. The introduced feature selection module strengthens feature transmission and selectively emphasizes feature information. To effectively capture the characteristics of vessels at different scales and to generate richer and denser context information, DenseASPP is embedded at the bottom of the network. Meanwhile, we propose an adaptive aggregation module that aggregates the semantic information in each layer of the encoder and transmits it to subsequent layers, which is beneficial to the spatial reconstruction of retinal vessels and improves the robustness of the model in segmenting blood vessels. A joint loss function is also introduced to facilitate network training. The proposed network is evaluated on three public datasets. The sensitivity, accuracy, and area under the curve (AUC) are 83.48%/83.16%/85.86%, 95.67%/96.67%/96.52%, and 98.11%/98.69%/98.60% on DRIVE, STARE, and CHASE_DB1, respectively.
Affiliation(s)
- Liming Liang: School of Electrical Engineering and Automation, Jiangxi University of Science and Technology, Ganzhou, 341000, Jiangxi, China
- Jun Feng: School of Electrical Engineering and Automation, Jiangxi University of Science and Technology, Ganzhou, 341000, Jiangxi, China
- Longsong Zhou: School of Electrical Engineering and Automation, Jiangxi University of Science and Technology, Ganzhou, 341000, Jiangxi, China
- Jiang Yin: School of Electrical Engineering and Automation, Jiangxi University of Science and Technology, Ganzhou, 341000, Jiangxi, China
- Xiaoqi Sheng: School of Computer Science and Engineering, South China University of Technology, Guangzhou, 511400, Guangdong, China
63. Tao X, Dang H, Zhou X, Xu X, Xiong D. A Lightweight Network for Accurate Coronary Artery Segmentation Using X-Ray Angiograms. Front Public Health 2022; 10:892418. [PMID: 35692314] [PMCID: PMC9174536] [DOI: 10.3389/fpubh.2022.892418]
Abstract
An accurate and automated segmentation of coronary arteries in X-ray angiograms is essential for cardiologists to diagnose coronary artery disease in clinics. Existing deep learning-based coronary artery segmentation models focus on using complex networks to improve segmentation accuracy while ignoring the computational cost; running such segmentation networks requires a high-performance device with a powerful GPU and high-bandwidth memory. To address this issue, in this study a lightweight deep learning network is developed for a better balance between computational cost and segmentation accuracy. We have made two efforts in designing the network. On the one hand, we adopt bottleneck residual blocks to replace the internal components in the encoder and decoder of the traditional U-Net to make the network more lightweight. On the other hand, we embed two attention modules to model long-range dependencies in the spatial and channel dimensions for segmentation accuracy. In addition, we employ top-hat transforms and contrast-limited adaptive histogram equalization (CLAHE) as the pre-processing strategy to enhance the coronary arteries and further improve the accuracy. Experimental evaluations conducted on the coronary angiogram dataset show that the proposed lightweight network performs well for accurate coronary artery segmentation, achieving a sensitivity, specificity, accuracy, and area under the curve (AUC) of 0.8770, 0.9789, 0.9729, and 0.9910, respectively. It is noteworthy that the proposed network contains only 0.75 M parameters and achieves the best performance in comparative experiments with popular segmentation networks (such as U-Net, with 31.04 M parameters). The results demonstrate that our network can achieve better performance with an extremely low number of parameters. Furthermore, generalization experiments indicate that our network can accurately segment coronary angiograms from other coronary angiogram databases, demonstrating its strong generalization and robustness.
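As a rough illustration of the "bottleneck residual blocks replacing the internal U-Net components" idea (not the authors' exact block), the following PyTorch module uses a 1x1-reduce / 3x3 / 1x1-expand body with a shortcut; the reduction factor of 4 is an assumption.

```python
import torch.nn as nn

class BottleneckResidualBlock(nn.Module):
    """Parameter-light replacement for a plain double-conv U-Net block:
    1x1 reduce -> 3x3 -> 1x1 expand, with an identity (or 1x1) shortcut."""
    def __init__(self, in_channels, out_channels, reduction=4):
        super().__init__()
        mid = max(out_channels // reduction, 8)
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )
        self.shortcut = (nn.Identity() if in_channels == out_channels
                         else nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.shortcut(x))
```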
Affiliation(s)
- Xingxiang Tao: School of Modern Posts/Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Hao Dang: School of Information Technology, Henan University of Chinese Medicine, Zhengzhou, China
- Xiaoguang Zhou: School of Modern Posts/Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Xiangdong Xu: Department of Cardiology, Jiading District Central Hospital Affiliated Shanghai University of Medical and Health Sciences, Shanghai, China
- Danqun Xiong: Department of Cardiology, Jiading District Central Hospital Affiliated Shanghai University of Medical and Health Sciences, Shanghai, China
64. Segmenting Retinal Vessels Using a Shallow Segmentation Network to Aid Ophthalmic Analysis. Mathematics 2022. [DOI: 10.3390/math10091536]
Abstract
Retinal blood vessels possess a complex structure in the retina and are considered an important biomarker for several retinal diseases. Ophthalmic diseases result in specific changes in the retinal vasculature; for example, diabetic retinopathy causes the retinal vessels to swell and, depending upon disease severity, fluid or blood can leak. Similarly, hypertensive retinopathy causes a change in the retinal vasculature due to the thinning of these vessels. Central retinal vein occlusion (CRVO) is a condition in which the main vein that drains blood from the retina closes completely or partially, with symptoms of blurred vision and similar eye problems. Considering the importance of the retinal vasculature as an ophthalmic disease biomarker, ophthalmologists manually analyze retinal vascular changes. Manual analysis is a tedious task that requires constant observation to detect changes. Deep learning-based methods can ease this problem by learning from the annotations provided by an expert ophthalmologist. However, current deep learning-based methods are relatively inaccurate, computationally expensive, complex, and require image preprocessing for final detection. Moreover, existing methods are unable to provide a high true positive rate (sensitivity), which indicates how many of the vessel pixels the model predicts correctly. Therefore, this study presents the so-called vessel segmentation ultra-lite network (VSUL-Net) to accurately extract the retinal vasculature from the background. The proposed VSUL-Net comprises only 0.37 million trainable parameters and uses the original image as input without preprocessing. VSUL-Net uses a retention block that specifically maintains a larger feature map size and transfers low-level spatial information. This retention block results in better sensitivity without using expensive preprocessing schemes. The proposed method was tested on three publicly available datasets for retinal vasculature segmentation: digital retinal images for vessel extraction (DRIVE), structured analysis of the retina (STARE), and the Child Heart and Health Study in England database (CHASE-DB1). The experimental results demonstrated that VSUL-Net provides robust segmentation of the retinal vasculature, with sensitivity (Sen), specificity (Spe), accuracy (Acc), and area under the curve (AUC) values of 83.80%, 98.21%, 96.95%, and 98.54%, respectively, for DRIVE; 81.73%, 98.35%, 97.17%, and 98.69%, respectively, for CHASE-DB1; and 86.64%, 98.13%, 97.27%, and 99.01%, respectively, for STARE. The proposed method provides an accurate segmentation mask for deep ophthalmic analysis.
65. Hussain S, Guo F, Li W, Shen Z. DilUnet: A U-net based architecture for blood vessels segmentation. Comput Methods Programs Biomed 2022; 218:106732. [PMID: 35279601] [DOI: 10.1016/j.cmpb.2022.106732]
Abstract
BACKGROUND AND OBJECTIVE Retinal image segmentation can help clinicians detect pathological disorders by studying changes in retinal blood vessels. This early detection can help prevent blindness and many other vision impairments. So far, several supervised and unsupervised methods have been proposed for the task of automatic blood vessel segmentation. However, the sensitivity and robustness of these methods can be improved by correctly classifying more vessel pixels. METHOD We proposed an automatic retinal blood vessel segmentation method based on the U-net architecture. This end-to-end framework utilizes preprocessing and a data augmentation pipeline for training. The architecture utilizes multiscale input and multioutput modules with improved skip connections and the correct use of dilated convolutions for effective feature extraction. In the multiscale input, the input image is scaled down and concatenated with the output of convolutional blocks at different points in the encoder path to ensure the feature transfer of the original image. The multioutput module obtains upsampled outputs from each decoder block that are combined to obtain the final output. Skip paths connect each encoder block with the corresponding decoder block, and the whole architecture utilizes different dilation rates to improve the overall feature extraction. RESULTS The proposed method achieved an accuracy of 0.9680, 0.9694, and 0.9701; a sensitivity of 0.8837, 0.8263, and 0.8713; and an Intersection over Union (IOU) of 0.8698, 0.7951, and 0.8184 on three publicly available datasets: DRIVE, STARE, and CHASE, respectively. An ablation study is performed to show the contribution of each proposed module and technique. CONCLUSION The evaluation metrics revealed that the performance of the proposed method is higher than that of the original U-net and other U-net-based architectures, as well as many other state-of-the-art segmentation techniques, and that the proposed method is robust to noise.
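To make the "different dilation rates" idea concrete, the sketch below shows a generic multi-branch block that mixes several dilation rates and fuses the branches with a 1x1 convolution; the rates (1, 2, 4) and channel split are illustrative, not DilUnet's actual configuration.

```python
import torch
import torch.nn as nn

class DilatedConvBlock(nn.Module):
    """Encoder block that mixes several dilation rates to widen the receptive field
    without extra downsampling, then fuses the branches with a 1x1 convolution."""
    def __init__(self, in_channels, out_channels, rates=(1, 2, 4)):
        super().__init__()
        branch_channels = out_channels // len(rates)
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, branch_channels, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            ) for r in rates
        ])
        self.fuse = nn.Conv2d(branch_channels * len(rates), out_channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```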
Affiliation(s)
- Snawar Hussain: School of Automation, Central South University, Changsha, Hunan 410083, China
- Fan Guo: School of Automation, Central South University, Changsha, Hunan 410083, China
- Weiqing Li: School of Automation, Central South University, Changsha, Hunan 410083, China
- Ziqi Shen: School of Automation, Central South University, Changsha, Hunan 410083, China
66. State-of-the-art retinal vessel segmentation with minimalistic models. Sci Rep 2022; 12:6174. [PMID: 35418576] [PMCID: PMC9007957] [DOI: 10.1038/s41598-022-09675-y]
Abstract
The segmentation of retinal vasculature from eye fundus images is a fundamental task in retinal image analysis. Over recent years, increasingly complex approaches based on sophisticated Convolutional Neural Network architectures have been pushing performance on well-established benchmark datasets. In this paper, we take a step back and analyze the real need of such complexity. We first compile and review the performance of 20 different techniques on some popular databases, and we demonstrate that a minimalistic version of a standard U-Net with several orders of magnitude less parameters, carefully trained and rigorously evaluated, closely approximates the performance of current best techniques. We then show that a cascaded extension (W-Net) reaches outstanding performance on several popular datasets, still using orders of magnitude less learnable weights than any previously published work. Furthermore, we provide the most comprehensive cross-dataset performance analysis to date, involving up to 10 different databases. Our analysis demonstrates that the retinal vessel segmentation is far from solved when considering test images that differ substantially from the training data, and that this task represents an ideal scenario for the exploration of domain adaptation techniques. In this context, we experiment with a simple self-labeling strategy that enables moderate enhancement of cross-dataset performance, indicating that there is still much room for improvement in this area. Finally, we test our approach on Artery/Vein and vessel segmentation from OCTA imaging problems, where we again achieve results well-aligned with the state-of-the-art, at a fraction of the model complexity available in recent literature. Code to reproduce the results in this paper is released.
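In the spirit of the minimalistic baseline described above (though not the authors' released code), the sketch below is a three-level U-Net with deliberately narrow channels whose total parameter count is only a few tens of thousands of weights; the channel widths are illustrative.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1, bias=False), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1, bias=False), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Deliberately small U-Net: three levels, narrow channels, standard skip connections."""
    def __init__(self, in_channels=3, num_classes=1, widths=(8, 16, 32)):
        super().__init__()
        w1, w2, w3 = widths
        self.enc1, self.enc2 = conv_block(in_channels, w1), conv_block(w1, w2)
        self.bottleneck = conv_block(w2, w3)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(w3, w2, 2, stride=2)
        self.dec2 = conv_block(w2 + w2, w2)
        self.up1 = nn.ConvTranspose2d(w2, w1, 2, stride=2)
        self.dec1 = conv_block(w1 + w1, w1)
        self.head = nn.Conv2d(w1, num_classes, 1)

    def forward(self, x):                       # expects H, W divisible by 4
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Parameter count sanity check (expect only a few tens of thousands of weights):
# print(sum(p.numel() for p in MiniUNet().parameters()))
```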
67. Shen X, Xu J, Jia H, Fan P, Dong F, Yu B, Ren S. Self-attentional microvessel segmentation via squeeze-excitation transformer Unet. Comput Med Imaging Graph 2022; 97:102055. [DOI: 10.1016/j.compmedimag.2022.102055]
68. Xu Y, Fan Y. Dual-channel asymmetric convolutional neural network for an efficient retinal blood vessel segmentation in eye fundus images. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.05.003]
69. Lin G, Bai H, Zhao J, Yun Z, Chen Y, Pang S, Feng Q. Improving sensitivity and connectivity of retinal vessel segmentation via error discrimination network. Med Phys 2022; 49:4494-4507. [PMID: 35338781] [DOI: 10.1002/mp.15627]
Abstract
PURPOSE Automated retinal vessel segmentation is crucial to the early diagnosis and treatment of ophthalmological diseases. Many deep learning-based methods have shown exceptional success in this task. However, current approaches are still inadequate on challenging vessels (e.g., thin vessels) and rarely focus on the connectivity of vessel segmentation. METHODS We propose using an error discrimination network (D) to distinguish whether the vessel pixel predictions of the segmentation network (S) are correct, and S is trained so that D finds fewer erroneous predictions. Our method is similar to, but not the same as, a generative adversarial network (GAN). Three types of vessel samples and corresponding error masks are used to train D: (1) the vessel ground truth; (2) vessels segmented by S; and (3) artificial thin-vessel error samples that further improve the sensitivity of D to wrongly segmented small vessels. As an auxiliary loss function of S, D strengthens the supervision of difficult vessels. Optionally, the errors predicted by D can be used to correct the segmentation result of S. RESULTS Compared with state-of-the-art methods, our method achieves the highest scores in sensitivity (86.19%, 86.26%, and 86.53%) and G-Mean (91.94%, 91.30%, and 92.76%) on three public datasets, namely STARE, DRIVE, and HRF. Our method also maintains a competitive level in other metrics. On the STARE dataset, the F1-score and AUC of our method rank second and first, respectively, reaching 84.51% and 98.97%. The top scores on the three topology-relevant metrics (Conn, Inf, and Cor) demonstrate that the vessels extracted by our method have excellent connectivity. We also validate the effectiveness of error-discrimination supervision and artificial error sample training through ablation experiments. CONCLUSIONS The proposed method provides an accurate and robust solution for difficult vessel segmentation.
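The abstract outlines the training signal but not the loop itself; the sketch below is one plausible way to wire such a setup in PyTorch, where D learns to predict an error mask for S's current output and S receives an extra penalty wherever D flags errors. The network interfaces, the concatenated input to D, and the 0.5 weight are assumptions, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def train_step(seg_net, err_net, opt_s, opt_d, image, gt):
    """One joint update in the spirit of an error-discrimination setup."""
    # --- update D: predict the error map of S's current (detached) output ---
    with torch.no_grad():
        pred = torch.sigmoid(seg_net(image))
    err_target = (pred.round() != gt).float()            # 1 where S is wrong
    d_out = err_net(torch.cat([image, pred], dim=1))
    loss_d = F.binary_cross_entropy_with_logits(d_out, err_target)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # --- update S: segmentation loss + penalty on errors flagged by D ---
    logits = seg_net(image)
    pred = torch.sigmoid(logits)
    flagged = torch.sigmoid(err_net(torch.cat([image, pred], dim=1)))
    loss_s = F.binary_cross_entropy_with_logits(logits, gt) + 0.5 * flagged.mean()
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()
    return loss_s.item(), loss_d.item()
```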
Affiliation(s)
- Guoye Lin: Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Hanhua Bai: Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Jie Zhao: Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China; School of Medical Information Engineering, Guangdong Pharmaceutical University, Guangzhou, Guangdong, China
- Zhaoqiang Yun: Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Yangfan Chen: Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Shumao Pang: Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Qianjin Feng: Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
70. Shi T, Boutry N, Xu Y, Geraud T. Local Intensity Order Transformation for Robust Curvilinear Object Segmentation. IEEE Trans Image Process 2022; 31:2557-2569. [PMID: 35275816] [DOI: 10.1109/tip.2022.3155954]
Abstract
Segmentation of curvilinear structures is important in many applications, such as retinal blood vessel segmentation for early detection of vessel diseases and pavement crack segmentation for road condition evaluation and maintenance. Currently, deep learning-based methods have achieved impressive performance on these tasks. Yet most of them mainly focus on finding powerful deep architectures while ignoring the inherent curvilinear-structure feature (e.g., the curvilinear structure is darker than its context) that would give a more robust representation. In consequence, performance usually drops considerably in cross-dataset evaluation, which poses great challenges in practice. In this paper, we aim to improve generalizability by introducing a novel local intensity order transformation (LIOT). Specifically, we transform a gray-scale image into a contrast-invariant four-channel image based on the intensity order between each pixel and its nearby pixels along the four horizontal and vertical directions. This results in a representation that preserves the inherent characteristic of the curvilinear structure while being robust to contrast changes. Cross-dataset evaluation on three retinal blood vessel segmentation datasets demonstrates that LIOT improves the generalizability of some state-of-the-art methods. Additionally, cross-dataset evaluation between retinal blood vessel segmentation and pavement crack segmentation shows that LIOT is able to preserve the inherent characteristic of curvilinear structures across large appearance gaps. An implementation of the proposed method is available at https://github.com/TY-Shi/LIOT.
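Since an official implementation is linked above, the NumPy sketch below is only meant to make the transformation concrete: for each pixel it counts how many of the nearby pixels along each of the four axis directions are darker, giving a contrast-invariant four-channel encoding. The per-direction neighbourhood length of 8 pixels is an assumption and may differ from the released code.

```python
import numpy as np

def liot(gray, length=8):
    """Local intensity order transformation (sketch): per-direction counts of darker
    neighbours, yielding a 4-channel image invariant to monotonic contrast changes."""
    gray = gray.astype(np.float32)
    h, w = gray.shape
    padded = np.pad(gray, length, mode="reflect")
    channels = []
    for dy, dx in [(0, 1), (0, -1), (1, 0), (-1, 0)]:   # right, left, down, up
        count = np.zeros((h, w), dtype=np.uint8)
        for step in range(1, length + 1):
            shifted = padded[length + dy * step: length + dy * step + h,
                             length + dx * step: length + dx * step + w]
            count += (shifted < gray).astype(np.uint8)
        channels.append(count)
    return np.stack(channels, axis=-1)                  # (H, W, 4), values in [0, length]
```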
71. Li X, Ding J, Tang J, Guo F. Res2Unet: A multi-scale channel attention network for retinal vessel segmentation. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07086-8]
72. Xu J, Shen J, Wan C, Jiang Q, Yan Z, Yang W. A Few-Shot Learning-Based Retinal Vessel Segmentation Method for Assisting in the Central Serous Chorioretinopathy Laser Surgery. Front Med (Lausanne) 2022; 9:821565. [PMID: 35308538] [PMCID: PMC8927682] [DOI: 10.3389/fmed.2022.821565]
Abstract
BACKGROUND Locating the retinal vessels is an important prerequisite for central serous chorioretinopathy (CSC) laser surgery: it not only assists the ophthalmologist in marking the location of the leakage point (LP) on the color fundus image, but also helps avoid damage to vessel tissue by the laser spot, as well as the low surgical efficiency caused by the absorption of laser energy by retinal vessels. To acquire excellent intra- and cross-domain adaptability, existing deep learning (DL)-based vessel segmentation schemes must be driven by large amounts of data, which makes the dense annotation work tedious and costly. METHODS This paper aims to explore a new vessel segmentation method with few samples and annotations to alleviate the above problems. Firstly, a key solution is presented that transforms the vessel segmentation scene into a few-shot learning task, which lays the foundation for vessel segmentation with few samples and annotations. Then, we improve an existing few-shot learning framework as our baseline model to adapt it to the vessel segmentation scenario. Next, the baseline model is upgraded in the following three respects: (1) a multi-scale class prototype extraction technique is designed to obtain more sufficient vessel features, making better use of the information in the support images; (2) the multi-scale vessel features of the query images, inferred from the support-image class prototype information, are gradually fused to provide more effective guidance for the vessel extraction task; and (3) a multi-scale attention module is proposed to promote the consideration of global information in the upgraded model and assist vessel localization. Concurrently, an integrated framework is further conceived to alleviate the low performance of a single model in the cross-domain vessel segmentation scene, boosting the domain adaptability of both the baseline and the upgraded models. RESULTS Extensive experiments showed that the upgraded operations could further improve vessel segmentation performance significantly. Compared with the listed methods, both the baseline and the upgraded models achieved competitive results on three public retinal image datasets (i.e., CHASE_DB, DRIVE, and STARE). In the practical application to private CSC datasets, the integrated scheme partially enhanced the domain adaptability of the two proposed models.
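To make the class-prototype idea concrete, the sketch below shows the standard masked-average-pooling baseline that the upgraded model builds on: vessel and background prototypes are pooled from the support features and each query pixel is labelled by cosine similarity. The single-scale setting, function name, and shapes are simplifying assumptions rather than the paper's exact multi-scale design.

```python
import torch
import torch.nn.functional as F

def prototype_segmentation(support_feat, support_mask, query_feat):
    """Prototype-based few-shot segmentation baseline.

    support_feat: (C, H, W)   features of the support image
    support_mask: (H, W)      binary vessel mask of the support image
    query_feat:   (C, H, W)   features of the query image
    """
    c, h, w = support_feat.shape
    feats = support_feat.reshape(c, -1)                      # (C, H*W)
    mask = support_mask.reshape(1, -1).float()

    # Masked average pooling -> one prototype per class.
    proto_vessel = (feats * mask).sum(1) / mask.sum().clamp(min=1)
    proto_bg = (feats * (1 - mask)).sum(1) / (1 - mask).sum().clamp(min=1)
    protos = F.normalize(torch.stack([proto_bg, proto_vessel]), dim=1)   # (2, C)

    q = F.normalize(query_feat.reshape(c, -1), dim=0)        # (C, H*W), unit columns
    scores = protos @ q                                       # (2, H*W) cosine similarities
    return scores.argmax(0).reshape(h, w)                     # 0 = background, 1 = vessel
```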
Affiliation(s)
- Jianguo Xu: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jianxin Shen: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Cheng Wan: College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Qin Jiang: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Zhipeng Yan: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Weihua Yang: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
73. Fundus Retinal Vessels Image Segmentation Method Based on Improved U-Net. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.03.001]
74. Yan M, Zhou J, Luo C, Xu T, Xing X. Multiscale Joint Optimization Strategy for Retinal Vascular Segmentation. Sensors (Basel) 2022; 22:1258. [PMID: 35162002] [PMCID: PMC8838406] [DOI: 10.3390/s22031258]
Abstract
The accurate segmentation of the retinal vasculature is of great significance for the diagnosis of diseases such as diabetes, hypertension, microaneurysms, and arteriosclerosis. In order to segment deeper and smaller blood vessels and provide more information to doctors, a multi-scale joint optimization strategy for retinal vascular segmentation is presented in this paper. Firstly, the Multi-Scale Retinex (MSR) algorithm is used to correct the uneven illumination of fundus images. Then, a multi-scale Gaussian matched filtering method is used to enhance the contrast of the retinal images. Otsu multi-threshold segmentation, optimized by the Particle Swarm Optimization (PSO) algorithm, is then used to segment the retinal image produced by the multi-scale matched filtering step. Finally, the image is post-processed, including binarization, morphological operations, and edge-contour removal. Test experiments are carried out on the DRIVE and STARE datasets to evaluate the effectiveness and practicability of the proposed method. Compared with other existing methods, the proposed method segments more small blood vessels while ensuring the integrity of the vascular structure, and achieves higher performance. It yields more salient targets, higher contrast, and richer detail and local features. The qualitative and quantitative analysis results show that the presented method is superior to the other advanced methods.
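The multi-scale Gaussian matched filtering step can be prototyped directly; the sketch below builds classic Chaudhuri-style zero-mean Gaussian matched-filter kernels at several widths and orientations and takes the maximum response. The sigma values, kernel length, and number of orientations are illustrative, not the paper's parameters, and the PSO-optimized Otsu thresholding stage is omitted.

```python
import cv2
import numpy as np

def matched_filter_kernel(sigma, length, angle_deg):
    """Single Gaussian matched-filter kernel: a zero-mean (negative) Gaussian
    cross-section of width sigma, extended over `length` pixels along the vessel
    direction and rotated by angle_deg."""
    half = max(int(3 * sigma), length // 2)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float32)
    theta = np.deg2rad(angle_deg)
    u = xs * np.cos(theta) + ys * np.sin(theta)      # across the vessel
    v = -xs * np.sin(theta) + ys * np.cos(theta)     # along the vessel
    support = np.abs(v) <= length / 2
    kernel = -np.exp(-u ** 2 / (2 * sigma ** 2)) * support
    kernel[support] -= kernel[support].mean()        # zero mean suppresses flat background
    return kernel

def multiscale_matched_response(gray, sigmas=(1.0, 1.5, 2.0), length=9, n_angles=12):
    """Maximum matched-filter response over vessel widths and orientations."""
    gray = gray.astype(np.float32)
    best = np.full_like(gray, -np.inf)
    for sigma in sigmas:
        for k in range(n_angles):
            kern = matched_filter_kernel(sigma, length, 180.0 * k / n_angles)
            best = np.maximum(best, cv2.filter2D(gray, cv2.CV_32F, kern))
    return best
```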
Affiliation(s)
- Minghan Yan: College of Electronic Information Engineering, Changchun University, Changchun 130012, China
- Jian Zhou: College of Electronic Information Engineering, Changchun University, Changchun 130012, China
- Cong Luo: College of Electronic Information Engineering, Changchun University, Changchun 130012, China
- Tingfa Xu: School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Xiaoxue Xing: College of Electronic Information Engineering, Changchun University, Changchun 130012, China
75. Wei J, Zhu G, Fan Z, Liu J, Rong Y, Mo J, Li W, Chen X. Genetic U-Net: Automatically Designed Deep Networks for Retinal Vessel Segmentation Using a Genetic Algorithm. IEEE Trans Med Imaging 2022; 41:292-307. [PMID: 34506278] [DOI: 10.1109/tmi.2021.3111679]
Abstract
Recently, many methods based on hand-designed convolutional neural networks (CNNs) have achieved promising results in automatic retinal vessel segmentation. However, these CNNs remain constrained in capturing retinal vessels in complex fundus images. To improve their segmentation performance, these CNNs tend to have many parameters, which may lead to overfitting and high computational complexity. Moreover, the manual design of competitive CNNs is time-consuming and requires extensive empirical knowledge. Herein, a novel automated design method, called Genetic U-Net, is proposed to generate a U-shaped CNN that achieves better retinal vessel segmentation with fewer architecture-based parameters, thereby addressing the above issues. First, we devised a condensed but flexible search space based on a U-shaped encoder-decoder. Then, we used an improved genetic algorithm to identify better-performing architectures in the search space and investigated the possibility of finding a superior network architecture with fewer parameters. The experimental results show that the architecture obtained using the proposed method offers superior performance with less than 1% of the parameters of the original U-Net, and with significantly fewer parameters than other state-of-the-art models. Furthermore, through an in-depth investigation of the experimental results, several effective operations and patterns of networks for generating superior retinal vessel segmentations were identified. The code of this work is available at https://github.com/96jhwei/Genetic-U-Net.
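The improved genetic algorithm itself is not detailed in the abstract; as a generic illustration of how architecture search of this kind is usually wired, the sketch below evolves a list-of-operations genome with tournament selection, one-point crossover, and mutation. The operation set, population size, and the assumption that `fitness(genome)` decodes the architecture, trains it briefly, and returns a validation score are all illustrative stand-ins, not Genetic U-Net's actual encoding.

```python
import random

# Candidate operations per layer (illustrative search space).
OPS = ["conv3", "conv5", "dilated3", "sep_conv3"]

def random_genome(n_layers=8):
    return [random.randrange(len(OPS)) for _ in range(n_layers)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    return [random.randrange(len(OPS)) if random.random() < rate else g for g in genome]

def evolve(fitness, pop_size=20, n_layers=8, generations=30, tournament=3):
    """Plain generational GA: tournament selection, one-point crossover, mutation.
    `fitness(genome)` is the expensive part (build, train, validate) and is supplied
    by the caller."""
    population = [random_genome(n_layers) for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(fitness(g), g) for g in population]
        scored.sort(key=lambda x: x[0], reverse=True)
        elite = [g for _, g in scored[:2]]                   # keep the best two
        def pick():
            return max(random.sample(scored, tournament), key=lambda x: x[0])[1]
        population = elite + [mutate(crossover(pick(), pick()))
                              for _ in range(pop_size - len(elite))]
    return max(population, key=fitness)
```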
76. Li X, Bala R, Monga V. Robust Deep 3D Blood Vessel Segmentation Using Structural Priors. IEEE Trans Image Process 2022; 31:1271-1284. [PMID: 34990361] [DOI: 10.1109/tip.2021.3139241]
Abstract
Deep learning has enabled significant improvements in the accuracy of 3D blood vessel segmentation. Open challenges remain in scenarios where labeled 3D segmentation maps for training are severely limited, as is often the case in practice, and in ensuring robustness to noise. Inspired by the observation that 3D vessel structures project onto 2D image slices with informative and unique edge profiles, we propose a novel deep 3D vessel segmentation network guided by edge profiles. Our network architecture comprises a shared encoder and two decoders that learn segmentation maps and edge profiles jointly. 3D context is mined in both the segmentation and edge prediction branches by employing bidirectional convolutional long-short term memory (BCLSTM) modules. 3D features from the two branches are concatenated to facilitate learning of the segmentation map. As a key contribution, we introduce new regularization terms that: a) capture the local homogeneity of 3D blood vessel volumes in the presence of biomarkers; and b) ensure performance robustness to domain-specific noise by suppressing false positive responses. Experiments on benchmark datasets with ground truth labels reveal that the proposed approach outperforms state-of-the-art techniques on standard measures such as DICE overlap and mean Intersection-over-Union. The performance gains of our method are even more pronounced when training is limited. Furthermore, the computational cost of our network inference is among the lowest compared with state-of-the-art.
77. Gour N, Tanveer M, Khanna P. Challenges for ocular disease identification in the era of artificial intelligence. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06770-5]
78. Li J, Li R, Han R, Wang S. Self-relabeling for noise-tolerant retina vessel segmentation through label reliability estimation. BMC Med Imaging 2022; 22:8. [PMID: 35022020] [PMCID: PMC8753937] [DOI: 10.1186/s12880-021-00732-y]
Abstract
Background Retinal vessel segmentation benefits significantly from deep learning. Its performance relies on sufficient training images with accurate ground-truth segmentation, which are usually annotated manually in the form of binary pixel-wise label maps. Manually annotated ground-truth label maps contain errors for some of the pixels. Due to the thin structure of retinal vessels, such errors are more frequent and serious in manual annotations, which negatively affects deep learning performance. Methods In this paper, we develop a new method to automatically and iteratively identify and correct such noisy segmentation labels during network training. We consider historical predicted label maps of the network-in-training from different epochs and jointly use them to self-supervise the predicted labels during training and to dynamically correct the supervised labels containing noise. Results We conducted experiments on the DRIVE, STARE, and CHASE-DB1 datasets with synthetic noise, pseudo-labeled noise, and manually labeled noise. For synthetic noise, the proposed method corrects the original noisy label maps into more accurate label maps, improving F1 by 4.0-9.8% and PR by 10.7-16.8% on the three testing datasets. For the other two types of noise, the method also improves the label map quality. Conclusions Experimental results verified that the proposed method achieves better retinal image segmentation performance than many existing methods by simultaneously correcting the noise in the initial label maps. Supplementary Information The online version contains supplementary material available at 10.1186/s12880-021-00732-y.
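The abstract describes using historical predictions to estimate label reliability; the sketch below is one simple way to realize that idea, keeping an exponential moving average of the soft predictions across epochs and flipping supervised labels that the history confidently contradicts. The EMA scheme, thresholds, and warm-up are illustrative assumptions, not the paper's exact reliability estimation.

```python
import torch

class SelfRelabeler:
    """Track an EMA of the network's soft predictions across epochs and flip
    supervised labels that the history confidently and consistently contradicts."""
    def __init__(self, label_map, momentum=0.8, flip_threshold=0.9, warmup_epochs=5):
        self.labels = label_map.clone().float()        # (N, H, W) current supervision
        self.history = torch.full_like(self.labels, 0.5)
        self.momentum = momentum
        self.flip_threshold = flip_threshold
        self.warmup_epochs = warmup_epochs
        self.epoch = 0

    @torch.no_grad()
    def update(self, probs):
        """Call once per epoch with the network's soft predictions (N, H, W) in [0, 1]."""
        self.history = self.momentum * self.history + (1 - self.momentum) * probs
        self.epoch += 1
        if self.epoch <= self.warmup_epochs:
            return self.labels
        confident_fg = (self.history > self.flip_threshold) & (self.labels == 0)
        confident_bg = (self.history < 1 - self.flip_threshold) & (self.labels == 1)
        self.labels[confident_fg] = 1.0                # relabel missed vessel pixels
        self.labels[confident_bg] = 0.0                # relabel spurious vessel pixels
        return self.labels
```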
79.
80. Zhang J, Zhang Y, Qiu H, Xie W, Yao Z, Yuan H, Jia Q, Wang T, Shi Y, Huang M, Zhuang J, Xu X. Pyramid-Net: Intra-layer Pyramid-Scale Feature Aggregation Network for Retinal Vessel Segmentation. Front Med (Lausanne) 2021; 8:761050. [PMID: 34950679] [PMCID: PMC8688400] [DOI: 10.3389/fmed.2021.761050]
Abstract
Retinal vessel segmentation plays an important role in the diagnosis of eye-related diseases and in biomarker discovery. Existing works perform multi-scale feature aggregation in an inter-layer manner (inter-layer feature aggregation). However, such an approach only fuses features at either a lower scale or a higher scale, which may result in limited segmentation performance, especially on thin vessels. This observation motivates us to fuse multi-scale features within each layer (intra-layer feature aggregation) to mitigate the problem. Therefore, in this paper, we propose Pyramid-Net for accurate retinal vessel segmentation, which features intra-layer pyramid-scale aggregation blocks (IPABs). At each layer, an IPAB generates two associated branches, at a higher scale and a lower scale, respectively, and these two, together with the main branch at the current scale, operate in a pyramid-scale manner. Three further enhancements, including pyramid input enhancement, deep pyramid supervision, and pyramid skip connections, are proposed to boost the performance. We have evaluated Pyramid-Net on three public retinal fundus photography datasets (DRIVE, STARE, and CHASE-DB1). The experimental results show that Pyramid-Net can effectively improve segmentation performance, especially on thin vessels, and outperforms the current state-of-the-art methods on all three adopted datasets. In addition, our method is more efficient than existing methods, with a large reduction in computational cost. We have released the source code at https://github.com/JerRuy/Pyramid-Net.
Collapse
Affiliation(s)
- Jiawei Zhang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Shanghai key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China
| | - Yanchun Zhang
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, China
- College of Engineering and Science, Victoria University, Melbourne, VIC, Australia
| | - Hailong Qiu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Wen Xie
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Zeyang Yao
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Haiyun Yuan
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Qianjun Jia
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Tianchen Wang
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
| | - Yiyu Shi
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
| | - Meiping Huang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Jian Zhuang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Xiaowei Xu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
| |
Collapse
|
81
|
Kovács G, Fazekas A. A new baseline for retinal vessel segmentation: Numerical identification and correction of methodological inconsistencies affecting 100+ papers. Med Image Anal 2021; 75:102300. [PMID: 34814057 DOI: 10.1016/j.media.2021.102300] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Revised: 09/20/2021] [Accepted: 11/04/2021] [Indexed: 12/18/2022]
Abstract
In the last 15 years, the segmentation of vessels in retinal images has become an intensively researched problem in medical imaging, with hundreds of algorithms published. One of the de facto benchmarking data sets of vessel segmentation techniques is the DRIVE data set. Since DRIVE contains a predefined split of training and test images, the published performance results of the various segmentation techniques should provide a reliable ranking of the algorithms. Including more than 100 papers in the study, we performed a detailed numerical analysis of the coherence of the published performance scores. We found inconsistencies in the reported scores related to the use of the field of view (FoV), which has a significant impact on the performance scores. We attempted to eliminate the biases using numerical techniques to provide a more realistic picture of the state of the art. Based on the results, we have formulated several findings, most notably: despite the well-defined test set of DRIVE, most rankings in published papers are based on non-comparable figures; in contrast to the near-perfect accuracy scores reported in the literature, the highest accuracy score achieved to date is 0.9582 in the FoV region, which is 1% higher than that of human annotators. The methods we have developed for identifying and eliminating the evaluation biases can be easily applied to other domains where similar problems may arise.
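The field-of-view effect quantified in this study can be reproduced with a small synthetic example: pixels outside the FoV are trivially classified as background, so including them inflates accuracy. The mask geometry, vessel density, and error rate below are arbitrary assumptions, not data from the paper.

```python
import numpy as np

def accuracy(pred, gt, mask=None):
    """Pixel accuracy, optionally restricted to a boolean region mask."""
    if mask is not None:
        pred, gt = pred[mask], gt[mask]
    return (pred == gt).mean()

rng = np.random.default_rng(0)
h = w = 256
yy, xx = np.mgrid[:h, :w]
fov = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 < (0.45 * min(h, w)) ** 2  # circular FoV

gt = np.zeros((h, w), dtype=bool)
gt[fov] = rng.random(int(fov.sum())) < 0.12      # ~12% vessel pixels inside the FoV
pred = gt.copy()
flip = rng.random((h, w)) < 0.05                 # 5% segmentation errors inside the FoV
pred[fov & flip] = ~gt[fov & flip]
pred[~fov] = False                               # the dark border is "correct" for free

print("accuracy over the whole image:", accuracy(pred, gt))       # inflated
print("accuracy inside the FoV only: ", accuracy(pred, gt, fov))  # the honest figure
```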
Collapse
Affiliation(s)
- György Kovács
- Analytical Minds Ltd., Árpád street 5, Beregsurány 4933, Hungary.
| | - Attila Fazekas
- University of Debrecen, Faculty of Informatics, P.O.BOX 400, Debrecen 4002, Hungary.
| |
Collapse
|
82
|
Sun G, Liu X, Yu X. Multi-path cascaded U-net for vessel segmentation from fundus fluorescein angiography sequential images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 211:106422. [PMID: 34598080 DOI: 10.1016/j.cmpb.2021.106422] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/21/2020] [Accepted: 09/13/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE The fundus fluorescein angiography (FFA) technique is widely used in the examination of retinal diseases. In the analysis of FFA sequential images, accurate vessel segmentation is a prerequisite for the quantification of vascular morphology. Current vessel segmentation methods concentrate mainly on color fundus images and are limited in processing FFA sequential images with varying backgrounds and vessels. METHODS We propose a multi-path cascaded U-net (MCU-net) architecture for vessel segmentation in FFA sequential images, which is capable of integrating vessel features from different image modes to improve segmentation accuracy. Firstly, two modes of synthetic FFA images that enhance details of small vessels and large vessels are prepared and then used, together with the raw FFA image, as inputs to the MCU-net. By fusing vessel features from the three modes of FFA images, a vascular probability map is generated as the output of MCU-net. RESULTS The proposed MCU-net was trained and tested on the public Duke dataset and our own dataset of FFA sequential images, as well as on the DRIVE dataset of color fundus images. Results show that MCU-net outperforms current state-of-the-art methods in terms of F1-score, sensitivity and accuracy, and is able to preserve details such as thin vessels and vascular connections. It also shows good robustness in processing FFA images captured at different perfusion stages. CONCLUSIONS The proposed method can segment vessels from FFA sequential images with high accuracy and shows good robustness to FFA images at different perfusion stages. This method has potential applications in the quantitative analysis of vascular morphology in FFA sequential images.
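A heavily simplified sketch of the multi-path idea is given below: three FFA image modes (raw, small-vessel-enhanced, large-vessel-enhanced) are encoded by separate stems and fused into a single vessel probability map. The channel widths and the fusion head are assumptions and do not reproduce the cascaded MCU-net architecture.

```python
import torch
import torch.nn as nn

def stem(in_ch=1, out_ch=16):
    """Small convolutional stem for one image mode."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class MultiPathFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.raw_path, self.small_path, self.large_path = stem(), stem(), stem()
        self.head = nn.Sequential(nn.Conv2d(48, 32, 3, padding=1), nn.ReLU(inplace=True),
                                  nn.Conv2d(32, 1, 1))

    def forward(self, raw, small_enh, large_enh):
        feats = torch.cat([self.raw_path(raw),
                           self.small_path(small_enh),
                           self.large_path(large_enh)], dim=1)
        return torch.sigmoid(self.head(feats))   # fused vessel probability map

# prob = MultiPathFusionNet()(torch.randn(1, 1, 64, 64),
#                             torch.randn(1, 1, 64, 64),
#                             torch.randn(1, 1, 64, 64))
```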
Collapse
Affiliation(s)
- Gang Sun
- College of Electrical & Information Engineering, Hunan University
| | - Xiaoyan Liu
- College of Electrical & Information Engineering, Hunan University; Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing.
| | - Xuefei Yu
- College of Electrical & Information Engineering, Hunan University
| |
Collapse
|
83
|
Owler J, Rockett P. Influence of background preprocessing on the performance of deep learning retinal vessel detection. J Med Imaging (Bellingham) 2021; 8:064001. [PMID: 34746333 PMCID: PMC8562352 DOI: 10.1117/1.jmi.8.6.064001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Accepted: 10/18/2021] [Indexed: 11/14/2022] Open
Abstract
Purpose: Segmentation of the vessel tree from retinal fundus images can be used to track changes in the retina and be an important first step in a diagnosis. Manual segmentation is a time-consuming process that is prone to error; effective and reliable automation can alleviate these problems but one of the difficulties is uneven image background, which may affect segmentation performance. Approach: We present a patch-based deep learning framework, based on a modified U-Net architecture, that automatically segments the retinal blood vessels from fundus images. In particular, we evaluate how various pre-processing techniques, images with either no processing, N4 bias field correction, contrast limited adaptive histogram equalization (CLAHE), or a combination of N4 and CLAHE, can compensate for uneven image background and impact final segmentation performance. Results: We achieved competitive results on three publicly available datasets as a benchmark for our comparisons of pre-processing techniques. In addition, we introduce Bayesian statistical testing, which indicates little practical difference (Pr > 0.99) between pre-processing methods apart from the sensitivity metric. In terms of sensitivity and pre-processing, the combination of N4 correction and CLAHE performs better in comparison to unprocessed and N4 pre-processing (Pr > 0.87); but compared to CLAHE alone, the differences are not significant (Pr ≈ 0.38 to 0.88). Conclusions: We conclude that deep learning is an effective method for retinal vessel segmentation and that CLAHE pre-processing has the greatest positive impact on segmentation performance, with N4 correction helping only in images with extremely inhomogeneous background illumination.
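One of the compared pre-processing variants, CLAHE on the green channel, can be reproduced with OpenCV as in the sketch below; the clip limit and tile size are illustrative assumptions rather than the settings used in the paper (N4 bias field correction would be applied separately, for example via SimpleITK).

```python
import cv2
import numpy as np

def clahe_green_channel(bgr_image: np.ndarray,
                        clip_limit: float = 2.0,
                        tile_grid: tuple = (8, 8)) -> np.ndarray:
    """Return a contrast-enhanced single-channel image for vessel segmentation."""
    green = bgr_image[:, :, 1]    # vessels show the strongest contrast in the green channel
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(green)

# enhanced = clahe_green_channel(cv2.imread("fundus.png"))
```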
Collapse
Affiliation(s)
- James Owler
- University of Sheffield, Bioengineering—Interdisciplinary Programmes Engineering, United Kingdom
| | - Peter Rockett
- University of Sheffield, Department of Electronic and Electrical Engineering, Sheffield, United Kingdom
| |
Collapse
|
84
|
Zou B, Dai Y, He Q, Zhu C, Liu G, Su Y, Tang R. Multi-Label Classification Scheme Based on Local Regression for Retinal Vessel Segmentation. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:2586-2597. [PMID: 32175869 DOI: 10.1109/tcbb.2020.2980233] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Segmenting small retinal vessels with a width of less than 2 pixels in fundus images is a challenging task. In this paper, in order to effectively segment the vessels, especially the narrow parts, we propose a local regression scheme to enhance the narrow parts, along with a novel multi-label classification method based on this scheme. In particular, we consider five labels for blood vessels and background: the center of big vessels, the edge of big vessels, the center as well as the edge of small vessels, the center of background, and the edge of background. We first determine the multi-label with the local de-regression model according to the vessel pattern in the ground-truth images. Then, we train a convolutional neural network (CNN) for multi-label classification. Next, we perform a local regression method to transform the previous multi-label into a binary label to better locate small vessels and generate an entire retinal vessel image. Our method is evaluated on two publicly available datasets and compared with several state-of-the-art studies. The experimental results demonstrate the effectiveness of our method in segmenting retinal vessels.
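One simplified way to see how such a five-label map could be derived from a binary ground truth is via distance transforms, as in the sketch below. The width and edge thresholds are assumptions, and this is not the paper's local de-regression model.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def five_label_map(vessel_mask, big_width=2.0, bg_edge_dist=2.0):
    """0: background centre, 1: background edge, 2: small vessel (centre and edge),
    3: big-vessel edge, 4: big-vessel centre."""
    vessel = vessel_mask.astype(bool)
    dist_inside = distance_transform_edt(vessel)      # depth inside a vessel
    dist_outside = distance_transform_edt(~vessel)    # distance to the nearest vessel

    labels = np.zeros(vessel.shape, dtype=np.uint8)          # background centre
    labels[~vessel & (dist_outside <= bg_edge_dist)] = 1     # background edge
    labels[vessel] = 2                                       # small vessels by default

    big_centre = vessel & (dist_inside > big_width)          # only wide vessels reach here
    if big_centre.any():
        big_edge = vessel & ~big_centre & (distance_transform_edt(~big_centre) <= big_width + 1)
        labels[big_edge] = 3
        labels[big_centre] = 4
    return labels

# labels = five_label_map(np.random.rand(64, 64) > 0.9)
```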
Collapse
|
85
|
Ding L, Kuriyan AE, Ramchandran RS, Wykoff CC, Sharma G. Weakly-Supervised Vessel Detection in Ultra-Widefield Fundus Photography via Iterative Multi-Modal Registration and Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2748-2758. [PMID: 32991281 PMCID: PMC8513803 DOI: 10.1109/tmi.2020.3027665] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
We propose a deep-learning based annotation-efficient framework for vessel detection in ultra-widefield (UWF) fundus photography (FP) that does not require de novo labeled UWF FP vessel maps. Our approach utilizes concurrently captured UWF fluorescein angiography (FA) images, for which effective deep learning approaches have recently become available, and iterates between a multi-modal registration step and a weakly-supervised learning step. In the registration step, the UWF FA vessel maps detected with a pre-trained deep neural network (DNN) are registered with the UWF FP via parametric chamfer alignment. The warped vessel maps can be used as the tentative training data but inevitably contain incorrect (noisy) labels due to the differences between FA and FP modalities and the errors in the registration. In the learning step, a robust learning method is proposed to train DNNs with noisy labels. The detected FP vessel maps are used for the registration in the following iteration. The registration and the vessel detection benefit from each other and are progressively improved. Once trained, the UWF FP vessel detection DNN from the proposed approach allows FP vessel detection without requiring concurrently captured UWF FA images. We validate the proposed framework on a new UWF FP dataset, PRIME-FP20, and on existing narrow-field FP datasets. Experimental evaluation, using both pixel-wise metrics and the CAL metrics designed to provide better agreement with human assessment, shows that the proposed approach provides accurate vessel detection, without requiring manually labeled UWF FP training data.
Collapse
|
86
|
Ding J, Zhang Z, Tang J, Guo F. A Multichannel Deep Neural Network for Retina Vessel Segmentation via a Fusion Mechanism. Front Bioeng Biotechnol 2021; 9:697915. [PMID: 34490220 PMCID: PMC8417313 DOI: 10.3389/fbioe.2021.697915] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 07/06/2021] [Indexed: 11/17/2022] Open
Abstract
Changes in fundus blood vessels reflect the occurrence of eye diseases, and from them we can explore other physical diseases that cause fundus lesions, such as diabetes and hypertension complications. However, existing computational methods lack efficient and precise segmentation of vascular ends and thin retinal vessels. It is important to construct a reliable, quantitative, and automatic diagnostic method to improve diagnostic efficiency. In this study, we propose a multichannel deep neural network for retina vessel segmentation. First, we apply U-Net to the original vessels and to the thin (or thick) vessels in a multi-objective optimization, purposely training on thick and thin vessels. Then, we design a specific fusion mechanism for combining three kinds of prediction probability maps into a final binary segmentation map. Experiments show that our method can effectively improve the segmentation performance on thin blood vessels and vascular ends. It outperforms many current excellent vessel segmentation methods on three public datasets. In particular, we achieve the best F1-scores of 0.8247 on the DRIVE dataset and 0.8239 on the STARE dataset. The findings of this study have potential applications in automated retinal image analysis and may provide a new, general, and high-performance computing framework for image segmentation.
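The final fusion step, combining three prediction probability maps into one binary map, can be sketched as below; the element-wise combination and the threshold are assumptions, not the specific fusion mechanism designed in the paper.

```python
import numpy as np

def fuse_probability_maps(p_original, p_thin, p_thick, threshold=0.5):
    """Combine the three probability maps element-wise and threshold the result."""
    fused = np.maximum(p_original, 0.5 * (p_thin + p_thick))
    return (fused >= threshold).astype(np.uint8)

# binary = fuse_probability_maps(np.random.rand(64, 64),
#                                np.random.rand(64, 64),
#                                np.random.rand(64, 64))
```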
Collapse
Affiliation(s)
- Jiaqi Ding
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
| | - Zehua Zhang
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
| | - Jijun Tang
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
| | - Fei Guo
- School of Computer Science and Engineering, Central South University, Changsha, China
| |
Collapse
|
87
|
Hu X, Wang L, Cheng S, Li Y. HDC-Net: A hierarchical dilation convolutional network for retinal vessel segmentation. PLoS One 2021; 16:e0257013. [PMID: 34492064 PMCID: PMC8423235 DOI: 10.1371/journal.pone.0257013] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Accepted: 08/23/2021] [Indexed: 11/18/2022] Open
Abstract
The cardinal symptoms of some ophthalmic diseases, such as retinal vein occlusion and diabetic retinopathy, are observed through abnormal retinal blood vessels. Advanced deep learning models that automatically obtain morphological and structural information about blood vessels are conducive to the early treatment and proactive prevention of ophthalmic diseases. In our work, we propose a hierarchical dilation convolutional network (HDC-Net) to extract retinal vessels in a pixel-to-pixel manner. It utilizes the hierarchical dilation convolution (HDC) module to capture the fragile retinal blood vessels usually neglected by other methods. An improved residual dual efficient channel attention (RDECA) module can infer more delicate channel information to reinforce the discriminative capability of the model. The structured DropBlock helps the HDC-Net model effectively mitigate overfitting. From a holistic perspective, the segmentation results obtained by HDC-Net are superior to those of other deep learning methods on three acknowledged datasets (DRIVE, CHASE-DB1, STARE): the sensitivity, specificity, accuracy, F1-score and AUC score are {0.8252, 0.9829, 0.9692, 0.8239, 0.9871}, {0.8227, 0.9853, 0.9745, 0.8113, 0.9884}, and {0.8369, 0.9866, 0.9751, 0.8385, 0.9913}, respectively. It surpasses most other advanced retinal vessel segmentation models. Qualitative and quantitative analysis demonstrates that HDC-Net can fulfill the task of retinal vessel segmentation efficiently and accurately.
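A generic hierarchical dilation convolution block, in the spirit of the HDC module named above, is sketched below: parallel 3x3 convolutions with increasing dilation rates are concatenated and fused with a residual connection. The dilation rates and widths are assumptions, not the published HDC-Net module.

```python
import torch
import torch.nn as nn

class HierarchicalDilationBlock(nn.Module):
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates])
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(branch(x)) for branch in self.branches]  # growing receptive fields
        return self.act(self.fuse(torch.cat(feats, dim=1))) + x    # residual connection

# y = HierarchicalDilationBlock(16)(torch.randn(1, 16, 64, 64))  # shape preserved
```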
Collapse
Affiliation(s)
- Xiaolong Hu
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
| | - Liejun Wang
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
| | - Shuli Cheng
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
| | - Yongming Li
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
| |
Collapse
|
88
|
Shi Z, Wang T, Huang Z, Xie F, Liu Z, Wang B, Xu J. MD-Net: A multi-scale dense network for retinal vessel segmentation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102977] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
89
|
Lin Z, Huang J, Chen Y, Zhang X, Zhao W, Li Y, Lu L, Zhan M, Jiang X, Liang X. A high resolution representation network with multi-path scale for retinal vessel segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106206. [PMID: 34146772 DOI: 10.1016/j.cmpb.2021.106206] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Accepted: 05/23/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVES Automatic retinal vessel segmentation (RVS) in fundus images is expected to be a vital step in the early image-based diagnosis of ophthalmologic diseases. However, accurately detecting retinal vessels remains challenging, mainly due to vascular intricacies, lesion areas and optic disc edges in retinal fundus images. METHODS In this paper, we propose a high resolution representation network with multi-path scale (MPS-Net) for RVS, aiming to improve the performance of extracting retinal blood vessels. In the MPS-Net, there are one high-resolution main path and two lower-resolution branch paths, into which the proposed multi-path scale modules are embedded to enhance the representation ability of the network. Besides, in order to guide the network to focus on learning the features of hard examples in retinal images, we design a hard-focused cross-entropy loss function. RESULTS We evaluate our network structure on DRIVE, STARE, CHASE and synthetic images, and quantitative comparisons with existing methods are presented. The experimental results show that our approach is superior to most methods in terms of F1-score, sensitivity, G-mean and Matthews correlation coefficient. CONCLUSIONS The promising segmentation performance reveals that our method has potential in real-world applications and can be exploited for other medical images with further analysis.
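The hard-focused loss named above can be approximated, for illustration, by a binary cross-entropy that adds extra weight to the hardest pixels; the top-k weighting rule below is an assumption, not the loss defined in the paper.

```python
import torch
import torch.nn.functional as F

def hard_focused_bce(logits, target, top_fraction=0.3):
    """Average per-pixel BCE over all pixels plus the hardest fraction of pixels."""
    per_pixel = F.binary_cross_entropy_with_logits(logits, target, reduction='none')
    flat = per_pixel.flatten(1)                       # (batch, num_pixels)
    k = max(1, int(top_fraction * flat.shape[1]))
    hard, _ = torch.topk(flat, k, dim=1)              # the hardest (highest-loss) pixels
    return per_pixel.mean() + hard.mean()

# loss = hard_focused_bce(torch.randn(2, 1, 64, 64),
#                         torch.randint(0, 2, (2, 1, 64, 64)).float())
```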
Collapse
Affiliation(s)
- Zefang Lin
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China.
| | - Jianping Huang
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China.
| | - Yingyin Chen
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China.
| | - Xiao Zhang
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China
| | - Wei Zhao
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China
| | - Yong Li
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China
| | - Ligong Lu
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China
| | - Meixiao Zhan
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China.
| | - Xiaofei Jiang
- Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China; Department of Cardiology, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China.
| | - Xiong Liang
- Department of Obstetrics, Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, Guangdong 519000, PR China.
| |
Collapse
|
90
|
Hakim L, Kavitha MS, Yudistira N, Kurita T. Regularizer based on Euler characteristic for retinal blood vessel segmentation. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2021.05.023] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
91
|
Du XF, Wang JS, Sun WZ. UNet retinal blood vessel segmentation algorithm based on improved pyramid pooling method and attention mechanism. Phys Med Biol 2021; 66. [PMID: 34375955 DOI: 10.1088/1361-6560/ac1c4c] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 08/10/2021] [Indexed: 11/12/2022]
Abstract
The segmentation results of retinal vessels have a significant impact on the automatic diagnosis of retinal diabetes, hypertension, cardiovascular and cerebrovascular diseases and other ophthalmic diseases. In order to improve the performance of blood vessel segmentation, a pyramid scene parsing U-Net (PSP-UNet) segmentation algorithm based on an attention mechanism is proposed. The modified PSP-Net pyramid pooling module is introduced on the basis of the U-Net network; it aggregates context information from different regions so as to improve the ability to obtain global information. At the same time, an attention mechanism is introduced in the skip-connection part of the U-Net network, which makes the integration of low-level features and high-level semantic features more efficient and reduces the loss of feature information through a nonlinear connection mode. The sensitivity, specificity, accuracy and AUC on the DRIVE and CHASE_DB1 datasets are 0.7814, 0.9810, 0.9556, 0.9780 and 0.8195, 0.9727, 0.9590, 0.9784, respectively. Experimental results show that the PSP-UNet segmentation algorithm based on the attention mechanism enhances the detection of blood vessel pixels, suppresses the interference of irrelevant information and improves network segmentation performance, surpassing the U-Net algorithm and several current mainstream retinal vessel segmentation algorithms.
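A pyramid pooling module of the kind borrowed from PSP-Net and attached to a U-Net encoder output can be sketched as below; the bin sizes and channel widths are assumptions, not the modified module used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """Aggregate context from several pooled region sizes and fuse with the input features."""
    def __init__(self, channels, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Sequential(nn.AdaptiveAvgPool2d(b),
                           nn.Conv2d(channels, channels // len(bins), 1)) for b in bins])
        self.fuse = nn.Conv2d(channels + (channels // len(bins)) * len(bins),
                              channels, 3, padding=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        pooled = [F.interpolate(stage(x), size=(h, w), mode='bilinear',
                                align_corners=False) for stage in self.stages]
        return self.fuse(torch.cat([x] + pooled, dim=1))

# y = PyramidPooling(64)(torch.randn(1, 64, 32, 32))  # context-enriched features, same shape
```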
Collapse
Affiliation(s)
- Xin-Feng Du
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan 114051, People's Republic of China
| | - Jie-Sheng Wang
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan 114051, People's Republic of China
| | - Wei-Zhen Sun
- School of Biological Science and Medical Engineering , Southeast University, Jiangsu, Nanjing 210000, People's Republic of China
| |
Collapse
|
92
|
Yang L, Wang H, Zeng Q, Liu Y, Bian G. A hybrid deep segmentation network for fundus vessels via deep-learning framework. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.085] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
93
|
Xu R, Liu T, Ye X, Liu F, Lin L, Li L, Tanaka S, Chen YW. Joint Extraction of Retinal Vessels and Centerlines Based on Deep Semantics and Multi-Scaled Cross-Task Aggregation. IEEE J Biomed Health Inform 2021; 25:2722-2732. [PMID: 33320815 DOI: 10.1109/jbhi.2020.3044957] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Retinal vessel segmentation and centerline extraction are crucial steps in building a computer-aided diagnosis system on retinal images. Previous works treat them as two isolated tasks, while ignoring their tight association. In this paper, we propose a deep semantics and multi-scaled cross-task aggregation network that takes advantage of the association to jointly improve their performances. Our network is featured by two sub-networks. The forepart is a deep semantics aggregation sub-network that aggregates strong semantic information to produce more powerful features for both tasks, and the tail is a multi-scaled cross-task aggregation sub-network that explores complementary information to refine the results. We evaluate the proposed method on three public databases, which are DRIVE, STARE and CHASE_DB1. Experimental results show that our method can not only simultaneously extract retinal vessels and their centerlines but also achieve the state-of-the-art performances on both tasks.
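At its simplest, joint extraction can be pictured as a shared encoder with two output heads, one for the vessel mask and one for the centerline map, as in the sketch below; the layer choices are assumptions and the cross-task aggregation itself is not reproduced here.

```python
import torch
import torch.nn as nn

class TwoHeadVesselNet(nn.Module):
    def __init__(self, in_ch=1, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True))
        self.vessel_head = nn.Conv2d(width, 1, 1)      # full vessel mask
        self.centerline_head = nn.Conv2d(width, 1, 1)  # vessel centerlines

    def forward(self, x):
        shared = self.encoder(x)
        return (torch.sigmoid(self.vessel_head(shared)),
                torch.sigmoid(self.centerline_head(shared)))

# vessel_prob, centerline_prob = TwoHeadVesselNet()(torch.randn(1, 1, 64, 64))
```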
Collapse
|
94
|
Yan Z, Wicaksana J, Wang Z, Yang X, Cheng KT. Variation-Aware Federated Learning With Multi-Source Decentralized Medical Image Data. IEEE J Biomed Health Inform 2021; 25:2615-2628. [PMID: 33232246 DOI: 10.1109/jbhi.2020.3040015] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Privacy concerns make it infeasible to construct a large medical image dataset by fusing small ones from different sources/institutions. Therefore, federated learning (FL) becomes a promising technique to learn from multi-source decentralized data with privacy preservation. However, the cross-client variation problem in medical image data would be the bottleneck in practice. In this paper, we propose a variation-aware federated learning (VAFL) framework, where the variations among clients are minimized by transforming the images of all clients onto a common image space. We first select one client with the lowest data complexity to define the target image space and synthesize a collection of images through a privacy-preserving generative adversarial network, called PPWGAN-GP. Then, a subset of those synthesized images, which effectively capture the characteristics of the raw images and are sufficiently distinct from any raw image, is automatically selected for sharing with other clients. For each client, a modified CycleGAN is applied to translate its raw images to the target image space defined by the shared synthesized images. In this way, the cross-client variation problem is addressed with privacy preservation. We apply the framework for automated classification of clinically significant prostate cancer and evaluate it using multi-source decentralized apparent diffusion coefficient (ADC) image data. Experimental results demonstrate that the proposed VAFL framework stably outperforms the current horizontal FL framework. As VAFL is independent of deep learning architectures for classification, we believe that the proposed framework is widely applicable to other medical image classification tasks.
Collapse
|
95
|
Sathananthavathi V, Indumathi G, Swetha Ranjani A. Parallel Architecture of Fully Convolved Neural Network for Retinal Vessel Segmentation. J Digit Imaging 2021; 33:168-180. [PMID: 31342298 DOI: 10.1007/s10278-019-00250-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
Retinal blood vessel extraction is considered an indispensable step in the diagnosis of many retinal diseases. In this work, a parallel fully convolved neural network-based architecture is proposed for retinal blood vessel segmentation. In addition, the improvement in network performance is studied by applying different levels of preprocessing to the images. The proposed method is evaluated on DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the Retina), which are widely accepted public databases in this research area. The proposed work attains high accuracy, sensitivity, and specificity of about 96.37%, 86.53%, and 98.18%, respectively. Data independence is also demonstrated by testing abnormal STARE images with the DRIVE-trained model. The proposed architecture shows better results in vessel extraction irrespective of vessel thickness. The obtained results show that the proposed work outperforms most of the existing segmentation methodologies, and it can be implemented as a real-time application tool since the entire work is carried out on a CPU. The proposed work requires low-cost computation and takes less than 2 s per image for vessel extraction.
Collapse
Affiliation(s)
- Sathananthavathi V
- Department of ECE, Mepco Schlenk Engineering College, Sivakasi, Tamilnadu, 626005, India.
| | - Indumathi G
- Department of ECE, Mepco Schlenk Engineering College, Sivakasi, Tamilnadu, 626005, India
| | - Swetha Ranjani A
- Department of ECE, Mepco Schlenk Engineering College, Sivakasi, Tamilnadu, 626005, India
| |
Collapse
|
96
|
Han Z, Huang H. GAN Based Three-Stage-Training Algorithm for Multi-view Facial Expression Recognition. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10591-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
97
|
Wan T, Chen J, Zhang Z, Li D, Qin Z. Automatic vessel segmentation in X-ray angiogram using spatio-temporal fully-convolutional neural network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102646] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
|
98
|
Du X, Wang J, Sun W. Densely connected U-Net retinal vessel segmentation algorithm based on multi-scale feature convolution extraction. Med Phys 2021; 48:3827-3841. [PMID: 34028030 DOI: 10.1002/mp.14944] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2020] [Revised: 03/26/2021] [Accepted: 05/05/2021] [Indexed: 11/08/2022] Open
Abstract
PURPOSE The segmentation results of retinal blood vessels have a significant impact on the automatic diagnosis of various ophthalmic diseases. In order to further improve the segmentation accuracy of retinal vessels, we propose an improved algorithm based on multiscale vessel detection, which extracts features through densely connected networks and reuses them. METHODS Parallel-fusion and serial-embedding multiscale-feature dense-connection U-Net structures are designed. In the parallel fusion method, features of the input images are extracted by Inception multiscale convolution and dense-block convolution, respectively, and then fused and fed into the subsequent network. In the serial embedding mode, the Inception multiscale convolution structure is embedded in the dense-connection network module, and the dense-connection structure then replaces the classical convolution block in the U-Net encoder, so as to achieve multiscale feature extraction and efficient utilization of vessels with complex structures and thereby improve segmentation performance. RESULTS The experimental analysis on the standard DRIVE and CHASE_DB1 databases shows that the sensitivity, specificity, accuracy, and AUC of the parallel fusion and serial embedding methods reach 0.7854, 0.9813, 0.9563, 0.9794; 0.7876, 0.9811, 0.9565, 0.9793 and 0.8110, 0.9737, 0.9547, 0.9667; 0.8113, 0.9717, 0.9574, 0.9750, respectively. CONCLUSIONS The experimental results show that multiscale feature detection and dense feature connection can effectively enhance the network model's ability to detect blood vessels and improve segmentation performance, which is superior to the U-Net algorithm and some current mainstream retinal blood vessel segmentation algorithms.
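The parallel fusion idea can be pictured as an Inception-style multi-scale branch and a small dense block extracting features side by side and being concatenated before the rest of the encoder, as in the sketch below; the widths, growth rate, and kernel sizes are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class InceptionBranch(nn.Module):
    """Parallel 1x1, 3x3 and 5x5 convolutions whose outputs are concatenated."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.k1 = nn.Conv2d(in_ch, out_ch // 3, 1)
        self.k3 = nn.Conv2d(in_ch, out_ch // 3, 3, padding=1)
        self.k5 = nn.Conv2d(in_ch, out_ch - 2 * (out_ch // 3), 5, padding=2)

    def forward(self, x):
        return torch.cat([self.k1(x), self.k3(x), self.k5(x)], dim=1)

class DenseBranch(nn.Module):
    """Small dense block: each layer sees all previous feature maps."""
    def __init__(self, in_ch, growth=8, layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(in_ch + i * growth, growth, 3, padding=1) for i in range(layers)])

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats[1:], dim=1)   # concatenate only the newly produced features

class ParallelFusionStem(nn.Module):
    def __init__(self, in_ch=1, out_ch=24, growth=8, layers=3):
        super().__init__()
        self.inception = InceptionBranch(in_ch, out_ch)
        self.dense = DenseBranch(in_ch, growth, layers)
        self.fuse = nn.Conv2d(out_ch + growth * layers, out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([self.inception(x), self.dense(x)], dim=1))

# y = ParallelFusionStem()(torch.randn(1, 1, 64, 64))
```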
Collapse
Affiliation(s)
- Xinfeng Du
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, 114051, China
| | - Jiesheng Wang
- School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, 114051, China
| | - Weizhen Sun
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, Jiangsu, 210000, China
| |
Collapse
|
99
|
Yuan Y, Zhang L, Wang L, Huang H. Multi-level Attention Network for Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2021; 26:312-323. [PMID: 34129508 DOI: 10.1109/jbhi.2021.3089201] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Automatic vessel segmentation in the fundus images plays an important role in the screening, diagnosis, treatment, and evaluation of various cardiovascular and ophthalmologic diseases. However, due to the limited well-annotated data, varying size of vessels, and intricate vessel structures, retinal vessel segmentation has become a long-standing challenge. In this paper, a novel deep learning model called AACA-MLA-D-UNet is proposed to fully utilize the low-level detailed information and the complementary information encoded in different layers to accurately distinguish the vessels from the background with low model complexity. The architecture of the proposed model is based on U-Net, and the dropout dense block is proposed to preserve maximum vessel information between convolution layers and mitigate the over-fitting problem. The adaptive atrous channel attention module is embedded in the contracting path to sort the importance of each feature channel automatically. After that, the multi-level attention module is proposed to integrate the multi-level features extracted from the expanding path, and use them to refine the features at each individual layer via attention mechanism. The proposed method has been validated on the three publicly available databases, i.e. the DRIVE, STARE, and CHASE DB1. The experimental results demonstrate that the proposed method can achieve better or comparable performance on retinal vessel segmentation with lower model complexity. Furthermore, the proposed method can also deal with some challenging cases and has strong generalization ability.
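As a rough stand-in for the channel attention used above, the sketch below shows a plain squeeze-and-excitation-style gate that re-weights feature channels; it is an assumption for illustration, not the adaptive atrous channel attention module itself.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)   # re-weight each feature channel by its learned importance

# y = ChannelAttention(32)(torch.randn(1, 32, 64, 64))
```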
Collapse
|
100
|
Hu J, Wang H, Cao Z, Wu G, Jonas JB, Wang YX, Zhang J. Automatic Artery/Vein Classification Using a Vessel-Constraint Network for Multicenter Fundus Images. Front Cell Dev Biol 2021; 9:659941. [PMID: 34178986 PMCID: PMC8226261 DOI: 10.3389/fcell.2021.659941] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Accepted: 04/19/2021] [Indexed: 11/24/2022] Open
Abstract
Retinal blood vessel morphological abnormalities are generally associated with cardiovascular, cerebrovascular, and systemic diseases, so automatic artery/vein (A/V) classification is particularly important for medical image analysis and clinical decision making. However, current methods still have some limitations in A/V classification, especially errors at vessel edges and ends caused by single-scale processing and the blurred boundary between arteries and veins. To alleviate these problems, in this work we propose a vessel-constraint network (VC-Net), a high-precision A/V classification model based on data fusion that utilizes vessel distribution and edge information to enhance A/V classification. In particular, the VC-Net introduces a vessel-constraint (VC) module that combines local and global vessel information to generate a weight map that constrains the A/V features, which suppresses background-prone features and enhances the edge and end features of blood vessels. In addition, the VC-Net employs a multiscale feature (MSF) module to extract blood vessel information at different scales to improve the feature extraction capability and robustness of the model, and it produces vessel segmentation results simultaneously. The proposed method is tested on publicly available fundus image datasets of different scales, namely DRIVE, LES, and HRF, and validated on two newly created multicenter datasets: Tongren and Kailuan. We achieve a balanced accuracy of 0.9554 and F1 scores of 0.7616 and 0.7971 for the arteries and veins, respectively, on the DRIVE dataset. The experimental results demonstrate that the proposed model achieves competitive performance in A/V classification and vessel segmentation tasks compared with state-of-the-art methods. Finally, we test on the Kailuan dataset with models trained on the other, fused datasets, and the results also show good robustness. To promote research in this area, the Tongren dataset and source code will be made publicly available at https://github.com/huawang123/VC-Net.
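The vessel-constraint idea can be illustrated at its simplest by turning a vessel probability map into a spatial weight that suppresses background-prone features before A/V classification, as in the sketch below; the blending rule is an assumption, not the VC module released with VC-Net.

```python
import torch
import torch.nn as nn

class VesselConstraint(nn.Module):
    def __init__(self, background_floor=0.1):
        super().__init__()
        self.background_floor = background_floor

    def forward(self, av_features, vessel_prob):
        # vessel_prob: (B, 1, H, W) in [0, 1]; av_features: (B, C, H, W)
        weight = self.background_floor + (1 - self.background_floor) * vessel_prob
        return av_features * weight   # down-weight features far from any vessel

# out = VesselConstraint()(torch.randn(1, 16, 64, 64), torch.rand(1, 1, 64, 64))
```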
Collapse
Affiliation(s)
- Jingfei Hu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China.,Hefei Innovation Research Institute, Beihang University, Hefei, China.,Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China.,School of Biomedical Engineering, Anhui Medical University, Hefei, China
| | - Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China.,Hefei Innovation Research Institute, Beihang University, Hefei, China.,Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China.,School of Biomedical Engineering, Anhui Medical University, Hefei, China
| | - Zhaohui Cao
- Hefei Innovation Research Institute, Beihang University, Hefei, China
| | - Guang Wu
- Hefei Innovation Research Institute, Beihang University, Hefei, China
| | - Jost B Jonas
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China.,Department of Ophthalmology, Medical Faculty Mannheim of the Ruprecht-Karls-University Heidelberg, Mannheim, Germany
| | - Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
| | - Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China.,Hefei Innovation Research Institute, Beihang University, Hefei, China.,Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China.,School of Biomedical Engineering, Anhui Medical University, Hefei, China.,Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China
| |
Collapse
|