1
Liu J, Zhao D, Shen J, Geng P, Zhang Y, Yang J, Zhang Z. HRD-Net: High resolution segmentation network with adaptive learning ability of retinal vessel features. Comput Biol Med 2024; 173:108295. [PMID: 38520920 DOI: 10.1016/j.compbiomed.2024.108295]
Abstract
Retinal vessel segmentation is a crucial step in the early warning of human health conditions. However, retinal blood vessels have complex curvature and irregular distribution and contain multi-scale fine structures, so the limited receptive field of regular convolution struggles to process vascular details efficiently. Additionally, encoder-decoder networks suffer irreversible spatial information loss through repeated downsampling, resulting in over-segmentation and missed segmentation of vessels. For this reason, we develop a high-resolution network based on Deformable Convolution v3, called HRD-Net. By constructing a high-resolution representation, the network pays special attention to the details of tiny blood vessels. The proposed feature enhancement cascade module, built on Deformable Convolution v3, flexibly adapts to and captures the ever-changing morphology and intricate connections of retinal blood vessels, ensuring the continuity of vessel segmentation. In the output phase of the network, the proposed global aggregation module integrates full-resolution feature maps while suppressing redundant features, achieving an effective fusion of high-level semantic information and spatial detail. In addition, we re-examine the selection criteria for activation and normalization methods and refine the network architecture from a spatial-domain perspective to shed redundant computational load. Testing on the DRIVE, STARE, and CHASE_DB1 datasets indicates that HRD-Net, with fewer parameters, outperforms existing segmentation methods on several evaluation metrics, including F1, ACC, SE, SP, AUC, and IOU.
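The core idea behind deformable convolution, on which HRD-Net builds, is that each kernel tap is displaced by a learned per-position offset and the input is bilinearly sampled at the shifted location, letting the effective receptive field bend along curved vessels. A minimal NumPy sketch of that sampling step follows; it is an illustration of the general technique, not the authors' HRD-Net code, and all function names are hypothetical:

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Sample a 2-D array at fractional coordinates (y, x), clamping at borders."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    y0c, y1c = np.clip([y0, y0 + 1], 0, h - 1)
    x0c, x1c = np.clip([x0, x0 + 1], 0, w - 1)
    return ((1 - dy) * (1 - dx) * img[y0c, x0c] + (1 - dy) * dx * img[y0c, x1c]
            + dy * (1 - dx) * img[y1c, x0c] + dy * dx * img[y1c, x1c])

def deformable_conv_at(img, weights, offsets, cy, cx):
    """One output of a 3x3 deformable convolution centred at (cy, cx).

    offsets has shape (3, 3, 2): a learned (dy, dx) shift for each kernel tap.
    With all offsets zero this reduces to a regular 3x3 convolution.
    """
    out = 0.0
    for ky in range(3):
        for kx in range(3):
            dy, dx = offsets[ky, kx]
            out += weights[ky, kx] * bilinear_sample(
                img, cy + ky - 1 + dy, cx + kx - 1 + dx)
    return out
```

In a full network the offsets themselves are predicted by a small convolutional branch, which is what lets the sampling grid adapt to vessel morphology.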
Affiliation(s)
- Jianhua Liu
- School of Electrical and Electronic Engineering, Shijiazhuang Tiedao University, Shijiazhuang, 050043, China; Hebei Provincial Collaborative Innovation Center of Transportation Power Grid Intelligent Integration Technology and Equipment, School of Electrical and Electronic Engineering, Shijiazhuang Tiedao University, Shijiazhuang, China.
- Dongxin Zhao
- School of Electrical and Electronic Engineering, Shijiazhuang Tiedao University, Shijiazhuang, 050043, China.
- Juncai Shen
- College of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang, 050043, China.
- Peng Geng
- College of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang, 050043, China.
- Ying Zhang
- College of Resources and Environment, Xingtai University, Xingtai, 054001, China.
- Jiaxin Yang
- School of Electrical and Electronic Engineering, Shijiazhuang Tiedao University, Shijiazhuang, 050043, China.
- Ziqian Zhang
- School of Electrical and Electronic Engineering, Shijiazhuang Tiedao University, Shijiazhuang, 050043, China.
2
Lin J, Huang X, Zhou H, Wang Y, Zhang Q. Stimulus-guided adaptive transformer network for retinal blood vessel segmentation in fundus images. Med Image Anal 2023; 89:102929. [PMID: 37598606 DOI: 10.1016/j.media.2023.102929]
Abstract
Automated retinal blood vessel segmentation in fundus images provides important evidence to ophthalmologists for coping with prevalent ocular diseases in an efficient and non-invasive way. However, segmenting blood vessels in fundus images is challenging, due to the high variety in the scale and appearance of blood vessels and the high visual similarity between lesions and the retinal vasculature. Inspired by the way the visual cortex adaptively responds to the type of stimulus, we propose a Stimulus-Guided Adaptive Transformer Network (SGAT-Net) for accurate retinal blood vessel segmentation. It entails a Stimulus-Guided Adaptive Module (SGA-Module) that extracts local-global compound features based on inductive bias and a self-attention mechanism. Alongside a lightweight residual encoder (ResEncoder) that captures relevant appearance details, a Stimulus-Guided Adaptive Pooling Transformer (SGAP-Former) is introduced to reweight maximum and average pooling, enriching the contextual embedding representation while suppressing redundant information. Moreover, a Stimulus-Guided Adaptive Feature Fusion (SGAFF) module is designed to adaptively emphasize local details and global context and fuse them in the latent space to adjust the receptive field (RF) based on the task. The evaluation is carried out on the largest fundus image dataset (FIVES) and three popular retinal image datasets (DRIVE, STARE, CHASEDB1). Experimental results show that the proposed method achieves competitive performance against other existing methods, with a clear advantage in avoiding errors that commonly occur in areas with highly similar visual features. The source code is publicly available at: https://github.com/Gins-07/SGAT.
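The pooling-reweighting idea underlying SGAP-Former can be illustrated in isolation: instead of committing to either max pooling (which favors salient local detail) or average pooling (which favors context), a learned gate mixes the two. The sketch below is a simplified single-channel illustration of that concept, not the paper's implementation; in SGAT-Net the gate would be produced by the network rather than passed in:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reweighted_pool(feat, alpha_logit):
    """Mix 2x2 max- and average-pooling of a (H, W) feature map.

    alpha_logit is a learned scalar; sigmoid maps it to a gate in (0, 1)
    that balances salient responses (max) against context (average).
    """
    h, w = feat.shape
    blocks = feat[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    mx = blocks.max(axis=(1, 3))   # 2x2 max pooling
    av = blocks.mean(axis=(1, 3))  # 2x2 average pooling
    a = sigmoid(alpha_logit)
    return a * mx + (1 - a) * av
```

A gate near 0.5 blends both statistics evenly; a large positive logit recovers plain max pooling, so the standard operators are special cases of the reweighted one.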
Affiliation(s)
- Ji Lin
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom
- Xingru Huang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom
- Huiyu Zhou
- School of Informatics, University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom
- Yaqi Wang
- College of Media Engineering, Communication University of Zhejiang, Hangzhou, 310018, China
- Qianni Zhang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom.
3
Ramos-Soto O, Rodríguez-Esparza E, Balderas-Mata SE, Oliva D, Hassanien AE, Meleppat RK, Zawadzki RJ. An efficient retinal blood vessel segmentation in eye fundus images by using optimized top-hat and homomorphic filtering. Comput Methods Programs Biomed 2021; 201:105949. [PMID: 33567382 DOI: 10.1016/j.cmpb.2021.105949]
Abstract
BACKGROUND AND OBJECTIVE: Automatic segmentation of retinal blood vessels makes a major contribution to the computer-aided diagnosis (CADx) of various ophthalmic and cardiovascular diseases. A procedure that segments both thin and thick retinal vessels is essential for the medical analysis and diagnosis of related diseases. In this article, a novel methodology for robust vessel segmentation is proposed that handles the existing challenges presented in the literature. METHODS: The proposed methodology consists of three stages: pre-processing, main processing, and post-processing. The first stage applies filters for image smoothing. The main processing stage is divided into two configurations: the first segments thick vessels through the new optimized top-hat, homomorphic filtering, and a median filter; the second segments thin vessels using the proposed optimized top-hat, homomorphic filtering, a matched filter, and segmentation with the MCET-HHO multilevel algorithm. Finally, morphological image operations are carried out in the post-processing stage. RESULTS: The proposed approach was assessed on two publicly available databases (DRIVE and STARE) using three performance metrics: specificity, sensitivity, and accuracy. Averages of 0.9860, 0.7578, and 0.9667, respectively, were achieved for the DRIVE dataset, and 0.9836, 0.7474, and 0.9580 for the STARE dataset. CONCLUSIONS: The numerical results obtained by the proposed technique are competitive with up-to-date techniques. The proposed approach outperforms all leading unsupervised methods discussed in terms of specificity and accuracy. In addition, it outperforms most state-of-the-art supervised methods without their associated computational cost. Detailed visual analysis showed that the proposed approach segments thin vessels more precisely than the other procedures compared.
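The two classical operations at the heart of this pipeline are easy to state concretely: a white top-hat subtracts the morphological opening from the image, isolating thin bright structures such as vessels in an inverted green channel, while homomorphic filtering applies a high-emphasis filter in the log domain to damp slowly varying illumination and boost reflectance detail. The following NumPy sketch illustrates both textbook operations with fixed small parameters; it is not the paper's optimized version, whose structuring element and filter parameters are tuned by a metaheuristic:

```python
import numpy as np

def _filter3(img, reduce_fn):
    """3x3 sliding-window min or max filter via edge-padded shifts."""
    p = np.pad(img, 1, mode='edge')
    shifts = [p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)]
    return reduce_fn(np.stack(shifts), axis=0)

def white_tophat(img):
    """Image minus its morphological opening (erosion then dilation):
    keeps bright features thinner than the 3x3 structuring element."""
    opened = _filter3(_filter3(img, np.min), np.max)
    return img - opened

def homomorphic(img, cutoff=0.1, low_gain=0.5, high_gain=1.5):
    """Log-domain Gaussian high-emphasis filter: attenuates low-frequency
    illumination (gain -> low_gain) and boosts detail (gain -> high_gain)."""
    h, w = img.shape
    logf = np.log1p(img.astype(float))
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    d2 = fy ** 2 + fx ** 2
    H = low_gain + (high_gain - low_gain) * (1 - np.exp(-d2 / (2 * cutoff ** 2)))
    out = np.fft.ifft2(np.fft.fft2(logf) * H).real
    return np.expm1(out)
```

In the paper these building blocks are chained with median/matched filtering and thresholding; the sketch only shows what each individual operator does to the image.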
Affiliation(s)
- Oscar Ramos-Soto
- División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico.
- Erick Rodríguez-Esparza
- División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico; DeustoTech, Faculty of Engineering, University of Deusto, Av. Universidades, 24, 48007 Bilbao, Spain.
- Sandra E Balderas-Mata
- División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico.
- Diego Oliva
- División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico; IN3 - Computer Science Dept., Universitat Oberta de Catalunya, Castelldefels, Spain.
- Ratheesh K Meleppat
- UC Davis Eyepod Imaging Laboratory, Dept. of Cell Biology and Human Anatomy, University of California Davis, Davis, CA 95616, USA; Dept. of Ophthalmology & Vision Science, University of California Davis, Sacramento, CA, USA.
- Robert J Zawadzki
- UC Davis Eyepod Imaging Laboratory, Dept. of Cell Biology and Human Anatomy, University of California Davis, Davis, CA 95616, USA; Dept. of Ophthalmology & Vision Science, University of California Davis, Sacramento, CA, USA.