101
Li K, Qi X, Luo Y, Yao Z, Zhou X, Sun M. Accurate Retinal Vessel Segmentation in Color Fundus Images via Fully Attention-Based Networks. IEEE J Biomed Health Inform 2021; 25:2071-2081. [PMID: 33001809] [DOI: 10.1109/jbhi.2020.3028180]
Abstract
Automatic retinal vessel segmentation is important for the diagnosis and prevention of ophthalmic diseases. Existing deep learning models for retinal vessel segmentation treat every pixel equally, yet the multi-scale structure of the vasculature is a vital factor affecting segmentation quality, especially for thin vessels. To address this gap, we propose a novel Fully Attention-based Network (FANet) that uses attention mechanisms to adaptively learn rich feature representations and aggregate multi-scale information. The framework consists of an image pre-processing procedure and a semantic segmentation network. Green channel extraction (GE) and contrast limited adaptive histogram equalization (CLAHE) are employed as pre-processing to enhance the texture and contrast of retinal fundus images. The network then combines two types of attention modules with the U-Net. We propose a lightweight dual-direction attention block to model global dependencies and reduce intra-class inconsistency, in which the weights of feature maps are updated based on the semantic correlation between pixels. The dual-direction attention block uses horizontal and vertical pooling operations to produce the attention map, so the network aggregates global contextual information from semantically close regions or series of pixels belonging to the same object category. Meanwhile, we adopt the selective kernel (SK) unit in place of standard convolution to obtain multi-scale features over different receptive field sizes via soft attention. We demonstrate that the proposed model effectively identifies irregular, noisy, and multi-scale retinal vessels. Extensive experiments on the DRIVE, STARE, and CHASE_DB1 datasets show that our method achieves state-of-the-art performance.
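As an illustration of the pre-processing step described above (green channel extraction followed by CLAHE), here is a minimal OpenCV sketch; the clip limit and tile size are assumed values, not the paper's settings:

```python
import cv2

def preprocess_fundus(path, clip_limit=2.0, tile=(8, 8)):
    """Green-channel extraction followed by CLAHE contrast enhancement."""
    bgr = cv2.imread(path)                      # OpenCV loads color images as BGR
    green = bgr[:, :, 1]                        # green channel gives the best vessel contrast
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return clahe.apply(green)
```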
102
Tang X, Peng J, Zhong B, Li J, Yan Z. Introducing frequency representation into convolution neural networks for medical image segmentation via twin-Kernel Fourier convolution. Comput Methods Programs Biomed 2021; 205:106110. [PMID: 33910149] [DOI: 10.1016/j.cmpb.2021.106110]
Abstract
BACKGROUND AND OBJECTIVE Deep learning-based methods have achieved state-of-the-art performance in medical image segmentation. However, the powerful spectral representations used elsewhere in image processing are rarely considered in these models. METHODS In this work, we introduce frequency representation into convolutional neural networks (CNNs) and design a novel model, tKFC-Net, to combine powerful feature representations in both the frequency and spatial domains. Through the Fast Fourier Transform (FFT), frequency representation is employed for pooling, upsampling, and convolution without any adjustment to the network architecture. Furthermore, we replace the original convolution with the twin-Kernel Fourier Convolution (t-KFC), a newly designed convolution layer that specifies convolution kernels for particular functions and extracts features from different frequency components. RESULTS Experiments show that our method has an edge over other models for medical image segmentation. The model was evaluated on four datasets: skin lesion segmentation (ISIC 2018), retinal blood vessel segmentation (DRIVE), lung segmentation (COVID-19-CT-Seg), and brain tumor segmentation (BraTS 2019), achieving F1-scores of 0.878, 0.8185, 0.9830, and 0.8457, respectively. CONCLUSION Introducing the spectral representation retains spectral features, which results in more accurate segmentation. The proposed method is orthogonal to other topology-improvement methods and can readily be combined with them.
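To make the frequency-domain convolution idea concrete, the following is a minimal PyTorch sketch of applying a per-channel kernel by pointwise multiplication of FFT spectra (the convolution theorem). It is a simplified stand-in, not the paper's t-KFC layer:

```python
import torch

def fourier_conv2d(x, weight):
    """Per-channel circular convolution computed in the frequency domain.

    x:      (B, C, H, W) feature maps
    weight: (C, kH, kW) one spatial kernel per channel; rfft2 zero-pads it to (H, W)
    """
    X = torch.fft.rfft2(x)                            # (B, C, H, W//2+1), complex
    W = torch.fft.rfft2(weight, s=x.shape[-2:])       # (C, H, W//2+1)
    return torch.fft.irfft2(X * W, s=x.shape[-2:])    # convolution theorem

x = torch.randn(2, 8, 64, 64)
w = torch.randn(8, 7, 7)
print(fourier_conv2d(x, w).shape)   # torch.Size([2, 8, 64, 64])
```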
Affiliation(s)
- Xianlun Tang
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Jiangping Peng
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Bing Zhong
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Jie Li
- College of Mobile Telecommunications, Chongqing University of Posts and Telecom, Chongqing 401520, China
- Zhenfu Yan
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
103

104
Li D, Rahardja S. BSEResU-Net: An attention-based before-activation residual U-Net for retinal vessel segmentation. Comput Methods Programs Biomed 2021; 205:106070. [PMID: 33857703] [DOI: 10.1016/j.cmpb.2021.106070]
Abstract
BACKGROUND AND OBJECTIVES Retinal vessels are a major feature used by physicians to diagnose many diseases, such as cardiovascular disease and glaucoma. The design of automatic segmentation algorithms for retinal vessels has therefore drawn great attention in the medical field. Recently, deep learning methods, especially convolutional neural networks (CNNs), have shown extraordinary potential for vessel segmentation. However, most deep learning methods rely on relatively shallow networks trained with a conventional cross-entropy objective, which becomes the main obstacle to further improving performance on such an imbalanced task. We therefore propose a new type of residual U-Net, the Before-activation Squeeze-and-Excitation ResU-Net (BSEResU-Net), to tackle these issues. METHODS BSEResU-Net can be viewed as an encoder/decoder framework built from Before-activation Squeeze-and-Excitation blocks (BSE blocks). In contrast to existing CNN structures, we utilize a new residual block, the BSE block, in which an attention mechanism is combined with the skip connection to boost performance. Moreover, the network consistently gains accuracy from increasing depth as more residual blocks are incorporated, owing to the DropBlock mechanism used in the BSE blocks. A joint loss function based on the Dice and cross-entropy losses is also introduced to achieve more balanced segmentation between vessel and non-vessel pixels. RESULTS The proposed BSEResU-Net is evaluated on the publicly available DRIVE, STARE and HRF datasets. It achieves F1-scores of 0.8324, 0.8368 and 0.8237 on the DRIVE, STARE and HRF datasets, respectively. Experimental results show that BSEResU-Net outperforms current state-of-the-art algorithms. CONCLUSIONS The proposed algorithm uses a new type of residual block, the BSE residual block, for vessel segmentation. Together with a joint loss function, it shows outstanding performance on both low- and high-resolution fundus images.
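The joint Dice and cross-entropy loss mentioned above is a standard construction; a minimal PyTorch sketch (the weighting factor alpha is an assumption, not the paper's value):

```python
import torch
import torch.nn.functional as F

def joint_dice_ce_loss(logits, target, alpha=0.5, eps=1e-6):
    """Joint loss combining Dice and binary cross-entropy, a common remedy for the
    vessel/background class imbalance described in the abstract.

    logits: (B, 1, H, W) raw network outputs
    target: (B, 1, H, W) binary vessel ground truth
    """
    prob = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, target)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * inter + eps) / (union + eps)
    return alpha * ce + (1 - alpha) * (1 - dice.mean())

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.9).float()
print(joint_dice_ce_loss(logits, target))
```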
Affiliation(s)
- Di Li
- Centre of Intelligent Acoustics and Immersive Communications, School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, P.R. China.
- Susanto Rahardja
- Centre of Intelligent Acoustics and Immersive Communications, School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, P.R. China.
105
Multichannel Retinal Blood Vessel Segmentation Based on the Combination of Matched Filter and U-Net Network. Biomed Res Int 2021; 2021:5561125. [PMID: 34124247] [PMCID: PMC8172291] [DOI: 10.1155/2021/5561125]
Abstract
To address the problem of insufficient extraction of small retinal blood vessels, we propose a retinal blood vessel segmentation algorithm that combines supervised and unsupervised learning. The method uses a multiscale matched filter with vessel enhancement capability and a U-Net model with an encoder-decoder network structure. Three channels are used to extract vessel features separately, and the segmentation results of the three channels are then merged. The proposed algorithm is verified and evaluated on the DRIVE, STARE, and CHASE_DB1 datasets. The experimental results show that it segments small blood vessels better than most other methods, reaching sensitivities of 0.8745, 0.8903, and 0.8916 on the three datasets, respectively, nearly 0.1 higher than existing methods.
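A minimal sketch of a multiscale matched filter for vessel enhancement of the kind described above (Gaussian-profile kernels at several widths and orientations, keeping the maximum response per pixel); all parameter values are illustrative, not the paper's:

```python
import numpy as np
import cv2

def matched_filter_response(gray, sigmas=(1.0, 1.5, 2.0), length=9, n_angles=12):
    """Maximum response over Gaussian-profile kernels of several widths and orientations."""
    gray = gray.astype(np.float32)
    response = np.zeros_like(gray)
    for sigma in sigmas:
        half = int(3 * sigma)
        xs = np.arange(-half, half + 1, dtype=np.float32)
        profile = -np.exp(-(xs ** 2) / (2 * sigma ** 2))     # vessels are darker than background
        kernel = np.tile(profile, (length, 1))
        kernel -= kernel.mean()                              # zero-mean matched-filter kernel
        for k in range(n_angles):
            angle = 180.0 * k / n_angles
            M = cv2.getRotationMatrix2D((kernel.shape[1] / 2, kernel.shape[0] / 2), angle, 1.0)
            rotated = cv2.warpAffine(kernel, M, (kernel.shape[1], kernel.shape[0]))
            response = np.maximum(response, cv2.filter2D(gray, -1, rotated))
    return response
```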
106
Al-Masni MA, Kim DH. CMM-Net: Contextual multi-scale multi-level network for efficient biomedical image segmentation. Sci Rep 2021; 11:10191. [PMID: 33986375] [PMCID: PMC8119726] [DOI: 10.1038/s41598-021-89686-3]
Abstract
Medical image segmentation of tissue abnormalities, key organs, or the blood vascular system is of great significance for any computerized diagnostic system. However, automatic segmentation in medical image analysis is a challenging task, since it requires sophisticated knowledge of the target organ's anatomy. This paper develops an end-to-end deep learning segmentation method called the Contextual Multi-Scale Multi-Level Network (CMM-Net). The main idea is to fuse global contextual features at multiple spatial scales at every contracting level of the U-Net. We also re-exploit the dilated convolution module, which expands the receptive field with different rates depending on the size of the feature maps throughout the network. In addition, an augmented testing scheme referred to as Inversion Recovery (IR), which uses logical "OR" and "AND" operators, is developed. The proposed segmentation network is evaluated on three medical imaging datasets: ISIC 2017 for skin lesion segmentation from dermoscopy images, DRIVE for retinal blood vessel segmentation from fundus images, and BraTS 2018 for brain glioma segmentation from MR scans. The experimental results showed state-of-the-art performance, with overall Dice similarity coefficients of 85.78%, 80.27%, and 88.96% for the segmentation of skin lesions, retinal blood vessels, and brain tumors, respectively. The proposed CMM-Net is general and could be applied efficiently as a robust tool for various medical image segmentation tasks.
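The multi-scale dilated convolution idea can be sketched as parallel 3x3 convolutions with different dilation rates whose outputs are concatenated and fused; the rates and channel sizes below are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class MultiScaleContext(nn.Module):
    """Parallel dilated convolutions with different rates, fused by concatenation."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 32, 64, 64)
print(MultiScaleContext(32, 32)(x).shape)   # torch.Size([1, 32, 64, 64])
```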
Affiliation(s)
- Mohammed A Al-Masni
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea.
107
Jo HC, Jeong H, Lee J, Na KS, Kim DY. Quantification of Blood Flow Velocity in the Human Conjunctival Microvessels Using Deep Learning-Based Stabilization Algorithm. Sensors (Basel) 2021; 21:3224. [PMID: 34066590] [PMCID: PMC8124391] [DOI: 10.3390/s21093224]
Abstract
Quantifying blood flow velocity in the human conjunctiva is clinically essential for assessing microvascular hemodynamics. Because the conjunctival microvessels are imaged over several seconds, eye motion during acquisition causes motion artifacts that limit the accuracy of image segmentation and of the blood flow velocity measurement. In this paper, we introduce a customized optical imaging system for the human conjunctiva with deep learning-based segmentation and motion correction. Segmentation is performed with an Attention U-Net structure to achieve high performance on conjunctiva images with motion blur. Motion correction proceeds in two steps, registration followed by template matching, to correct for large displacements and fine movements. Image displacement decreases to 4-7 μm after registration (first step) and to less than 1 μm after template matching (second step). From the corrected images, the blood flow velocity is calculated for selected vessels, taking temporal signal variance and vessel length into account. These methods for resolving motion artifacts offer insights for studies quantifying the hemodynamics of the conjunctiva as well as other tissues.
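The second (template matching) correction step can be sketched with OpenCV's normalized cross-correlation; the patch size and single-patch design are assumptions for illustration, not the paper's exact procedure (frames are assumed to be single-channel uint8 arrays):

```python
import cv2
import numpy as np

def fine_align(frame, reference, patch=64):
    """Align a frame to a reference by template matching a central reference patch."""
    h, w = reference.shape
    y0, x0 = h // 2 - patch // 2, w // 2 - patch // 2
    template = reference[y0:y0 + patch, x0:x0 + patch]
    res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (mx, my) = cv2.minMaxLoc(res)            # best-match top-left corner (x, y)
    dx, dy = x0 - mx, y0 - my                         # shift needed to align frame to reference
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(frame, M, (w, h)), (dx, dy)
```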
Affiliation(s)
- Hang-Chan Jo
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea; (H.-C.J.); (H.J.); (J.L.)
- Center for Sensor Systems, Inha University, Incheon 22212, Korea
- Hyeonwoo Jeong
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea; (H.-C.J.); (H.J.); (J.L.)
- Junhyuk Lee
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea; (H.-C.J.); (H.J.); (J.L.)
- Kyung-Sun Na
- Department of Ophthalmology & Visual Science, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea
- Correspondence: (K.-S.N.); (D.-Y.K.); Tel.: +82-02-3779-1520 (K.-S.N.); +82-32-860-7394 (D.-Y.K.)
- Dae-Yu Kim
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea; (H.-C.J.); (H.J.); (J.L.)
- Center for Sensor Systems, Inha University, Incheon 22212, Korea
- Inha Research Institute for Aerospace Medicine, Inha University, Incheon 22212, Korea
- Correspondence: (K.-S.N.); (D.-Y.K.); Tel.: +82-02-3779-1520 (K.-S.N.); +82-32-860-7394 (D.-Y.K.)
108
Zhou Y, Chen Z, Shen H, Zheng X, Zhao R, Duan X. A refined equilibrium generative adversarial network for retinal vessel segmentation. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.06.143]
109
Dharmawan DA. Assessing fairness in performance evaluation of publicly available retinal blood vessel segmentation algorithms. J Med Eng Technol 2021; 45:351-360. [PMID: 33843422] [DOI: 10.1080/03091902.2021.1906342]
Abstract
Various algorithms have been proposed in the literature for automatically extracting blood vessels from retinal images. In general, they are developed and evaluated on publicly available datasets such as DRIVE and STARE, using metrics such as sensitivity, specificity, and accuracy. However, not all methods have been evaluated and compared fairly against their counterparts. In particular, some publicly available algorithms measure performance only inside the field of view (FOV) of each retinal image, while others use the complete image, so comparing results across the two groups can be misleading. This study assesses fairness in the performance evaluation of publicly available retinal blood vessel segmentation algorithms. It yields several meaningful results: (i) a guideline for assessing fairness in the performance evaluation of retinal vessel segmentation algorithms, (ii) a fairer performance comparison of retinal vessel segmentation algorithms in the literature, and (iii) a recommendation on performance evaluation metrics that avoid misleading comparisons.
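The FOV issue discussed above comes down to which pixels enter the confusion matrix; a minimal sketch of sensitivity, specificity, and accuracy restricted to an optional FOV mask:

```python
import numpy as np

def segmentation_metrics(pred, gt, fov=None):
    """Sensitivity, specificity, and accuracy, optionally restricted to the FOV.

    pred, gt, fov: boolean arrays of the same shape. fov=None evaluates the whole
    image; passing the FOV mask reproduces the "inside-FOV" protocol.
    """
    if fov is None:
        fov = np.ones_like(gt, dtype=bool)
    p, g = pred[fov], gt[fov]
    tp = np.sum(p & g)
    tn = np.sum(~p & ~g)
    fp = np.sum(p & ~g)
    fn = np.sum(~p & g)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / p.size
    return sensitivity, specificity, accuracy
```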
110
Wang B, Wang S, Qiu S, Wei W, Wang H, He H. CSU-Net: A Context Spatial U-Net for Accurate Blood Vessel Segmentation in Fundus Images. IEEE J Biomed Health Inform 2021; 25:1128-1138. [PMID: 32750968] [DOI: 10.1109/jbhi.2020.3011178]
Abstract
Blood vessel segmentation in fundus images is a critical procedure in the diagnosis of ophthalmic diseases. Recent deep learning methods achieve high segmentation accuracy but still struggle to segment the microvasculature and to detect vessel boundaries. This is because common convolutional neural networks (CNNs) cannot preserve rich spatial information and a large receptive field simultaneously. Moreover, CNN models for vessel segmentation are usually trained with a pixel-wise cross-entropy loss that weights all pixels equally and therefore tends to miss fine vessel structures. In this paper, we propose a novel Context Spatial U-Net (CSU-Net) for blood vessel segmentation. In contrast to other U-Net based models, we design a two-channel encoder: a context channel with multi-scale convolutions to enlarge the receptive field, and a spatial channel with large kernels to retain spatial information. To combine and strengthen the features extracted from the two paths, we introduce a feature fusion module (FFM) and an attention skip module (ASM). Furthermore, we propose a structure loss, which adds a spatial weight to the cross-entropy loss and guides the network to focus on thin vessels and boundaries. We evaluated this model on three public datasets: DRIVE, CHASE-DB1 and STARE. The results show that CSU-Net achieves higher segmentation accuracy than current state-of-the-art methods.
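A spatially weighted cross-entropy of the kind described (larger weight on thin vessels and boundaries) can be sketched as follows; the weight-map construction and constants are assumptions, not the paper's structure loss:

```python
import torch
import torch.nn.functional as F

def structure_weighted_bce(logits, target, lam=5.0, k=15):
    """Binary cross-entropy with a spatial weight map.

    The weight is large where a pixel's label differs from its local neighborhood
    average, which peaks at boundaries and thin vessels; lam and the window size k
    are illustrative values.
    """
    local_mean = F.avg_pool2d(target, kernel_size=k, stride=1, padding=k // 2)
    weight = 1.0 + lam * torch.abs(target - local_mean)
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weight * bce).sum() / weight.sum()

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.9).float()
print(structure_weighted_bce(logits, target))
```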
111
Mookiah MRK, Hogg S, MacGillivray T, Trucco E. On the quantitative effects of compression of retinal fundus images on morphometric vascular measurements in VAMPIRE. Comput Methods Programs Biomed 2021; 202:105969. [PMID: 33631639] [DOI: 10.1016/j.cmpb.2021.105969]
Abstract
BACKGROUND AND OBJECTIVES This paper reports a quantitative analysis of the effects of JPEG (Joint Photographic Experts Group) compression of retinal fundus camera images on automatic vessel segmentation and on the morphometric vascular measurements derived from it, including vessel width, tortuosity and fractal dimension. METHODS Measurements are computed with VAMPIRE (Vascular Assessment and Measurement Platform for Images of the Retina), a specialized software application adopted in many international studies on retinal biomarkers. For reproducibility, we use three public archives of fundus images, Digital Retinal Images for Vessel Extraction (DRIVE), Automated Retinal Image Analyzer (ARIA) and High-Resolution Fundus (HRF), and generate compressed versions of the original images at a range of representative quality levels. RESULTS We compare the resulting vessel segmentations with ground-truth maps, and the morphological measurements of the vascular network with those obtained from the original (uncompressed) images. Segmentation quality is assessed with sensitivity, specificity, accuracy, area under the curve and the Dice coefficient; agreement between VAMPIRE measurements from compressed and uncompressed images is assessed with correlation, intra-class correlation and Bland-Altman analysis. CONCLUSIONS The results suggest that VAMPIRE width-related measurements (central retinal artery equivalent (CRAE), central retinal vein equivalent (CRVE), arteriolar-venular width ratio (AVR)), the fractal dimension (FD) and arteriolar tortuosity agree very well with those from the original images, remaining substantially stable even for a strong loss of quality (20% of the original), which supports the suitability of VAMPIRE for association studies with compressed images.
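Generating compressed versions at representative levels can be done with Pillow; a minimal sketch (the quality values are illustrative, not the levels used in the study):

```python
from PIL import Image
import io

def jpeg_versions(path, qualities=(100, 80, 60, 40, 20)):
    """Return JPEG-recompressed copies of an image at several quality levels."""
    original = Image.open(path).convert("RGB")
    versions = {}
    for q in qualities:
        buf = io.BytesIO()
        original.save(buf, format="JPEG", quality=q)
        buf.seek(0)
        versions[q] = Image.open(buf).copy()   # decode the compressed bytes back to an image
    return versions
```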
112
P GP, Biswal B, Biswal P. Robust classification of neovascularization using random forest classifier via convoluted vascular network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102420]
113
Ramos-Soto O, Rodríguez-Esparza E, Balderas-Mata SE, Oliva D, Hassanien AE, Meleppat RK, Zawadzki RJ. An efficient retinal blood vessel segmentation in eye fundus images by using optimized top-hat and homomorphic filtering. Comput Methods Programs Biomed 2021; 201:105949. [PMID: 33567382] [DOI: 10.1016/j.cmpb.2021.105949]
Abstract
BACKGROUND AND OBJECTIVE Automatic segmentation of retinal blood vessels makes a major contribution to computer-aided diagnosis (CADx) of various ophthalmic and cardiovascular diseases. A procedure that segments both thin and thick retinal vessels is essential for medical analysis and diagnosis of related diseases. This article proposes a novel methodology for robust vessel segmentation that addresses the challenges reported in the literature. METHODS The proposed methodology consists of three stages: pre-processing, main processing, and post-processing. The first stage applies smoothing filters. The main processing stage is divided into two configurations: the first segments thick vessels through the new optimized top-hat, homomorphic filtering, and a median filter; the second segments thin vessels using the proposed optimized top-hat, homomorphic filtering, a matched filter, and segmentation with the MCET-HHO multilevel thresholding algorithm. Finally, morphological operations are carried out in the post-processing stage. RESULTS The proposed approach was assessed on two publicly available databases (DRIVE and STARE) using three performance metrics: specificity, sensitivity, and accuracy. Averages of 0.9860, 0.7578 and 0.9667, respectively, were achieved on the DRIVE dataset, and 0.9836, 0.7474 and 0.9580 on the STARE dataset. CONCLUSIONS The numerical results achieved by the proposed technique are competitive with up-to-date techniques. The proposed approach outperforms the leading unsupervised methods discussed in terms of specificity and accuracy, and outperforms most state-of-the-art supervised methods without their associated computational cost. Detailed visual analysis shows that the proposed approach segments thin vessels more precisely than the compared procedures.
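A minimal sketch of the two core enhancement operations named above, black top-hat filtering and homomorphic filtering; the structuring-element size and Gaussian cutoff are illustrative, not the optimized values the paper searches for:

```python
import cv2
import numpy as np

def tophat_homomorphic(gray, se_size=15, sigma=10.0):
    """Black top-hat vessel enhancement followed by simple homomorphic filtering."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_size, se_size))
    # Black top-hat highlights dark vessels against the brighter retinal background.
    tophat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, se)

    # Homomorphic filtering: work in log space and suppress low-frequency illumination.
    img = np.log1p(tophat.astype(np.float32))
    low = cv2.GaussianBlur(img, (0, 0), sigma)
    high = img - low                               # keep high-frequency (vessel) detail
    out = np.expm1(high - high.min())
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```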
Affiliation(s)
- Oscar Ramos-Soto
- División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico.
- Erick Rodríguez-Esparza
- División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico; DeustoTech, Faculty of Engineering, University of Deusto, Av. Universidades, 24, 48007 Bilbao, Spain.
- Sandra E Balderas-Mata
- División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico.
- Diego Oliva
- División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico; IN3 - Computer Science Dept., Universitat Oberta de Catalunya, Castelldefels, Spain.
- Ratheesh K Meleppat
- UC Davis Eyepod Imaging Laboratory, Dept. of Cell Biology and Human Anatomy, University of California Davis, Davis, CA 95616, USA; Dept. of Ophthalmology & Vision Science, University of California Davis, Sacramento, CA, USA.
- Robert J Zawadzki
- UC Davis Eyepod Imaging Laboratory, Dept. of Cell Biology and Human Anatomy, University of California Davis, Davis, CA 95616, USA; Dept. of Ophthalmology & Vision Science, University of California Davis, Sacramento, CA, USA.
114
Lightweight pyramid network with spatial attention mechanism for accurate retinal vessel segmentation. Int J Comput Assist Radiol Surg 2021; 16:673-682. [PMID: 33751370] [DOI: 10.1007/s11548-021-02344-x]
Abstract
PURPOSE The morphological characteristics of retinal vessels are vital for the early diagnosis of pathological conditions such as diabetes and hypertension. However, low contrast and complex morphology make automatic retinal vessel segmentation challenging. To extract precise semantic features, more convolution and pooling operations are typically adopted, but some structural information is then potentially lost. METHODS We propose a novel lightweight pyramid network (LPN) that fuses multi-scale features with a spatial attention mechanism to preserve the structural information of retinal vessels. A pyramid hierarchy is constructed to generate multi-scale representations, whose semantic features are strengthened by the attention mechanism, and the combination of multi-scale features contributes to accurate prediction. RESULTS The LPN is evaluated on the benchmark DRIVE, STARE and CHASE datasets, and the results indicate state-of-the-art performance (e.g., ACC of 97.09%/97.49%/97.48% and AUC of 98.79%/99.01%/98.91% on the DRIVE, STARE and CHASE datasets, respectively). The robustness and generalization ability of the LPN are further demonstrated in cross-training experiments. CONCLUSION Visualization experiments reveal the semantic gap between pyramid scales and verify the effectiveness of the attention mechanism, providing a potential basis for the pyramid hierarchy in multi-scale vessel segmentation. Furthermore, the number of model parameters is greatly reduced.
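A generic form of the spatial attention described above (a CBAM-style gate built from pooled channel statistics), not the paper's exact block:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Reweight every spatial location with a 1-channel attention map."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)           # per-location channel average
        mx, _ = x.max(dim=1, keepdim=True)          # per-location channel maximum
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn

x = torch.randn(1, 32, 64, 64)
print(SpatialAttention()(x).shape)   # torch.Size([1, 32, 64, 64])
```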
115
Fast and efficient retinal blood vessel segmentation method based on deep learning network. Comput Med Imaging Graph 2021; 90:101902. [PMID: 33892389] [DOI: 10.1016/j.compmedimag.2021.101902]
Abstract
Segmentation of the retinal vascular tree is a major step in detecting ocular pathologies, and the clinical context demands high segmentation performance with reduced processing time. For higher segmentation accuracy, several automated methods have been based on deep learning (DL) networks, but the convolutional layers they use lead to high computational complexity and long execution times. To address this need, this work presents a new DL-based method for retinal vessel tree segmentation. Our main contribution is a new U-shaped DL architecture using lightweight convolution blocks, which preserves high segmentation performance while reducing computational complexity. As a second contribution, pre-processing and data augmentation steps are proposed with respect to retinal image and blood vessel characteristics. The proposed method is tested on the DRIVE and STARE databases and achieves a good trade-off between detection rate and detection time, with average accuracies of 0.978 and 0.98 in 0.59 s and 0.48 s per fundus image on an NVIDIA GTX 980 GPU for DRIVE and STARE, respectively.
116
Li X, Jiang Y, Li M, Yin S. Lightweight Attention Convolutional Neural Network for Retinal Vessel Image Segmentation. IEEE Trans Industr Inform 2021; 17:1958-1967. [DOI: 10.1109/tii.2020.2993842]
117
Jia D, Zhuang X. Learning-based algorithms for vessel tracking: A review. Comput Med Imaging Graph 2021; 89:101840. [PMID: 33548822] [DOI: 10.1016/j.compmedimag.2020.101840]
Abstract
Developing efficient vessel-tracking algorithms is crucial for imaging-based diagnosis and treatment of vascular diseases. Vessel tracking aims to solve recognition problems such as key (seed) point detection, centerline extraction, and vascular segmentation. Extensive image-processing techniques have been developed to overcome the difficulties of vessel tracking, which stem mainly from the complex morphologies of vessels and the image characteristics of angiography. This paper presents a literature review of vessel-tracking methods, focusing on machine-learning-based approaches. Conventional machine-learning-based algorithms are reviewed first, followed by a general survey of deep-learning-based frameworks. Evaluation issues are then introduced on the basis of the reviewed methods. The paper concludes with a discussion of the remaining challenges and future research directions.
Affiliation(s)
- Dengqiang Jia
- School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiahai Zhuang
- School of Data Science, Fudan University, Shanghai, China.
118
Li T, Bo W, Hu C, Kang H, Liu H, Wang K, Fu H. Applications of deep learning in fundus images: A review. Med Image Anal 2021; 69:101971. [PMID: 33524824] [DOI: 10.1016/j.media.2021.101971]
Abstract
The use of fundus images for the early screening of eye diseases is of great clinical importance. Owing to its powerful performance, deep learning is becoming increasingly popular in related applications, such as lesion segmentation, biomarker segmentation, disease diagnosis and image synthesis. It is therefore timely to summarize recent developments in deep learning for fundus images in a review paper. In this review, we introduce 143 application papers organized in a carefully designed hierarchy, and present 33 publicly available datasets. Summaries and analyses are provided for each task. Finally, limitations common to all tasks are discussed and possible solutions are given. We will also release and regularly update the state-of-the-art results and newly released datasets at https://github.com/nkicsl/Fundus_Review to keep pace with the rapid development of this field.
Affiliation(s)
- Tao Li
- College of Computer Science, Nankai University, Tianjin 300350, China
- Wang Bo
- College of Computer Science, Nankai University, Tianjin 300350, China
- Chunyu Hu
- College of Computer Science, Nankai University, Tianjin 300350, China
- Hong Kang
- College of Computer Science, Nankai University, Tianjin 300350, China
- Hanruo Liu
- Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
- Kai Wang
- College of Computer Science, Nankai University, Tianjin 300350, China.
- Huazhu Fu
- Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE
119
Naveed K, Daud F, Madni HA, Khan MA, Khan TM, Naqvi SS. Towards Automated Eye Diagnosis: An Improved Retinal Vessel Segmentation Framework Using Ensemble Block Matching 3D Filter. Diagnostics (Basel) 2021; 11:114. [PMID: 33445723] [PMCID: PMC7828181] [DOI: 10.3390/diagnostics11010114]
Abstract
Automated detection of vision-threatening eye disease from high-resolution retinal fundus images requires accurate segmentation of the blood vessels. In this regard, detection and segmentation of the finer vessels, which are obscured by a considerable degree of noise and poor illumination, is particularly challenging. These noises include (systematic) additive noise and multiplicative (speckle) noise, which arise from various practical limitations of fundus imaging systems. To address this inherent issue, we present an efficient unsupervised vessel segmentation strategy as a step towards accurate classification of eye diseases from noisy fundus images. To that end, an ensemble block matching 3D (BM3D) speckle filter is proposed to remove unwanted noise and improve detection. The BM3D speckle filter, despite its ability to recover finer details (i.e., vessels in fundus images), yields a pattern of checkerboard artifacts after multiplicative (speckle) noise removal. Such artifacts are generally ignored in satellite images; in fundus images, however, they degrade the segmentation and detection of fine vessels. To counter this, an ensemble of BM3D speckle filters is proposed to suppress the artifacts while further sharpening the recovered vessels. This is then used to devise an improved unsupervised segmentation strategy that can detect fine vessels even in the presence of dominant noise and yields much improved overall accuracy. Testing was carried out on three publicly available databases: Structured Analysis of the Retina (STARE), Digital Retinal Images for Vessel Extraction (DRIVE) and CHASE_DB1. We achieved sensitivities of 82.88, 81.41 and 82.03 on DRIVE, STARE, and CHASE_DB1, respectively, and accuracies of 95.41, 95.70 and 95.61. The performance of the proposed method on images with pathologies was more convincing than that of similar state-of-the-art methods.
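The ensemble idea, averaging denoised copies computed at several shifts to break up the fixed block grid behind checkerboard artifacts, can be sketched as follows; skimage's non-local means is used purely as a stand-in denoiser for the BM3D-speckle filter, and the shift set is an assumption:

```python
import numpy as np
from skimage.restoration import denoise_nl_means

def ensemble_denoise(img, shifts=((0, 0), (2, 0), (0, 2), (2, 2))):
    """Cycle-spinning style ensemble: shift, denoise, shift back, average."""
    img = img.astype(np.float64)
    acc = np.zeros_like(img)
    for dy, dx in shifts:
        shifted = np.roll(img, (dy, dx), axis=(0, 1))
        den = denoise_nl_means(shifted, h=0.05, patch_size=5, patch_distance=6)
        acc += np.roll(den, (-dy, -dx), axis=(0, 1))
    return acc / len(shifts)
```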
Affiliation(s)
- Khuram Naveed
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan; (H.A.M.); (S.S.N.)
- Faizan Daud
- School of Information Technology, Faculty of Science Engineering & Built Environment, Deakin University, Locked Bag 20000, Geelong, VIC 3220, Australia;
- Hussain Ahmad Madni
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan; (H.A.M.); (S.S.N.)
- Mohammad A.U. Khan
- Department of Electrical Engineering, Namal Institute, Mianwali, Namal 42200, Pakistan;
- Tariq M. Khan
- School of Information Technology, Faculty of Science Engineering & Built Environment, Deakin University, Locked Bag 20000, Geelong, VIC 3220, Australia;
- Syed Saud Naqvi
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan; (H.A.M.); (S.S.N.)
120
A Hybrid Unsupervised Approach for Retinal Vessel Segmentation. Biomed Res Int 2020; 2020:8365783. [PMID: 33381585] [PMCID: PMC7749777] [DOI: 10.1155/2020/8365783]
Abstract
Retinal vessel segmentation (RVS) is a significant source of useful information for the monitoring, identification, initial medication, and surgical planning of ophthalmic disorders. The most common disorders, i.e., stroke, diabetic retinopathy (DR), and cardiac diseases, often change the normal structure of the retinal vascular network. Much research has been devoted to building an automatic RVS system, but it remains an open issue. In this article, a framework is proposed for RVS with fast execution and competitive results. An initial binary image is obtained by applying MISODATA to the preprocessed image. For vessel structure enhancement, B-COSFIRE filters are used together with thresholding to obtain a second binary image. These two binary images are combined by a logical AND-type operation. The result is then fused with the B-COSFIRE-enhanced image, followed by thresholding, to obtain the vessel location map (VLM). The methodology is verified on four datasets, DRIVE, STARE, HRF, and CHASE_DB1, which are publicly accessible for benchmarking and validation, and the results are compared with existing competing methods.
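A minimal sketch of the combination logic described above; skimage's ISODATA threshold stands in for MISODATA, cosfire_response is assumed to be a precomputed B-COSFIRE enhancement map normalized to [0, 1], and the thresholds and exact fusion rule are assumptions:

```python
from skimage.filters import threshold_isodata

def vessel_location_map(preprocessed, cosfire_response, loose=0.2, strict=0.5):
    """AND of two loosely thresholded maps, fused (OR) with a strictly thresholded map."""
    binary1 = preprocessed < threshold_isodata(preprocessed)   # vessels assumed darker
    binary2 = cosfire_response > loose
    combined = binary1 & binary2                               # agreement of the two cues
    return combined | (cosfire_response > strict)              # add high-confidence vessels back
```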
121
Rodrigues EO, Conci A, Liatsis P. ELEMENT: Multi-Modal Retinal Vessel Segmentation Based on a Coupled Region Growing and Machine Learning Approach. IEEE J Biomed Health Inform 2020; 24:3507-3519. [PMID: 32750920] [DOI: 10.1109/jbhi.2020.2999257]
Abstract
Vascular structures in the retina contain important information for the detection and analysis of ocular diseases, including age-related macular degeneration, diabetic retinopathy and glaucoma. Commonly used modalities in diagnosis of these diseases are fundus photography, scanning laser ophthalmoscope (SLO) and fluorescein angiography (FA). Typically, retinal vessel segmentation is carried out either manually or interactively, which makes it time consuming and prone to human errors. In this research, we propose a new multi-modal framework for vessel segmentation called ELEMENT (vEsseL sEgmentation using Machine lEarning and coNnecTivity). This framework consists of feature extraction and pixel-based classification using region growing and machine learning. The proposed features capture complementary evidence based on grey level and vessel connectivity properties. The latter information is seamlessly propagated through the pixels at the classification phase. ELEMENT reduces inconsistencies and speeds up the segmentation throughput. We analyze and compare the performance of the proposed approach against state-of-the-art vessel segmentation algorithms in three major groups of experiments, for each of the ocular modalities. Our method produced higher overall performance, with an overall accuracy of 97.40%, compared to 25 of the 26 state-of-the-art approaches, including six works based on deep learning, evaluated on the widely known DRIVE fundus image dataset. In the case of the STARE, CHASE-DB, VAMPIRE FA, IOSTAR SLO and RC-SLO datasets, the proposed framework outperformed all of the state-of-the-art methods with accuracies of 98.27%, 97.78%, 98.34%, 98.04% and 98.35%, respectively.
122
Wang S, Yu L, Li K, Yang X, Fu CW, Heng PA. DoFE: Domain-Oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets. IEEE Trans Med Imaging 2020; 39:4237-4248. [PMID: 32776876] [DOI: 10.1109/tmi.2020.3015224]
Abstract
Deep convolutional neural networks have significantly boosted the performance of fundus image segmentation when the test data have the same distribution as the training data. However, in clinical practice, medical images often vary in appearance for various reasons, e.g., different scanner vendors and image quality. These distribution discrepancies can cause deep networks to over-fit the training datasets and lose generalization ability on unseen test datasets. To alleviate this issue, we present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains by exploiting knowledge from multiple source domains. Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains to make the semantic features more discriminative. Specifically, we introduce a Domain Knowledge Pool to learn and memorize the prior information extracted from multi-source domains. The original image features are then augmented with domain-oriented aggregated features, which are induced from the knowledge pool based on the similarity between the input image and multi-source domain images. We further design a novel domain code prediction branch to infer this similarity and employ an attention-guided mechanism to dynamically combine the aggregated features with the semantic features. We comprehensively evaluate our DoFE framework on two fundus image segmentation tasks, optic cup/disc segmentation and vessel segmentation. The DoFE framework generates satisfactory segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
123
Wang D, Haytham A, Pottenburgh J, Saeedi O, Tao Y. Hard Attention Net for Automatic Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2020; 24:3384-3396. [DOI: 10.1109/jbhi.2020.3002985]
124
Mookiah MRK, Hogg S, MacGillivray TJ, Prathiba V, Pradeepa R, Mohan V, Anjana RM, Doney AS, Palmer CNA, Trucco E. A review of machine learning methods for retinal blood vessel segmentation and artery/vein classification. Med Image Anal 2020; 68:101905. [PMID: 33385700] [DOI: 10.1016/j.media.2020.101905]
Abstract
The eye affords a unique opportunity to inspect a rich part of the human microvasculature non-invasively via retinal imaging. Retinal blood vessel segmentation and classification are prime steps for the diagnosis and risk assessment of microvascular and systemic diseases, and a high volume of techniques based on deep learning has been published in recent years. In this context, we review 158 papers published between 2012 and 2020 on machine and deep learning (DL) methods for automatic vessel segmentation and classification in fundus camera images. We divide the methods into classes by task (segmentation or artery-vein classification), technique (supervised or unsupervised, deep and non-deep learning, hand-crafted methods) and more specific algorithms (e.g., multiscale, morphology). We discuss advantages and limitations, and include tables summarizing results at a glance. Finally, we attempt to assess the quantitative merit of DL methods in terms of accuracy improvement compared to other methods. The results allow us to offer our views on the outlook for vessel segmentation and classification in fundus camera images.
Affiliation(s)
- Stephen Hogg
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
- Tom J MacGillivray
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh EH16 4SB, UK
- Vijayaraghavan Prathiba
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Rajendra Pradeepa
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Viswanathan Mohan
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Ranjit Mohan Anjana
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Alexander S Doney
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
- Colin N A Palmer
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
- Emanuele Trucco
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
125
Retinal Vessel Segmentation by Deep Residual Learning with Wide Activation. Comput Intell Neurosci 2020; 2020:8822407. [PMID: 33101403] [PMCID: PMC7569427] [DOI: 10.1155/2020/8822407]
Abstract
Purpose Retinal blood vessel segmentation is an important step in ophthalmological analysis. However, it is difficult to segment small vessels accurately because of the low contrast and complex feature information of blood vessels. The objective of this study is to develop an improved retinal blood vessel segmentation structure (WA-Net) to overcome these challenges. Methods This work focuses on the width of the deep network. The channels of the ResNet block were broadened to propagate more low-level features, and the identity-mapping pathway was slimmed to maintain parameter complexity. A residual atrous spatial pyramid module was used to capture retinal vessels at various scales. We applied weight normalization to eliminate the impact of the mini-batch and improve segmentation accuracy. The experiments were performed on the DRIVE and STARE datasets, with cross-training between datasets to show the generalizability of WA-Net. Results Within datasets, global accuracy was 95.66% and 96.45%, and specificity was 98.13% and 98.71%, respectively. In cross-dataset evaluation, accuracy and area under the curve diverged by only 1%-2% from the corresponding within-dataset performance. Conclusion The results show that WA-Net extracts more detailed blood vessels and shows superior performance on retinal blood vessel segmentation tasks.
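Weight normalization, mentioned above as the batch-independent alternative, can be attached to a convolution layer with PyTorch's built-in utility; a minimal sketch with illustrative layer sizes:

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

# Reparameterizes the kernel as w = g * v / ||v||, decoupling its norm from its
# direction; unlike batch normalization, it does not depend on mini-batch statistics.
conv = weight_norm(nn.Conv2d(32, 32, kernel_size=3, padding=1))

x = torch.randn(4, 32, 64, 64)
print(conv(x).shape)   # torch.Size([4, 32, 64, 64])
```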
126
Kuang Z, Deng X, Yu L, Wang H, Li T, Wang S. Ψ-Net: Focusing on the border areas of intracerebral hemorrhage on CT images. Comput Methods Programs Biomed 2020; 194:105546. [PMID: 32474252] [DOI: 10.1016/j.cmpb.2020.105546]
Abstract
BACKGROUND AND OBJECTIVE The volume of an intracerebral hemorrhage (ICH) obtained from CT scans is essential for quantification and treatment planning, but fast and accurate volume acquisition is challenging. On the one hand, manual segmentation, the gold standard for volume estimation, is both time consuming and operator dependent. On the other hand, low contrast with normal tissue and the irregular shapes and distributions of hemorrhages make it hard for existing automatic segmentation methods to achieve satisfactory performance. METHOD To solve these problems, a CNN-based architecture is proposed, consisting of a novel model named Ψ-Net and a multi-level training strategy. In Ψ-Net, a self-attention block and a contextual-attention block are designed to suppress irrelevant information and segment the border areas of the hemorrhage more finely. Further, a multi-level training strategy is put forward to facilitate the training process. By adding slice-level learning and a weighted loss, the multi-level training strategy effectively alleviates vanishing gradients and class imbalance. The proposed training strategy can be applied to most segmentation networks, especially for complex models and small datasets. RESULTS The proposed architecture is evaluated on a spontaneous ICH dataset and a traumatic ICH dataset. Compared to previous work on ICH segmentation, the proposed architecture obtains state-of-the-art performance (Dice of 0.950) on spontaneous ICH, and results comparable to the best method (Dice of 0.895) on traumatic ICH. The time consumption of the proposed architecture is also much lower than that of previous methods for both training and inference. Moreover, experiments on various models demonstrate the universality of the multi-level training strategy. CONCLUSIONS This study proposes a novel CNN-based architecture, Ψ-Net, with a multi-level training strategy. It takes less time to train and achieves superior performance compared with previous ICH segmentation methods.
Affiliation(s)
- Zhuo Kuang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, 430074, China
- Xianbo Deng
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Li Yu
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, 430074, China.
- Hongkui Wang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, 430074, China
- Tiansong Li
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, 430074, China
- Shengwei Wang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, 430074, China
127
Zhou Y, Yen GG, Yi Z. Evolutionary Compression of Deep Neural Networks for Biomedical Image Segmentation. IEEE Trans Neural Netw Learn Syst 2020; 31:2916-2929. [PMID: 31536016] [DOI: 10.1109/tnnls.2019.2933879]
Abstract
Biomedical image segmentation is lately dominated by deep neural networks (DNNs) due to their surpassing expert-level performance. However, existing DNN models for biomedical image segmentation are generally highly parameterized, which severely impedes their deployment on real-time platforms and portable devices. To tackle this difficulty, we propose an evolutionary compression method (ECDNN) to automatically discover efficient DNN architectures for biomedical image segmentation. Unlike existing studies, ECDNN can optimize network loss and the number of parameters simultaneously during evolution, and it searches for a set of Pareto-optimal solutions in a single run, which is useful for quantifying the trade-off between objectives and flexible for compressing a DNN when preference information is uncertain. In particular, a set of novel genetic operators is proposed for automatically identifying less important filters over the whole network, and a pruning operator is designed for eliminating convolutional filters from layers involved in feature-map concatenation, which is commonly adopted in DNN architectures for capturing multi-level features from biomedical images. Experiments on compressing DNNs for retinal vessel and neuronal membrane segmentation show that ECDNN can not only improve performance without any retraining but also discover efficient network architectures that maintain performance well. The superiority of the proposed method is further validated by comparison with state-of-the-art methods.
128
Qiao M, Wang Y, Guo Y, Huang L, Xia L, Tao Q. Temporally coherent cardiac motion tracking from cine MRI: Traditional registration method and modern CNN method. Med Phys 2020; 47:4189-4198. [PMID: 32564357] [PMCID: PMC7586816] [DOI: 10.1002/mp.14341]
Abstract
Purpose Cardiac motion tracking enables quantitative evaluation of myocardial strain, which is of clinical interest in cardiovascular disease research. However, motion tracking is difficult to perform manually. In this paper, we develop and compare two fully automated motion tracking methods for steady-state free precession (SSFP) cine magnetic resonance imaging (MRI) and explore their use in real clinical scenarios with different patient groups. Methods We propose two automated cardiac motion tracking methods: (a) a traditional registration-based method, named full cardiac cycle registration, which tracks all cine frames within a full cardiac cycle simultaneously by joint registration of all frames; and (b) a modern convolutional neural network (CNN)-based method, named Groupwise MotionNet, which enhances temporal coherence by fusing motion along a continuous time scale. Both methods were evaluated on healthy volunteer data from the MICCAI 2011 STACOM Challenge, as well as on patient data including hypertrophic cardiomyopathy (HCM) and myocardial infarction (MI). Results The full cardiac cycle registration method achieved an average end-point error (EPE) of 2.89 ± 1.57 mm for cardiac motion tracking, with a computation time of around 9 min per short-axis cine MRI (size 128 × 128, 30 cardiac phases). In comparison, Groupwise MotionNet achieved an average EPE of 0.94 ± 1.59 mm, taking less than 1 s for all cardiac phases. Further experiments showed that the registration method had stable performance, independent of patient cohort and MRI machine, while the CNN-based method relied on the training data to deliver consistently accurate results. Conclusion Both the registration-based and the CNN-based method can track cardiac motion from SSFP cine MRI in a fully automated manner while taking temporal coherence into account. The registration method is generic, robust, but relatively slow; the CNN-based method, trained with heterogeneous data, achieved high tracking accuracy with real-time performance.
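For reference, the end-point error (EPE) reported above is the mean Euclidean distance between predicted and reference displacement vectors; a minimal sketch:

```python
import numpy as np

def end_point_error(pred_disp, gt_disp):
    """Mean end-point error between two displacement fields of shape (..., 2),
    e.g. (H, W, 2) in-plane displacements in mm."""
    return float(np.mean(np.linalg.norm(pred_disp - gt_disp, axis=-1)))

pred = np.zeros((128, 128, 2))
gt = np.ones((128, 128, 2))
print(end_point_error(pred, gt))   # sqrt(2) ≈ 1.414
```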
Affiliation(s)
- Mengyun Qiao
- Department of Electrical Engineering, Fudan University, Shanghai, China
- Yuanyuan Wang
- Department of Electrical Engineering, Fudan University, Shanghai, China
- Yi Guo
- Department of Electrical Engineering, Fudan University, Shanghai, China
- Lu Huang
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Liming Xia
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Qian Tao
- Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
| |
Collapse
|
129
|
Yu L, Qin Z, Zhuang T, Ding Y, Qin Z, Raymond Choo KK. A framework for hierarchical division of retinal vascular networks. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.11.113] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
130
|
NFN+: A novel network followed network for retinal vessel segmentation. Neural Netw 2020; 126:153-162. [DOI: 10.1016/j.neunet.2020.02.018] [Citation(s) in RCA: 59] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Revised: 01/28/2020] [Accepted: 02/26/2020] [Indexed: 11/21/2022]
|
131
|
Hao D, Ding S, Qiu L, Lv Y, Fei B, Zhu Y, Qin B. Sequential vessel segmentation via deep channel attention network. Neural Netw 2020; 128:172-187. [PMID: 32447262 DOI: 10.1016/j.neunet.2020.05.005] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2019] [Revised: 04/22/2020] [Accepted: 05/04/2020] [Indexed: 02/01/2023]
Abstract
Accurately segmenting contrast-filled vessels from X-ray coronary angiography (XCA) image sequences is an essential step for the diagnosis and therapy of coronary artery disease. However, automatic vessel segmentation is particularly challenging due to overlapping structures, low contrast, and the presence of complex and dynamic background artifacts in XCA images. This paper develops a novel encoder-decoder deep network architecture which exploits several contextual frames of 2D+t sequential images in a sliding window centered at the current frame to segment the 2D vessel mask of that frame. The architecture is equipped with temporal-spatial feature extraction in the encoder stage, feature fusion in the skip connection layers, and a channel attention mechanism in the decoder stage. In the encoder stage, a series of 3D convolutional layers is employed to hierarchically extract temporal-spatial features. Skip connection layers subsequently fuse the temporal-spatial feature maps and deliver them to the corresponding decoder stages. To efficiently discriminate vessel features from the complex and noisy backgrounds in the XCA images, the decoder stage utilizes channel attention blocks to refine the intermediate feature maps from the skip connection layers before decoding the refined features in 2D to produce the segmented vessel masks. Furthermore, a Dice loss function is used to train the proposed deep network in order to tackle the class imbalance problem in the XCA data caused by the wide distribution of complex background artifacts. Extensive experiments comparing our method with other state-of-the-art algorithms demonstrate its superior performance in terms of quantitative metrics and visual validation. To facilitate reproducible research in the XCA community, we publicly release our dataset and source code at https://github.com/Binjie-Qin/SVS-net.
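A minimal sketch of a Dice loss of the kind used to handle the foreground/background imbalance described above is given below; it is a generic PyTorch formulation, not the released SVS-net code.

    # Soft Dice loss for a binary vessel mask; small eps stabilises empty masks.
    import torch

    def dice_loss(logits, target, eps=1.0):
        """logits, target: (B, 1, H, W); target is a binary vessel mask."""
        prob = torch.sigmoid(logits)
        inter = (prob * target).sum(dim=(1, 2, 3))
        union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

    logits = torch.randn(2, 1, 64, 64)
    target = (torch.rand(2, 1, 64, 64) > 0.9).float()
    print(dice_loss(logits, target).item())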
Collapse
Affiliation(s)
- Dongdong Hao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Song Ding
- Department of Cardiology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200127, China
| | - Linwei Qiu
- School of Astronautics, Beihang University, Beijing 100191, China
| | - Yisong Lv
- School of Continuing Education, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Baowei Fei
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, University of Texas at Dallas, Richardson, TX 75080, USA
| | - Yueqi Zhu
- Department of Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai Jiao Tong University, 600 Yi Shan Road, Shanghai 200233, China
| | - Binjie Qin
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China.
| |
Collapse
|
132
|
Ding L, Bawany MH, Kuriyan AE, Ramchandran RS, Wykoff CC, Sharma G. A Novel Deep Learning Pipeline for Retinal Vessel Detection In Fluorescein Angiography. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 29:10.1109/TIP.2020.2991530. [PMID: 32396087 PMCID: PMC7648732 DOI: 10.1109/tip.2020.2991530] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
While recent advances in deep learning have significantly advanced the state of the art for vessel detection in color fundus (CF) images, success in detecting vessels in fluorescein angiography (FA) has been stymied by the lack of labeled ground truth datasets. We propose a novel pipeline to detect retinal vessels in FA images using deep neural networks (DNNs) that reduces the effort required for generating labeled ground truth data by combining two key components: cross-modality transfer and human-in-the-loop learning. The cross-modality transfer exploits concurrently captured CF and fundus FA images. Binary vessel maps are first detected from CF images with a pre-trained neural network and then geometrically registered with and transferred to FA images via robust parametric chamfer alignment to a preliminary FA vessel detection obtained with an unsupervised technique. Using the transferred vessels as initial ground truth labels for deep learning, the human-in-the-loop approach progressively improves the quality of the ground truth labeling by iterating between deep learning and labeling. The approach significantly reduces manual labeling effort while increasing engagement. We highlight several important considerations for the proposed methodology and validate the performance on three datasets. Experimental results demonstrate that the proposed pipeline significantly reduces the annotation effort and that the resulting deep learning methods outperform existing FA vessel detection methods by a significant margin. A new public dataset, RECOVERY-FA19, is introduced that includes high-resolution ultra-widefield images and accurately labeled ground truth binary vessel maps.
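The chamfer alignment step can be pictured as minimising, over transform parameters, a cost like the one sketched below: the mean distance-transform value of the FA vessel map sampled at the transferred CF vessel pixels. The parametric search itself is omitted, and the function names are assumptions.

    # Chamfer cost between two binary vessel maps via a Euclidean distance
    # transform; lower is better, 0 means the moving vessels lie on fixed ones.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def chamfer_cost(moving_mask, fixed_mask):
        """Both masks are binary 2D arrays of the same shape."""
        dist_to_fixed = distance_transform_edt(~fixed_mask.astype(bool))
        pts = moving_mask.astype(bool)
        return float(dist_to_fixed[pts].mean()) if pts.any() else 0.0

    a = np.zeros((32, 32), bool); a[10:20, 15] = True
    b = np.zeros((32, 32), bool); b[10:20, 17] = True
    print(chamfer_cost(a, b))  # 2.0: the two maps are offset by two pixels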
Collapse
|
133
|
Rundo L, Beer L, Ursprung S, Martin-Gonzalez P, Markowetz F, Brenton JD, Crispin-Ortuzar M, Sala E, Woitek R. Tissue-specific and interpretable sub-segmentation of whole tumour burden on CT images by unsupervised fuzzy clustering. Comput Biol Med 2020; 120:103751. [PMID: 32421652 PMCID: PMC7248575 DOI: 10.1016/j.compbiomed.2020.103751] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2020] [Revised: 04/03/2020] [Accepted: 04/05/2020] [Indexed: 12/18/2022]
Abstract
BACKGROUND Cancer typically exhibits genotypic and phenotypic heterogeneity, which can have prognostic significance and influence therapy response. Computed Tomography (CT)-based radiomic approaches calculate quantitative features of tumour heterogeneity at a mesoscopic level, regardless of macroscopic areas of hypo-dense (i.e., cystic/necrotic), hyper-dense (i.e., calcified), or intermediately dense (i.e., soft tissue) portions. METHOD With the goal of achieving the automated sub-segmentation of these three tissue types, we present here a two-stage computational framework based on unsupervised Fuzzy C-Means Clustering (FCM) techniques. No existing approach has specifically addressed this task so far. Our tissue-specific image sub-segmentation was tested on ovarian cancer (pelvic/ovarian and omental disease) and renal cell carcinoma CT datasets using both overlap-based and distance-based metrics for evaluation. RESULTS On all tested sub-segmentation tasks, our two-stage segmentation approach outperformed conventional segmentation techniques: fixed multi-thresholding, the Otsu method, and automatic cluster number selection heuristics for the K-means clustering algorithm. In addition, experiments showed that the integration of the spatial information into the FCM algorithm generally achieves more accurate segmentation results, whilst the kernelised FCM versions are not beneficial. The best spatial FCM configuration achieved average Dice similarity coefficient values starting from 81.94±4.76 and 83.43±3.81 for hyper-dense and hypo-dense components, respectively, for the investigated sub-segmentation tasks. CONCLUSIONS The proposed intelligent framework could be readily integrated into clinical research environments and provides robust tools for future radiomic biomarker validation.
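For orientation, a plain (non-spatial, non-kernelised) fuzzy c-means on voxel intensities looks like the sketch below; the three clusters loosely correspond to hypo-dense, soft-tissue, and hyper-dense components, and the synthetic intensities are assumptions.

    # Standard FCM: alternate membership and centroid updates until convergence.
    import numpy as np

    def fcm(x, c=3, m=2.0, iters=50, seed=0):
        """x: 1D array of voxel intensities; returns (memberships, centroids)."""
        rng = np.random.default_rng(seed)
        u = rng.random((c, x.size))
        u /= u.sum(axis=0)                              # fuzzy memberships
        for _ in range(iters):
            w = u ** m
            v = (w @ x) / w.sum(axis=1)                 # weighted centroids
            d = np.abs(x[None, :] - v[:, None]) + 1e-9  # distance to centroids
            u = 1.0 / d ** (2.0 / (m - 1.0))
            u /= u.sum(axis=0)
        return u, v

    x = np.concatenate([np.random.normal(mu, 5, 200) for mu in (-80, 40, 300)])
    u, v = fcm(x)
    print(np.sort(v))  # roughly the three density modes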
Collapse
Affiliation(s)
- Leonardo Rundo
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK.
| | - Lucian Beer
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK; Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna 1090, Austria.
| | - Stephan Ursprung
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK.
| | - Paula Martin-Gonzalez
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK; Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge CB2 0RE, UK.
| | - Florian Markowetz
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK; Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge CB2 0RE, UK.
| | - James D Brenton
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK; Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge CB2 0RE, UK.
| | - Mireia Crispin-Ortuzar
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK; Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge CB2 0RE, UK.
| | - Evis Sala
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK.
| | - Ramona Woitek
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK; Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna 1090, Austria.
| |
Collapse
|
134
|
|
135
|
Zhou C, Zhang X, Chen H. A new robust method for blood vessel segmentation in retinal fundus images based on weighted line detector and hidden Markov model. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 187:105231. [PMID: 31786454 DOI: 10.1016/j.cmpb.2019.105231] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2019] [Revised: 11/08/2019] [Accepted: 11/17/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic vessel segmentation is a crucial preliminary processing step that facilitates ophthalmologist diagnosis in several diseases. However, the complexity of retinal fundus images makes accurate retinal vessel segmentation difficult. In this paper, a new method for retinal vessel segmentation is proposed to handle two main problems: missing thin vessels and false detection in difficult regions. METHODS First, an improved line detector is proposed and used to quickly extract the major vessel structures. Then, a hidden Markov model (HMM) is applied to effectively detect vessel centerlines, including those of thin vessels. Finally, a denoising approach is presented to remove noise, and the two types of vessels are unified to obtain the complete segmentation result. RESULTS Our method is tested on two public databases (the DRIVE and STARE databases), and six measures, namely accuracy (Acc), sensitivity (Se), specificity (Sp), Dice coefficient (Dc), structural similarity index (SSIM), and feature similarity index (FSIM), are used to evaluate segmentation performance. The respective values of these measures are 0.9475, 0.7262, 0.9803, 0.7781, 0.9992, and 0.9793 for the DRIVE dataset and 0.9535, 0.7865, 0.9730, 0.7764, 0.9987, and 0.9742 for the STARE dataset. CONCLUSIONS The experimental results show that our method outperforms most published state-of-the-art methods and is better than the result of a human observer. Moreover, in terms of specificity, our algorithm obtains the best score among the unsupervised methods. Meanwhile, the achieved SSIM and FSIM values indicate excellent structural and feature similarity between our result and the ground truth. Visual inspection of the segmentation results shows that the proposed method produces more accurate segmentations in difficult regions such as the optic disc and the central light reflex, while detecting thin vessels effectively compared with the other methods.
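The basic idea of a line detector, on which the improved weighted variant builds, can be sketched as follows: the response at a pixel is the mean intensity along the best oriented line segment minus the mean of the surrounding window. The weighting scheme and the HMM centerline tracking of the paper are not reproduced here, and all parameter values are assumptions.

    # Basic line-detector response on an inverted green-channel patch.
    import numpy as np

    def line_response(patch, length=15, n_angles=12):
        """patch: square 2D array; a large positive response suggests a vessel."""
        c = np.array(patch.shape) // 2
        window_mean = patch.mean()
        best = -np.inf
        for ang in np.linspace(0, np.pi, n_angles, endpoint=False):
            t = np.arange(length) - length // 2
            rows = np.clip((c[0] + t * np.sin(ang)).round().astype(int),
                           0, patch.shape[0] - 1)
            cols = np.clip((c[1] + t * np.cos(ang)).round().astype(int),
                           0, patch.shape[1] - 1)
            best = max(best, patch[rows, cols].mean())
        return best - window_mean

    patch = np.zeros((15, 15)); patch[:, 7] = 1.0  # a vertical toy "vessel"
    print(round(line_response(patch), 3))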
Collapse
Affiliation(s)
- Chao Zhou
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082 China.
| | - Xiaogang Zhang
- College of Electrical and Information Engineering, Hunan University, Changsha, 410082 China.
| | - Hua Chen
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082 China.
| |
Collapse
|
136
|
|
137
|
Tan Y, Liu M, Chen W, Wang X, Peng H, Wang Y. DeepBranch: Deep Neural Networks for Branch Point Detection in Biomedical Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1195-1205. [PMID: 31603774 DOI: 10.1109/tmi.2019.2945980] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Morphology reconstruction of tree-like structures in volumetric images, such as neurons, retinal blood vessels, and bronchi, is of fundamental interest for biomedical research. 3D branch points play an important role in many reconstruction applications, especially for graph-based or seed-based reconstruction methods, and can help to visualize morphological structures. A few hand-crafted models have been proposed to detect branch points; however, they depend heavily on empirical parameter settings for different images. In this paper, we propose a DeepBranch model for branch point detection with two levels of convolutional networks: a candidate region segmenter and a false positive reducer. On the first level, an improved 3D U-Net model with anisotropic convolution kernels is employed to detect initial candidates. Compared with the traditional sliding window strategy, the improved 3D U-Net avoids massive redundant computation and dramatically speeds up detection by employing dense inference with fully convolutional networks (FCNs). On the second level, a method based on multi-scale multi-view convolutional neural networks (MSMV-Net) is proposed for false positive reduction by feeding multi-scale views of 3D volumes into multiple streams of 2D convolutional neural networks (CNNs), which takes full advantage of spatial contextual information and accommodates candidates of different sizes. Experiments on multiple 3D biomedical images of neurons, retinal blood vessels, and bronchi confirm that the proposed 3D branch point detection method outperforms other state-of-the-art detection methods and is helpful for graph-based or seed-based reconstruction methods.
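Anisotropic kernels of the kind mentioned for the candidate-region segmenter can be factorised into in-plane and through-plane convolutions, as in the PyTorch sketch below; channel sizes and the exact factorisation are assumptions, not the DeepBranch architecture.

    # In-plane (1x3x3) followed by through-plane (3x1x1) 3D convolutions.
    import torch
    import torch.nn as nn

    class AnisoConv3d(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
                nn.ReLU(inplace=True),
                nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):          # x: (B, C, D, H, W) volumetric patch
            return self.block(x)

    vol = torch.randn(1, 1, 16, 64, 64)
    print(AnisoConv3d(1, 8)(vol).shape)  # torch.Size([1, 8, 16, 64, 64])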
Collapse
|
138
|
Li F, Li W, Shu Y, Qin S, Xiao B, Zhan Z. Multiscale receptive field based on residual network for pancreas segmentation in CT images. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101828] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
|
139
|
Cerebrovascular segmentation from TOF-MRA using model- and data-driven method via sparse labels. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.092] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
|
140
|
Zhang M, Zhang C, Wu X, Cao X, Young GS, Chen H, Xu X. A neural network approach to segment brain blood vessels in digital subtraction angiography. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 185:105159. [PMID: 31710990 PMCID: PMC7518214 DOI: 10.1016/j.cmpb.2019.105159] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/16/2019] [Revised: 08/27/2019] [Accepted: 10/26/2019] [Indexed: 05/17/2023]
Abstract
BACKGROUND AND OBJECTIVE Cerebrovascular diseases (CVDs) affect a large number of patients and often have devastating outcomes. The hallmarks of CVDs are the abnormalities that form on brain blood vessels, including protrusion, narrowing, widening, and bifurcation of the vessels. CVDs are often diagnosed by digital subtraction angiography (DSA), yet the interpretation of DSA is challenging because one must carefully examine each brain blood vessel. The objective of this work is to develop a computerized analysis approach for automated segmentation of brain blood vessels. METHODS We present a U-net based deep learning approach, combined with pre-processing, to track and segment brain blood vessels in DSA images. We compared the results of the deep learning approach with manually marked ground truth using accuracy, sensitivity, specificity, and the Dice coefficient. RESULTS The proposed approach achieved an accuracy of 0.978 with a standard deviation of 0.00796, a sensitivity of 0.76 with a standard deviation of 0.096, a specificity of 0.994 with a standard deviation of 0.0036, and an average Dice coefficient of 0.8268 with a standard deviation of 0.052. CONCLUSIONS Our findings show that the deep learning approach can achieve satisfactory performance as a computer-aided analysis tool to assist clinicians in diagnosing CVDs.
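The reported metrics are standard pixel-wise scores; a minimal sketch of their computation from a predicted and a manually marked binary mask follows (thresholding and per-image averaging are assumptions).

    # Accuracy, sensitivity, specificity, and Dice from two binary masks.
    import numpy as np

    def seg_metrics(pred, truth):
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.sum(pred & truth); tn = np.sum(~pred & ~truth)
        fp = np.sum(pred & ~truth); fn = np.sum(~pred & truth)
        return {
            "accuracy": (tp + tn) / pred.size,
            "sensitivity": tp / (tp + fn + 1e-9),
            "specificity": tn / (tn + fp + 1e-9),
            "dice": 2 * tp / (2 * tp + fp + fn + 1e-9),
        }

    pred = np.random.rand(128, 128) > 0.9
    truth = np.random.rand(128, 128) > 0.9
    print(seg_metrics(pred, truth))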
Collapse
Affiliation(s)
- Min Zhang
- Departments of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA 02115, USA
| | - Chen Zhang
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
| | - Xian Wu
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
| | - Xinhua Cao
- Departments of Radiology, Boston Children’s Hospital and Harvard Medical School, Boston, MA 02115, USA
| | - Geoffrey S. Young
- Departments of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA 02115, USA
| | - Huai Chen
- Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, Guangdong 510120, China
- Corresponding authors. (X. Xu)
| | - Xiaoyin Xu
- Departments of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA 02115, USA
- Corresponding authors. (X. Xu)
| |
Collapse
|
141
|
Cao L, Li H. Enhancement of blurry retinal image based on non-uniform contrast stretching and intensity transfer. Med Biol Eng Comput 2020; 58:483-496. [PMID: 31897799 DOI: 10.1007/s11517-019-02106-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Accepted: 12/18/2019] [Indexed: 11/26/2022]
Abstract
Proper contrast and sufficient illuminance are important for clearly identifying retinal structures, but the required quality cannot always be guaranteed, mainly because of the acquisition process and disease. To ensure effective enhancement, two solutions are developed for blurry retinal images with sufficient and with insufficient illuminance, respectively. The proposed contrast stretching and intensity transfer are the main steps in both solutions. The contrast stretching is based on base-intensity removal and non-uniform addition. We assume that a base intensity exists in an image, which mainly supports the basic illuminance but contributes little texture information. The base intensity is estimated by a constrained Gaussian function and then removed. A non-uniform addition using a compressed Gamma map is further developed to improve the contrast. Additionally, an effective intensity transfer strategy is introduced, which provides the required illuminance for a single channel after contrast stretching. Color correction can be achieved if the intensity transfer is performed on all three channels. Results show that the proposed solutions effectively improve contrast and illuminance, and good visual perception is obtained for quality-degraded retinal images. Illustration of contrast stretching based on a single colour channel.
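A loose sketch of the two core steps described in the abstract is given below: a smooth base intensity is removed (here estimated with an ordinary, unconstrained Gaussian filter) and a non-uniform amount driven by a compressed Gamma map is added back. The constrained estimation and the intensity-transfer step of the paper are not reproduced, and all parameter values are assumptions.

    # Base-intensity removal followed by Gamma-map-driven non-uniform addition.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def enhance_channel(ch, sigma=30.0, gamma=0.6, gain=0.5):
        """ch: single colour channel scaled to [0, 1]."""
        base = gaussian_filter(ch, sigma)          # estimated base intensity
        detail = ch - base                         # texture left after removal
        norm = (ch - ch.min()) / (np.ptp(ch) + 1e-9)
        addition = gain * norm ** gamma            # compressed Gamma map
        return np.clip(detail + addition, 0.0, 1.0)

    img = np.random.rand(64, 64) * 0.3 + 0.4       # flat, low-contrast channel
    print(enhance_channel(img).std() > img.std())  # True: contrast is stretched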
Collapse
Affiliation(s)
- Lvchen Cao
- School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China
| | - Huiqi Li
- School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China.
| |
Collapse
|
142
|
Multiloss Function Based Deep Convolutional Neural Network for Segmentation of Retinal Vasculature into Arterioles and Venules. BIOMED RESEARCH INTERNATIONAL 2019; 2019:4747230. [PMID: 31111055 PMCID: PMC6487175 DOI: 10.1155/2019/4747230] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2018] [Revised: 02/20/2019] [Accepted: 03/20/2019] [Indexed: 02/02/2023]
Abstract
The classification of retinal vasculature into arterioles and venules (AV) is considered the first step in developing an automated system for analysing the association of vasculature biomarkers with disease prognosis. Most existing AV classification methods depend on accurate segmentation of the retinal blood vessels. Moreover, the unavailability of large-scale annotated data is a major hindrance to applying deep learning techniques to AV classification. This paper presents an encoder-decoder based fully convolutional neural network for classification of retinal vasculature into arterioles and venules, without requiring the preliminary step of vessel segmentation. An optimized multiloss function is used to learn the pixel-wise and segment-wise retinal vessel labels. The proposed method is trained and evaluated on DRIVE, AVRDB, and a newly created AV classification dataset, attaining 96%, 98%, and 97% accuracy, respectively. The new AV classification dataset comprises 700 annotated retinal images and offers researchers a benchmark against which to compare their AV classification results.
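A generic multi-term loss combining a pixel-wise cross-entropy with a soft Dice term over the three classes (background, arteriole, venule) is sketched below; the paper's segment-wise term and its optimized weighting are not reproduced, so the weighting factor here is an assumption.

    # Weighted combination of cross-entropy and soft Dice for 3-class labelling.
    import torch
    import torch.nn.functional as F

    def multiloss(logits, target, alpha=0.5):
        """logits: (B, 3, H, W); target: (B, H, W) integer labels in {0, 1, 2}."""
        ce = F.cross_entropy(logits, target)
        prob = torch.softmax(logits, dim=1)
        onehot = F.one_hot(target, num_classes=3).permute(0, 3, 1, 2).float()
        inter = (prob * onehot).sum(dim=(0, 2, 3))
        union = prob.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
        dice = 1.0 - ((2 * inter + 1.0) / (union + 1.0)).mean()
        return alpha * ce + (1 - alpha) * dice

    logits = torch.randn(2, 3, 32, 32)
    target = torch.randint(0, 3, (2, 32, 32))
    print(multiloss(logits, target).item())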
Collapse
|
143
|
Han Z, Huang H, Huang T, Cao J. Face merged generative adversarial network with tripartite adversaries. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.08.049] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
144
|
USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.07.006] [Citation(s) in RCA: 123] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
|
145
|
Cherukuri V, G VKB, Bala R, Monga V. Deep Retinal Image Segmentation with Regularization Under Geometric Priors. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 29:2552-2567. [PMID: 31613766 DOI: 10.1109/tip.2019.2946078] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Vessel segmentation of retinal images is a key diagnostic capability in ophthalmology. This problem faces several challenges including low contrast, variable vessel size and thickness, and presence of interfering pathology such as micro-aneurysms and hemorrhages. Early approaches addressing this problem employed hand-crafted filters to capture vessel structures, accompanied by morphological post-processing. More recently, deep learning techniques have been employed with significantly enhanced segmentation accuracy. We propose a novel domain enriched deep network that consists of two components: 1) a representation network that learns geometric features specific to retinal images, and 2) a custom designed computationally efficient residual task network that utilizes the features obtained from the representation layer to perform pixel-level segmentation. The representation and task networks are jointly learned for any given training set. To obtain physically meaningful and practically effective representation filters, we propose two new constraints that are inspired by expected prior structure on these filters: 1) orientation constraint that promotes geometric diversity of curvilinear features, and 2) a data adaptive noise regularizer that penalizes false positives. Multi-scale extensions are developed to enable accurate detection of thin vessels. Experiments performed on three challenging benchmark databases under a variety of training scenarios show that the proposed prior guided deep network outperforms state of the art alternatives as measured by common evaluation metrics, while being more economical in network size and inference time.
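One generic way to encourage geometric diversity among first-layer representation filters is to penalise their pairwise cosine similarity, as sketched below; this is only a proxy for the orientation constraint described above, not the authors' formulation.

    # Filter-diversity regulariser: penalise pairwise cosine similarity.
    import torch

    def diversity_penalty(weight):
        """weight: conv filters of shape (out_ch, in_ch, k, k)."""
        f = weight.flatten(1)
        f = f / (f.norm(dim=1, keepdim=True) + 1e-8)
        gram = f @ f.t()                               # pairwise cosine similarity
        off_diag = gram - torch.eye(gram.size(0), device=gram.device)
        return (off_diag ** 2).sum() / gram.size(0)

    w = torch.randn(16, 1, 7, 7, requires_grad=True)
    loss = diversity_penalty(w)   # added to the segmentation loss with a weight
    loss.backward()
    print(loss.item())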
Collapse
|
146
|
Yue K, Zou B, Chen Z, Liu Q. Retinal vessel segmentation using dense U-net with multiscale inputs. J Med Imaging (Bellingham) 2019; 6:034004. [PMID: 31572745 DOI: 10.1117/1.jmi.6.3.034004] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2019] [Accepted: 08/30/2019] [Indexed: 11/14/2022] Open
Abstract
A color fundus image is an image of the inner wall of the eyeball taken with a fundus camera. Doctors can observe retinal vessel changes in such images, and these changes can be used to diagnose many serious diseases such as atherosclerosis, glaucoma, and age-related macular degeneration. Automated segmentation of retinal vessels can facilitate more efficient diagnosis of these diseases. We propose an improved U-net architecture to segment retinal vessels. A multiscale input layer and dense blocks are introduced into the conventional U-net so that the network can make use of richer spatial context information. The proposed method is evaluated on the public DRIVE dataset, achieving a sensitivity of 0.8199 and an accuracy of 0.9561. In particular, the segmentation of thin blood vessels, which are difficult to detect because of their low contrast with the background pixels, is improved.
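A multiscale input layer of the kind mentioned above can be pictured as feeding downsampled copies of the image to the deeper encoder levels, as in the sketch below; channel sizes and the fusion point are assumptions, not the authors' architecture.

    # Build an input pyramid and concatenate one level with encoder features.
    import torch
    import torch.nn.functional as F

    def multiscale_inputs(img, n_levels=4):
        """img: (B, C, H, W); returns one downsampled copy per encoder level."""
        return [img] + [F.avg_pool2d(img, kernel_size=2 ** i)
                        for i in range(1, n_levels)]

    img = torch.randn(1, 1, 256, 256)
    feats_level2 = torch.randn(1, 64, 64, 64)            # hypothetical features
    pyramid = multiscale_inputs(img)
    fused = torch.cat([feats_level2, pyramid[2]], dim=1)  # (1, 65, 64, 64)
    print([p.shape[-1] for p in pyramid], fused.shape)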
Collapse
Affiliation(s)
- Kejuan Yue
- Central South University, School of Computer Science and Engineering, Changsha, China.,Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China.,Hunan First Normal University, School of Information Science and Engineering, Changsha, China
| | - Beiji Zou
- Central South University, School of Computer Science and Engineering, Changsha, China.,Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
| | - Zailiang Chen
- Central South University, School of Computer Science and Engineering, Changsha, China.,Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
| | - Qing Liu
- Central South University, School of Computer Science and Engineering, Changsha, China.,Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
| |
Collapse
|
147
|
Shin SY, Lee S, Yun ID, Lee KM. Deep vessel segmentation by learning graphical connectivity. Med Image Anal 2019; 58:101556. [PMID: 31536906 DOI: 10.1016/j.media.2019.101556] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2019] [Revised: 09/02/2019] [Accepted: 09/05/2019] [Indexed: 11/17/2022]
Abstract
We propose a novel deep learning based system for vessel segmentation. Existing methods using CNNs have mostly relied on local appearances learned on the regular image grid, without consideration of the graphical structure of vessel shape. Effective use of the strong relationship that exists between vessel neighborhoods can help improve the vessel segmentation accuracy. To this end, we incorporate a graph neural network into a unified CNN architecture to jointly exploit both local appearances and global vessel structures. We extensively perform comparative evaluations on four retinal image datasets and a coronary artery X-ray angiography dataset, showing that the proposed method outperforms or is on par with current state-of-the-art methods in terms of the average precision and the area under the receiver operating characteristic curve. Statistical significance on the performance difference between the proposed method and each comparable method is suggested by conducting a paired t-test. In addition, ablation studies support the particular choices of algorithmic detail and hyperparameter values of the proposed method. The proposed architecture is widely applicable since it can be applied to expand any type of CNN-based vessel segmentation method to enhance the performance.
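One round of the kind of message passing a graph neural network performs over a vessel graph can be sketched as below, with a dense adjacency matrix for brevity; this illustrates the idea of mixing local appearance features with graph connectivity and is not the authors' network.

    # Mean-aggregation message passing over vertices that carry CNN features.
    import torch
    import torch.nn as nn

    class MeanPassLayer(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.lin_self = nn.Linear(dim, dim)
            self.lin_neigh = nn.Linear(dim, dim)

        def forward(self, x, adj):
            """x: (N, dim) vertex features; adj: (N, N) binary adjacency."""
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            neigh = adj @ x / deg                     # mean over neighbours
            return torch.relu(self.lin_self(x) + self.lin_neigh(neigh))

    x = torch.randn(5, 8)                             # 5 sampled vessel vertices
    adj = torch.tensor([[0, 1, 0, 0, 0],
                        [1, 0, 1, 0, 0],
                        [0, 1, 0, 1, 1],
                        [0, 0, 1, 0, 0],
                        [0, 0, 1, 0, 0]], dtype=torch.float32)
    print(MeanPassLayer(8)(x, adj).shape)             # torch.Size([5, 8])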
Collapse
Affiliation(s)
- Seung Yeon Shin
- Department of Electrical and Computer Engineering, Automation and Systems Research Institute, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
| | - Soochahn Lee
- School of Electrical Engineering, Kookmin University, Seoul, 02707, South Korea.
| | - Il Dong Yun
- Division of Computer and Electronic Systems Engineering, Hankuk University of Foreign Studies, Yongin, 17035, South Korea
| | - Kyoung Mu Lee
- Department of Electrical and Computer Engineering, Automation and Systems Research Institute, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
| |
Collapse
|
148
|
|
149
|
Vessel-Net: Retinal Vessel Segmentation Under Multi-path Supervision. LECTURE NOTES IN COMPUTER SCIENCE 2019. [DOI: 10.1007/978-3-030-32239-7_30] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
|