101
Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. Multimedia Systems 2022; 28:881-914. [PMID: 35079207] [PMCID: PMC8776556] [DOI: 10.1007/s00530-021-00884-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Received: 08/09/2021] [Accepted: 12/23/2021] [Indexed: 05/07/2023]
Abstract
Medical images are a rich source of invaluable information for clinicians. Recent technologies have introduced many advances for exploiting this information to generate better analyses. Deep learning (DL) techniques have empowered medical image analysis in computer-assisted imaging contexts, providing many solutions and improvements to the analysis of these images by radiologists and other specialists. In this paper, we survey the DL techniques used for a variety of tasks across the different medical imaging modalities, providing a critical review of the recent developments in this direction. The paper first presents the main traits and concepts of deep learning, which is helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, and detection) that are commonly used for clinical purposes at different anatomical sites, and we cover the key aspects of DL, such as basic architectures, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be mainstream in the coming years, and novel DL techniques are predicted to become the core of medical image analysis. We conclude by addressing research challenges and the solutions suggested for them in the literature, as well as promising directions for further development.
Affiliation(s)
- Rammah Yousef, Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229 Himachal Pradesh, India
- Gaurav Gupta, Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229 Himachal Pradesh, India
- Nabhan Yousef, Electronics and Communication Engineering, Marwadi University, Rajkot, Gujarat, India
- Manju Khari, Jawaharlal Nehru University, New Delhi, India
102
HT-Net: hierarchical context-attention transformer network for medical CT image segmentation. Appl Intell 2022. [DOI: 10.1007/s10489-021-03010-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/02/2022]
103
Gour N, Tanveer M, Khanna P. Challenges for ocular disease identification in the era of artificial intelligence. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06770-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/19/2022]
104
Pal A, Chaturvedi A, Chandra A, Chatterjee R, Senapati S, Frangi AF, Garain U. MICaps: Multi-instance capsule network for machine inspection of Munro's microabscess. Comput Biol Med 2022; 140:105071. [PMID: 34864301] [DOI: 10.1016/j.compbiomed.2021.105071] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 07/19/2021] [Revised: 11/21/2021] [Accepted: 11/22/2021] [Indexed: 02/03/2023]
Abstract
Munro's Microabscess (MM) is the diagnostic hallmark of psoriasis. Neutrophil detection in the Stratum Corneum (SC) of the skin epidermis is an integral part of MM detection in skin biopsy. The microscopic inspection of skin biopsy is a tedious task, and staining variations in skin histopathology often hinder human performance in differentiating neutrophils from skin keratinocytes. Motivated by this, we propose a computational framework that can assist human experts and reduce potential errors in diagnosis. The framework first segments the SC layer, and multiple patches are sampled from the segmented regions and classified to detect neutrophils. Both UNet and CapsNet are evaluated for segmentation and classification. Experiments show that of the two choices, CapsNet, owing to its robust hierarchical object representation and localisation ability, is the better candidate for both tasks; hence we term our framework MICaps. The training algorithm explores minimisation of both Dice loss and focal loss and makes a comparative study between the two. The proposed framework is validated on our in-house dataset of 290 skin biopsy images. Two different experiments are considered. Under the first protocol, 3-fold cross-validation is performed to directly compare the current results with the state-of-the-art ones. Next, the performance of the system on a held-out data set is reported. The experimental results show that MICaps improves the state-of-the-art diagnosis performance by up to 3.27% and reduces the number of model parameters by 50%.
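The two training objectives compared in this entry, Dice loss and focal loss, can be sketched framework-agnostically. This is a minimal NumPy illustration of the two formulas for a binary mask, not the authors' implementation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: 1 - 2*|P.T| / (|P| + |T|); overlap-based,
    # so it is robust to foreground/background class imbalance.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    # Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma,
    # which down-weights easy, already well-classified pixels.
    p = np.clip(pred, eps, 1.0 - eps)
    pt = np.where(target == 1, p, 1.0 - p)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))
```

A perfect prediction drives the Dice loss to zero, while the focal term makes confident correct pixels contribute almost nothing, which is why the two behave differently on sparse structures such as neutrophils.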
Affiliation(s)
- Anabik Pal, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Akshay Chaturvedi, Computer Vision and Pattern Recognition Unit, Indian Statistical Institute, India
- Aditi Chandra, Department of Genetics, University of Pennsylvania Perelman School of Medicine, USA
- Alejandro F Frangi, Center for Computational Imaging & Simulation Technologies in Biomedicine, University of Leeds, UK
- Utpal Garain, Computer Vision and Pattern Recognition Unit, Indian Statistical Institute, India
105

106
Rivas-Villar D, Hervella ÁS, Rouco J, Novo J. Color fundus image registration using a learning-based domain-specific landmark detection methodology. Comput Biol Med 2022; 140:105101. [PMID: 34875412] [DOI: 10.1016/j.compbiomed.2021.105101] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 07/30/2021] [Revised: 11/29/2021] [Accepted: 11/29/2021] [Indexed: 11/17/2022]
Abstract
Medical imaging, and particularly retinal imaging, allows accurate diagnosis of many eye pathologies as well as some systemic diseases such as hypertension or diabetes. Registering these images is crucial to correctly compare key structures, not only within patients, but also to contrast data with a model or among a population. Currently, this field is dominated by complex classical methods, because novel deep learning methods cannot yet compete with them in terms of results and commonly used deep learning methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images, building on previous works that employed classical approaches to detect domain-specific landmarks. Instead, we use deep learning to detect these highly domain-specific landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. These keypoints are matched using a method based on RANSAC (Random Sample Consensus) without requiring the computation of complex descriptors. Our method was tested on the public FIRE dataset, although the landmark detection network was trained on the DRIVE dataset. Our method provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P, and 0.660 for category A). Therefore, our proposal can compete with complex classical methods and outperforms the state-of-the-art deep learning methods.
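The descriptor-free RANSAC matching mentioned here rests on a simple loop: hypothesize a transform from a minimal sample, score it by inlier count, keep the best. A didactic NumPy sketch for the simplest possible transform (a pure 2D translation; the paper's registration estimates richer transforms) could look like:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=2.0, seed=0):
    # RANSAC for a 2D translation between matched keypoint sets:
    # hypothesize from one random correspondence, score by inlier count,
    # and keep the hypothesis with the most inliers.
    rng = np.random.default_rng(seed)
    best_t, best_inliers = np.zeros(2), -1
    for _ in range(n_iter):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                            # translation hypothesis
        resid = np.linalg.norm(src + t - dst, axis=1)  # per-match residual
        inliers = int(np.sum(resid < tol))
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```

Because each hypothesis comes from a single correspondence, a mismatched keypoint simply produces a low-inlier hypothesis and is voted out, which is the property that lets the method skip descriptors.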
Affiliation(s)
- David Rivas-Villar, Centro de investigacion CITIC, Universidade da Coruña, 15071, A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006, A Coruña, Spain
- Álvaro S Hervella, Centro de investigacion CITIC, Universidade da Coruña, 15071, A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006, A Coruña, Spain
- José Rouco, Centro de investigacion CITIC, Universidade da Coruña, 15071, A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006, A Coruña, Spain
- Jorge Novo, Centro de investigacion CITIC, Universidade da Coruña, 15071, A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006, A Coruña, Spain
107
DAVS-NET: Dense Aggregation Vessel Segmentation Network for retinal vasculature detection in fundus images. PLoS One 2022; 16:e0261698. [PMID: 34972109] [PMCID: PMC8719769] [DOI: 10.1371/journal.pone.0261698] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 06/09/2021] [Accepted: 12/07/2021] [Indexed: 12/26/2022]
Abstract
In this era, deep learning-based medical image analysis has become a reliable aid to medical practitioners in diagnosing various retinal diseases, such as hypertension, diabetic retinopathy (DR), arteriosclerosis, glaucoma, and macular edema. Among these retinal diseases, DR can lead to vision loss in diabetic patients; it causes swelling of the retinal blood vessels and can even create new vessels. This vessel swelling and the creation of new vessels can be used as biomarkers for DR screening and analysis. Deep learning-based semantic segmentation of these vessels can be an effective tool to detect changes in retinal vasculature for diagnostic purposes. This segmentation task is challenging because of low-quality retinal images, differing image acquisition conditions, and intensity variations. Existing retinal blood vessel segmentation methods require a large number of trainable parameters. This paper introduces a novel Dense Aggregation Vessel Segmentation Network (DAVS-Net), which can achieve high segmentation performance with only a few trainable parameters. For faster convergence, this network uses an encoder-decoder framework in which edge information is transferred from the first layers of the encoder to the last layer of the decoder. Performance of the proposed network is evaluated on the publicly available retinal blood vessel datasets DRIVE, CHASE_DB1, and STARE. The proposed method achieved state-of-the-art segmentation accuracy using a small number of trainable parameters.
108
Yu X, Tang S, Cheang CF, Yu HH, Choi IC. Multi-Task Model for Esophageal Lesion Analysis Using Endoscopic Images: Classification with Image Retrieval and Segmentation with Attention. Sensors 2021; 22:283. [PMID: 35009825] [PMCID: PMC8749873] [DOI: 10.3390/s22010283] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 11/29/2021] [Revised: 12/24/2021] [Accepted: 12/27/2021] [Indexed: 12/12/2022]
Abstract
The automatic analysis of endoscopic images to assist endoscopists in accurately identifying the types and locations of esophageal lesions remains a challenge. In this paper, we propose a novel multi-task deep learning model for automatic diagnosis that does not simply replace the endoscopist in decision making: endoscopists are expected to correct false predictions of the diagnosis system when additional supporting information is provided. To help endoscopists improve diagnostic accuracy in identifying the types of lesions, an image retrieval module is added to the classification task to provide an additional confidence level for the predicted lesion types. In addition, a mutual attention module is added to the segmentation task to improve its performance in locating esophageal lesions. The proposed model is evaluated and compared with other deep learning models using a dataset of 1003 endoscopic images, comprising 290 esophageal cancer, 473 esophagitis, and 240 normal images. The experimental results show the promising performance of our model, with a high accuracy of 96.76% for classification and a Dice coefficient of 82.47% for segmentation. Consequently, the proposed multi-task deep learning model can be an effective tool to help endoscopists judge esophageal lesions.
Affiliation(s)
- Xiaoyuan Yu, Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Suigu Tang, Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Chak Fong Cheang, Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau (corresponding author)
- Hon Ho Yu, Kiang Wu Hospital, Santo António, Macau (corresponding author)
109
Wahid FF, Sugandhi K, Raju G. A Fusion Based Approach for Blood Vessel Segmentation from Fundus Images by Separating Brighter Optic Disc. Pattern Recognition and Image Analysis 2021. [DOI: 10.1134/s105466182104026x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/23/2022]
110
Liu W, Jiang Y, Zhang J, Ma Z. RFARN: Retinal vessel segmentation based on reverse fusion attention residual network. PLoS One 2021; 16:e0257256. [PMID: 34860847] [PMCID: PMC8641866] [DOI: 10.1371/journal.pone.0257256] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 06/11/2021] [Accepted: 08/26/2021] [Indexed: 11/21/2022]
Abstract
Accurate segmentation of retinal vessels is critical for understanding the mechanisms, diagnosis, and treatment of many ocular pathologies. The poor contrast and inhomogeneous background of fundus imaging, together with the complex structure of retinal fundus images, make accurate segmentation of blood vessels from retinal images challenging. In this paper, we propose an effective framework for retinal vessel segmentation that is innovative mainly in the pre-processing and segmentation stages. First, we perform image enhancement on three publicly available fundus datasets based on the multiscale retinex with color restoration (MSRCR) method, which effectively suppresses noise and highlights the vessel structure, creating a good basis for the segmentation phase. The processed fundus images are then fed into an effective Reverse Fusion Attention Residual Network (RFARN) for training to achieve more accurate retinal vessel segmentation. In the RFARN, we use a Reverse Channel Attention Module (RCAM) and a Reverse Spatial Attention Module (RSAM) to highlight the shallow details of the channel and spatial dimensions, and to fuse deep local features with shallow global features, ensuring the continuity and integrity of the segmented vessels. On the DRIVE, STARE, and CHASE datasets, the accuracy (Acc) was 0.9712, 0.9822, and 0.9780; the sensitivity (Se) was 0.8788, 0.8874, and 0.8352; the specificity (Sp) was 0.9803, 0.9891, and 0.9890; the area under the ROC curve (AUC) was 0.9910, 0.9952, and 0.9904; and the F1-score was 0.8453, 0.8707, and 0.8185, respectively. Compared with existing retinal image segmentation methods such as UNet, R2UNet, DUNet, HAnet, Sine-Net, and FANet, our method achieved better vessel segmentation performance on all three fundus datasets.
Affiliation(s)
- Wenhuan Liu, College of Computer Science and Engineering, Northwest Normal University, Lanzhou, Gansu, China
- Yun Jiang, College of Computer Science and Engineering, Northwest Normal University, Lanzhou, Gansu, China
- Jingyao Zhang, College of Computer Science and Engineering, Northwest Normal University, Lanzhou, Gansu, China
- Zeqi Ma, College of Computer Science and Engineering, Northwest Normal University, Lanzhou, Gansu, China
111
Li Z, Jia M, Yang X, Xu M. Blood Vessel Segmentation of Retinal Image Based on Dense-U-Net Network. Micromachines 2021; 12:1478. [PMID: 34945328] [PMCID: PMC8705734] [DOI: 10.3390/mi12121478] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 10/20/2021] [Revised: 11/25/2021] [Accepted: 11/25/2021] [Indexed: 11/02/2022]
Abstract
The accurate segmentation of retinal blood vessels in fundus images is of great practical significance in helping doctors diagnose fundus diseases. To address the serious segmentation errors and low accuracy of traditional retinal segmentation, a scheme combining U-Net and Dense-Net is proposed. First, the vascular feature information is enhanced by fusing contrast-limited histogram equalization, median filtering, data normalization, and multi-scale morphological transformation, and artifacts are corrected by adaptive gamma correction. Second, randomly extracted image patches are used as training data to augment the data and improve generalization. Third, stochastic gradient descent is used to optimize the Dice loss function to improve segmentation accuracy. Finally, the Dense-U-net model is used for segmentation. The specificity, accuracy, sensitivity and AUC of this algorithm are 0.9896, 0.9698, 0.7931, 0.8946 and 0.9738, respectively. The proposed method improves overall vessel segmentation accuracy, as well as the segmentation of small vessels.
112
Kovács G, Fazekas A. A new baseline for retinal vessel segmentation: Numerical identification and correction of methodological inconsistencies affecting 100+ papers. Med Image Anal 2021; 75:102300. [PMID: 34814057] [DOI: 10.1016/j.media.2021.102300] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 03/21/2021] [Revised: 09/20/2021] [Accepted: 11/04/2021] [Indexed: 12/18/2022]
Abstract
In the last 15 years, the segmentation of vessels in retinal images has become an intensively researched problem in medical imaging, with hundreds of algorithms published. One of the de facto benchmarking data sets of vessel segmentation techniques is the DRIVE data set. Since DRIVE contains a predefined split of training and test images, the published performance results of the various segmentation techniques should provide a reliable ranking of the algorithms. Including more than 100 papers in the study, we performed a detailed numerical analysis of the coherence of the published performance scores. We found inconsistencies in the reported scores related to the use of the field of view (FoV), which has a significant impact on the performance scores. We attempted to eliminate the biases using numerical techniques to provide a more realistic picture of the state of the art. Based on the results, we have formulated several findings, most notably: despite the well-defined test set of DRIVE, most rankings in published papers are based on non-comparable figures; in contrast to the near-perfect accuracy scores reported in the literature, the highest accuracy score achieved to date is 0.9582 in the FoV region, which is 1% higher than that of human annotators. The methods we have developed for identifying and eliminating the evaluation biases can be easily applied to other domains where similar problems may arise.
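The field-of-view inconsistency this entry analyzes is easy to reproduce numerically. The following synthetic NumPy sketch (illustrative toy data, not the paper's evaluation code) shows how counting the trivially-correct pixels outside the circular FoV inflates the accuracy score:

```python
import numpy as np

rng = np.random.default_rng(0)
h = w = 64
yy, xx = np.mgrid[:h, :w]
# Circular field of view, as in fundus photographs
fov = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 < (h / 2 - 2) ** 2

truth = (rng.random((h, w)) < 0.12) & fov            # sparse "vessel" pixels inside the FoV
pred = truth ^ (fov & (rng.random((h, w)) < 0.10))   # ~10% pixel errors, inside the FoV only

acc_fov = float(np.mean(pred[fov] == truth[fov]))    # accuracy inside the FoV
acc_full = float(np.mean(pred == truth))             # whole-image accuracy
# acc_full > acc_fov: every outside-FoV background pixel is "correct" for free
```

Since the black border is identical in prediction and ground truth, whole-image accuracy is always at least as high as FoV-restricted accuracy, which is precisely why scores computed under the two conventions are not comparable.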
Affiliation(s)
- György Kovács, Analytical Minds Ltd., Árpád street 5, Beregsurány 4933, Hungary
- Attila Fazekas, University of Debrecen, Faculty of Informatics, P.O. Box 400, Debrecen 4002, Hungary
113
Shekar S, Satpute N, Gupta A. Review on diabetic retinopathy with deep learning methods. J Med Imaging (Bellingham) 2021; 8:060901. [PMID: 34859116] [DOI: 10.1117/1.jmi.8.6.060901] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 05/13/2021] [Accepted: 10/27/2021] [Indexed: 11/14/2022]
Abstract
Purpose: The purpose of our review is to examine the existing literature on methods for diabetic retinopathy (DR) recognition employing deep learning (DL) and machine learning (ML) techniques, and to address the difficulties posed by the various datasets used for DR. Approach: DR is a progressive illness and may cause vision loss. Early identification of DR lesions is therefore helpful and prevents damage to the retina. However, it is a complex job, given that DR is symptomless in its early stages and that traditional approaches require ophthalmologists. Recently, studies on automated identification of DR based on image processing, ML, and DL have been reported. We analyze the recent literature and provide a comparative study that also covers the limitations of the literature and directions for future work. Results: A comparative analysis of the databases used, the performance metrics employed, and the ML and DL techniques recently adopted for DR detection based on various DR features is presented. Conclusion: Our review discusses the methods employed in DR detection along with the technical and clinical challenges encountered, which is missing in existing reviews, as well as future scopes to assist researchers in the field of retinal imaging.
Affiliation(s)
- Shreya Shekar, College of Engineering Pune, Department of Electronics and Telecommunication Engineering, Pune, Maharashtra, India
- Nitin Satpute, Aarhus University, Department of Electrical and Computer Engineering, Aarhus, Denmark
- Aditya Gupta, College of Engineering Pune, Department of Electronics and Telecommunication Engineering, Pune, Maharashtra, India
114
Yang Q, Ma B, Cui H, Ma J. AMF-NET: Attention-aware Multi-scale Fusion Network for Retinal Vessel Segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3277-3280. [PMID: 34891940] [DOI: 10.1109/embc46164.2021.9630756] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Indexed: 11/08/2022]
Abstract
Automatic retinal vessel segmentation in fundus images can assist effective and efficient diagnosis of retinal disease. Estimating the microstructure of capillaries has long been a challenging issue. To tackle this problem, we propose an attention-aware multi-scale fusion network (AMF-Net). Our network uses dense convolutions to perceive microscopic capillaries. Additionally, multi-scale features are extracted and fused with adaptive weights by a channel attention module to improve segmentation performance. Finally, spatial attention is introduced via position attention modules to capture long-distance feature dependencies. The proposed model is evaluated on two public datasets, DRIVE and CHASE_DB1. Extensive experiments demonstrate that our model outperforms existing methods, and an ablation study validates the effectiveness of the proposed components.
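Channel attention of the kind described here assigns each feature channel an adaptive weight. A squeeze-and-excitation style toy in NumPy (the weight matrices `w1`/`w2` are hypothetical random projections standing in for learned parameters; this is the general mechanism, not the AMF-Net module itself):

```python
import numpy as np

def channel_attention(feats, w1, w2):
    # feats: (C, H, W) feature map.
    # Squeeze: global average pool per channel -> (C,) descriptor.
    # Excite: 2-layer MLP + sigmoid -> one gate weight per channel.
    # Rescale: multiply each channel by its gate.
    c = feats.shape[0]
    squeeze = feats.reshape(c, -1).mean(axis=1)      # (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck, (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid, (C,)
    return feats * gate[:, None, None], gate
```

The sigmoid gates let the network emphasize scales (channels) that carry capillary detail and suppress the rest, which is the "adaptive weights" fusion the abstract refers to.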
115
Owler J, Rockett P. Influence of background preprocessing on the performance of deep learning retinal vessel detection. J Med Imaging (Bellingham) 2021; 8:064001. [PMID: 34746333] [PMCID: PMC8562352] [DOI: 10.1117/1.jmi.8.6.064001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/09/2021] [Accepted: 10/18/2021] [Indexed: 11/14/2022]
Abstract
Purpose: Segmentation of the vessel tree from retinal fundus images can be used to track changes in the retina and be an important first step in a diagnosis. Manual segmentation is a time-consuming process that is prone to error; effective and reliable automation can alleviate these problems but one of the difficulties is uneven image background, which may affect segmentation performance. Approach: We present a patch-based deep learning framework, based on a modified U-Net architecture, that automatically segments the retinal blood vessels from fundus images. In particular, we evaluate how various pre-processing techniques, images with either no processing, N4 bias field correction, contrast limited adaptive histogram equalization (CLAHE), or a combination of N4 and CLAHE, can compensate for uneven image background and impact final segmentation performance. Results: We achieved competitive results on three publicly available datasets as a benchmark for our comparisons of pre-processing techniques. In addition, we introduce Bayesian statistical testing, which indicates little practical difference (Pr > 0.99) between pre-processing methods apart from the sensitivity metric. In terms of sensitivity and pre-processing, the combination of N4 correction and CLAHE performs better in comparison to unprocessed and N4 pre-processing (Pr > 0.87); but compared to CLAHE alone, the differences are not significant (Pr ≈ 0.38 to 0.88). Conclusions: We conclude that deep learning is an effective method for retinal vessel segmentation and that CLAHE pre-processing has the greatest positive impact on segmentation performance, with N4 correction helping only in images with extremely inhomogeneous background illumination.
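CLAHE, the pre-processing step this study found most beneficial, extends plain histogram equalization by applying it per tile with a clipped histogram. The global operation it builds on fits in a few lines of NumPy; this is a didactic sketch, not the study's pipeline, which would have used a full CLAHE implementation such as OpenCV's createCLAHE:

```python
import numpy as np

def hist_equalize(img):
    # Global histogram equalization of an 8-bit grayscale image:
    # map intensities through the normalized CDF so the output
    # histogram is approximately flat. CLAHE applies the same mapping
    # per tile, with each tile's histogram clipped to limit contrast gain.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    return (cdf * 255.0).astype(np.uint8)[img]
```

The clip limit and tiling are what keep CLAHE from amplifying noise in the nearly-uniform retinal background, which plain equalization would do.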
Affiliation(s)
- James Owler, University of Sheffield, Bioengineering—Interdisciplinary Programmes Engineering, United Kingdom
- Peter Rockett, University of Sheffield, Department of Electronic and Electrical Engineering, Sheffield, United Kingdom
116
Zou B, Dai Y, He Q, Zhu C, Liu G, Su Y, Tang R. Multi-Label Classification Scheme Based on Local Regression for Retinal Vessel Segmentation. IEEE/ACM Trans Comput Biol Bioinform 2021; 18:2586-2597. [PMID: 32175869] [DOI: 10.1109/tcbb.2020.2980233] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Indexed: 06/10/2023]
Abstract
Segmenting small retinal vessels with width less than 2 pixels in fundus images is a challenging task. In this paper, in order to effectively segment the vessels, especially the narrow parts, we propose a local regression scheme to enhance the narrow parts, along with a novel multi-label classification method based on this scheme. We consider five labels for blood vessels and background in particular: the center of big vessels, the edge of big vessels, the center as well as the edge of small vessels, the center of background, and the edge of background. We first determine the multi-label by the local de-regression model according to the vessel pattern from the ground truth images. Then, we train a convolutional neural network (CNN) for multi-label classification. Next, we perform a local regression method to transform the previous multi-label into binary label to better locate small vessels and generate an entire retinal vessel image. Our method is evaluated using two publicly available datasets and compared with several state-of-the-art studies. The experimental results have demonstrated the effectiveness of our method in segmenting retinal vessels.
117
Guo S. Fundus image segmentation via hierarchical feature learning. Comput Biol Med 2021; 138:104928. [PMID: 34662814] [DOI: 10.1016/j.compbiomed.2021.104928] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Received: 07/18/2021] [Revised: 10/06/2021] [Accepted: 10/06/2021] [Indexed: 01/28/2023]
Abstract
Fundus Image Segmentation (FIS) is an essential procedure for the automated diagnosis of ophthalmic diseases. Recently, deep fully convolutional networks have been widely used for FIS with state-of-the-art performance. The representative deep model is the U-Net, which follows an encoder-decoder architecture. I believe it is suboptimal for FIS because consecutive pooling operations in the encoder lead to low-resolution representation and loss of detailed spatial information, which is particularly important for the segmentation of tiny vessels and lesions. Motivated by this, a high-resolution hierarchical network (HHNet) is proposed to learn semantic-rich high-resolution representations and preserve spatial details simultaneously. Specifically, a High-resolution Feature Learning (HFL) module with increasing dilation rates was first designed to learn the high-level high-resolution representations. Then, the HHNet was constructed by incorporating three HFL modules and two feature aggregation modules. The HHNet runs in a coarse-to-fine manner, and fine segmentation maps are output at the last level. Extensive experiments were conducted on fundus lesion segmentation, vessel segmentation, and optic cup segmentation. The experimental results reveal that the proposed method shows highly competitive or even superior performance in terms of segmentation performance and computation cost, indicating its potential advantages in clinical application.
Affiliation(s)
- Song Guo, School of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an, 710055, China
118
Li C, Ma W, Sun L, Ding X, Huang Y, Wang G, Yu Y. Hierarchical deep network with uncertainty-aware semi-supervised learning for vessel segmentation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06578-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/20/2022]
119
Ding L, Kuriyan AE, Ramchandran RS, Wykoff CC, Sharma G. Weakly-Supervised Vessel Detection in Ultra-Widefield Fundus Photography via Iterative Multi-Modal Registration and Learning. IEEE Trans Med Imaging 2021; 40:2748-2758. [PMID: 32991281] [PMCID: PMC8513803] [DOI: 10.1109/tmi.2020.3027665] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Indexed: 06/11/2023]
Abstract
We propose a deep-learning based annotation-efficient framework for vessel detection in ultra-widefield (UWF) fundus photography (FP) that does not require de novo labeled UWF FP vessel maps. Our approach utilizes concurrently captured UWF fluorescein angiography (FA) images, for which effective deep learning approaches have recently become available, and iterates between a multi-modal registration step and a weakly-supervised learning step. In the registration step, the UWF FA vessel maps detected with a pre-trained deep neural network (DNN) are registered with the UWF FP via parametric chamfer alignment. The warped vessel maps can be used as the tentative training data but inevitably contain incorrect (noisy) labels due to the differences between FA and FP modalities and the errors in the registration. In the learning step, a robust learning method is proposed to train DNNs with noisy labels. The detected FP vessel maps are used for the registration in the following iteration. The registration and the vessel detection benefit from each other and are progressively improved. Once trained, the UWF FP vessel detection DNN from the proposed approach allows FP vessel detection without requiring concurrently captured UWF FA images. We validate the proposed framework on a new UWF FP dataset, PRIME-FP20, and on existing narrow-field FP datasets. Experimental evaluation, using both pixel-wise metrics and the CAL metrics designed to provide better agreement with human assessment, shows that the proposed approach provides accurate vessel detection, without requiring manually labeled UWF FP training data.
Collapse
|
120
|
Abstract
The segmentation of retinal vessels is critical for the diagnosis of some fundus diseases. Retinal vessel segmentation requires abundant spatial information and receptive fields of different sizes, while existing methods usually sacrifice spatial resolution to achieve real-time inference speed, resulting in inadequate vessel segmentation in low-contrast regions and weak resistance to noise interference. The asymmetry of capillaries in fundus images further increases the difficulty of segmentation. In this paper, we propose a two-branch network based on multi-scale attention to alleviate the above problems. First, a coarse network with a multi-scale U-Net as the backbone is designed to capture more semantic information and to generate high-resolution features. A multi-scale attention module is used to obtain sufficiently large receptive fields. The other branch is a fine network, which uses the residual block of a small convolution kernel to make up for the deficiency of spatial information. Finally, we use a feature fusion module to aggregate the information of the coarse and fine networks. The experiments were performed on the DRIVE, CHASE, and STARE datasets. The accuracy reached 96.93%, 97.58%, and 97.70%, the specificity reached 97.72%, 98.52%, and 98.94%, and the F-measure reached 83.82%, 81.39%, and 84.36%, respectively. Experimental results show that, compared with state-of-the-art methods such as Sine-Net and SA-Net, our proposed method performs better on all three datasets.
Collapse
|
121
|
Li Y, Yang J, Ni J, Elazab A, Wu J. TA-Net: Triple attention network for medical image segmentation. Comput Biol Med 2021; 137:104836. [PMID: 34507157 DOI: 10.1016/j.compbiomed.2021.104836] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2021] [Revised: 09/01/2021] [Accepted: 09/02/2021] [Indexed: 11/16/2022]
Abstract
The automatic segmentation of medical images has made continuous progress due to the development of convolutional neural networks (CNNs) and attention mechanisms. However, previous works usually explore the attention features of a certain dimension in the image and thus may ignore the correlation between feature maps in other dimensions. Capturing the global features of various dimensions therefore remains challenging. To deal with this problem, we propose a triple attention network (TA-Net) that exploits the ability of the attention mechanism to simultaneously recognize global contextual information in the channel domain, the spatial domain, and the feature-internal domain. Specifically, in the encoder, we propose a channel with self-attention encoder (CSE) block to learn the long-range dependencies of pixels. The CSE effectively increases the receptive field and enhances the representation of target features. In the decoder, we propose a spatial attention up-sampling (SU) block that makes the network pay more attention to the positions of useful pixels when fusing low-level and high-level features. Extensive experiments were conducted on four public datasets and one local dataset, covering retinal blood vessels (DRIVE and STARE), cells (ISBI 2012), cutaneous melanoma (ISIC 2017), and intracranial blood vessels. Experimental results demonstrate that the proposed TA-Net is overall superior to previous state-of-the-art methods in different medical image segmentation tasks, with high accuracy, promising robustness, and relatively low redundancy.
Collapse
Affiliation(s)
- Yang Li
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Jun Yang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Jiajia Ni
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ahmed Elazab
- School of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Jianhuang Wu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
Collapse
|
122
|
Prokop-Piotrkowska M, Marszałek-Dziuba K, Moszczyńska E, Szalecki M, Jurkiewicz E. Traditional and New Methods of Bone Age Assessment-An Overview. J Clin Res Pediatr Endocrinol 2021; 13:251-262. [PMID: 33099993 PMCID: PMC8388057 DOI: 10.4274/jcrpe.galenos.2020.2020.0091] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/01/2022] Open
Abstract
Bone age is one of the biological indicators of maturity used in clinical practice and a very important parameter of a child’s assessment, especially in paediatric endocrinology. The most widely used method of bone age assessment is a hand and wrist radiograph analysed with the Greulich-Pyle or Tanner-Whitehouse atlases, although it has been about 60 years since they were published. Due to progress in the area of Computer-Aided Diagnosis and the application of artificial intelligence in medicine, numerous programs for automatic bone age assessment have lately been created. Most of them have been verified in clinical studies against traditional methods, showing good precision while eliminating inter- and intra-rater variability and significantly reducing the time of assessment. Additionally, methods of bone age assessment that avoid X-ray exposure are available, using modalities such as ultrasound or magnetic resonance imaging.
Collapse
Affiliation(s)
- Monika Prokop-Piotrkowska
- Children’s Memorial Health Institute, Department of Endocrinology and Diabetology, Warsaw, Poland. *Address for Correspondence: Children’s Memorial Health Institute, Department of Endocrinology and Diabetology, Warsaw, Poland. Phone: +48 608 523 869
- Kamila Marszałek-Dziuba
- Children’s Memorial Health Institute, Department of Endocrinology and Diabetology, Warsaw, Poland
- Elżbieta Moszczyńska
- Children’s Memorial Health Institute, Department of Endocrinology and Diabetology, Warsaw, Poland
- Elżbieta Jurkiewicz
- Children’s Memorial Health Institute, Department of Diagnostic Imaging, Warsaw, Poland
Collapse
|
123
|
Hakim L, Kavitha MS, Yudistira N, Kurita T. Regularizer based on Euler characteristic for retinal blood vessel segmentation. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2021.05.023] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
124
|
Detecting pulmonary Coccidioidomycosis with deep convolutional neural networks. MACHINE LEARNING WITH APPLICATIONS 2021. [DOI: 10.1016/j.mlwa.2021.100040] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023] Open
|
125
|
Ramírez-Correa PE, Rondán-Cataluña FJ, Arenas-Gaitán J, Grandón EE, Alfaro-Pérez JL, Ramírez-Santana M. Segmentation of Older Adults in the Acceptance of Social Networking Sites Using Machine Learning. Front Psychol 2021; 12:705715. [PMID: 34456818 PMCID: PMC8385199 DOI: 10.3389/fpsyg.2021.705715] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Accepted: 07/19/2021] [Indexed: 11/18/2022] Open
Abstract
This study analyzes the most important predictors of the acceptance of social networking sites in a sample of Chilean older people (over 60). We employ a novel procedure to explore this phenomenon: it performs a priori segmentation based on gender and generation, and then applies a deep learning technique to identify the predictors (performance expectancy, effort expectancy, altruism, telepresence, social identity, facilitating conditions, hedonic motivation, perceived physical condition, social norms, habit, and trust) by segment. The predictor variables were taken from the literature on the use of social networking sites, and an empirical study was carried out by quota sampling with a sample size of 395 older people. The results show different predictors of social networking site acceptance across the whole sample, baby boomer (born between 1947 and 1966) males and females, and silent generation (born between 1927 and 1946) males and females. The high heterogeneity among older people is confirmed; treating older adults as a uniform set of users of social networking sites is therefore a mistake. This study demonstrates that the four segments behave differently and that many diverse variables influence the acceptance of social networking sites.
Collapse
Affiliation(s)
- Jorge Arenas-Gaitán
- Department of Business Administration and Marketing, University of Seville, Seville, Spain
Collapse
|
126
|
SERR-U-Net: Squeeze-and-Excitation Residual and Recurrent Block-Based U-Net for Automatic Vessel Segmentation in Retinal Image. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:5976097. [PMID: 34422093 PMCID: PMC8371614 DOI: 10.1155/2021/5976097] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/02/2021] [Revised: 07/03/2021] [Accepted: 07/24/2021] [Indexed: 11/23/2022]
Abstract
Methods A new SERR-U-Net framework for retinal vessel segmentation is proposed, which leverages Squeeze-and-Excitation (SE), residual modules, and recurrent blocks. First, the convolution layers of the encoder and decoder are modified on the basis of U-Net, and the recurrent block is used to increase the network depth. Second, the residual module is utilized to alleviate the vanishing gradient problem. Finally, to derive more specific vascular features, we employ the SE structure to introduce an attention mechanism into the U-shaped network. In addition, an enhanced super-resolution generative adversarial network (ESRGAN) is deployed to remove noise from the retinal images. Results The effectiveness of this method was tested on two public datasets, DRIVE and STARE. On the DRIVE dataset, the accuracy and AUC (area under the curve) of our method were 0.9552 and 0.9784, respectively; on the STARE dataset, 0.9796 and 0.9859 were achieved, demonstrating high accuracy and a promising prospect for clinical assistance. Conclusion An improved U-Net combining SE, residual, and recurrent technologies is developed for automatic vessel segmentation from retinal images. The new model improves on the accuracy of learning-based methods, and its robustness in handling challenging cases, such as small blood vessels and vessel intersections, is also well demonstrated and validated.
Collapse
|
127
|
Simultaneous segmentation and classification of the retinal arteries and veins from color fundus images. Artif Intell Med 2021; 118:102116. [PMID: 34412839 DOI: 10.1016/j.artmed.2021.102116] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 05/20/2021] [Accepted: 05/21/2021] [Indexed: 01/25/2023]
Abstract
BACKGROUND AND OBJECTIVES The study of the retinal vasculature represents a fundamental stage in the screening and diagnosis of many high-incidence diseases, both systemic and ophthalmic. A complete retinal vascular analysis requires the segmentation of the vascular tree along with the classification of the blood vessels into arteries and veins. Early automatic methods approach these complementary segmentation and classification tasks in two sequential stages. Currently, however, the two tasks are approached as a joint semantic segmentation, because the classification results highly depend on the effectiveness of the vessel segmentation. In that regard, we propose a novel approach for the simultaneous segmentation and classification of the retinal arteries and veins from eye fundus images. METHODS Unlike previous approaches, and thanks to a novel loss, the proposed method decomposes the joint task into three segmentation problems targeting arteries, veins, and the whole vascular tree. This configuration allows vessel crossings to be handled intuitively and directly provides accurate segmentation masks of the different target vascular trees. RESULTS The ablation study on the public Retinal Images vessel Tree Extraction (RITE) dataset demonstrates that the proposed method provides satisfactory performance, particularly in the segmentation of the different structures. Furthermore, the comparison with the state of the art shows that our method achieves highly competitive results in artery/vein classification while significantly improving the vascular segmentation. CONCLUSIONS The proposed multi-segmentation method detects more vessels and better segments the different structures while achieving competitive classification performance; in these terms, it outperforms various reference works. Moreover, in contrast with previous approaches, the proposed method directly detects vessel crossings and preserves the continuity of both arteries and veins at these complex locations.
Collapse
|
128
|
Xin M, Wen J, Wang Y, Yu W, Fang B, Hu J, Xu Y, Linghu C. Blood Vessel Segmentation Based on the 3D Residual U-Net. INT J PATTERN RECOGN 2021. [DOI: 10.1142/s021800142157007x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
In this paper, we propose blood vessel segmentation based on a 3D residual U-Net. First, we integrate the residual block structure into the 3D U-Net: by exploring the influence of adding residual blocks at different positions, we establish a novel and effective 3D residual U-Net. In addition, to address the challenges of pixel imbalance, vessel boundary segmentation, and small vessel segmentation, we develop a new weighted Dice loss function that performs better than the weighted cross-entropy loss function. When training the model, we adopt a two-stage coarse-to-fine method; in the fine stage, a local segmentation method based on a 3D sliding window is added. In the model testing phase, we use a 3D fixed-point method. Furthermore, we employ the 3D morphological closing operation to smooth the surfaces of vessels, and volume analysis to remove noise blocks. To verify the accuracy and stability of our method, we compare it with FCN, 3D DenseNet, and 3D U-Net. The experimental results indicate that our method has higher accuracy and better stability than the other studied methods, and that the average Dice coefficients for hepatic veins and portal veins reach 71.7% and 76.5% in the coarse stage and 72.5% and 77.2% in the fine stage, respectively. To verify the robustness of the model, we conducted the same comparative experiment on brain vessel datasets, where the average Dice coefficient reached 87.2%.
Collapse
Affiliation(s)
- Mulin Xin
- College of Computer Science, Chongqing University, Chongqing 401331, P. R. China
- Jing Wen
- College of Computer Science, Chongqing University, Chongqing 401331, P. R. China
- Yi Wang
- College of Computer Science, Chongqing University, Chongqing 401331, P. R. China
- Wei Yu
- College of Computer Science, Chongqing University, Chongqing 401331, P. R. China
- Bin Fang
- College of Computer Science, Chongqing University, Chongqing 401331, P. R. China
- Jun Hu
- Southwest Hospital, Army Military Medical University, Chongqing 401331, P. R. China
- Yongmei Xu
- Southwest Hospital, Army Military Medical University, Chongqing 401331, P. R. China
- Chunhong Linghu
- Southwest Hospital, Army Military Medical University, Chongqing 401331, P. R. China
Collapse
|
129
|
Adu K, Yu Y, Cai J, Dela Tattrah V, Adu Ansere J, Tashi N. S-CCCapsule: Pneumonia detection in chest X-ray images using skip-connected convolutions and capsule neural network. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-202638] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The squash function used in the dynamic routing of capsule networks (CapsNets) is less capable of discriminating non-informative capsules, which leads to an abnormal distribution of capsule activation values. In this paper, we propose vertical squash (VSquash), which improves the original squash by shrinking the activation values of non-informative capsules in the primary capsule layer, promoting discriminative capsules, and avoiding high information sensitivity. Furthermore, new CapsNet-based neural networks are presented in which VSquash is applied in the dynamic routing: (i) the skip-connected convolutional capsule network (S-CCCapsule), (ii) integrated skip-connected convolutional capsules (ISCC), and (iii) ensemble skip-connected convolutional capsules (ESCC). To achieve a uniform distribution of the coupling-coefficient probabilities between capsules, we use the Sigmoid function rather than the Softmax function. Experiments on the Guangzhou Women and Children’s Medical Center (GWCMC), Radiological Society of North America (RSNA), and Mendeley CXR pneumonia datasets were performed to validate the effectiveness of the proposed methods. We found that they produce better accuracy than other methods on evaluation metrics such as the confusion matrix, sensitivity, specificity, and area under the curve (AUC). Our method for pneumonia detection performs better than practicing radiologists; it minimizes human error and reduces diagnosis time.
Collapse
Affiliation(s)
- Kwabena Adu
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yongbin Yu
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jingye Cai
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- James Adu Ansere
- College of Internet of Things Engineering, Hohai University, China
- Nyima Tashi
- School of Information Science and Technology, Tibet University, Lhasa, China
Collapse
|
130
|
Yang L, Wang H, Zeng Q, Liu Y, Bian G. A hybrid deep segmentation network for fundus vessels via deep-learning framework. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.085] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
131
|
Xu R, Liu T, Ye X, Liu F, Lin L, Li L, Tanaka S, Chen YW. Joint Extraction of Retinal Vessels and Centerlines Based on Deep Semantics and Multi-Scaled Cross-Task Aggregation. IEEE J Biomed Health Inform 2021; 25:2722-2732. [PMID: 33320815 DOI: 10.1109/jbhi.2020.3044957] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Retinal vessel segmentation and centerline extraction are crucial steps in building a computer-aided diagnosis system for retinal images. Previous works treat them as two isolated tasks, ignoring their tight association. In this paper, we propose a deep semantics and multi-scaled cross-task aggregation network that takes advantage of this association to jointly improve their performance. Our network consists of two sub-networks: the forepart is a deep semantics aggregation sub-network that aggregates strong semantic information to produce more powerful features for both tasks, and the tail is a multi-scaled cross-task aggregation sub-network that explores complementary information to refine the results. We evaluate the proposed method on three public databases: DRIVE, STARE, and CHASE_DB1. Experimental results show that our method not only simultaneously extracts retinal vessels and their centerlines but also achieves state-of-the-art performance on both tasks.
Collapse
|
132
|
Yan Q, Wang B, Zhang W, Luo C, Xu W, Xu Z, Zhang Y, Shi Q, Zhang L, You Z. Attention-Guided Deep Neural Network With Multi-Scale Feature Fusion for Liver Vessel Segmentation. IEEE J Biomed Health Inform 2021; 25:2629-2642. [PMID: 33264097 DOI: 10.1109/jbhi.2020.3042069] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Liver vessel segmentation is fast becoming a key instrument in the diagnosis and surgical planning of liver diseases. In clinical practice, liver vessels are normally annotated manually by clinicians on each slice of CT images, which is extremely laborious. Several deep learning methods exist for liver vessel segmentation; however, improving segmentation performance remains a major challenge due to the large variations and complex structure of liver vessels. Previous methods mainly use the existing U-Net architecture, but not all features of the encoder are useful for segmentation and some even cause interference. To overcome this problem, we propose a novel deep neural network for liver vessel segmentation, called LVSNet, which employs special designs to obtain an accurate liver vessel structure. Specifically, we design an Attention-Guided Concatenation (AGC) module that adaptively selects useful context features from low-level features guided by high-level features; the AGC module focuses on capturing rich complementary information to obtain more details. In addition, we introduce an innovative multi-scale fusion block that constructs hierarchical residual-like connections within a single residual block, which is of great importance for effectively linking local blood vessel fragments together. Furthermore, we construct a new dataset containing 40 thin-slice cases (0.625 mm) consisting of CT volumes and annotated vessels. To evaluate the effectiveness of the method on minor vessels, we also propose an automatic stratification method to split major and minor liver vessels. Extensive experimental results demonstrate that the proposed LVSNet outperforms previous methods on liver vessel segmentation datasets. Additionally, we conduct a series of ablation studies that comprehensively support the superiority of the underlying concepts.
Collapse
|
133
|
V S, G I, A SR. Parallel Architecture of Fully Convolved Neural Network for Retinal Vessel Segmentation. J Digit Imaging 2021; 33:168-180. [PMID: 31342298 DOI: 10.1007/s10278-019-00250-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
Retinal blood vessel extraction is considered an indispensable step in the diagnosis of many retinal diseases. In this work, a parallel fully convolved neural network-based architecture is proposed for retinal blood vessel segmentation, and the improvement in network performance from applying different levels of preprocessed images is studied. The proposed method is evaluated on DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the Retina), the widely accepted public databases in this research area. The proposed work attains high accuracy, sensitivity, and specificity of about 96.37%, 86.53%, and 98.18%, respectively. Data independence is also proved by testing abnormal STARE images with the DRIVE-trained model. The proposed architecture shows better results in vessel extraction irrespective of vessel thickness. The obtained results show that the proposed work outperforms most of the existing segmentation methodologies, and it can be implemented as a real-time application tool since the entire work is carried out on a CPU. The proposed work executes with low-cost computation and, at the same time, takes less than 2 s per image for vessel extraction.
Collapse
Affiliation(s)
- Sathananthavathi V
- Department of ECE, Mepco Schlenk Engineering College, Sivakasi, Tamilnadu, 626005, India.
- Indumathi G
- Department of ECE, Mepco Schlenk Engineering College, Sivakasi, Tamilnadu, 626005, India
- Swetha Ranjani A
- Department of ECE, Mepco Schlenk Engineering College, Sivakasi, Tamilnadu, 626005, India
Collapse
|
134
|
Jiang Y, Wu C, Wang G, Yao HX, Liu WH. MFI-Net: A multi-resolution fusion input network for retinal vessel segmentation. PLoS One 2021; 16:e0253056. [PMID: 34252111 PMCID: PMC8274903 DOI: 10.1371/journal.pone.0253056] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Accepted: 05/27/2021] [Indexed: 11/19/2022] Open
Abstract
Segmentation of retinal vessels is important for doctors to diagnose some diseases. Segmentation accuracy can be effectively improved by using deep learning methods; however, most existing methods extract shallow features incompletely, and some superficial features are lost, resulting in blurred vessel boundaries and inaccurate segmentation of capillaries. At the same time, the "layer-by-layer" information fusion between the encoder and decoder means that feature information extracted in the shallow layers of the network cannot be smoothly transferred to the deep layers, resulting in noise in the segmentation features. In this paper, we propose the MFI-Net (multi-resolution fusion input network) model to alleviate the above problems to a certain extent. The multi-resolution input module in MFI-Net avoids the loss of coarse-grained feature information in the shallow layers by extracting local and global feature information at different resolutions. We have also reconsidered the information fusion between the encoder and the decoder, using an information aggregation method to alleviate the information isolation between the shallow and deep layers of the network. MFI-Net is verified on three datasets: DRIVE, CHASE_DB1, and STARE. The experimental results show that our network performs at a high level on several metrics, with F1 higher than U-Net by 2.42%, 2.46%, and 1.61%, and higher than R2U-Net by 1.47%, 2.22%, and 0.08%, respectively. Finally, this paper proves the robustness of MFI-Net through experiments and discussions on its stability and generalization ability.
Collapse
Affiliation(s)
- Yun Jiang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Chao Wu
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Ge Wang
- Columbia University in the City of New York, New York, New York, United States of America
- Hui-Xia Yao
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Wen-Huan Liu
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
Collapse
|
135
|
Wan T, Chen J, Zhang Z, Li D, Qin Z. Automatic vessel segmentation in X-ray angiogram using spatio-temporal fully-convolutional neural network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102646] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
|
136
|
Zhao X, Liu Y, Zhang W, Meng L, Lv B, Lv C, Xie G, Chen Y. Relationships Between Retinal Vascular Characteristics and Renal Function in Patients With Type 2 Diabetes Mellitus. Transl Vis Sci Technol 2021; 10:20. [PMID: 34003905 PMCID: PMC7884293 DOI: 10.1167/tvst.10.2.20] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose To develop a deep learning-based method for vessel segmentation and measurement on fundus images, and to explore the quantitative relationships between retinal vascular characteristics and clinical indicators of renal function. Methods We recruited patients with type 2 diabetes mellitus at different stages of diabetic retinopathy (DR), collecting their fundus photographs and the results of renal function tests. A deep learning framework for retinal vessel segmentation and measurement was developed. The correlation between the renal function indicators and the severity of DR was explored, and then the correlation coefficients between indicators of renal function and retinal vascular characteristics were analyzed. Results We included 418 patients (eyes) with type 2 diabetes mellitus. The albumin to creatinine ratio, blood uric acid, blood creatinine, blood albumin, and estimated glomerular filtration rate were significantly correlated with the progression of DR (P < 0.05); no correlation existed for the other metrics (P > 0.05). The fractal dimension was found to correlate significantly with most of the clinical parameters of renal function (P < 0.05). Conclusions The albumin to creatinine ratio, blood uric acid, blood creatinine, blood albumin, and estimated glomerular filtration rate have a significant correlation with the progression of moderate to proliferative DR. Through deep learning-based vessel segmentation and measurement, the fractal dimension was found to correlate significantly with most clinical parameters of renal function. Translational Relevance Deep learning-based vessel segmentation and measurement on color fundus photographs can explore the relationships between retinal characteristics and renal function, facilitating earlier detection of and intervention in type 2 diabetes mellitus complications.
Collapse
Affiliation(s)
- Xinyu Zhao
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Yang Liu
- Ping An Healthcare Technology, Beijing, China
- Wenfei Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Lihui Meng
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Bin Lv
- Ping An Healthcare Technology, Beijing, China
- Guotong Xie
- Ping An Healthcare Technology, Beijing, China; Ping An Health Cloud Company Limited, Shenzhen, China; Ping An International Smart City Technology Company Limited, Shenzhen, China
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, China
137
Ashraf MN, Hussain M, Habib Z. Review of Various Tasks Performed in the Preprocessing Phase of a Diabetic Retinopathy Diagnosis System. Curr Med Imaging 2021; 16:397-426. [PMID: 32410541 DOI: 10.2174/1573405615666190219102427] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2018] [Revised: 12/31/2018] [Accepted: 01/20/2019] [Indexed: 12/15/2022]
Abstract
Diabetic Retinopathy (DR) is a major cause of blindness in diabetic patients. The growing population of diabetic patients and the difficulty of diagnosing DR at an early stage are limiting the screening capacity of manual diagnosis by ophthalmologists. Color fundus images are widely used to detect DR lesions because their acquisition is comfortable, cost-effective, and non-invasive. Computer Aided Diagnosis (CAD) of DR based on these images can assist ophthalmologists and help save many sight-years of diabetic patients. In a CAD system, preprocessing is a crucial phase that significantly affects performance. Commonly used preprocessing operations include enhancement of poor contrast, correction of the illumination imbalance caused by the spherical shape of the retina, noise reduction, image resizing to support multiple resolutions, color normalization, and extraction of the field of view (FOV). In addition, the presence of blood vessels and the optic disc makes lesion detection more challenging, because these two structures exhibit attributes similar to those of DR lesions. Preprocessing operations can be broadly divided into three categories: 1) fixing native defects, 2) segmenting blood vessels, and 3) localizing and segmenting the optic disc. This paper presents a review of state-of-the-art preprocessing techniques across these three categories, highlighting their significant aspects and limitations. The survey concludes with the most effective preprocessing methods, which have been shown to improve the accuracy and efficiency of CAD systems.
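Two of the preprocessing steps surveyed above, FOV extraction and contrast enhancement of the green channel, can be sketched in a few lines of numpy. This is a hedged illustration of the general idea, not any specific method from the review; the function name and the simple sum-based FOV threshold are assumptions:

```python
import numpy as np

def preprocess_fundus(rgb, fov_threshold=20):
    """Sketch of common fundus preprocessing: green-channel extraction,
    crude FOV masking, and contrast stretching inside the FOV."""
    rgb = np.asarray(rgb, dtype=np.float64)
    green = rgb[..., 1]                      # vessels show best contrast in green
    # Crude FOV mask: the dark border outside the circular retina is near zero.
    fov = rgb.sum(axis=-1) > fov_threshold
    # Contrast-stretch the green channel using only pixels inside the FOV.
    inside = green[fov]
    lo, hi = inside.min(), inside.max()
    stretched = np.clip((green - lo) / max(hi - lo, 1e-8), 0.0, 1.0)
    stretched[~fov] = 0.0
    return stretched, fov
```

Production systems replace the crude threshold with morphological cleanup and the global stretch with local methods such as CLAHE, but the pipeline shape is the same.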
Affiliation(s)
- Muhammad Hussain
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Zulfiqar Habib
- Department of Computer Science, COMSATS University Islamabad, Lahore, Pakistan
138
Yuan Y, Zhang L, Wang L, Huang H. Multi-level Attention Network for Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2021; 26:312-323. [PMID: 34129508 DOI: 10.1109/jbhi.2021.3089201] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Automatic vessel segmentation in fundus images plays an important role in the screening, diagnosis, treatment, and evaluation of various cardiovascular and ophthalmologic diseases. However, due to limited well-annotated data, varying vessel sizes, and intricate vessel structures, retinal vessel segmentation remains a long-standing challenge. In this paper, a novel deep learning model called AACA-MLA-D-UNet is proposed to fully utilize low-level detailed information and the complementary information encoded in different layers to accurately distinguish vessels from the background with low model complexity. The architecture of the proposed model is based on U-Net, and a dropout dense block is proposed to preserve maximum vessel information between convolutional layers and mitigate overfitting. An adaptive atrous channel attention module is embedded in the contracting path to rank the importance of each feature channel automatically. After that, a multi-level attention module is proposed to integrate the multi-level features extracted from the expanding path and use them to refine the features at each individual layer via an attention mechanism. The proposed method has been validated on three publicly available databases, i.e., DRIVE, STARE, and CHASE_DB1. The experimental results demonstrate that the proposed method achieves better or comparable performance on retinal vessel segmentation with lower model complexity. Furthermore, it can handle some challenging cases and has strong generalization ability.
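The paper's adaptive atrous channel attention module is not specified here, but the general mechanism it builds on, a squeeze-and-excitation-style channel gate, can be sketched in plain numpy. Function names and shapes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feats, w1, w2):
    """Minimal squeeze-and-excitation style channel attention (forward pass).

    feats: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) weights.
    Global average pooling squeezes each channel to a scalar, a two-layer
    bottleneck produces a per-channel gate in (0, 1), and the input
    channels are rescaled by those gates.
    """
    squeeze = feats.mean(axis=(1, 2))          # (C,) per-channel statistic
    hidden = np.maximum(w1 @ squeeze, 0.0)     # ReLU bottleneck
    gates = sigmoid(w2 @ hidden)               # (C,) importance weights
    return feats * gates[:, None, None]
```

In a trained network the gates learn to amplify channels that respond to vessel-like structures and suppress the rest.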
139
Chen S, Zou Y, Liu PX. IBA-U-Net: Attentive BConvLSTM U-Net with Redesigned Inception for medical image segmentation. Comput Biol Med 2021; 135:104551. [PMID: 34157471 DOI: 10.1016/j.compbiomed.2021.104551] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Revised: 05/16/2021] [Accepted: 06/02/2021] [Indexed: 10/21/2022]
Abstract
Accurate segmentation of medical images plays an essential role in their analysis and has a wide range of research and application values in fields such as medical research, disease diagnosis, disease analysis, and computer-assisted surgery. In recent years, deep convolutional neural networks have shown strong performance in medical image segmentation. However, because of the inherent challenges of medical images, such as dataset irregularities and the presence of outliers, segmentation approaches have not yet demonstrated sufficiently accurate and reliable results for clinical use. Our method is based on three key ideas: (1) integrating the BConvLSTM block and the Attention block to reduce the semantic gap between the encoder and decoder feature maps and make the two feature maps more homogeneous, (2) factorizing convolutions with large filter sizes via a Redesigned Inception block, which uses multiscale feature fusion to significantly increase the effective receptive field, and (3) devising a deep convolutional neural network with multiscale feature fusion and an Attentive BConvLSTM mechanism, which integrates the Attentive BConvLSTM block and the Redesigned Inception block into an encoder-decoder model called Attentive BConvLSTM U-Net with Redesigned Inception (IBA-U-Net). Our proposed architecture, IBA-U-Net, has been compared with U-Net and state-of-the-art segmentation methods on three publicly available datasets (lung image segmentation, skin lesion segmentation, and retinal blood vessel segmentation), each with unique challenges, and it improved prediction performance with slightly less computational expense and fewer network parameters. With multiscale feature fusion and the Attentive BConvLSTM mechanism, medical image segmentation across different tasks can be completed effectively and accurately with only 45% of the U-Net parameters.
Affiliation(s)
- Siyuan Chen
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi 330031, China
- Yanni Zou
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi 330031, China
- Peter X Liu
- Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
140
Tang X, Peng J, Zhong B, Li J, Yan Z. Introducing frequency representation into convolution neural networks for medical image segmentation via twin-Kernel Fourier convolution. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 205:106110. [PMID: 33910149 DOI: 10.1016/j.cmpb.2021.106110] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2020] [Accepted: 04/07/2021] [Indexed: 05/28/2023]
Abstract
BACKGROUND AND OBJECTIVE For medical image segmentation, deep learning-based methods have achieved state-of-the-art performance. However, the powerful spectral representations used elsewhere in image processing are rarely considered in these models. METHODS In this work, we propose to introduce frequency representation into convolutional neural networks (CNNs) and design a novel model, tKFC-Net, to combine powerful feature representations in both the frequency and spatial domains. Through the Fast Fourier Transform (FFT), frequency representation is employed for pooling, upsampling, and convolution without any adjustment to the network architecture. Furthermore, we replace the original convolution with twin-Kernel Fourier Convolution (t-KFC), a newly designed convolution layer that specifies convolution kernels for particular functions and extracts features from different frequency components. RESULTS We show experimentally that our method has an edge over other models in medical image segmentation. Evaluated on four datasets, skin lesion segmentation (ISIC 2018), retinal blood vessel segmentation (DRIVE), lung segmentation (COVID-19-CT-Seg), and brain tumor segmentation (BraTS 2019), the proposed model achieves outstanding results: an F1-score of 0.878 for ISIC 2018, 0.8185 for DRIVE, 0.9830 for COVID-19-CT-Seg, and 0.8457 for BraTS 2019. CONCLUSION The introduction of spectral representation retains spectral features, which results in more accurate segmentation. The proposed method is orthogonal to other topology-improvement methods and can be conveniently combined with them.
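The core trick behind Fourier-based convolution layers is the convolution theorem: pointwise multiplication of FFTs equals circular convolution in the spatial domain. A minimal numpy sketch of that equivalence (not the t-KFC layer itself, whose kernel design the abstract only outlines):

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2-D convolution computed in the frequency domain.

    The kernel is zero-padded to the image size and shifted so its center
    sits at the origin; multiplying the two FFTs pointwise and inverting
    then reproduces spatial-domain circular convolution.
    """
    h, w = image.shape
    kh, kw = kernel.shape
    kpad = np.zeros((h, w))
    kpad[:kh, :kw] = kernel
    # Shift the kernel so its center lands at index (0, 0) with wrap-around.
    kpad = np.roll(kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kpad)))
```

Because the multiplication happens per frequency component, a learned frequency-domain kernel can weight low and high frequencies independently, which is what motivates layers like t-KFC.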
Affiliation(s)
- Xianlun Tang
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Jiangping Peng
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Bing Zhong
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Jie Li
- College of Mobile Telecommunications, Chongqing University of Posts and Telecom, Chongqing 401520, China
- Zhenfu Yan
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
141

142
Fu W, Breininger K, Schaffert R, Pan Z, Maier A. "Keep it simple, scholar": an experimental analysis of few-parameter segmentation networks for retinal vessels in fundus imaging. Int J Comput Assist Radiol Surg 2021; 16:967-978. [PMID: 33929676 PMCID: PMC8166700 DOI: 10.1007/s11548-021-02340-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 02/25/2021] [Indexed: 11/02/2022]
Abstract
PURPOSE With the recent development of deep learning technologies, various neural networks have been proposed for fundus retinal vessel segmentation. Among them, the U-Net is regarded as one of the most successful architectures. In this work, we start by simplifying the U-Net and explore the performance of few-parameter networks on this task. METHODS We first modify the model with popular functional blocks and additional resolution levels, then explore the limits of compressing the network architecture. Experiments are designed to simplify the network structure, decrease the number of trainable parameters, and reduce the amount of training data. Performance evaluation is carried out on four public databases, namely DRIVE, STARE, HRF, and CHASE_DB1. In addition, the generalization ability of the few-parameter networks is compared against a state-of-the-art segmentation network. RESULTS We demonstrate that the additive variants do not significantly improve segmentation performance. Model performance is not severely harmed unless the networks are drastically degenerated: to one level, or one filter in the input convolutional layer, or training with a single image. We also demonstrate that few-parameter networks have strong generalization ability. CONCLUSION It is counter-intuitive that the U-Net produces reasonably good segmentation predictions until these limits are reached. Our work has two main contributions. On the one hand, the importance of different elements of the U-Net is evaluated, and the minimal U-Net capable of the task is presented. On the other hand, our work demonstrates that retinal vessel segmentation can be tackled by surprisingly simple configurations of the U-Net that reach almost state-of-the-art performance, and that these simple configurations generalize better than state-of-the-art models with high model complexity. These observations contradict the current trend of ever-increasing model complexity and capacity for this task.
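The compression headroom the paper probes can be made concrete with a rough parameter count for a U-Net-style encoder. This is a back-of-the-envelope sketch under stated assumptions (two 3x3 convolutions per level, filters doubling per level, single-channel input), not the paper's exact architecture:

```python
def conv_params(in_ch, out_ch, k=3):
    """Trainable parameters of a k x k convolution: weights plus biases."""
    return in_ch * out_ch * k * k + out_ch

def unet_encoder_params(base_filters, levels):
    """Rough parameter count for a U-Net-style encoder in which each level
    has two 3x3 convolutions and the filter count doubles per level."""
    total, in_ch = 0, 1                      # single-channel (green) input
    for lvl in range(levels):
        out_ch = base_filters * (2 ** lvl)
        total += conv_params(in_ch, out_ch)  # first conv of the level
        total += conv_params(out_ch, out_ch) # second conv of the level
        in_ch = out_ch
    return total
```

Comparing, say, `unet_encoder_params(64, 5)` against `unet_encoder_params(2, 5)` shows how many orders of magnitude of parameters the few-parameter regime removes.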
Affiliation(s)
- Weilin Fu
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- International Max Planck Research School for Physics of Light, Erlangen, Germany
- Katharina Breininger
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Roman Schaffert
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Zhaoya Pan
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Erlangen Graduate School in Advanced Optical Technologies, Erlangen, Germany
143
Li D, Rahardja S. BSEResU-Net: An attention-based before-activation residual U-Net for retinal vessel segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 205:106070. [PMID: 33857703 DOI: 10.1016/j.cmpb.2021.106070] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2020] [Accepted: 03/22/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVES Retinal vessels are a major feature used by physicians to diagnose many diseases, such as cardiovascular disease and glaucoma. The design of automatic vessel segmentation algorithms has therefore drawn great attention in the medical field. Recently, deep learning methods, especially convolutional neural networks (CNNs), have shown extraordinary potential for vessel segmentation. However, most deep learning methods rely on relatively shallow networks trained with a traditional cross-entropy objective, which becomes the main obstacle to further improving performance on this imbalanced task. We therefore propose a new type of residual U-Net called the Before-activation Squeeze-and-Excitation ResU-Net (BSEResU-Net) to tackle these issues. METHODS BSEResU-Net can be viewed as an encoder/decoder framework constructed from Before-activation Squeeze-and-Excitation blocks (BSE blocks). In contrast to existing CNN structures, we utilize a new type of residual block, the BSE block, in which an attention mechanism is combined with a skip connection to boost performance. Moreover, the network consistently gains accuracy with increasing depth as more residual blocks are incorporated, owing to the dropblock mechanism used in the BSE blocks. A joint loss function based on the Dice and cross-entropy losses is also introduced to achieve a more balanced segmentation between vessel and non-vessel pixels. RESULTS The proposed BSEResU-Net is evaluated on the publicly available DRIVE, STARE, and HRF datasets. It achieves F1-scores of 0.8324, 0.8368, and 0.8237 on the DRIVE, STARE, and HRF datasets, respectively. Experimental results show that BSEResU-Net outperforms current state-of-the-art algorithms. CONCLUSIONS The proposed algorithm utilizes a new type of residual block, the BSE residual block, for vessel segmentation. Together with a joint loss function, it shows outstanding performance on both low- and high-resolution fundus images.
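A joint Dice plus cross-entropy loss of the kind described above is a standard remedy for the vessel/background imbalance, since the Dice term is insensitive to the large number of easy background pixels. A minimal numpy sketch (the weighting and epsilon are illustrative, not the paper's exact formulation):

```python
import numpy as np

def joint_dice_ce_loss(probs, targets, dice_weight=0.5, eps=1e-7):
    """Joint loss combining soft Dice and binary cross-entropy.

    probs, targets: flat arrays of predicted probabilities and {0, 1} labels.
    Returns dice_weight * (1 - Dice) + (1 - dice_weight) * BCE.
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    ce = -np.mean(targets * np.log(probs) + (1 - targets) * np.log(1 - probs))
    intersection = np.sum(probs * targets)
    dice = (2.0 * intersection + eps) / (np.sum(probs) + np.sum(targets) + eps)
    return dice_weight * (1.0 - dice) + (1.0 - dice_weight) * ce
```

A perfect prediction drives both terms toward zero, while missing the sparse vessel class is penalized by the Dice term even when the pixel-averaged cross-entropy stays small.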
Affiliation(s)
- Di Li
- Centre of Intelligent Acoustics and Immersive Communications, School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, P.R. China
- Susanto Rahardja
- Centre of Intelligent Acoustics and Immersive Communications, School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, P.R. China
144
Gegundez-Arias ME, Marin-Santos D, Perez-Borrero I, Vasallo-Vazquez MJ. A new deep learning method for blood vessel segmentation in retinal images based on convolutional kernels and modified U-Net model. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 205:106081. [PMID: 33882418 DOI: 10.1016/j.cmpb.2021.106081] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/08/2020] [Accepted: 03/28/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic monitoring of retinal blood vessels is very useful for the clinical assessment of ocular vascular anomalies and retinopathies. This paper presents an efficient and accurate deep learning-based method for vessel segmentation in eye fundus images. METHODS The approach consists of a convolutional neural network based on a simplified version of the U-Net architecture that combines residual blocks and batch normalization in the up- and downscaling phases. The network receives patches extracted from the original image as input and is trained with a novel loss function that considers the distance of each pixel to the vascular tree. For each input patch, it outputs the probability of each pixel belonging to the vascular structure. Applying the network to the patches into which a retinal image can be divided yields a pixel-wise probability map of the complete image. This probability map is then binarized with a threshold to produce the final blood vessel segmentation. RESULTS The method was developed and evaluated on the DRIVE, STARE, and CHASE_DB1 databases, each of which provides a manual segmentation of the vascular tree for its images. Using this set of images as ground truth, the accuracy of the vessel segmentations obtained at a proposed operating point (a single threshold value per database) was quantified, and overall performance was measured by the area under the receiver operating characteristic curve. The method proved robust to the variability of fundus images of diverse origin and achieved the highest accuracy across the entire range of possible operating points, compared with the most accurate methods found in the literature. CONCLUSIONS The analysis of results concludes that the proposed method outperforms other state-of-the-art methods and is among the most promising for integration into a practical tool for vascular structure segmentation.
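The patch-to-image step of a pipeline like this, stitching per-patch probabilities into a full map and then binarizing with a single threshold, can be sketched in numpy. This is a generic illustration of the described workflow, with averaging of overlaps as an assumed (and common) merging rule:

```python
import numpy as np

def assemble_and_binarize(patch_probs, coords, image_shape, threshold=0.5):
    """Stitch per-patch vessel probabilities into a full-image map, then
    binarize with a single threshold.

    patch_probs: list of (p, q) probability arrays; coords: top-left
    (row, col) of each patch. Overlapping predictions are averaged.
    """
    acc = np.zeros(image_shape)
    cnt = np.zeros(image_shape)
    for probs, (r, c) in zip(patch_probs, coords):
        p, q = probs.shape
        acc[r:r + p, c:c + q] += probs
        cnt[r:r + p, c:c + q] += 1
    prob_map = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    return prob_map, (prob_map >= threshold)
```

Sweeping `threshold` over [0, 1] against the ground truth is what traces out the ROC curve used for the overall performance measure.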
Affiliation(s)
- Manuel E Gegundez-Arias
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain
- Diego Marin-Santos
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain
- Isaac Perez-Borrero
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain
- Manuel J Vasallo-Vazquez
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain
145
Lian S, Li L, Lian G, Xiao X, Luo Z, Li S. A Global and Local Enhanced Residual U-Net for Accurate Retinal Vessel Segmentation. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:852-862. [PMID: 31095493 DOI: 10.1109/tcbb.2019.2917188] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Retinal vessel segmentation is a critical procedure for accurate visualization, diagnosis, early treatment, and surgical planning in ocular diseases. Recent deep learning-based approaches have achieved impressive performance in retinal vessel segmentation. However, they usually apply global image preprocessing and take whole retinal images as input during network training, which has two drawbacks for accurate retinal vessel segmentation. First, these methods do not exploit local patch information. Second, they overlook the geometric constraint that the retina occupies only a specific area within the whole image or the extracted patch. As a consequence, these global-based methods struggle with details, such as recognizing small thin vessels and discriminating the optic disk. To address these drawbacks, this study proposes a Global and Local enhanced residual U-nEt (GLUE) for accurate retinal vessel segmentation, which benefits from both globally and locally enhanced information inside the retinal region. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed method, which consistently improves segmentation accuracy over a conventional U-Net and achieves competitive performance compared to the state-of-the-art.
146
Zhou Y, Chen Z, Shen H, Zheng X, Zhao R, Duan X. A refined equilibrium generative adversarial network for retinal vessel segmentation. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.06.143] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
147
Dharmawan DA. Assessing fairness in performance evaluation of publicly available retinal blood vessel segmentation algorithms. J Med Eng Technol 2021; 45:351-360. [PMID: 33843422 DOI: 10.1080/03091902.2021.1906342] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
In the literature, various algorithms have been proposed for automatically extracting blood vessels from retinal images. In general, they are developed and evaluated on several publicly available datasets, such as DRIVE and STARE. For performance evaluation, metrics such as Sensitivity, Specificity, and Accuracy are widely used. However, not all methods in the literature have been fairly evaluated and compared with their counterparts. In particular, for some publicly available algorithms, performance is measured only inside the field of view (FOV) of each retinal image, while the rest use the complete image. Comparing methods across these two groups can therefore lead to misleading conclusions. This study assesses fairness in the performance evaluation of various publicly available retinal blood vessel segmentation algorithms. It yields several meaningful results: (i) a guideline for assessing fairness in performance evaluation of retinal vessel segmentation algorithms, (ii) a more proper performance comparison of retinal vessel segmentation algorithms in the literature, and (iii) a suggestion regarding performance evaluation metrics that do not lead to misleading comparisons and justifications.
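The fairness issue described above is easy to demonstrate: the dark border outside the FOV is trivially classified as non-vessel, so whole-image specificity and accuracy are inflated relative to FOV-restricted evaluation. A small numpy sketch of both evaluation modes (function name is an assumption):

```python
import numpy as np

def segmentation_metrics(pred, truth, fov=None):
    """Sensitivity, specificity, and accuracy, optionally restricted to the
    field of view (FOV). Evaluating over the whole image inflates specificity
    and accuracy, since the dark border is trivially non-vessel.
    """
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    if fov is not None:
        pred, truth = pred[fov], truth[fov]
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```

Comparing a number computed with `fov=None` against one computed with the FOV mask, for the same prediction, quantifies exactly the bias the study warns about.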
148
Lagatuz M, Vyas RJ, Predovic M, Lim S, Jacobs N, Martinho M, Valizadegan H, Kao D, Oza N, Theriot CA, Zanello SB, Taibbi G, Vizzeri G, Dupont M, Grant MB, Lindner DJ, Reinecker HC, Pinhas A, Chui TY, Rosen RB, Moldovan N, Vickerman MB, Radhakrishnan K, Parsons-Wingerter P. Vascular Patterning as Integrative Readout of Complex Molecular and Physiological Signaling by VESsel GENeration Analysis. J Vasc Res 2021; 58:207-230. [PMID: 33839725 PMCID: PMC9903340 DOI: 10.1159/000514211] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Accepted: 12/23/2020] [Indexed: 11/19/2022] Open
Abstract
The molecular signaling cascades that regulate angiogenesis and microvascular remodeling are fundamental to normal development, healthy physiology, and pathologies such as inflammation and cancer. Yet quantifying such complex, fractally branching vascular patterns remains difficult. We review application of NASA's globally available, freely downloadable VESsel GENeration (VESGEN) Analysis software to numerous examples of 2D vascular trees, networks, and tree-network composites. Upon input of a binary vascular image, automated output includes informative vascular maps and quantification of parameters such as tortuosity, fractal dimension, vessel diameter, area, length, number, and branch points. Previous research has demonstrated that cytokines and therapeutics such as vascular endothelial growth factor, basic fibroblast growth factor (fibroblast growth factor-2), transforming growth factor-beta-1, and the steroid triamcinolone acetonide specify unique "fingerprint" or "biomarker" vascular patterns that integrate dominant signaling with physiological response. In vivo experimental examples described here include vascular response to keratinocyte growth factor, a novel vessel tortuosity factor; angiogenic inhibition in humanized tumor xenografts by the anti-angiogenesis drug leronlimab; intestinal vascular inflammation with probiotic protection by Saccharomyces boulardii; and a workflow for programming vascular architecture for 3D bioprinting of regenerative tissues from 2D images. Microvascular remodeling in the human retina is described for astronaut risks in microgravity, vessel tortuosity in diabetic retinopathy, and venous occlusive disease.
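Of the vascular parameters listed above, tortuosity is the simplest to make concrete. One widely used definition (not necessarily VESGEN's internal one) is the arc-chord ratio along a vessel centerline:

```python
import numpy as np

def arc_chord_tortuosity(points):
    """Vessel tortuosity as the arc-chord ratio: path length along the
    vessel centerline divided by the straight-line distance between its
    endpoints. A perfectly straight vessel scores 1.0; higher is more
    tortuous.
    """
    points = np.asarray(points, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord
```

Applied per vessel segment of a skeletonized tree, such a score lets changes like the diabetic-retinopathy tortuosity increase mentioned above be tracked numerically.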
Affiliation(s)
- Mark Lagatuz
- Redline Performance Solutions, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, USA
- Ruchi J. Vyas
- Mori Associates, Space Biology Division, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, USA
- Marina Predovic
- Blue Marble Space Institute of Science, Space Biology Division, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, USA
- Shiyin Lim
- Blue Marble Space Institute of Science, Space Biology Division, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, USA
- Nicole Jacobs
- Blue Marble Space Institute of Science, Space Biology Division, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, USA
- Miguel Martinho
- Universities Space Research Association, Intelligent Systems Division, Exploration Technology Directorate, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, USA
- Hamed Valizadegan
- Universities Space Research Association, Intelligent Systems Division, Exploration Technology Directorate, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, USA
- David Kao
- Advanced Supercomputing & Intelligent Systems Divisions, Exploration Technology Directorate, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, USA
- Nikunj Oza
- Advanced Supercomputing & Intelligent Systems Divisions, Exploration Technology Directorate, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, USA
- Corey A. Theriot
- Department of Preventive Medicine and Community Health, The University of Texas Medical Branch at Galveston, Galveston, TX, USA
- KBRWyle, Johnson Space Center, National Aeronautics and Space Administration, Houston, TX, USA
- Susana B. Zanello
- KBRWyle, Johnson Space Center, National Aeronautics and Space Administration, Houston, TX, USA
- Giovanni Taibbi
- Department of Ophthalmology and Visual Sciences, The University of Texas Medical Branch at Galveston, Galveston, TX, USA
- Gianmarco Vizzeri
- Department of Ophthalmology and Visual Sciences, The University of Texas Medical Branch at Galveston, Galveston, TX, USA
- Mariana Dupont
- Department of Ophthalmology and Visual Sciences, School of Medicine, University of Alabama, Birmingham, AL, USA
- Maria B. Grant
- Department of Ophthalmology and Visual Sciences, School of Medicine, University of Alabama, Birmingham, AL, USA
- Daniel J. Lindner
- Taussig Cancer Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Hans-Christian Reinecker
- Departments of Medicine and Immunology, Division of Digestive and Liver Diseases, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Alexander Pinhas
- Department of Ophthalmology, New York Eye and Ear Infirmary of Mount Sinai, New York, NY, USA
- Toco Y. Chui
- Department of Ophthalmology, New York Eye and Ear Infirmary of Mount Sinai, New York, NY, USA
- Richard B. Rosen
- Department of Ophthalmology, New York Eye and Ear Infirmary of Mount Sinai, New York, NY, USA
- Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nicanor Moldovan
- Department of Ophthalmology, Indiana University School of Medicine and Indiana University-Purdue University Indianapolis, Indianapolis, IN, USA
- Richard L. Roudebush VA Medical Center, Veteran's Administration, Indianapolis, IN, USA
- Mary B. Vickerman
- Data Systems Branch, John Glenn Research Center, National Aeronautics and Space Administration, Cleveland, OH, USA (retired)
- Krishnan Radhakrishnan
- Center for Behavioral Health Statistics and Quality, Substance Abuse and Mental Health Services Administration, U.S. Department of Health and Human Services, Rockville, MD, USA
- College of Medicine, University of Kentucky, Lexington, KY, USA
- Patricia Parsons-Wingerter
- Space Biology Division, Space Technology Mission Directorate, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, CA, USA
- Low Gravity Exploration Technology, Research and Engineering Directorate, John Glenn Research Center, National Aeronautics and Space Administration, Cleveland, OH, USA
149
Wang B, Wang S, Qiu S, Wei W, Wang H, He H. CSU-Net: A Context Spatial U-Net for Accurate Blood Vessel Segmentation in Fundus Images. IEEE J Biomed Health Inform 2021; 25:1128-1138. [PMID: 32750968 DOI: 10.1109/jbhi.2020.3011178] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Blood vessel segmentation in fundus images is a critical procedure in the diagnosis of ophthalmic diseases. Recent deep learning methods achieve high accuracy in vessel segmentation but still struggle to segment the microvasculature and detect vessel boundaries. This is because common Convolutional Neural Networks (CNNs) cannot preserve rich spatial information and a large receptive field simultaneously. Moreover, CNN models for vessel segmentation are usually trained with a pixel-wise cross-entropy loss that weights all pixels equally and thus tends to miss fine vessel structures. In this paper, we propose a novel Context Spatial U-Net (CSU-Net) for blood vessel segmentation. In contrast to other U-Net-based models, we design a two-channel encoder: a context channel with multi-scale convolutions to capture a larger receptive field, and a spatial channel with large kernels to retain spatial information. To combine and strengthen the features extracted from the two paths, we introduce a feature fusion module (FFM) and an attention skip module (ASM). Furthermore, we propose a structure loss that adds a spatial weight to the cross-entropy loss and guides the network to focus more on thin vessels and boundaries. We evaluated this model on three public datasets: DRIVE, CHASE_DB1, and STARE. The results show that CSU-Net achieves higher segmentation accuracy than current state-of-the-art methods.
150
Hajiyavand AM, Graham MJ, Dearn KD. Diameter Estimation of Fallopian Tubes Using Visual Sensing. BIOSENSORS-BASEL 2021; 11:bios11040100. [PMID: 33915708 PMCID: PMC8066605 DOI: 10.3390/bios11040100] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 03/10/2021] [Accepted: 03/17/2021] [Indexed: 11/16/2022]
Abstract
Calculating an accurate diameter for arbitrary vessel-like shapes from 2D images is of great use in various medical and biomedical applications. Understanding changes in the morphological dimensions of biological vessels provides a better understanding of their properties and functionality. Estimating the diameter of such tubes is very challenging because the dimensions change continuously along their length. This paper describes a novel algorithm that estimates the diameter of biological tubes with a continuously changing cross-section. The algorithm, evaluated using various controlled images, provides automated diameter estimation with better accuracy than manual measurements and yields precise information about the diametrical changes along the tube. It is demonstrated that the automated algorithm provides more accurate results in a much shorter time. This methodology has the potential to speed up diagnostic procedures in a wide range of medical fields.
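The paper's algorithm is not reproduced here, but the basic idea of measuring a continuously changing cross-section can be sketched for the simple case of a roughly vertical tube in a binary mask, reading one diameter per row (all names and the row-wise simplification are illustrative assumptions):

```python
import numpy as np

def diameters_along_tube(mask):
    """Local diameter of a roughly vertical tube at every cross-section,
    estimated by counting foreground pixels per row; empty rows are skipped.
    Values are in pixel units."""
    mask = np.asarray(mask, dtype=bool)
    widths = mask.sum(axis=1)
    return widths[widths > 0]

def diameter_profile(mask):
    """Summary statistics of the diameter along the tube's length."""
    d = diameters_along_tube(mask)
    return {"mean": float(d.mean()), "min": int(d.min()), "max": int(d.max())}
```

For arbitrarily curved tubes, the per-row count is replaced by measurements perpendicular to the extracted centerline, which is where most of the algorithmic difficulty the paper addresses lies.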