151
Mookiah MRK, Hogg S, MacGillivray T, Trucco E. On the quantitative effects of compression of retinal fundus images on morphometric vascular measurements in VAMPIRE. Computer Methods and Programs in Biomedicine 2021; 202:105969. [PMID: 33631639] [DOI: 10.1016/j.cmpb.2021.105969] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0]
Abstract
BACKGROUND AND OBJECTIVES: This paper reports a quantitative analysis of the effects of Joint Photographic Experts Group (JPEG) compression of retinal fundus camera images on automatic vessel segmentation and on the morphometric vascular measurements derived from it, including vessel width, tortuosity and fractal dimension.
METHODS: Measurements are computed with the Vascular Assessment and Measurement Platform for Images of the Retina (VAMPIRE), a specialized software application adopted in many international studies on retinal biomarkers. For reproducibility, we use three public archives of fundus images (Digital Retinal Images for Vessel Extraction (DRIVE), Automated Retinal Image Analyzer (ARIA), and High-Resolution Fundus (HRF)) and generate compressed versions of the original images at a range of representative quality levels.
RESULTS: We compare the resulting vessel segmentations with ground-truth maps, and the morphological measurements of the vascular network with those obtained from the original (uncompressed) images. Segmentation quality is assessed with sensitivity, specificity, accuracy, area under the curve and the Dice coefficient; agreement between VAMPIRE measurements from compressed and uncompressed images is assessed with correlation, intra-class correlation and Bland-Altman analysis.
CONCLUSIONS: The results suggest that the VAMPIRE width-related measurements (central retinal artery equivalent (CRAE), central retinal vein equivalent (CRVE), arteriolar-venular width ratio (AVR)), the fractal dimension (FD) and arteriolar tortuosity show excellent agreement with those from the original images and remain substantially stable even under strong loss of quality (20% of the original), supporting the suitability of VAMPIRE for association studies that use compressed images.
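As a rough illustration of the kind of experiment described above (not the authors' actual pipeline), the sketch below re-encodes a fundus image at several JPEG quality levels and scores the segmentation of each degraded copy against the segmentation of the uncompressed image with a Dice coefficient. It assumes Pillow and NumPy are available; `segment_vessels` is a hypothetical placeholder for any vessel segmentation routine.

```python
import io
import numpy as np
from PIL import Image

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def jpeg_recompress(img: Image.Image, quality: int) -> Image.Image:
    """Re-encode an image in memory at the given JPEG quality level."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def compression_study(img: Image.Image, segment_vessels, qualities=(90, 70, 50, 30, 20)):
    """Compare vessel maps from compressed copies against the uncompressed reference."""
    reference = segment_vessels(np.asarray(img))            # binary mask from the original
    scores = {}
    for q in qualities:
        degraded = np.asarray(jpeg_recompress(img, q))
        scores[q] = dice(reference, segment_vessels(degraded))
    return scores                                            # e.g. {90: 0.99, ..., 20: 0.95}
```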
152
Ramos-Soto O, Rodríguez-Esparza E, Balderas-Mata SE, Oliva D, Hassanien AE, Meleppat RK, Zawadzki RJ. An efficient retinal blood vessel segmentation in eye fundus images by using optimized top-hat and homomorphic filtering. Computer Methods and Programs in Biomedicine 2021; 201:105949. [PMID: 33567382] [DOI: 10.1016/j.cmpb.2021.105949] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8]
Abstract
BACKGROUND AND OBJECTIVE: Automatic segmentation of retinal blood vessels makes a major contribution to the computer-aided diagnosis (CADx) of various ophthalmic and cardiovascular diseases. A procedure that segments both thin and thick retinal vessels is essential for the medical analysis and diagnosis of related diseases. In this article, a novel methodology for robust vessel segmentation is proposed that addresses the challenges reported in the literature.
METHODS: The proposed methodology consists of three stages: pre-processing, main processing, and post-processing. The first stage applies filters for image smoothing. The main processing stage is divided into two configurations: the first segments thick vessels through the new optimized top-hat, homomorphic filtering, and a median filter; the second segments thin vessels using the proposed optimized top-hat, homomorphic filtering, a matched filter, and segmentation with the MCET-HHO multilevel algorithm. Finally, morphological image operations are carried out in the post-processing stage.
RESULTS: The proposed approach was assessed on two publicly available databases (DRIVE and STARE) using three performance metrics: specificity, sensitivity, and accuracy. Averages of 0.9860, 0.7578 and 0.9667, respectively, were achieved for the DRIVE dataset, and 0.9836, 0.7474 and 0.9580 for the STARE dataset.
CONCLUSIONS: The numerical results obtained by the proposed technique are competitive with up-to-date techniques. The proposed approach outperforms all leading unsupervised methods discussed in terms of specificity and accuracy, and it outperforms most state-of-the-art supervised methods without their associated computational cost. Detailed visual analysis showed that the proposed approach segments thin vessels more precisely than other procedures.
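The "optimized" top-hat in the paper has metaheuristically tuned parameters; the sketch below only illustrates the two underlying enhancement operations (a white top-hat and a homomorphic filter) on the inverted green channel, using OpenCV and NumPy with arbitrarily chosen parameter values, as an assumption rather than the authors' implementation.

```python
import cv2
import numpy as np

def tophat_enhance(gray: np.ndarray, kernel_size: int = 15) -> np.ndarray:
    """White top-hat on the inverted green channel highlights dark, thin vessels."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)

def homomorphic_filter(gray: np.ndarray, sigma: float = 30.0,
                       gamma_low: float = 0.5, gamma_high: float = 1.5) -> np.ndarray:
    """Simple homomorphic filter: log -> FFT -> high-emphasis Gaussian -> exp."""
    img = np.log1p(gray.astype(np.float64))
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    d2 = u ** 2 + v ** 2
    cutoff = sigma / max(rows, cols)
    h = (gamma_high - gamma_low) * (1.0 - np.exp(-d2 / (2.0 * cutoff ** 2))) + gamma_low
    filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * h))
    out = np.expm1(filtered)
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# usage: green channel of a fundus image, inverted so vessels appear bright
# bgr = cv2.imread("fundus.png"); green = 255 - bgr[:, :, 1]
# enhanced = homomorphic_filter(tophat_enhance(green))
```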
Affiliation(s)
- Oscar Ramos-Soto
- División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico.
- Erick Rodríguez-Esparza
- División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico; DeustoTech, Faculty of Engineering, University of Deusto, Av. Universidades, 24, 48007 Bilbao, Spain.
- Sandra E Balderas-Mata
- División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico.
- Diego Oliva
- División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico; IN3 - Computer Science Dept., Universitat Oberta de Catalunya, Castelldefels, Spain.
- Ratheesh K Meleppat
- UC Davis Eyepod Imaging Laboratory, Dept. of Cell Biology and Human Anatomy, University of California Davis, Davis, CA 95616, USA; Dept. of Ophthalmology & Vision Science, University of California Davis, Sacramento, CA, USA.
- Robert J Zawadzki
- UC Davis Eyepod Imaging Laboratory, Dept. of Cell Biology and Human Anatomy, University of California Davis, Davis, CA 95616, USA; Dept. of Ophthalmology & Vision Science, University of California Davis, Sacramento, CA, USA.
153
Lightweight pyramid network with spatial attention mechanism for accurate retinal vessel segmentation. Int J Comput Assist Radiol Surg 2021; 16:673-682. [PMID: 33751370] [DOI: 10.1007/s11548-021-02344-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3]
Abstract
PURPOSE: The morphological characteristics of retinal vessels are vital for the early diagnosis of pathological conditions such as diabetes and hypertension. However, low contrast and complex morphology pose a challenge to automatic retinal vessel segmentation. To extract precise semantic features, more convolution and pooling operations are adopted, but some structural information is potentially ignored.
METHODS: In this paper, we propose a novel lightweight pyramid network (LPN) that fuses multi-scale features with a spatial attention mechanism to preserve the structural information of retinal vessels. A pyramid hierarchy model is constructed to generate multi-scale representations, and its semantic features are strengthened by the introduction of the attention mechanism. The combination of multi-scale features contributes to accurate prediction.
RESULTS: The LPN is evaluated on the benchmark datasets DRIVE, STARE and CHASE, and the results indicate state-of-the-art performance (e.g., ACC of 97.09%/97.49%/97.48% and AUC of 98.79%/99.01%/98.91% on the DRIVE, STARE and CHASE datasets, respectively). The robustness and generalization ability of the LPN are further demonstrated in cross-training experiments.
CONCLUSION: The visualization experiment reveals the semantic gap between the various scales of the pyramid and verifies the effectiveness of the attention mechanism, providing a potential basis for the pyramid hierarchy model in multi-scale vessel segmentation tasks. Furthermore, the number of model parameters is greatly reduced.
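The exact attention design of the LPN is not specified in the abstract; as a rough stand-in, the PyTorch sketch below shows a generic spatial attention gate (in the spirit of the spatial branch of CBAM) that re-weights each location of a feature map so that vessel regions are emphasized.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Generic spatial attention gate: a single-channel map re-weights every location."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)             # (B, 1, H, W)
        max_pool = x.amax(dim=1, keepdim=True)             # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                                     # emphasize likely vessel locations

# usage on a feature map from any encoder stage:
# feats = torch.randn(2, 64, 128, 128); out = SpatialAttention()(feats)
```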
154
Fast and efficient retinal blood vessel segmentation method based on deep learning network. Comput Med Imaging Graph 2021; 90:101902. [PMID: 33892389] [DOI: 10.1016/j.compmedimag.2021.101902] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0]
Abstract
Segmentation of the retinal vascular tree is a major step in detecting ocular pathologies, and the clinical context demands high segmentation performance at reduced processing time. To achieve accurate segmentation, several automated methods have been based on deep learning (DL) networks; however, their convolutional layers lead to high computational complexity and long execution times. This work presents a new DL-based method for retinal vessel tree segmentation. Our main contribution is a new U-shaped DL architecture that uses lightweight convolution blocks to preserve high segmentation performance while reducing computational complexity. As a second contribution, preprocessing and data augmentation steps are proposed that respect the characteristics of retinal images and blood vessels. Tested on the DRIVE and STARE databases, the proposed method achieves a good trade-off between vessel detection rate and detection time, with average accuracies of 0.978 and 0.98 in 0.59 s and 0.48 s per fundus image on an NVIDIA GTX 980 GPU, for the DRIVE and STARE databases respectively.
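The abstract does not detail the "lightweight convolution blocks"; one common way to cut the parameter count of a U-shaped network, shown here purely as an assumption rather than the authors' design, is a depthwise-separable convolution. A minimal PyTorch sketch follows.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise + pointwise convolution: far fewer parameters than a full convolution."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# parameter count for a 3x3 conv, 64 -> 64 channels:
# standard: 3*3*64*64 = 36,864 weights; separable: 3*3*64 + 64*64 = 4,672 weights
# block = DepthwiseSeparableConv(64, 64); y = block(torch.randn(1, 64, 96, 96))
```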
155
Wu H, Wang W, Zhong J, Lei B, Wen Z, Qin J. SCS-Net: A Scale and Context Sensitive Network for Retinal Vessel Segmentation. Med Image Anal 2021; 70:102025. [PMID: 33721692] [DOI: 10.1016/j.media.2021.102025] [Citation(s) in RCA: 77] [Impact Index Per Article: 19.3]
Abstract
Accurately segmenting retinal vessel from retinal images is essential for the detection and diagnosis of many eye diseases. However, it remains a challenging task due to (1) the large variations of scale in the retinal vessels and (2) the complicated anatomical context of retinal vessels, including complex vasculature and morphology, the low contrast between some vessels and the background, and the existence of exudates and hemorrhage. It is difficult for a model to capture representative and distinguishing features for retinal vessels under such large scale and semantics variations. Limited training data also make this task even harder. In order to comprehensively tackle these challenges, we propose a novel scale and context sensitive network (a.k.a., SCS-Net) for retinal vessel segmentation. We first propose a scale-aware feature aggregation (SFA) module, aiming at dynamically adjusting the receptive fields to effectively extract multi-scale features. Then, an adaptive feature fusion (AFF) module is designed to guide efficient fusion between adjacent hierarchical features to capture more semantic information. Finally, a multi-level semantic supervision (MSS) module is employed to learn more distinctive semantic representation for refining the vessel maps. We conduct extensive experiments on the six mainstream retinal image databases (DRIVE, CHASEDB1, STARE, IOSTAR, HRF, and LES-AV). The experimental results demonstrate the effectiveness of the proposed SCS-Net, which is capable of achieving better segmentation performance than other state-of-the-art approaches, especially for the challenging cases with large scale variations and complex context environments.
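As an illustrative stand-in for a scale-aware feature aggregation module (not the authors' exact SFA design), the PyTorch sketch below fuses parallel dilated convolutions so that a single block sees several receptive-field sizes at once, which is one way to cope with the large scale variations of retinal vessels.

```python
import torch
import torch.nn as nn

class MultiDilationAggregation(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, fused by a 1x1 convolution."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return torch.relu(self.fuse(multi_scale)) + x      # residual path keeps fine vessels

# feats = torch.randn(1, 32, 96, 96); out = MultiDilationAggregation(32)(feats)
```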
Affiliation(s)
- Huisi Wu
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China, 518060
- Wei Wang
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China, 518060
- Jiafu Zhong
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China, 518060
- Baiying Lei
- School of Biomedical Engineering, Health Science Centers, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Marshall Laboratory of Biomedical Engineering, AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen, China, 518060.
- Zhenkun Wen
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China, 518060
- Jing Qin
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
156
Jayanthi J, Jayasankar T, Krishnaraj N, Prakash NB, Sagai Francis Britto A, Vinoth Kumar K. An Intelligent Particle Swarm Optimization with Convolutional Neural Network for Diabetic Retinopathy Classification Model. Journal of Medical Imaging and Health Informatics 2021. [DOI: 10.1166/jmihi.2021.3362] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5]
Abstract
Diabetic retinopathy (DR) is a major cause of vision loss and a growing concern among people with diabetes, and it places a considerable financial burden on society, especially on the medical sector. With proper treatment, roughly 90% of DR patients can be saved from vision loss, so a DR classification model that grades the stage and severity of the disease is needed to guide treatment. This article develops a novel Particle Swarm Optimization (PSO)-based Convolutional Neural Network model, called PSO-CNN, to detect and classify DR from color fundus images. The proposed PSO-CNN model comprises three stages: preprocessing, feature extraction and classification. First, preprocessing is carried out to remove the noise present in the input image. Then, the PSO-CNN feature extraction process selects a useful subset of features. Finally, the filtered features are given as input to a decision tree (DT) model for classifying the set of DR images. Simulation of the PSO-CNN model on a benchmark DR database showed that it outperformed all compared methods by a significant margin.
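The abstract leaves open exactly what the PSO component tunes; the sketch below is a plain, generic particle swarm optimizer over a box-constrained search space, which could, for example, be pointed at a CNN's validation error as a function of its hyperparameters. The `validate_cnn` call in the usage comment is hypothetical.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=20, iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization of a scalar objective over box constraints."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# hypothetical usage: tune (learning_rate, dropout) by minimizing validation error
# best, err = pso_minimize(lambda p: validate_cnn(lr=p[0], dropout=p[1]),
#                          bounds=[(1e-4, 1e-1), (0.0, 0.5)])
```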
Affiliation(s)
- J. Jayanthi
- Department of Computer Science and Engineering, Sona College of Technology, Salem 636005, Tamilnadu, India
- T. Jayasankar
- Department of Electronics and Communication Engineering, University College of Engineering, BIT Campus, Anna University, Tiruchirappalli 620024, Tamilnadu, India
- N. Krishnaraj
- Department of Computer Science and Engineering, Sasi Institute of Technology & Engineering, Tadeaplligudem 534101, Andhrapradesh, India
- N. B. Prakash
- Department of Electrical and Electronics Engineering, National Engineering College, K. R. Nagar, Kovilpatti 628503, India
- A. Sagai Francis Britto
- Department of Mechanical Engineering, Rohini College of Engineering and Technology, Palkulam 629401, Tamilnadu, India
- K. Vinoth Kumar
- Department of Electronics and Communication Engineering, SSM Institute of Engineering and Technology, Dindigul 624622, Tamil Nadu, India
157
Ma Y, Hao H, Xie J, Fu H, Zhang J, Yang J, Wang Z, Liu J, Zheng Y, Zhao Y. ROSE: A Retinal OCT-Angiography Vessel Segmentation Dataset and New Model. IEEE Transactions on Medical Imaging 2021; 40:928-939. [PMID: 33284751] [DOI: 10.1109/tmi.2020.3042802] [Citation(s) in RCA: 104] [Impact Index Per Article: 26.0]
Abstract
Optical Coherence Tomography Angiography (OCTA) is a non-invasive imaging technique that has been increasingly used to image the retinal vasculature at capillary level resolution. However, automated segmentation of retinal vessels in OCTA has been under-studied due to various challenges such as low capillary visibility and high vessel complexity, despite its significance in understanding many vision-related diseases. In addition, there is no publicly available OCTA dataset with manually graded vessels for training and validation of segmentation algorithms. To address these issues, for the first time in the field of retinal image analysis we construct a dedicated Retinal OCTA SEgmentation dataset (ROSE), which consists of 229 OCTA images with vessel annotations at either centerline-level or pixel level. This dataset with the source code has been released for public access to assist researchers in the community in undertaking research in related topics. Secondly, we introduce a novel split-based coarse-to-fine vessel segmentation network for OCTA images (OCTA-Net), with the ability to detect thick and thin vessels separately. In the OCTA-Net, a split-based coarse segmentation module is first utilized to produce a preliminary confidence map of vessels, and a split-based refined segmentation module is then used to optimize the shape/contour of the retinal microvasculature. We perform a thorough evaluation of the state-of-the-art vessel segmentation models and our OCTA-Net on the constructed ROSE dataset. The experimental results demonstrate that our OCTA-Net yields better vessel segmentation performance in OCTA than both traditional and other deep learning methods. In addition, we provide a fractal dimension analysis on the segmented microvasculature, and the statistical analysis demonstrates significant differences between the healthy control and Alzheimer's Disease group. This consolidates that the analysis of retinal microvasculature may offer a new scheme to study various neurodegenerative diseases.
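The fractal dimension analysis mentioned above is commonly computed with a box-counting estimate on the binary vessel map; the NumPy sketch below shows that standard procedure (not necessarily the exact implementation used for the ROSE study).

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray, box_sizes=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the fractal dimension of a binary vessel map by box counting."""
    mask = mask.astype(bool)
    counts = []
    for s in box_sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())        # boxes containing vessel pixels
    # slope of log(count) versus log(1/box size) is the dimension estimate
    coeffs = np.polyfit(np.log(1.0 / np.asarray(box_sizes, dtype=float)), np.log(counts), 1)
    return float(coeffs[0])

# vessel_map = segmentation_probabilities > 0.5
# fd = box_counting_dimension(vessel_map)
```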
158
Li X, Jiang Y, Li M, Yin S. Lightweight Attention Convolutional Neural Network for Retinal Vessel Image Segmentation. IEEE Transactions on Industrial Informatics 2021; 17:1958-1967. [DOI: 10.1109/tii.2020.2993842] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8]
159
Jia D, Zhuang X. Learning-based algorithms for vessel tracking: A review. Comput Med Imaging Graph 2021; 89:101840. [PMID: 33548822] [DOI: 10.1016/j.compmedimag.2020.101840] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0]
Abstract
Developing efficient vessel-tracking algorithms is crucial for imaging-based diagnosis and treatment of vascular diseases. Vessel tracking aims to solve recognition problems such as key (seed) point detection, centerline extraction, and vascular segmentation. Extensive image-processing techniques have been developed to overcome the problems of vessel tracking that are mainly attributed to the complex morphologies of vessels and image characteristics of angiography. This paper presents a literature review on vessel-tracking methods, focusing on machine-learning-based methods. First, the conventional machine-learning-based algorithms are reviewed, and then, a general survey of deep-learning-based frameworks is provided. On the basis of the reviewed methods, the evaluation issues are introduced. The paper is concluded with discussions about the remaining exigencies and future research.
Affiliation(s)
- Dengqiang Jia
- School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiahai Zhuang
- School of Data Science, Fudan University, Shanghai, China.
160
Hemelings R, Elen B, Blaschko MB, Jacob J, Stalmans I, De Boever P. Pathological myopia classification with simultaneous lesion segmentation using deep learning. Computer Methods and Programs in Biomedicine 2021; 199:105920. [PMID: 33412285] [DOI: 10.1016/j.cmpb.2020.105920] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3]
Abstract
BACKGROUND AND OBJECTIVES: Pathological myopia (PM) is the seventh leading cause of blindness, with a reported global prevalence up to 3%. Early and automated PM detection from fundus images could aid to prevent blindness in a world population that is characterized by a rising myopia prevalence. We aim to assess the use of convolutional neural networks (CNNs) for the detection of PM and semantic segmentation of myopia-induced lesions from fundus images on a recently introduced reference data set.
METHODS: This investigation reports on the results of CNNs developed for the recently introduced Pathological Myopia (PALM) dataset, which consists of 1200 images. Our CNN bundles lesion segmentation and PM classification, as the two tasks are heavily intertwined. Domain knowledge is also inserted through the introduction of a new Optic Nerve Head (ONH)-based prediction enhancement for the segmentation of atrophy and fovea localization. Finally, we are the first to approach fovea localization using segmentation instead of detection or regression models. Evaluation metrics include area under the receiver operating characteristic curve (AUC) for PM detection, Euclidean distance for fovea localization, and Dice and F1 metrics for the semantic segmentation tasks (optic disc, retinal atrophy and retinal detachment).
RESULTS: Models trained with 400 available training images achieved an AUC of 0.9867 for PM detection, and a Euclidean distance of 58.27 pixels on the fovea localization task, evaluated on a test set of 400 images. Dice and F1 metrics for semantic segmentation of lesions scored 0.9303 and 0.9869 on optic disc, 0.8001 and 0.9135 on retinal atrophy, and 0.8073 and 0.7059 on retinal detachment, respectively.
CONCLUSIONS: We report a successful approach for a simultaneous classification of pathological myopia and segmentation of associated lesions. Our work was acknowledged with an award in the context of the "Pathological Myopia detection from retinal images" challenge held during the IEEE International Symposium on Biomedical Imaging (April 2019). Considering that (pathological) myopia cases are often identified as false positives and negatives in glaucoma deep learning models, we envisage that the current work could aid in future research to discriminate between glaucomatous and highly-myopic eyes, complemented by the localization and segmentation of landmarks such as fovea, optic disc and atrophy.
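Approaching fovea localization as segmentation implies reducing the predicted mask to a point before scoring it; a simple way to do that, sketched below as an assumption rather than the authors' exact protocol, is to take the mask centroid and report its Euclidean distance in pixels to the reference fovea location.

```python
import numpy as np

def mask_centroid(mask: np.ndarray):
    """Centroid (row, col) of a non-empty binary segmentation mask."""
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

def fovea_localization_error(pred_mask: np.ndarray, true_rc) -> float:
    """Euclidean distance in pixels between the predicted centroid and the reference fovea."""
    pr, pc = mask_centroid(pred_mask)
    tr, tc = true_rc
    return float(np.hypot(pr - tr, pc - tc))

# err = fovea_localization_error(predicted_fovea_mask, (612.0, 980.0))   # coordinates assumed
```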
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, KU Leuven, Herestraat 49, 3000 Leuven, Belgium; VITO NV, Boeretang 200, 2400 Mol, Belgium.
- Bart Elen
- VITO NV, Boeretang 200, 2400 Mol, Belgium
- Julie Jacob
- Ophthalmology Department, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium
- Ingeborg Stalmans
- Research Group Ophthalmology, KU Leuven, Herestraat 49, 3000 Leuven, Belgium; Ophthalmology Department, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium
- Patrick De Boever
- Hasselt University, Agoralaan building D, 3590 Diepenbeek, Belgium; VITO NV, Boeretang 200, 2400 Mol, Belgium
161
Wang Y, Yan G, Zhu H, Buch S, Wang Y, Haacke EM, Hua J, Zhong Z. VC-Net: Deep Volume-Composition Networks for Segmentation and Visualization of Highly Sparse and Noisy Image Data. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1301-1311. [PMID: 33048701] [DOI: 10.1109/tvcg.2020.3030374] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0]
Abstract
The fundamental motivation of the proposed work is to present a new visualization-guided computing paradigm to combine direct 3D volume processing and volume rendered clues for effective 3D exploration. For example, extracting and visualizing microstructures in-vivo have been a long-standing challenging problem. However, due to the high sparseness and noisiness in cerebrovasculature data as well as highly complex geometry and topology variations of micro vessels, it is still extremely challenging to extract the complete 3D vessel structure and visualize it in 3D with high fidelity. In this paper, we present an end-to-end deep learning method, VC-Net, for robust extraction of 3D microvascular structure through embedding the image composition, generated by maximum intensity projection (MIP), into the 3D volumetric image learning process to enhance the overall performance. The core novelty is to automatically leverage the volume visualization technique (e.g., MIP - a volume rendering scheme for 3D volume images) to enhance the 3D data exploration at the deep learning level. The MIP embedding features can enhance the local vessel signal (through canceling out the noise) and adapt to the geometric variability and scalability of vessels, which is of great importance in microvascular tracking. A multi-stream convolutional neural network (CNN) framework is proposed to effectively learn the 3D volume and 2D MIP feature vectors, respectively, and then explore their inter-dependencies in a joint volume-composition embedding space by unprojecting the 2D feature vectors into the 3D volume embedding space. It is noted that the proposed framework can better capture the small/micro vessels and improve the vessel connectivity. To our knowledge, this is the first time that a deep learning framework is proposed to construct a joint convolutional embedding space, where the computed vessel probabilities from volume rendering based 2D projection and 3D volume can be explored and integrated synergistically. Experimental results are evaluated and compared with the traditional 3D vessel segmentation methods and the state-of-the-art in deep learning, by using extensive public and real patient (micro- )cerebrovascular image datasets. The application of this accurate segmentation and visualization of sparse and complicated 3D microvascular structure facilitated by our method demonstrates the potential in a powerful MR arteriogram and venogram diagnosis of vascular disease.
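A maximum intensity projection is the volume-rendering step the method embeds into learning; the NumPy sketch below computes the MIP together with the per-pixel depth index, which is the piece of information that allows 2D features computed on the projection to be placed back into the 3D volume. This is a simplified illustration, not the VC-Net code.

```python
import numpy as np

def mip_with_depth(volume: np.ndarray, axis: int = 0):
    """Maximum intensity projection plus the depth index of each projected voxel.

    The index map is what lets 2D features computed on the MIP be "unprojected"
    back into the 3D volume at the voxel they came from.
    """
    mip = volume.max(axis=axis)
    depth_index = volume.argmax(axis=axis)
    return mip, depth_index

# toy example: a 3D angiographic volume of shape (depth, height, width)
# volume = np.random.rand(64, 256, 256).astype(np.float32)
# mip, idx = mip_with_depth(volume, axis=0)        # both of shape (256, 256)
```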
162
Efficient BFCN for Automatic Retinal Vessel Segmentation. J Ophthalmol 2021; 2020:6439407. [PMID: 33489334] [PMCID: PMC7803293] [DOI: 10.1155/2020/6439407] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
Abstract
Retinal vessel segmentation is of high value for research on the diagnosis of diabetic retinopathy, hypertension, and cardiovascular and cerebrovascular diseases. Most methods based on deep convolutional neural networks (DCNN) lack large receptive fields or rich spatial information and cannot capture the global context of larger areas, which makes it difficult to identify lesion areas and lowers segmentation efficiency. This paper presents a butterfly fully convolutional neural network (BFCN). First, in view of the low contrast between blood vessels and the background in retinal images, automatic color enhancement (ACE) is used to increase this contrast. Second, a multiscale information extraction (MSIE) module in the backbone network captures global contextual information over larger areas to reduce the loss of feature information, while a transfer layer (T_Layer) alleviates the vanishing-gradient problem, repairs the information lost during downsampling, and provides rich spatial information. Finally, for the first time, the segmentation image is postprocessed using Laplacian sharpening to improve the accuracy of vessel segmentation. The method was verified on the DRIVE, STARE, and CHASE datasets, achieving accuracies of 0.9627, 0.9735, and 0.9688, respectively.
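The Laplacian sharpening post-processing step can be illustrated with a few lines of OpenCV/NumPy; the sketch below subtracts a scaled Laplacian from the soft vessel map before thresholding. The weight is an arbitrarily chosen assumption, not the paper's exact setting.

```python
import cv2
import numpy as np

def laplacian_sharpen(prob_map: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Sharpen a soft vessel probability map by subtracting a scaled Laplacian."""
    prob = prob_map.astype(np.float32)
    lap = cv2.Laplacian(prob, cv2.CV_32F, ksize=3)
    return np.clip(prob - alpha * lap, 0.0, 1.0)

# sharpened = laplacian_sharpen(network_output)    # then threshold, e.g. sharpened > 0.5
```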
163
Cai Y, Yu JG, Chen Y, Liu C, Xiao L, M Grais E, Zhao F, Lan L, Zeng S, Zeng J, Wu M, Su Y, Li Y, Zheng Y. Investigating the use of a two-stage attention-aware convolutional neural network for the automated diagnosis of otitis media from tympanic membrane images: a prediction model development and validation study. BMJ Open 2021; 11:e041139. [PMID: 33478963] [PMCID: PMC7825258] [DOI: 10.1136/bmjopen-2020-041139] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3]
Abstract
OBJECTIVES: This study investigated the usefulness and performance of a two-stage attention-aware convolutional neural network (CNN) for the automated diagnosis of otitis media from tympanic membrane (TM) images.
DESIGN: A classification model development and validation study in ears with otitis media based on otoscopic TM images. Two commonly used CNNs were trained and evaluated on the dataset. On the basis of a Class Activation Map (CAM), a two-stage classification pipeline was developed to improve accuracy and reliability, and simulate an expert reading the TM images.
SETTING AND PARTICIPANTS: This is a retrospective study using otoendoscopic images obtained from the Department of Otorhinolaryngology in China. A dataset was generated with 6066 otoscopic images from 2022 participants comprising four kinds of TM images, that is, normal eardrum, otitis media with effusion (OME) and two stages of chronic suppurative otitis media (CSOM).
RESULTS: The proposed method achieved an overall accuracy of 93.4% using ResNet50 as the backbone network in a threefold cross-validation. The F1 Score of classification for normal images was 94.3%, and 96.8% for OME. There was a small difference between the active and inactive status of CSOM, achieving 91.7% and 82.4% F1 scores, respectively. The results demonstrate a classification performance equivalent to the diagnosis level of an associate professor in otolaryngology.
CONCLUSIONS: CNNs provide a useful and effective tool for the automated classification of TM images. In addition, having a weakly supervised method such as CAM can help the network focus on discriminative parts of the image and improve performance with a relatively small database. This two-stage method is beneficial to improve the accuracy of diagnosis of otitis media for junior otolaryngologists and physicians in other disciplines.
Affiliation(s)
- Yuexin Cai
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Jin-Gang Yu
- Department of Automation Science and Engineering, South China University of Technology School, Guangzhou, Guangdong, China
- Yuebo Chen
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Chu Liu
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Lichao Xiao
- Department of Automation Science and Engineering, South China University of Technology School, Guangzhou, Guangdong, China
- Emad M Grais
- Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
- Fei Zhao
- Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
- Liping Lan
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Shengxin Zeng
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Junbo Zeng
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Minjian Wu
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Yuejia Su
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Yuanqing Li
- Department of Automation Science and Engineering, South China University of Technology School, Guangzhou, Guangdong, China
- Yiqing Zheng
- Department of Otolaryngology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
- Institute of Hearing and Speech-Language Science, Sun Yat-Sen University, Guangzhou, Guangdong Province, China
164
Li T, Bo W, Hu C, Kang H, Liu H, Wang K, Fu H. Applications of deep learning in fundus images: A review. Med Image Anal 2021; 69:101971. [PMID: 33524824] [DOI: 10.1016/j.media.2021.101971] [Citation(s) in RCA: 99] [Impact Index Per Article: 24.8]
Abstract
The use of fundus images for the early screening of eye diseases is of great clinical importance. Owing to its strong performance, deep learning is becoming increasingly popular in related applications such as lesion segmentation, biomarker segmentation, disease diagnosis and image synthesis, so a review summarizing recent developments in deep learning for fundus images is timely. In this review, we introduce 143 application papers with a carefully designed hierarchy, and 33 publicly available datasets are presented. Summaries and analyses are provided for each task. Finally, limitations common to all tasks are revealed and possible solutions are given. We will also release and regularly update the state-of-the-art results and newly-released datasets at https://github.com/nkicsl/Fundus_Review to adapt to the rapid development of this field.
Affiliation(s)
- Tao Li
- College of Computer Science, Nankai University, Tianjin 300350, China
- Wang Bo
- College of Computer Science, Nankai University, Tianjin 300350, China
- Chunyu Hu
- College of Computer Science, Nankai University, Tianjin 300350, China
- Hong Kang
- College of Computer Science, Nankai University, Tianjin 300350, China
- Hanruo Liu
- Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
- Kai Wang
- College of Computer Science, Nankai University, Tianjin 300350, China.
- Huazhu Fu
- Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE
165
Xie H, Tang C, Zhang W, Shen Y, Lei Z. Multi-scale retinal vessel segmentation using encoder-decoder network with squeeze-and-excitation connection and atrous spatial pyramid pooling. Applied Optics 2021; 60:239-249. [PMID: 33448945] [DOI: 10.1364/ao.409512] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5]
Abstract
The segmentation of blood vessels in retinal images is crucial to the diagnosis of many diseases. We propose a deep learning method for vessel segmentation based on an encoder-decoder network combined with squeeze-and-excitation connection and atrous spatial pyramid pooling. In our implementation, the atrous spatial pyramid pooling allows the network to capture features at multiple scales, and the high-level semantic information is combined with low-level features through the encoder-decoder architecture to generate segmentations. Meanwhile, the squeeze-and-excitation connections in the proposed network can adaptively recalibrate features according to the relationship between different channels of features. The proposed network can achieve precise segmentation of retinal vessels without hand-crafted features or specific post-processing. The performance of our model is evaluated in terms of visual effects and quantitative evaluation metrics on two publicly available datasets of retinal images, the Digital Retinal Images for Vessel Extraction and Structured Analysis of the Retina datasets, with comparison to 12 representative methods. Furthermore, the proposed network is applied to vessel segmentation on local retinal images, which demonstrates promising application prospect in medical practices.
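The squeeze-and-excitation connection referred to above is a standard channel-recalibration block; a minimal PyTorch version is sketched below (a generic SE block, not the authors' specific wiring of it into their encoder-decoder).

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Channel re-weighting: global average pool -> two fully connected layers -> sigmoid gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)   # per-channel gates in [0, 1]
        return x * weights

# skip = torch.randn(2, 64, 128, 128); recalibrated = SqueezeExcitation(64)(skip)
```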
166
167
Samuel PM, Veeramalai T. VSSC Net: Vessel Specific Skip chain Convolutional Network for blood vessel segmentation. Computer Methods and Programs in Biomedicine 2021; 198:105769. [PMID: 33039919] [DOI: 10.1016/j.cmpb.2020.105769] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0]
Abstract
BACKGROUND AND OBJECTIVE: Deep learning techniques are instrumental in developing network models that aid in the early diagnosis of life-threatening diseases. To screen and diagnose retinal fundus and coronary blood vessel disorders, the most important step is the proper segmentation of the blood vessels.
METHODS: This paper aims to segment the blood vessels from both coronary angiograms and retinal fundus images using a single VSSC Net after performing image-specific preprocessing. The VSSC Net uses two vessel extraction layers with added supervision on top of a base VGG-16 network. The vessel extraction layers comprise vessel-specific convolutional blocks to localize the blood vessels, skip-chain convolutional layers to enable rich feature propagation, and a unique feature-map summation. Supervision is applied to the two vessel extraction layers using separate loss/sigmoid functions. Finally, a weighted fusion of the individual loss/sigmoid outputs produces the desired blood vessel probability map, which is then binarized and validated for performance.
RESULTS: The VSSC Net shows improved accuracy values on the standard retinal and coronary angiogram datasets. The computational time required to segment the blood vessels is 0.2 seconds using a GPU. Moreover, the vessel extraction layer uses only 0.4 million parameters to accurately segment the blood vessels.
CONCLUSION: The proposed VSSC Net, which segments blood vessels from both retinal fundus images and coronary angiograms, can be used for the early diagnosis of vessel disorders. Moreover, it could aid the physician in analyzing the blood vessel structure of images obtained from multiple imaging sources.
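As a simplified illustration of fusing two supervised side outputs into one probability map (not the exact VSSC Net fusion), the PyTorch sketch below combines two side-output logit maps with a learned 1x1 convolution and thresholds the result.

```python
import torch
import torch.nn as nn

class FusedSideOutputs(nn.Module):
    """Two side-output vessel maps combined by a learned weighted fusion."""
    def __init__(self):
        super().__init__()
        self.fuse = nn.Conv2d(2, 1, kernel_size=1, bias=True)   # learns the fusion weights

    def forward(self, side1_logits: torch.Tensor, side2_logits: torch.Tensor) -> torch.Tensor:
        p1, p2 = torch.sigmoid(side1_logits), torch.sigmoid(side2_logits)
        return torch.sigmoid(self.fuse(torch.cat([p1, p2], dim=1)))   # final probability map

# fused = FusedSideOutputs()(logits_a, logits_b); vessel_mask = fused > 0.5
```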
Affiliation(s)
- Pearl Mary Samuel
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India.
168
Retinal blood vessels segmentation using classical edge detection filters and the neural network. Informatics in Medicine Unlocked 2021. [DOI: 10.1016/j.imu.2021.100521] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8]
169
Abdelsalam MM, Zahran MA. A Novel Approach of Diabetic Retinopathy Early Detection Based on Multifractal Geometry Analysis for OCTA Macular Images Using Support Vector Machine. IEEE Access 2021; 9:22844-22858. [DOI: 10.1109/access.2021.3054743] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3]
170
Toliušis R, Kurasova O, Bernatavičienė J. Semantic Segmentation of Eye Fundus Images Using Convolutional Neural Networks. Informacijos Mokslai 2020. [DOI: 10.15388/im.2020.90.53] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
The article reviews problems in eye fundus analysis and the semantic segmentation algorithms used to delineate the retinal vessels and the optic disc. Various diseases, such as glaucoma, hypertension, diabetic retinopathy and macular degeneration, can be diagnosed from changes and anomalies of the vessels and the optic disc. Convolutional neural networks, and the U-Net architecture in particular, are well suited to semantic segmentation, and a number of recently developed U-Net modifications deliver excellent segmentation results.
171
Abstract
Accurate segmentation of retinal blood vessels is a key step in the diagnosis of fundus diseases, among which cataracts, glaucoma, and diabetic retinopathy (DR) are the main causes of blindness. Most segmentation methods based on deep convolutional neural networks can effectively extract features, but convolution and pooling operations also filter out some useful information, and the final segmented retinal vessels suffer from problems such as low classification accuracy. In this paper, we propose a multi-scale residual attention network called MRA-UNet. Multi-scale inputs enable the network to learn features at different scales, which increases its robustness. In the encoding phase, we reduce the negative influence of the background and eliminate noise by using a residual attention module. A bottom reconstruction module aggregates the feature information under different receptive fields, so that the model can extract information on vessels of different thicknesses. Finally, a spatial activation module processes the up-sampled image to further increase the difference between blood vessels and background, which promotes the recovery of small vessels at the edges. Our method was verified on the DRIVE, CHASE, and STARE datasets: the segmentation accuracy rates reached 96.98%, 97.58%, and 97.63%; the specificity reached 98.28%, 98.54%, and 98.73%; and the F-measure scores reached 82.93%, 81.27%, and 84.22%, respectively. We compared the experimental results with several state-of-the-art methods, such as U-Net, R2U-Net, and AG-UNet, in terms of accuracy, sensitivity, specificity, F-measure, and AUC-ROC. In particular, MRA-UNet outperformed U-Net by 1.51%, 3.44%, and 0.49% on the DRIVE, CHASE, and STARE datasets, respectively.
172
Sarhan MH, Nasseri MA, Zapp D, Maier M, Lohmann CP, Navab N, Eslami A. Machine Learning Techniques for Ophthalmic Data Processing: A Review. IEEE J Biomed Health Inform 2020; 24:3338-3350. [PMID: 32750971] [DOI: 10.1109/jbhi.2020.3012134] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0]
Abstract
Machine learning and especially deep learning techniques are dominating medical image and data analysis. This article reviews machine learning approaches proposed for diagnosing ophthalmic diseases during the last four years. Three diseases are addressed in this survey, namely diabetic retinopathy, age-related macular degeneration, and glaucoma. The review covers over 60 publications and 25 public datasets and challenges related to the detection, grading, and lesion segmentation of the three considered diseases. Each section provides a summary of the public datasets and challenges related to each pathology and the current methods that have been applied to the problem. Furthermore, the recent machine learning approaches used for retinal vessels segmentation, and methods of retinal layers and fluid segmentation are reviewed. Two main imaging modalities are considered in this survey, namely color fundus imaging, and optical coherence tomography. Machine learning approaches that use eye measurements and visual field data for glaucoma detection are also included in the survey. Finally, the authors provide their views, expectations and the limitations of the future of these techniques in the clinical practice.
173
Rodrigues EO, Conci A, Liatsis P. ELEMENT: Multi-Modal Retinal Vessel Segmentation Based on a Coupled Region Growing and Machine Learning Approach. IEEE J Biomed Health Inform 2020; 24:3507-3519. [PMID: 32750920] [DOI: 10.1109/jbhi.2020.2999257] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8]
Abstract
Vascular structures in the retina contain important information for the detection and analysis of ocular diseases, including age-related macular degeneration, diabetic retinopathy and glaucoma. Commonly used modalities in diagnosis of these diseases are fundus photography, scanning laser ophthalmoscope (SLO) and fluorescein angiography (FA). Typically, retinal vessel segmentation is carried out either manually or interactively, which makes it time consuming and prone to human errors. In this research, we propose a new multi-modal framework for vessel segmentation called ELEMENT (vEsseL sEgmentation using Machine lEarning and coNnecTivity). This framework consists of feature extraction and pixel-based classification using region growing and machine learning. The proposed features capture complementary evidence based on grey level and vessel connectivity properties. The latter information is seamlessly propagated through the pixels at the classification phase. ELEMENT reduces inconsistencies and speeds up the segmentation throughput. We analyze and compare the performance of the proposed approach against state-of-the-art vessel segmentation algorithms in three major groups of experiments, for each of the ocular modalities. Our method produced higher overall performance, with an overall accuracy of 97.40%, compared to 25 of the 26 state-of-the-art approaches, including six works based on deep learning, evaluated on the widely known DRIVE fundus image dataset. In the case of the STARE, CHASE-DB, VAMPIRE FA, IOSTAR SLO and RC-SLO datasets, the proposed framework outperformed all of the state-of-the-art methods with accuracies of 98.27%, 97.78%, 98.34%, 98.04% and 98.35%, respectively.
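The region-growing half of such a coupled approach can be illustrated with a plain intensity-based grower; the sketch below expands a vessel mask from seed pixels to 4-connected neighbours whose grey level stays within a tolerance. It is a generic textbook version, not the ELEMENT algorithm itself, and the seed coordinates and tolerance in the usage comment are assumptions.

```python
from collections import deque
import numpy as np

def region_grow(gray: np.ndarray, seeds, tol: float = 10.0) -> np.ndarray:
    """Grow a region from seed pixels, accepting 4-neighbours within a grey-level tolerance."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque()
    for r, c in seeds:                                  # seeds: iterable of (row, col)
        mask[r, c] = True
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(gray[nr, nc]) - float(gray[r, c])) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# vessel_mask = region_grow(inverted_green_channel, seeds=[(240, 300)], tol=8)
```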
174
Wang D, Haytham A, Pottenburgh J, Saeedi O, Tao Y. Hard Attention Net for Automatic Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2020; 24:3384-3396. [DOI: 10.1109/jbhi.2020.3002985] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6]
175
Chen L, Sun J, Canton G, Balu N, Hippe DS, Zhao X, Li R, Hatsukami TS, Hwang JN, Yuan C. Automated Artery Localization and Vessel Wall Segmentation using Tracklet Refinement and Polar Conversion. IEEE Access 2020; 8:217603-217614. [PMID: 33777593] [PMCID: PMC7996631] [DOI: 10.1109/access.2020.3040616] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0]
Abstract
Quantitative analysis of blood vessel wall structures is important to study atherosclerotic diseases and assess cardiovascular event risks. To achieve this, accurate identification of vessel luminal and outer wall contours is needed. Computer-assisted tools exist, but manual preprocessing steps, such as region of interest identification and/or boundary initialization, are still needed. In addition, prior knowledge of the ring shape of vessel walls has not been fully explored in designing segmentation methods. In this work, a fully automated artery localization and vessel wall segmentation system is proposed. A tracklet refinement algorithm was adapted to robustly identify the artery of interest from a neural network-based artery centerline identification architecture. Image patches were extracted from the centerlines and converted in a polar coordinate system for vessel wall segmentation. The segmentation method used 3D polar information and overcame problems such as contour discontinuity, complex vessel geometry, and interference from neighboring vessels. Verified by a large (>32000 images) carotid artery dataset collected from multiple sites, the proposed system was shown to better automatically segment the vessel wall than traditional vessel wall segmentation methods or standard convolutional neural network approaches. In addition, a segmentation uncertainty score was estimated to effectively identify slices likely to have errors and prompt manual confirmation of the segmentation. This robust vessel wall segmentation system has applications in different vascular beds and will facilitate vessel wall feature extraction and cardiovascular risk assessment.
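The polar conversion step can be reproduced with OpenCV's warpPolar: a patch centred on the artery is unwrapped so that the ring-shaped vessel wall becomes an approximately horizontal band, which simplifies contour segmentation. The sketch below shows only this transform, with an assumed patch size and radius rather than the paper's settings.

```python
import cv2
import numpy as np

def to_polar_patch(image: np.ndarray, center_xy, radius: float, size: int = 128) -> np.ndarray:
    """Unwrap a ring-shaped vessel wall around a centerline point into a rectangle.

    In the output, rows correspond to angle and columns to distance from the center,
    so the roughly circular wall becomes an approximately horizontal band.
    """
    return cv2.warpPolar(image, (size, size), center_xy, radius, cv2.WARP_POLAR_LINEAR)

# patch = mri_slice_around_artery                     # 2D image patch centred on the vessel
# polar = to_polar_patch(patch, (64.0, 64.0), radius=60.0)
```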
Affiliation(s)
- Li Chen
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, 98195, USA
- Jie Sun
- Department of Radiology, University of Washington, Seattle, WA, 98195, USA
- Gador Canton
- Department of Radiology, University of Washington, Seattle, WA, 98195, USA
- Niranjan Balu
- Department of Radiology, University of Washington, Seattle, WA, 98195, USA
- Daniel S. Hippe
- Department of Radiology, University of Washington, Seattle, WA, 98195, USA
- Xihai Zhao
- Department of Biomedical Engineering, Tsinghua University School of Medicine, Beijing, China
- Rui Li
- Department of Biomedical Engineering, Tsinghua University School of Medicine, Beijing, China
- Jenq-Neng Hwang
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, 98195, USA
- Chun Yuan
- Department of Radiology, University of Washington, Seattle, WA, 98195, USA
176
Mookiah MRK, Hogg S, MacGillivray TJ, Prathiba V, Pradeepa R, Mohan V, Anjana RM, Doney AS, Palmer CNA, Trucco E. A review of machine learning methods for retinal blood vessel segmentation and artery/vein classification. Med Image Anal 2020; 68:101905. [PMID: 33385700] [DOI: 10.1016/j.media.2020.101905] [Citation(s) in RCA: 65] [Impact Index Per Article: 13.0]
Abstract
The eye affords a unique opportunity to inspect a rich part of the human microvasculature non-invasively via retinal imaging. Retinal blood vessel segmentation and classification are prime steps for the diagnosis and risk assessment of microvascular and systemic diseases. A high volume of techniques based on deep learning have been published in recent years. In this context, we review 158 papers published between 2012 and 2020, focussing on methods based on machine and deep learning (DL) for automatic vessel segmentation and classification for fundus camera images. We divide the methods into various classes by task (segmentation or artery-vein classification), technique (supervised or unsupervised, deep and non-deep learning, hand-crafted methods) and more specific algorithms (e.g. multiscale, morphology). We discuss advantages and limitations, and include tables summarising results at-a-glance. Finally, we attempt to assess the quantitative merit of DL methods in terms of accuracy improvement compared to other methods. The results allow us to offer our views on the outlook for vessel segmentation and classification for fundus camera images.
Affiliation(s)
- Stephen Hogg
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
- Tom J MacGillivray
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh EH16 4SB, UK
- Vijayaraghavan Prathiba
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Rajendra Pradeepa
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Viswanathan Mohan
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Ranjit Mohan Anjana
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Alexander S Doney
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
- Colin N A Palmer
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
- Emanuele Trucco
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
177
Retinal Vessel Segmentation by Deep Residual Learning with Wide Activation. Computational Intelligence and Neuroscience 2020; 2020:8822407. [PMID: 33101403] [PMCID: PMC7569427] [DOI: 10.1155/2020/8822407] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4]
Abstract
Purpose: Retinal blood vessel image segmentation is an important step in ophthalmological analysis. However, it is difficult to segment small vessels accurately because of the low contrast and complex feature information of blood vessels. The objective of this study is to develop an improved retinal blood vessel segmentation structure (WA-Net) to overcome these challenges.
Methods: This paper mainly focuses on the width of the deep network. The channels of the ResNet block were broadened to propagate more low-level features, and the identity-mapping pathway was slimmed to maintain parameter complexity. A residual atrous spatial pyramid module was used to capture the retinal vessels at various scales. We applied weight normalization to eliminate the impact of the mini-batch and improve segmentation accuracy. The experiments were performed on the DRIVE and STARE datasets, and cross-training between the datasets was performed to show the generalizability of WA-Net.
Results: The global accuracy and specificity within datasets were 95.66% and 96.45%, and 98.13% and 98.71%, respectively. The accuracy and area under the curve in the inter-dataset (cross-training) setting diverged by only 1%∼2% from the performance of the corresponding intra-dataset setting.
Conclusion: All the results show that WA-Net extracts more detailed blood vessels and shows superior performance on retinal blood vessel segmentation tasks.
Collapse
|
178
|
Escorcia-Gutierrez J, Torrents-Barrena J, Gamarra M, Romero-Aroca P, Valls A, Puig D. Convexity shape constraints for retinal blood vessel segmentation and foveal avascular zone detection. Comput Biol Med 2020; 127:104049. [PMID: 33099218 DOI: 10.1016/j.compbiomed.2020.104049] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Revised: 10/06/2020] [Accepted: 10/07/2020] [Indexed: 11/17/2022]
Abstract
Diabetic retinopathy (DR) has become a major worldwide health problem due to the increase in blindness among diabetics at early ages. The detection of DR pathologies such as microaneurysms, hemorrhages and exudates through advanced computational techniques is of utmost importance in patient health care. New computer vision techniques are needed to improve upon traditional screening of color fundus images. The segmentation of the entire anatomical structure of the retina is a crucial phase in detecting these pathologies. This work proposes a novel framework for fast and fully automatic blood vessel segmentation and fovea detection. The preprocessing method involved both contrast limited adaptive histogram equalization and the brightness preserving dynamic fuzzy histogram equalization algorithms to enhance image contrast and eliminate noise artifacts. Afterwards, the color spaces and their intrinsic components were examined to identify the most suitable color model to reveal the foreground pixels against the entire background. Several samples were then collected and used by the renowned convexity shape prior segmentation algorithm. The proposed methodology achieved an average vasculature segmentation accuracy exceeding 96%, 95%, 98% and 94% for the DRIVE, STARE, HRF and Messidor publicly available datasets, respectively. An additional validation step reached an average accuracy of 94.30% using an in-house dataset provided by the Hospital Sant Joan of Reus (Spain). Moreover, an outstanding detection accuracy of over 98% was achieved for the foveal avascular zone. An extensive state-of-the-art comparison was also conducted. The proposed approach can thus be integrated into daily clinical practice to assist medical experts in the diagnosis of DR.
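As a small illustration of the CLAHE preprocessing step mentioned above (only this step, not the fuzzy histogram equalisation or the convexity shape prior), the following Python sketch applies scikit-image's implementation to a green-channel image; the clip limit is an assumed value.

```python
import numpy as np
from skimage import exposure

def clahe_green_channel(green, clip_limit=0.03):
    """Contrast-limited adaptive histogram equalisation (CLAHE) on a fundus green
    channel; clip_limit is an assumed value, not the paper's parameter."""
    g = green.astype(float) / max(float(green.max()), 1e-8)  # scale into [0, 1]
    return exposure.equalize_adapthist(g, clip_limit=clip_limit)

green = np.random.rand(256, 256)  # stand-in for a real fundus green channel
print(clahe_green_channel(green).shape)  # (256, 256)
```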
Collapse
Affiliation(s)
- José Escorcia-Gutierrez
- Electronic and Telecommunications Program, Universidad Autónoma Del Caribe, Barranquilla, Colombia; Departament D'Enginyeria Informàtica I Matemàtiques, Escola Técnica Superior D'Enginyeria, Universitat Rovira I Virgili, Tarragona, Spain.
| | - Jordina Torrents-Barrena
- Departament D'Enginyeria Informàtica I Matemàtiques, Escola Técnica Superior D'Enginyeria, Universitat Rovira I Virgili, Tarragona, Spain.
| | - Margarita Gamarra
- Departament of Computational Science and Electronic, Universidad de La Costa, CUC, Barranquilla, Colombia
| | - Pedro Romero-Aroca
- Ophthalmology Service, Universitari Hospital Sant Joan, Institut de Investigacio Sanitaria Pere Virgili [IISPV], Reus, Spain
| | - Aida Valls
- Departament D'Enginyeria Informàtica I Matemàtiques, Escola Técnica Superior D'Enginyeria, Universitat Rovira I Virgili, Tarragona, Spain.
| | - Domenec Puig
- Departament D'Enginyeria Informàtica I Matemàtiques, Escola Técnica Superior D'Enginyeria, Universitat Rovira I Virgili, Tarragona, Spain.
| |
Collapse
|
179
|
Mou L, Zhao Y, Fu H, Liu Y, Cheng J, Zheng Y, Su P, Yang J, Chen L, Frangi AF, Akiba M, Liu J. CS2-Net: Deep learning segmentation of curvilinear structures in medical imaging. Med Image Anal 2020; 67:101874. [PMID: 33166771 DOI: 10.1016/j.media.2020.101874] [Citation(s) in RCA: 134] [Impact Index Per Article: 26.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Revised: 08/26/2020] [Accepted: 10/05/2020] [Indexed: 12/20/2022]
Abstract
Automated detection of curvilinear structures, e.g., blood vessels or nerve fibres, from medical and biomedical images is a crucial early step in automatic image interpretation associated with the management of many diseases. Precise measurement of the morphological changes of these curvilinear organ structures helps clinicians understand the mechanism, diagnosis, and treatment of, e.g., cardiovascular, kidney, eye, lung, and neurological conditions. In this work, we propose a generic and unified convolutional neural network for the segmentation of curvilinear structures and illustrate it in several 2D/3D medical imaging modalities. We introduce a new curvilinear structure segmentation network (CS2-Net), which includes a self-attention mechanism in the encoder and decoder to learn rich hierarchical representations of curvilinear structures. Two types of attention module, spatial attention and channel attention, are utilized to enhance inter-class discrimination and intra-class responsiveness, and to adaptively integrate and normalize local features with their global dependencies. Furthermore, to facilitate the segmentation of curvilinear structures in medical images, we employ a 1×3 and a 3×1 convolutional kernel to capture boundary features. In addition, we extend the 2D attention mechanism to 3D to enhance the network's ability to aggregate depth information across different layers/slices. The proposed curvilinear structure segmentation network is thoroughly validated using both 2D and 3D images across six different imaging modalities. Experimental results across nine datasets show that the proposed method generally outperforms other state-of-the-art algorithms in various metrics.
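The 1×3 and 3×1 convolutional kernels mentioned above for boundary features can be illustrated with a short PyTorch sketch; the fusion by concatenation and the channel sizes are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class BoundaryConv(nn.Module):
    """Paired 1x3 and 3x1 convolutions for elongated boundary features, fused by a
    1x1 convolution; channel sizes and the fusion scheme are illustrative."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.horizontal = nn.Conv2d(in_ch, out_ch, kernel_size=(1, 3), padding=(0, 1))
        self.vertical = nn.Conv2d(in_ch, out_ch, kernel_size=(3, 1), padding=(1, 0))
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([self.horizontal(x), self.vertical(x)], dim=1))

feat = torch.randn(1, 16, 128, 128)
print(BoundaryConv(16, 16)(feat).shape)  # torch.Size([1, 16, 128, 128])
```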
Collapse
Affiliation(s)
- Lei Mou
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China.
| | - Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
| | - Yonghuai Liu
- Department of Computer Science, Edge Hill University, Ormskirk, UK
| | - Jun Cheng
- UBTech Research, UBTech Robotics Corp Ltd, Shenzhen, China
| | - Yalin Zheng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, UK; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Pan Su
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Jianlong Yang
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Li Chen
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
| | - Alejandro F Frangi
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China; Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing and School of Medicine, University of Leeds, Leeds, UK; Leeds Institute of Cardiovascular and Metabolic Medicine, School of Medicine, University of Leeds, Leeds, UK; Medical Imaging Research Centre (MIRC), University Hospital Gasthuisberg, Cardiovascular Sciences and Electrical Engineering Departments, KU Leuven, Leuven, Belgium
| | | | - Jiang Liu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China; Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China.
| |
Collapse
|
180
|
Zhang L, Zhang J, Li Z, Song Y. A multiple-channel and atrous convolution network for ultrasound image segmentation. Med Phys 2020; 47:6270-6285. [PMID: 33007105 DOI: 10.1002/mp.14512] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Revised: 09/12/2020] [Accepted: 09/22/2020] [Indexed: 11/08/2022] Open
Abstract
PURPOSE Ultrasound image segmentation is a challenging task due to a low signal-to-noise ratio and poor image quality. Although several approaches based on the convolutional neural network (CNN) have been applied to ultrasound image segmentation, they have weak generalization ability. We propose an end-to-end, multiple-channel and atrous CNN designed to extract a greater amount of semantic information for segmentation of ultrasound images. METHOD A multiple-channel and atrous convolution network is developed, referred to as MA-Net. Similar to U-Net, MA-Net is based on an encoder-decoder architecture and includes five modules: the encoder, atrous convolution, pyramid pooling, decoder, and residual skip pathway modules. In the encoder module, we aim to capture more information with multiple-channel convolution and use large kernel convolution instead of small filters in each convolution operation. In the last layer, atrous convolution and pyramid pooling are used to extract multi-scale features. The architecture of the decoder is similar to that of the encoder module, except that up-sampling is used instead of down-sampling. Furthermore, the residual skip pathway module connects the subnetworks of the encoder and decoder to optimize learning from the deeper layers and improve the accuracy of segmentation. During the learning process, we adopt multi-task learning to enhance segmentation performance. Five types of datasets are used in our experiments. Because the original training data are limited, we apply data augmentation (e.g., horizontal and vertical flipping, random rotations, and random scaling) to our training data. We use the Dice score, precision, recall, Hausdorff distance (HD), average symmetric surface distance (ASD), and root mean square symmetric surface distance (RMSD) as the metrics for segmentation evaluation. The Friedman test was performed as a nonparametric statistical analysis to compare the algorithms. RESULTS For the brachial plexus (BP), fetal head, and lymph node segmentation datasets, MA-Net achieved average Dice scores of 0.776, 0.973, and 0.858, respectively; average precisions of 0.787, 0.968, and 0.854, respectively; average recalls of 0.788, 0.978, and 0.885, respectively; average HDs (mm) of 13.591, 10.924, and 19.245, respectively; average ASDs (mm) of 4.822, 4.152, and 4.312, respectively; and average RMSDs (mm) of 4.979, 4.161, and 4.930, respectively. Compared with U-Net, U-Net++, M-Net, and Dilated U-Net, the average performance of MA-Net increased by approximately 5.68%, 2.85%, 6.59%, 36.03%, 23.64%, and 31.71% for Dice, precision, recall, HD, ASD, and RMSD, respectively. Moreover, we verified the generalization of MA-Net segmentation to lower-grade brain glioma MRI and lung CT images. In addition, MA-Net achieved the highest mean rank in the Friedman test. CONCLUSION The proposed MA-Net accurately segments ultrasound images with high generalization, and therefore offers a useful tool for diagnostic application in ultrasound imaging.
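A minimal sketch of a multi-rate atrous (dilated) convolution stage of the kind the abstract describes is given below in PyTorch; the dilation rates and channel counts are assumed for illustration and are not the MA-Net configuration.

```python
import torch
import torch.nn as nn

class AtrousPyramid(nn.Module):
    """Parallel dilated convolutions at several rates, concatenated and projected
    back to the input width; rates and channel counts are assumed for illustration."""
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates]
        )
        self.project = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenation mixes the scales.
        return self.project(torch.cat([branch(x) for branch in self.branches], dim=1))

x = torch.randn(1, 32, 64, 64)
print(AtrousPyramid(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```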
Collapse
Affiliation(s)
- Lun Zhang
- School of Information Science and Engineering, Yunnan University, Kunming, Yunnan, 650091, China; Yunnan Vocational Institute of Energy Technology, Qujing, Yunnan, 655001, China
| | - Junhua Zhang
- School of Information Science and Engineering, Yunnan University, Kunming, Yunnan, 650091, China
| | - Zonggui Li
- School of Information Science and Engineering, Yunnan University, Kunming, Yunnan, 650091, China
| | - Yingchao Song
- School of Information Science and Engineering, Yunnan University, Kunming, Yunnan, 650091, China
| |
Collapse
|
181
|
Segmentation of Cerebrovascular Anatomy from TOF-MRA Using Length-Strained Enhancement and Random Walker. BIOMED RESEARCH INTERNATIONAL 2020; 2020:9347215. [PMID: 33015187 PMCID: PMC7525292 DOI: 10.1155/2020/9347215] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/13/2019] [Accepted: 07/30/2020] [Indexed: 11/17/2022]
Abstract
Cerebrovascular rupture can cause a severe stroke. Three-dimensional time-of-flight (TOF) magnetic resonance angiography (MRA) is a common method of obtaining vascular information. This work proposes a fully automated segmentation method for extracting the vascular anatomy from TOF-MRA. The steps of the method are as follows. First, the brain is extracted on the basis of region growing and path planning. Next, the brain's bright connected regions are explored to obtain seed point information, and the Hessian matrix is used to enhance the contrast of the image. Finally, a random walker combined with the seed points and the enhanced images is used to complete the vascular anatomy segmentation. The method is tested on 12 sets of data and compared with two traditional vascular segmentation methods. Results show that the described method obtains an average Dice coefficient of 90.68%, outperforming the traditional methods.
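A 2D analogue of the enhancement-plus-random-walker pipeline can be put together from standard scikit-image components; the sketch below uses a Frangi (Hessian-based) vesselness filter and automatic seed thresholds, both of which are assumptions standing in for the paper's 3D procedure.

```python
import numpy as np
from skimage.filters import frangi
from skimage.segmentation import random_walker

def segment_bright_vessels(image, seed_thr=0.9, background_thr=0.05):
    """Hessian-based enhancement followed by a random walker grown from automatic
    seeds (2D illustration; thresholds are assumptions)."""
    vesselness = frangi(image, black_ridges=False)     # enhance bright tubular structures
    norm = vesselness / (vesselness.max() + 1e-8)
    labels = np.zeros(image.shape, dtype=np.int32)
    labels[norm > seed_thr] = 1                        # confident vessel seeds
    labels[norm < background_thr] = 2                  # confident background seeds
    return random_walker(norm, labels) == 1

img = np.zeros((64, 64))
np.fill_diagonal(img, 1.0)                             # synthetic bright "vessel"
print(segment_bright_vessels(img).sum())
```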
Collapse
|
182
|
Gieraerts C, Dangis A, Janssen L, Demeyere A, De Bruecker Y, De Brucker N, van Den Bergh A, Lauwerier T, Heremans A, Frans E, Laurent M, Ector B, Roosen J, Smismans A, Frans J, Gillis M, Symons R. Prognostic Value and Reproducibility of AI-assisted Analysis of Lung Involvement in COVID-19 on Low-Dose Submillisievert Chest CT: Sample Size Implications for Clinical Trials. Radiol Cardiothorac Imaging 2020; 2:e200441. [PMID: 33778634 PMCID: PMC7586438 DOI: 10.1148/ryct.2020200441] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
PURPOSE To compare the prognostic value and reproducibility of visual versus AI-assisted analysis of lung involvement on submillisievert low-dose chest CT in COVID-19 patients. MATERIALS AND METHODS This was a HIPAA-compliant, institutional review board-approved retrospective study. From March 15 to June 1, 2020, 250 RT-PCR confirmed COVID-19 patients were studied with low-dose chest CT at admission. Visual and AI-assisted analysis of lung involvement was performed by using a semi-quantitative CT score and a quantitative percentage of lung involvement. Adverse outcome was defined as intensive care unit (ICU) admission or death. Cox regression analysis, Kaplan-Meier curves, and cross-validated receiver operating characteristic curve analysis with area under the curve (AUROC) were performed to compare model performance. Intraclass correlation coefficients (ICCs) and Bland-Altman analysis were used to assess intra- and interreader reproducibility. RESULTS Adverse outcome occurred in 39 patients (11 deaths, 28 ICU admissions). AUC values from AI-assisted analysis were significantly higher than those from visual analysis for both semi-quantitative CT scores and percentages of lung involvement (all P<0.001). Intrareader and interreader agreement rates were significantly higher for AI-assisted analysis than for visual analysis (all ICC ≥0.960 versus ≥0.885). AI-assisted variability for the quantitative percentage of lung involvement was 17.2% (coefficient of variation) versus 34.7% for visual analysis. The sample size to detect a 5% change in lung involvement with 90% power and an α error of 0.05 was 250 patients with AI-assisted analysis and 1014 patients with visual analysis. CONCLUSION AI-assisted analysis of lung involvement on submillisievert low-dose chest CT outperformed conventional visual analysis in predicting outcome in COVID-19 patients while reducing CT variability. Lung involvement on chest CT could be used as a reliable metric in future clinical trials.
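For context, the reported sample sizes are consistent with a standard two-sample normal-approximation calculation driven by the quoted coefficients of variation; the Python sketch below, which assumes this is the formula used (the abstract does not state it), approximately reproduces the 250 and 1014 figures.

```python
from scipy.stats import norm

def per_arm_sample_size(cv, delta, alpha=0.05, power=0.90):
    """Two-sample normal-approximation sample size for detecting an absolute change
    `delta` when variability is expressed as a coefficient of variation `cv`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 * (cv / delta) ** 2

print(round(per_arm_sample_size(0.172, 0.05)))  # ~249, close to the reported 250 (AI-assisted)
print(round(per_arm_sample_size(0.347, 0.05)))  # ~1012, close to the reported 1014 (visual)
```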
Collapse
Affiliation(s)
- Christopher Gieraerts, Anthony Dangis, Lode Janssen, Annick Demeyere, Yves De Bruecker, Nele De Brucker, Annelies van Den Bergh, Tine Lauwerier, André Heremans, Eric Frans, Michaël Laurent, Bavo Ector, John Roosen, Annick Smismans, Johan Frans, Marc Gillis, Rolf Symons
- From the Department of Radiology – Imelda Hospital, Bonheiden, Belgium (C.G., A.D., L.J., A.D., Y.D.B., R.S.); Department of Pulmonology – Imelda Hospital, Bonheiden, Belgium (N.D.B., A.V.D.B., T.L., A.H., E.F.); Department of Intensive Care Medicine – Imelda Hospital, Bonheiden, Belgium (E.F.); Department of Geriatrics – Imelda Hospital, Bonheiden, Belgium (M.L.); Department of Cardiology – Imelda Hospital, Bonheiden, Belgium (B.E., J.R.); Department of Medical Microbiology – Imelda Hospital, Bonheiden, Belgium (A.S., J.F.); Department of Emergency Medicine – Imelda Hospital, Bonheiden, Belgium (M.G.)
| |
Collapse
|
183
|
Abstract
Fundus blood vessel image segmentation plays an important role in the diagnosis and treatment of diseases and is the basis of computer-aided diagnosis. Feature information in retinal blood vessel images is relatively complicated, and existing algorithms sometimes struggle to segment it effectively. Addressing the low accuracy and low sensitivity of existing segmentation methods, an improved U-shaped neural network (MRU-NET) segmentation method for retinal vessels is proposed. Firstly, an image enhancement algorithm and a random segmentation method are used to solve the problems of low contrast and insufficient image data in the original images; the smaller image blocks produced by random segmentation also help to reduce the complexity of the U-shaped neural network model. Secondly, residual learning is introduced into the encoder and decoder to improve the efficiency of feature use and reduce information loss, and a feature fusion module is introduced between the encoder and decoder to extract image features at different granularities. Finally, a feature balancing module is added to the skip connections to resolve the semantic gap between low-dimensional features in the encoder and high-dimensional features in the decoder. Experimental results show that our method achieves better accuracy and sensitivity on the DRIVE and STARE datasets (DRIVE: accuracy (ACC) = 0.9611, sensitivity (SE) = 0.8613; STARE: ACC = 0.9662, SE = 0.7887) than some of the state-of-the-art methods.
Collapse
|
184
|
Zhang L, Zhang J, Shen P, Zhu G, Li P, Lu X, Zhang H, Shah SA, Bennamoun M. Block Level Skip Connections Across Cascaded V-Net for Multi-Organ Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2782-2793. [PMID: 32091995 DOI: 10.1109/tmi.2020.2975347] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Multi-organ segmentation is a challenging task due to the label imbalance and structural differences between different organs. In this work, we propose an efficient cascaded V-Net model to improve the performance of multi-organ segmentation by establishing dense Block Level Skip Connections (BLSC) across cascaded V-Nets. Our model can take full advantage of features from the first-stage network and make the cascaded structure more efficient. We also combine stacked small and large kernels with an inception-like structure to help our model learn more patterns, which produces superior results for multi-organ segmentation. In addition, some small organs are commonly occluded by large organs and have unclear boundaries with other surrounding tissues, which makes them hard to segment. We therefore first locate the small organs through a multi-class network and crop them randomly with the surrounding region, then segment them with a single-class network. We evaluated our model on the SegTHOR 2019 challenge unseen testing set and the Multi-Atlas Labeling Beyond the Cranial Vault challenge validation set. Our model achieved an average Dice score gain of 1.62 percent and 3.90 percent compared to traditional cascaded networks on these two datasets, respectively. For hard-to-segment small organs, such as the esophagus in the SegTHOR 2019 challenge, our technique achieved a gain of 5.63 percent in Dice score, and four organs in the Multi-Atlas Labeling Beyond the Cranial Vault challenge achieved a gain of 5.27 percent in average Dice score.
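The block-level skip connection idea can be illustrated roughly as follows; this PyTorch fragment is a sketch under the assumption that same-resolution features of the two cascaded stages are simply concatenated, and all names and channel sizes are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn

class Stage2Block(nn.Module):
    """Second-stage block that also receives the same-resolution feature map produced
    by the matching block of the first-stage network; channel sizes are illustrative."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, x, stage1_feat):
        # Concatenate stage-1 features with the current features before convolving.
        return torch.relu(self.conv(torch.cat([x, stage1_feat], dim=1)))

x = torch.randn(1, 16, 32, 32)        # current stage-2 features
s1 = torch.randn(1, 16, 32, 32)       # features from the corresponding stage-1 block
print(Stage2Block(16)(x, s1).shape)   # torch.Size([1, 16, 32, 32])
```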
Collapse
|
185
|
|
186
|
Mohammedhasan M, Uğuz H. A New Deeply Convolutional Neural Network Architecture for Retinal Blood Vessel Segmentation. INT J PATTERN RECOGN 2020. [DOI: 10.1142/s0218001421570019] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
This paper proposes a new deep convolutional neural network (CNN) architecture for segmenting retinal blood vessels automatically from fundus images. Automatic segmentation plays a substantial role in the computer-aided diagnosis of retinal diseases; it is of considerable significance because eye diseases, as well as some other systemic diseases, give rise to perceivable pathologic changes. Retinal blood vessel segmentation is challenging because of the excessive changes in vessel morphology on a noisy background. Previous deep learning-based supervised methods suffer from insufficient use of low-level features, which are advantageous in semantic segmentation tasks. The proposed architecture makes use of both high-level and low-level features to segment retinal blood vessels. Its major contribution concentrates on two factors. The first is a highly modularized network architecture of aggregated residual connections, which allows learned layers from a shallower model to be copied while additional layers are set to identity mappings. The second is improved utilization of computing resources within the network, achieved through a carefully crafted design that increases the depth and width of the network while keeping its computational budget stable. Experimental results show the effectiveness of using aggregated residual connections in segmenting retinal vessels more accurately and clearly. Compared with the best existing methods, the proposed method performed better on different measures, produced fewer false positives at fine vessels, and traced clearer vessel lines with sufficient detail, comparable to the human annotator.
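Aggregated residual connections are commonly realised with grouped convolutions; the following PyTorch sketch shows such a block in minimal form, with cardinality and channel width chosen arbitrarily rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AggregatedResidualBlock(nn.Module):
    """Aggregated-residual (ResNeXt-style) block built from grouped convolutions;
    cardinality and width are arbitrary illustrative choices."""
    def __init__(self, channels, cardinality=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=cardinality),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
        )

    def forward(self, x):
        # Summing with the input keeps an identity mapping available to the optimiser.
        return F.relu(x + self.body(x))

x = torch.randn(1, 32, 48, 48)
print(AggregatedResidualBlock(32)(x).shape)  # torch.Size([1, 32, 48, 48])
```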
Collapse
Affiliation(s)
- Mali Mohammedhasan
- Department of Computer Engineering, Selçuk Üniversitesi, Selçuklu, Konya 42130, Turkey
| | - Harun Uğuz
- Department of Computer Engineering, Selçuk Üniversitesi, Selçuklu, Konya 42130, Turkey
| |
Collapse
|
187
|
El Damrawi G, Zahran MA, Amin E, Abdelsalam MM. Enforcing artificial neural network in the early detection of diabetic retinopathy OCTA images analysed by multifractal geometry. JOURNAL OF TAIBAH UNIVERSITY FOR SCIENCE 2020. [DOI: 10.1080/16583655.2020.1796244] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- G. El Damrawi
- Glass Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura, Egypt
| | - M. A. Zahran
- Theoretical Physics Group, Physics Department, Faculty of Science, Mansoura University, Mansoura, Egypt
| | - ElShaimaa Amin
- Physics Department (Biophysics), Faculty of Science, Mansoura University, Mansoura, Egypt
| | - Mohamed M. Abdelsalam
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
| |
Collapse
|
188
|
Huang F, Tan T, Dashtbozorg B, Zhou Y, Romeny BMTH. From Local to Global: A Graph Framework for Retinal Artery/Vein Classification. IEEE Trans Nanobioscience 2020; 19:589-597. [PMID: 32746331 DOI: 10.1109/tnb.2020.3004481] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Fundus photography has been widely used by ophthalmologists and computer algorithms to inspect eye disorders. Biomarkers related to retinal vessels play an essential role in detecting early diabetes. To quantify vascular biomarkers and the corresponding changes, accurate artery and vein classification is necessary. In this work, we propose a new framework that boosts local vessel classification with a global vascular network model using graph convolution. We compare our proposed method with two traditional state-of-the-art methods on a testing dataset of 750 images from the Maastricht Study. After incorporating global information, our model achieves the best accuracy of 86.45%, compared to 85.5% from convolutional neural networks (CNN) and 82.9% from handcrafted pixel feature classification (HPFC). Our model also obtains the best area under the receiver operating characteristic curve (AUC) of 0.95, compared to 0.93 from CNN and 0.90 from HPFC. The new classification framework has the advantage of being easy to deploy on top of local classification features. It corrects local classification errors by minimizing the global classification error, bringing additional classification performance essentially for free.
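To give a feel for how a global vascular graph can correct local artery/vein predictions, the toy sketch below propagates per-segment probabilities over a graph adjacency matrix; this simple averaging scheme is only a stand-in for the graph convolution actually used, and all values are made up.

```python
import numpy as np

def graph_smooth(local_probs, adjacency, alpha=0.5, steps=10):
    """Mix each vessel segment's artery probability with the mean probability of its
    graph neighbours; a toy propagation scheme standing in for graph convolution."""
    degree = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    probs = local_probs.astype(float).copy()
    for _ in range(steps):
        probs = (1 - alpha) * probs + alpha * (adjacency @ probs) / degree
    return probs

# Three connected segments; the middle one is locally misclassified as "vein".
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
p = np.array([[0.9], [0.2], [0.8]])  # local probability of "artery"
print(graph_smooth(p, A).round(2))   # the middle score is pulled towards its neighbours
```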
Collapse
|
189
|
Tang X, Zhong B, Peng J, Hao B, Li J. Multi-scale channel importance sorting and spatial attention mechanism for retinal vessels segmentation. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106353] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
190
|
Laha S, LaLonde R, Carmack AE, Foroosh H, Olson JC, Shaikh S, Bagci U. Analysis of Video Retinal Angiography With Deep Learning and Eulerian Magnification. FRONTIERS IN COMPUTER SCIENCE 2020. [DOI: 10.3389/fcomp.2020.00024] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|
191
|
Feng F, Ashton‐Miller JA, DeLancey JOL, Luo J. Convolutional neural network‐based pelvic floor structure segmentation using magnetic resonance imaging in pelvic organ prolapse. Med Phys 2020; 47:4281-4293. [DOI: 10.1002/mp.14377] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2019] [Revised: 06/18/2020] [Accepted: 06/22/2020] [Indexed: 11/11/2022] Open
Affiliation(s)
- Fei Feng
- University of Michigan‐Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
| | | | - John O. L. DeLancey
- Department of Obstetrics and Gynecology, University of Michigan, Ann Arbor, MI 48109, USA
| | - Jiajia Luo
- Biomedical Engineering Department, Peking University, Beijing 100191, China
| |
Collapse
|
192
|
A Multi-Scale Feature Fusion Method Based on U-Net for Retinal Vessel Segmentation. ENTROPY 2020; 22:e22080811. [PMID: 33286584 PMCID: PMC7517387 DOI: 10.3390/e22080811] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/02/2020] [Revised: 07/22/2020] [Accepted: 07/22/2020] [Indexed: 11/17/2022]
Abstract
Computer-aided automatic segmentation of retinal blood vessels plays an important role in the diagnosis of diseases such as diabetes, glaucoma, and macular degeneration. In this paper, we propose a multi-scale feature fusion retinal vessel segmentation model based on U-Net, named MSFFU-Net. The model introduces the inception structure into the multi-scale feature extraction encoder, and the max-pooling index is applied during the upsampling process in the feature fusion decoder of the improved network. Skip layer connections transfer each set of feature maps generated on the encoder path to the corresponding feature maps on the decoder path. Moreover, a cost-sensitive loss function based on the Dice coefficient and cross-entropy is designed. Four transformations (rotation, mirroring, shifting, and cropping) are used as data augmentation strategies, and the CLAHE algorithm is applied for image preprocessing. The proposed framework is trained and tested on DRIVE and STARE, and sensitivity (Sen), specificity (Spe), accuracy (Acc), and area under the curve (AUC) are adopted as the evaluation metrics. Detailed comparisons with the U-Net model verify the effectiveness and robustness of the proposed model. Sen of 0.7762 and 0.7721, Spe of 0.9835 and 0.9885, Acc of 0.9694 and 0.9537, and AUC of 0.9790 and 0.9680 were achieved on the DRIVE and STARE databases, respectively. Results are also compared with other state-of-the-art methods, demonstrating that the proposed method delivers competitive and, in several respects, superior performance.
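A cost-sensitive combination of Dice and cross-entropy of the kind mentioned above might look like the following PyTorch sketch; the equal weighting is an assumption, not the paper's setting.

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, ce_weight=0.5, eps=1e-6):
    """Combined Dice + binary cross-entropy objective for vessel/background masks;
    the 50/50 weighting is an assumption, not the paper's setting."""
    prob = torch.sigmoid(logits)
    intersection = (prob * target).sum()
    dice = 1 - (2 * intersection + eps) / (prob.sum() + target.sum() + eps)
    ce = F.binary_cross_entropy_with_logits(logits, target)
    return ce_weight * ce + (1 - ce_weight) * dice

logits = torch.randn(1, 1, 32, 32)
target = (torch.rand(1, 1, 32, 32) > 0.9).float()  # sparse vessel pixels
print(dice_ce_loss(logits, target).item())
```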
Collapse
|
193
|
Ni J, Wu J, Tong J, Chen Z, Zhao J. GC-Net: Global context network for medical image segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 190:105121. [PMID: 31623863 DOI: 10.1016/j.cmpb.2019.105121] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/24/2019] [Revised: 09/23/2019] [Accepted: 10/04/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Medical image segmentation plays an important role in many clinical applications such as disease diagnosis, surgery planning, and computer-assisted therapy. However, it is a very challenging task due to varying image quality, complex object shapes, and the existence of outliers. Recently, researchers have presented deep learning methods to segment medical images. However, these methods often use the high-level features of the convolutional neural network directly, or the high-level features combined with shallow features, thus ignoring the role of global context features for the segmentation task. Consequently, they have limited capability across diverse medical segmentation tasks. The purpose of this work is to devise a neural network with global context feature information for accomplishing medical image segmentation across different tasks. METHODS The proposed global context network (GC-Net) consists of two components: feature encoding and decoding modules. We use multiple convolution and batch normalization layers in the encoding module. The decoding module is formed by a proposed global context attention (GCA) block and a squeeze-and-excitation pyramid pooling (SEPP) block. The GCA module connects low-level and high-level features to produce more representative features, while the SEPP module increases the size of the receptive field and the ability to fuse multi-scale features. Moreover, a weighted cross-entropy loss is designed to better balance the segmented and non-segmented regions. RESULTS The proposed GC-Net is validated on three publicly available datasets and one local dataset. The tested medical segmentation tasks include segmentation of intracranial blood vessels, retinal vessels, cell contours, and the lung. Experiments demonstrate that our network outperforms state-of-the-art methods on several commonly used evaluation metrics. CONCLUSION Medical segmentation across different tasks can be accurately and effectively achieved by devising a deep convolutional neural network with a global context attention mechanism.
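The squeeze-and-excitation gating inside a block such as SEPP can be sketched in a few lines of PyTorch; the reduction ratio is assumed, and this is not the authors' exact module.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel recalibration by global pooling and a small gating MLP, as used inside
    squeeze-and-excitation style blocks; the reduction ratio is an assumed value."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        weights = self.gate(x.mean(dim=(2, 3)))            # global average pool per channel
        return x * weights.unsqueeze(-1).unsqueeze(-1)     # rescale each channel

x = torch.randn(1, 32, 16, 16)
print(SqueezeExcite(32)(x).shape)  # torch.Size([1, 32, 16, 16])
```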
Collapse
Affiliation(s)
- Jiajia Ni
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China; College of Internet of Things Engineering, HoHai University, Changzhou, China
| | - Jianhuang Wu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China.
| | - Jing Tong
- College of Internet of Things Engineering, HoHai University, Changzhou, China
| | - Zhengming Chen
- College of Internet of Things Engineering, HoHai University, Changzhou, China
| | - Junping Zhao
- Institute of Medical Informatics, Chinese PLA General Hospital, China
| |
Collapse
|
194
|
Pachade S, Porwal P, Kokare M, Giancardo L, Meriaudeau F. Retinal vasculature segmentation and measurement framework for color fundus and SLO images. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.03.001] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
195
|
Semi-Supervised Learning Method of U-Net Deep Learning Network for Blood Vessel Segmentation in Retinal Images. Symmetry (Basel) 2020. [DOI: 10.3390/sym12071067] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Blood vessel segmentation methods based on deep neural networks have achieved satisfactory results. However, these methods are usually supervised, requiring large numbers of retinal images with high-quality pixel-level ground-truth labels. In practice, labeling these retinal images is very costly, both financially and in human effort. To deal with these problems, we propose a semi-supervised learning method that can be used for blood vessel segmentation with limited labeled data. In this method, we use an improved U-Net deep learning network to segment the blood vessel tree. On this basis, we implement a U-Net-based training-dataset updating strategy. A large number of experiments are presented to analyze the segmentation performance of the proposed semi-supervised learning method. The experimental results demonstrate that the proposed methodology avoids the problem of insufficient hand-labeled data and achieves satisfactory performance.
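The training-dataset updating strategy described above can be sketched as a pseudo-labelling loop; in the Python fragment below, train and predict are caller-supplied stand-ins for the improved U-Net, and the confidence thresholds are assumptions rather than the paper's values.

```python
import numpy as np

def semi_supervised_rounds(labelled, unlabelled, train, predict, rounds=3, tau=0.95):
    """After each round, unlabelled images whose predicted vessel maps are almost
    everywhere confident are pseudo-labelled and moved into the training set."""
    model = train(labelled)
    for _ in range(rounds):
        still_unlabelled = []
        for img in unlabelled:
            p = predict(model, img)                          # per-pixel vessel probabilities
            if np.mean((p > tau) | (p < 1 - tau)) > 0.99:    # near-certain prediction
                labelled.append((img, (p > 0.5).astype(np.uint8)))
            else:
                still_unlabelled.append(img)
        unlabelled = still_unlabelled
        model = train(labelled)                              # retrain on the enlarged set
    return model
```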
Collapse
|
196
|
Hervella ÁS, Rouco J, Novo J, Ortega M. Learning the retinal anatomy from scarce annotated data using self-supervised multimodal reconstruction. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106210] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
197
|
Yu L, Qin Z, Zhuang T, Ding Y, Qin Z, Raymond Choo KK. A framework for hierarchical division of retinal vascular networks. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.11.113] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
198
|
Guo S, Wang K, Kang H, Liu T, Gao Y, Li T. Bin loss for hard exudates segmentation in fundus images. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.10.103] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
|
199
|
NFN+: A novel network followed network for retinal vessel segmentation. Neural Netw 2020; 126:153-162. [DOI: 10.1016/j.neunet.2020.02.018] [Citation(s) in RCA: 59] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Revised: 01/28/2020] [Accepted: 02/26/2020] [Indexed: 11/21/2022]
|
200
|
Cai L, Gao J, Zhao D. A review of the application of deep learning in medical image classification and segmentation. ANNALS OF TRANSLATIONAL MEDICINE 2020; 8:713. [PMID: 32617333 PMCID: PMC7327346 DOI: 10.21037/atm.2020.02.44] [Citation(s) in RCA: 163] [Impact Index Per Article: 32.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Accepted: 02/06/2020] [Indexed: 11/24/2022]
Abstract
Big medical data mainly include electronic health record data, medical image data, gene information data, etc. Among them, medical image data account for the vast majority of medical data at this stage. How to apply big medical data to clinical practice is an issue of great concern to medical and computer researchers, and intelligent imaging and deep learning provide a good answer. This review introduces the application of intelligent imaging and deep learning to big data analysis and the early diagnosis of diseases, combining the latest research progress in big data analysis of medical images with our team's work in this field, especially the classification and segmentation of medical images.
Collapse
Affiliation(s)
- Lei Cai
- College of Information Engineering and Technology, Beijing University of Chemical Technology, Beijing, China
| | - Jingyang Gao
- College of Information Engineering and Technology, Beijing University of Chemical Technology, Beijing, China
| | - Di Zhao
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
| |
Collapse
|