151. Lian S, Li L, Lian G, Xiao X, Luo Z, Li S. A Global and Local Enhanced Residual U-Net for Accurate Retinal Vessel Segmentation. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2021; 18:852-862. [PMID: 31095493] [DOI: 10.1109/tcbb.2019.2917188]
Abstract
Retinal vessel segmentation is a critical procedure towards the accurate visualization, diagnosis, early treatment, and surgery planning of ocular diseases. Recent deep learning-based approaches have achieved impressive performance in retinal vessel segmentation. However, they usually apply global image pre-processing and take the whole retinal images as input during network training, which have two drawbacks for accurate retinal vessel segmentation. First, these methods lack the utilization of the local patch information. Second, they overlook the geometric constraint that retina only occurs in a specific area within the whole image or the extracted patch. As a consequence, these global-based methods suffer in handling details, such as recognizing the small thin vessels, discriminating the optic disk, etc. To address these drawbacks, this study proposes a Global and Local enhanced residual U-nEt (GLUE) for accurate retinal vessel segmentation, which benefits from both the globally and locally enhanced information inside the retinal region. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed method, which consistently improves the segmentation accuracy over a conventional U-Net and achieves competitive performance compared to the state-of-the-art.
152. Zhou Y, Chen Z, Shen H, Zheng X, Zhao R, Duan X. A refined equilibrium generative adversarial network for retinal vessel segmentation. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.06.143]
153. Çetinkaya MB, Duran H. A detailed and comparative work for retinal vessel segmentation based on the most effective heuristic approaches. Biomed Tech (Berl) 2021; 66:181-200. [PMID: 33768764] [DOI: 10.1515/bmt-2020-0089]
Abstract
Computer-based imaging and analysis techniques are frequently used for the diagnosis and treatment of retinal diseases. Although retinal images are of high resolution, the contrast of the retinal blood vessels is usually very close to the background of the retinal image. Detecting retinal blood vessels with low contrast, or with contrast close to the image background, is very difficult. Therefore, developing algorithms that can successfully distinguish retinal blood vessels from the rest of the retinal image has become an important area of research. In this work, clustering-based heuristic artificial bee colony, particle swarm optimization, differential evolution, teaching-learning-based optimization, grey wolf optimization, firefly, and harmony search algorithms were applied for accurate segmentation of retinal vessels, and their performances were compared in terms of convergence speed, mean squared error, standard deviation, sensitivity, specificity, accuracy, and precision. The simulation results show that the performance of the algorithms in terms of convergence speed and mean squared error is close to each other. The statistical analyses show that the algorithms behave stably and that the vessel and background pixels of the retinal image can successfully be clustered by the heuristic algorithms.
Affiliation(s)
- Mehmet Bahadır Çetinkaya
- Department of Mechatronics Engineering, Faculty of Engineering, University of Erciyes, Melikgazi, Kayseri, Turkey
- Hakan Duran
- Department of Mechatronics Engineering, Faculty of Engineering, University of Erciyes, Melikgazi, Kayseri, Turkey
154. Alharithi F, Almulihi A, Bourouis S, Alroobaea R, Bouguila N. Discriminative Learning Approach Based on Flexible Mixture Model for Medical Data Categorization and Recognition. Sensors (Basel, Switzerland) 2021; 21:2450. [PMID: 33918120] [PMCID: PMC8036303] [DOI: 10.3390/s21072450]
Abstract
In this paper, we propose a novel hybrid discriminative learning approach based on shifted-scaled Dirichlet mixture model (SSDMM) and Support Vector Machines (SVMs) to address some challenging problems of medical data categorization and recognition. The main goal is to capture accurately the intrinsic nature of biomedical images by considering the desirable properties of both generative and discriminative models. To achieve this objective, we propose to derive new data-based SVM kernels generated from the developed mixture model SSDMM. The proposed approach includes the following steps: the extraction of robust local descriptors, the learning of the developed mixture model via the expectation-maximization (EM) algorithm, and finally the building of three SVM kernels for data categorization and classification. The potential of the implemented framework is illustrated through two challenging problems that concern the categorization of retinal images into normal or diabetic cases and the recognition of lung diseases in chest X-rays (CXR) images. The obtained results demonstrate the merits of our hybrid approach as compared to other methods.
Affiliation(s)
- Fahd Alharithi
- College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Ahmed Almulihi
- College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Sami Bourouis
- College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Roobaea Alroobaea
- College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Nizar Bouguila
- The Concordia Institute for Information Systems Engineering (CIISE), Concordia University, Montreal, QC H3G 1T7, Canada
155. Wang C, Zhao Z, Yu Y. Fine retinal vessel segmentation by combining Nest U-net and patch-learning. Soft Comput 2021. [DOI: 10.1007/s00500-020-05552-w]
156. Deng H, Qiao H, Dai Q, Ma C. Deep learning in photoacoustic imaging: a review. Journal of Biomedical Optics 2021; 26:040901. [PMID: 33837678] [PMCID: PMC8033250] [DOI: 10.1117/1.jbo.26.4.040901]
Abstract
SIGNIFICANCE Photoacoustic (PA) imaging can provide structural, functional, and molecular information for preclinical and clinical studies. For PA imaging (PAI), non-ideal signal detection deteriorates image quality, and quantitative PAI (QPAI) remains challenging due to the unknown light fluence spectra in deep tissue. In recent years, deep learning (DL) has shown outstanding performance when implemented in PAI, with applications in image reconstruction, quantification, and understanding. AIM We provide (i) a comprehensive overview of the DL techniques that have been applied in PAI, (ii) references for designing DL models for various PAI tasks, and (iii) a summary of the future challenges and opportunities. APPROACH Papers published before November 2020 in the area of applying DL in PAI were reviewed. We categorized them into three types: image understanding, reconstruction of the initial pressure distribution, and QPAI. RESULTS When applied in PAI, DL can effectively process images, improve reconstruction quality, fuse information, and assist quantitative analysis. CONCLUSION DL has become a powerful tool in PAI. With the development of DL theory and technology, it will continue to boost the performance and facilitate the clinical translation of PAI.
Affiliation(s)
- Handi Deng
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Hui Qiao
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Qionghai Dai
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Cheng Ma
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Beijing Innovation Center for Future Chip, Beijing, China
157. Lightweight pyramid network with spatial attention mechanism for accurate retinal vessel segmentation. Int J Comput Assist Radiol Surg 2021; 16:673-682. [PMID: 33751370] [DOI: 10.1007/s11548-021-02344-x]
Abstract
PURPOSE The morphological characteristics of retinal vessels are vital for the early diagnosis of pathological diseases such as diabetes and hypertension. However, the low contrast and complex morphology pose a challenge to automatic retinal vessel segmentation. To extract precise semantic features, more convolution and pooling operations are adopted, but some structural information is potentially ignored. METHODS In this paper, we propose a novel lightweight pyramid network (LPN) fusing multi-scale features with a spatial attention mechanism to preserve the structural information of retinal vessels. The pyramid hierarchy model is constructed to generate multi-scale representations, and its semantic features are strengthened with the introduction of the attention mechanism. The combination of multi-scale features contributes to accurate prediction. RESULTS The LPN is evaluated on the benchmark datasets DRIVE, STARE and CHASE, and the results indicate state-of-the-art performance (e.g., ACC of 97.09%/97.49%/97.48% and AUC of 98.79%/99.01%/98.91% on the DRIVE, STARE and CHASE datasets, respectively). The robustness and generalization ability of the LPN are further proved in cross-training experiments. CONCLUSION The visualization experiment reveals the semantic gap between the various scales of the pyramid and verifies the effectiveness of the attention mechanism, which provides a potential basis for the pyramid hierarchy model in multi-scale vessel segmentation tasks. Furthermore, the number of model parameters is greatly reduced.
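The spatial attention mechanism mentioned in this abstract can be illustrated with a minimal PyTorch sketch. The block below is a generic CBAM-style spatial attention gate, not the LPN authors' implementation; the module name SpatialAttention and the 7x7 convolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Generic spatial attention gate (CBAM-style sketch)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Collapse the channel axis with average- and max-pooling, then
        # learn a per-pixel attention map in [0, 1] and re-weight the input.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn

# toy usage on a feature map from one pyramid level
feats = torch.randn(1, 64, 48, 48)
print(SpatialAttention()(feats).shape)  # torch.Size([1, 64, 48, 48])
```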
158. Maharjan A, Alsadoon A, Prasad PWC, AlSallami N, Rashid TA, Alrubaie A, Haddad S. A novel solution of using mixed reality in bowel and oral and maxillofacial surgical telepresence: 3D mean value cloning algorithm. Int J Med Robot 2021; 17:e2224. [PMID: 33426753] [DOI: 10.1002/rcs.2224]
Abstract
BACKGROUND AND AIM Most of the mixed reality models used in surgical telepresence suffer from discrepancies in the boundary area and spatial-temporal inconsistency due to illumination variation in the video frames. The aim of this work is to propose a new solution that helps produce a composite video by merging the augmented video of the surgery site and the virtual hand of the remote expert surgeon. The purpose of the proposed solution is to decrease the processing time and enhance the accuracy of the merged video by decreasing the overlay and visualization error and removing occlusion and artefacts. METHODOLOGY The proposed system enhances the mean-value cloning algorithm to maintain the spatial-temporal consistency of the final composite video. The enhanced algorithm includes three-dimensional mean-value coordinates and an improvised mean-value interpolant in the image cloning process, which helps to reduce the sawtooth, smudging and discolouration artefacts around the blending region. RESULTS The accuracy in terms of overlay error of the proposed solution is improved from 1.01 to 0.80 mm, whereas the accuracy in terms of visualization error is improved from 98.8% to 99.4%. The processing time is reduced from 0.211 s to 0.173 s. The processing time and the accuracy of the proposed solution are enhanced compared with the state-of-the-art solution. CONCLUSION Our solution helps make the object of interest consistent with the light intensity of the target image by adding the space distance, which helps maintain the spatial consistency in the final merged video.
Affiliation(s)
- Arjina Maharjan
- School of Computing and Mathematics, Charles Sturt University (CSU), Sydney, Australia
- Abeer Alsadoon
- School of Computing and Mathematics, Charles Sturt University (CSU), Sydney, Australia
- School of Computer Data and Mathematical Sciences, University of Western Sydney (UWS), Sydney, Australia
- School of Information Technology, Southern Cross University (SCU), Sydney, Australia
- Information Technology Department, Asia Pacific International College (APIC), Sydney, Australia
- P W C Prasad
- School of Computing and Mathematics, Charles Sturt University (CSU), Sydney, Australia
- Nada AlSallami
- Computer Science Department, Worcester State University, Massachusetts, USA
- Tarik A Rashid
- Computer Science and Engineering, University of Kurdistan Hewler, Erbil, KRG, Iraq
- Ahmad Alrubaie
- Faculty of Medicine, University of New South Wales, Sydney, Australia
- Sami Haddad
- Department of Oral and Maxillofacial Services, Greater Western Sydney Area Health Services, Australia
- Department of Oral and Maxillofacial Services, Central Coast Area Health, Australia
159. Fukutsu K, Saito M, Noda K, Murata M, Kase S, Shiba R, Isogai N, Asano Y, Hanawa N, Dohke M, Kase M, Ishida S. A Deep Learning Architecture for Vascular Area Measurement in Fundus Images. Ophthalmology Science 2021; 1:100004. [PMID: 36246007] [PMCID: PMC9560649] [DOI: 10.1016/j.xops.2021.100004]
Abstract
Purpose To develop a novel evaluation system for retinal vessel alterations caused by hypertension using a deep learning algorithm. Design Retrospective study. Participants Fundus photographs (n = 10 571) of health-check participants (n = 5598). Methods The participants were analyzed using a fully automatic architecture assisted by a deep learning system, and the total area of retinal arterioles and venules was assessed separately. The retinal vessels were extracted automatically from each photograph and categorized as arterioles or venules. Subsequently, the total arteriolar area (AA) and total venular area (VA) were measured. The correlations among AA, VA, age, systolic blood pressure (SBP), and diastolic blood pressure were analyzed. Six ophthalmologists manually evaluated the arteriovenous ratio (AVR) in fundus images (n = 102), and the correlation between the SBP and AVR was evaluated manually. Main Outcome Measures Total arteriolar area and VA. Results The deep learning algorithm demonstrated favorable properties of vessel segmentation and arteriovenous classification, comparable with pre-existing techniques. Using the algorithm, a significant positive correlation was found between AA and VA. Both AA and VA demonstrated negative correlations with age and blood pressure. Furthermore, the SBP showed a higher negative correlation with AA measured by the algorithm than with AVR. Conclusions The current data demonstrated that the retinal vascular area measured with the deep learning system could be a novel index of hypertension-related vascular changes.
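To make the reported measurements concrete, the sketch below shows how a total arteriolar area (AA) and venular area (VA) could be computed from binary artery and vein masks produced by a segmentation and classification network. The helper name vessel_areas and the pixel-to-millimetre scale are hypothetical; the study's own measurement pipeline is not reproduced here.

```python
import numpy as np

def vessel_areas(artery_mask: np.ndarray, vein_mask: np.ndarray, mm_per_pixel: float):
    """Total arteriolar area (AA) and venular area (VA) from binary masks (sketch)."""
    px_area = mm_per_pixel ** 2
    aa = artery_mask.astype(bool).sum() * px_area
    va = vein_mask.astype(bool).sum() * px_area
    return aa, va

# toy example with random masks standing in for the classifier output
rng = np.random.default_rng(0)
artery = rng.random((512, 512)) > 0.97
vein = rng.random((512, 512)) > 0.96
aa, va = vessel_areas(artery, vein, mm_per_pixel=0.01)
print(f"AA={aa:.2f} mm^2, VA={va:.2f} mm^2, AA/VA={aa / va:.2f}")
```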
Affiliation(s)
- Kanae Fukutsu
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Michiyuki Saito
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Kousuke Noda
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ocular Circulation and Metabolism, Hokkaido University, Sapporo, Japan
- Correspondence: Kousuke Noda, MD, PhD, Department of Ophthalmology, Hokkaido University Graduate School of Medicine, N-15, W-7, Kita-ku, Sapporo 060-8638, Japan.
- Miyuki Murata
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ocular Circulation and Metabolism, Hokkaido University, Sapporo, Japan
- Satoru Kase
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Manabu Kase
- Department of Ophthalmology, Teine Keijinkai Hospital, Sapporo, Japan
- Susumu Ishida
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ocular Circulation and Metabolism, Hokkaido University, Sapporo, Japan
160. Li X, Jiang Y, Li M, Yin S. Lightweight Attention Convolutional Neural Network for Retinal Vessel Image Segmentation. IEEE Transactions on Industrial Informatics 2021; 17:1958-1967. [DOI: 10.1109/tii.2020.2993842]
161. Bilal A, Sun G, Mazhar S. Survey on recent developments in automatic detection of diabetic retinopathy. J Fr Ophtalmol 2021; 44:420-440. [PMID: 33526268] [DOI: 10.1016/j.jfo.2020.08.009]
Abstract
Diabetic retinopathy (DR) is a disease facilitated by the rapid spread of diabetes worldwide. DR can blind diabetic individuals. Early detection of DR is essential to restoring vision and providing timely treatment. DR can be detected manually by an ophthalmologist, examining the retinal and fundus images to analyze the macula, morphological changes in blood vessels, hemorrhage, exudates, and/or microaneurysms. This is a time consuming, costly, and challenging task. An automated system can easily perform this function by using artificial intelligence, especially in screening for early DR. Recently, much state-of-the-art research relevant to the identification of DR has been reported. This article describes the current methods of detecting non-proliferative diabetic retinopathy, exudates, hemorrhage, and microaneurysms. In addition, the authors point out future directions in overcoming current challenges in the field of DR research.
Affiliation(s)
- A Bilal
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
- G Sun
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
- S Mazhar
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
162. Xie H, Tang C, Zhang W, Shen Y, Lei Z. Multi-scale retinal vessel segmentation using encoder-decoder network with squeeze-and-excitation connection and atrous spatial pyramid pooling. Applied Optics 2021; 60:239-249. [PMID: 33448945] [DOI: 10.1364/ao.409512]
Abstract
The segmentation of blood vessels in retinal images is crucial to the diagnosis of many diseases. We propose a deep learning method for vessel segmentation based on an encoder-decoder network combined with squeeze-and-excitation connection and atrous spatial pyramid pooling. In our implementation, the atrous spatial pyramid pooling allows the network to capture features at multiple scales, and the high-level semantic information is combined with low-level features through the encoder-decoder architecture to generate segmentations. Meanwhile, the squeeze-and-excitation connections in the proposed network can adaptively recalibrate features according to the relationship between different channels of features. The proposed network can achieve precise segmentation of retinal vessels without hand-crafted features or specific post-processing. The performance of our model is evaluated in terms of visual effects and quantitative evaluation metrics on two publicly available datasets of retinal images, the Digital Retinal Images for Vessel Extraction and Structured Analysis of the Retina datasets, with comparison to 12 representative methods. Furthermore, the proposed network is applied to vessel segmentation on local retinal images, which demonstrates promising application prospect in medical practices.
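The squeeze-and-excitation connection referred to here follows the standard channel-recalibration idea, sketched below in PyTorch under the usual formulation (global average pooling followed by a small bottleneck of fully connected layers). This is a generic SE block, not the authors' exact wiring, and the atrous spatial pyramid pooling part is not reproduced.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel recalibration (generic sketch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pooling
        return x * w.view(b, c, 1, 1)     # excite: per-channel re-weighting

x = torch.randn(2, 64, 56, 56)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 56, 56])
```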
163. Lin TH, Jhang JY, Huang CR, Tsai YC, Cheng HC, Sheu BS. Deep Ensemble Feature Network for Gastric Section Classification. IEEE J Biomed Health Inform 2021; 25:77-87. [PMID: 32750926] [DOI: 10.1109/jbhi.2020.2999731]
Abstract
In this paper, we propose a novel deep ensemble feature (DEF) network to classify gastric sections from endoscopic images. Different from recent deep ensemble learning methods, which need to train deep features and classifiers individually to obtain fused classification results, the proposed method can simultaneously learn the deep ensemble feature from arbitrary number of convolutional neural networks (CNNs) and the decision classifier in an end-to-end trainable manner. It comprises two sub networks, the ensemble feature network and the decision network. The former sub network learns the deep ensemble feature from multiple CNNs to represent endoscopic images. The latter sub network learns to obtain the classification labels by using the deep ensemble feature. Both sub networks are optimized based on the proposed ensemble feature loss and the decision loss which guide the learning of deep features and decisions. As shown in the experimental results, the proposed method outperforms the state-of-the-art deep learning, ensemble learning, and deep ensemble learning methods.
164. Retinal blood vessels segmentation using classical edge detection filters and the neural network. Informatics in Medicine Unlocked 2021. [DOI: 10.1016/j.imu.2021.100521]
165. Wang S, Yu L, Li K, Yang X, Fu CW, Heng PA. DoFE: Domain-Oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets. IEEE Transactions on Medical Imaging 2020; 39:4237-4248. [PMID: 32776876] [DOI: 10.1109/tmi.2020.3015224]
Abstract
Deep convolutional neural networks have significantly boosted the performance of fundus image segmentation when test datasets have the same distribution as the training datasets. However, in clinical practice, medical images often exhibit variations in appearance for various reasons, e.g., different scanner vendors and image quality. These distribution discrepancies could lead the deep networks to over-fit on the training datasets and lack generalization ability on the unseen test datasets. To alleviate this issue, we present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains by exploring the knowledge from multiple source domains. Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains to make the semantic features more discriminative. Specifically, we introduce a Domain Knowledge Pool to learn and memorize the prior information extracted from multi-source domains. Then the original image features are augmented with domain-oriented aggregated features, which are induced from the knowledge pool based on the similarity between the input image and multi-source domain images. We further design a novel domain code prediction branch to infer this similarity and employ an attention-guided mechanism to dynamically combine the aggregated features with the semantic features. We comprehensively evaluate our DoFE framework on two fundus image segmentation tasks, including the optic cup and disc segmentation and vessel segmentation. Our DoFE framework generates satisfying segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
166. Wang D, Haytham A, Pottenburgh J, Saeedi O, Tao Y. Hard Attention Net for Automatic Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2020; 24:3384-3396. [DOI: 10.1109/jbhi.2020.3002985]
167. Mookiah MRK, Hogg S, MacGillivray TJ, Prathiba V, Pradeepa R, Mohan V, Anjana RM, Doney AS, Palmer CNA, Trucco E. A review of machine learning methods for retinal blood vessel segmentation and artery/vein classification. Med Image Anal 2020; 68:101905. [PMID: 33385700] [DOI: 10.1016/j.media.2020.101905]
Abstract
The eye affords a unique opportunity to inspect a rich part of the human microvasculature non-invasively via retinal imaging. Retinal blood vessel segmentation and classification are prime steps for the diagnosis and risk assessment of microvascular and systemic diseases. A high volume of techniques based on deep learning have been published in recent years. In this context, we review 158 papers published between 2012 and 2020, focussing on methods based on machine and deep learning (DL) for automatic vessel segmentation and classification for fundus camera images. We divide the methods into various classes by task (segmentation or artery-vein classification), technique (supervised or unsupervised, deep and non-deep learning, hand-crafted methods) and more specific algorithms (e.g. multiscale, morphology). We discuss advantages and limitations, and include tables summarising results at-a-glance. Finally, we attempt to assess the quantitative merit of DL methods in terms of accuracy improvement compared to other methods. The results allow us to offer our views on the outlook for vessel segmentation and classification for fundus camera images.
Affiliation(s)
- Stephen Hogg
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
- Tom J MacGillivray
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh EH16 4SB, UK
- Vijayaraghavan Prathiba
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Rajendra Pradeepa
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Viswanathan Mohan
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Ranjit Mohan Anjana
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Alexander S Doney
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
- Colin N A Palmer
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
- Emanuele Trucco
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
168. Farahani A, Mohseni H. Medical image segmentation using customized U-Net with adaptive activation functions. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05396-3]
169. Palanivel DA, Natarajan S, Gopalakrishnan S. Retinal vessel segmentation using multifractal characterization. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106439]
170.
Abstract
The 2019 novel coronavirus (COVID-19) has spread rapidly all over the world. The standard test for screening COVID-19 patients is the polymerase chain reaction test. As this method is time-consuming, chest X-rays may be considered as an alternative for quick screening. However, specialization is required to read COVID-19 chest X-ray images, as their features vary. To address this, we present a multi-channel pre-trained ResNet architecture to facilitate the diagnosis of COVID-19 from chest X-rays. Three ResNet-based models were retrained to classify X-rays on a one-against-all basis from (a) normal or diseased, (b) pneumonia or non-pneumonia, and (c) COVID-19 or non-COVID-19 individuals. Finally, these three models were ensembled and fine-tuned using X-rays from 1579 normal, 4245 pneumonia, and 184 COVID-19 individuals to classify normal, pneumonia, and COVID-19 cases in a one-against-one framework. Our results show that the ensemble model is more accurate than the single model as it extracts more relevant semantic features for each class. The method provides a precision of 94% and a recall of 100%. It could potentially help clinicians in screening patients for COVID-19, thus facilitating immediate triaging and treatment for better outcomes.
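A minimal sketch of the one-against-all ensemble described above is given below: three binary ResNet branches whose scores are fused by a small decision head. The use of ResNet-18, the class EnsembleClassifier, and the linear fusion head are illustrative assumptions rather than the paper's exact backbones and fine-tuning schedule.

```python
import torch
import torch.nn as nn
from torchvision import models

def binary_resnet():
    """One-vs-all branch (normal/diseased, pneumonia/other, COVID-19/other)."""
    net = models.resnet18(weights=None)      # pretrained weights omitted in this sketch
    net.fc = nn.Linear(net.fc.in_features, 1)
    return net

class EnsembleClassifier(nn.Module):
    """Fuses the three one-vs-all scores into a 3-way decision (sketch)."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList([binary_resnet() for _ in range(3)])
        self.head = nn.Linear(3, 3)          # normal / pneumonia / COVID-19

    def forward(self, x):
        scores = torch.cat([b(x) for b in self.branches], dim=1)
        return self.head(scores)

print(EnsembleClassifier()(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 3])
```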
171. Mohammedhasan M, Uğuz H. A New Deeply Convolutional Neural Network Architecture for Retinal Blood Vessel Segmentation. Int J Pattern Recogn 2020. [DOI: 10.1142/s0218001421570019]
Abstract
This paper proposes a new deep convolutional neural network (CNN) architecture for segmenting retinal blood vessels automatically from fundus images. Automatic segmentation plays a substantial role in computer-aided diagnosis of retinal diseases; it is of considerable significance as eye diseases, as well as some other systemic diseases, give rise to perceivable pathologic changes. Retinal blood vessel segmentation is challenging because of the excessive changes in the morphology of the vessels on a noisy background. Previous deep learning-based supervised methods suffer from insufficient use of low-level features, which are advantageous in semantic segmentation tasks. The proposed architecture makes use of both high-level and low-level features to segment retinal blood vessels. Its major contribution concerns two important factors: the first is a highly modularized network architecture of aggregated residual connections, which allows learned layers to be copied from a shallower model and additional layers to be added as identity mappings; the second is improved utilization of computing resources within the network, achieved through a carefully crafted design that increases the depth and width of the network while keeping its computational budget stable. Experimental results show the effectiveness of using aggregated residual connections in segmenting retinal vessels more accurately and clearly. Compared with the best existing methods, the proposed method outperformed them on different measures, produced fewer false positives on fine vessels, and traced clearer lines with sufficient detail, close to the human annotator.
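The aggregated residual connections described above are in the spirit of ResNeXt-style blocks. The sketch below shows one such block with a grouped 3x3 convolution and an identity shortcut; the channel count and cardinality are arbitrary illustrative values, not the authors' configuration.

```python
import torch
import torch.nn as nn

class AggregatedResidualBlock(nn.Module):
    """Aggregated residual unit: grouped 3x3 convolution plus identity shortcut."""
    def __init__(self, channels: int, cardinality: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # identity mapping + aggregated paths

print(AggregatedResidualBlock(32)(torch.randn(1, 32, 64, 64)).shape)
```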
Affiliation(s)
- Mali Mohammedhasan
- Department of Computer Engineering, Selçuk Üniversitesi, Selçuklu, Konya 42130, Turkey
- Harun Uğuz
- Department of Computer Engineering, Selçuk Üniversitesi, Selçuklu, Konya 42130, Turkey
172. Khanal A, Estrada R. Dynamic Deep Networks for Retinal Vessel Segmentation. Frontiers in Computer Science 2020. [DOI: 10.3389/fcomp.2020.00035]
173. Accelerating ophthalmic artificial intelligence research: the role of an open access data repository. Curr Opin Ophthalmol 2020; 31:337-350. [PMID: 32740059] [DOI: 10.1097/icu.0000000000000678]
Abstract
PURPOSE OF REVIEW Artificial intelligence has already provided multiple clinically relevant applications in ophthalmology. Yet, the explosion of nonstandardized reporting of high-performing algorithms is rendered useless without robust and streamlined implementation guidelines. The development of protocols and checklists will accelerate the translation of research publications into impact on patient care. RECENT FINDINGS Beyond technological scepticism, we lack uniformity in analysing algorithmic performance and generalizability, and in benchmarking impact across clinical settings. No regulatory guardrails have been set to minimize bias or optimize interpretability; no consensus clinical acceptability thresholds or systematized post-deployment monitoring have been set. Moreover, stakeholders with misaligned incentives deepen the complexity of the landscape, especially when it comes to the requisite data integration and harmonization needed to advance the field. Therefore, despite increasing algorithmic accuracy and commoditization, the infamous 'implementation gap' persists. Open clinical data repositories have been shown to rapidly accelerate research, minimize redundancies and disseminate the expertise and knowledge required to overcome existing barriers. Drawing upon the longstanding success of existing governance frameworks and robust data use and sharing agreements, the ophthalmic community has a tremendous opportunity to usher artificial intelligence into medicine. By collaboratively building a powerful resource of open, anonymized multimodal ophthalmic data, the next generation of clinicians can advance data-driven eye care in unprecedented ways. SUMMARY This piece demonstrates that with readily accessible data, immense progress can be achieved clinically and methodologically to realize artificial intelligence's impact on clinical care. Exponentially progressive network effects can be seen by consolidating, curating and distributing data amongst both clinicians and data scientists.
174. Zhang Z, Wu C, Coleman S, Kerr D. DENSE-INception U-net for medical image segmentation. Computer Methods and Programs in Biomedicine 2020; 192:105395. [PMID: 32163817] [DOI: 10.1016/j.cmpb.2020.105395]
Abstract
BACKGROUND AND OBJECTIVE Convolutional neural networks (CNNs) play an important role in the field of medical image segmentation. Among many kinds of CNNs, the U-net architecture is one of the most famous fully convolutional network architectures for medical semantic segmentation tasks. Recent work shows that the U-net network can be substantially deeper, thus resulting in improved performance on segmentation tasks. Though adding more layers directly into the network is a popular way to make a network deeper, it may lead to gradient vanishing or redundant computation during training. METHODS A novel CNN architecture is proposed that integrates the Inception-Res module and a densely connecting convolutional module into the U-net architecture. The proposed network model consists of the following parts: firstly, the Inception-Res block is designed to increase the width of the network by replacing the standard convolutional layers; secondly, the Dense-Inception block is designed to extract features and make the network deeper without additional parameters; thirdly, the down-sampling block is adopted to reduce the size of feature maps to accelerate learning, and the up-sampling block is used to resize the feature maps. RESULTS The proposed model is tested on retinal blood vessel segmentation, lung segmentation of CT data from the benchmark Kaggle datasets, and MRI brain tumor segmentation from the MICCAI BraTS 2017 datasets. The experimental results show that the proposed method can provide better performance on these tasks compared with the state-of-the-art algorithms. The results reach an average Dice score of 0.9857 in lung segmentation, 0.9582 in blood vessel segmentation, and 0.9867 in brain tumor segmentation. CONCLUSIONS The experiments highlight that combining the Inception module with dense connections in the U-Net architecture is a promising approach for semantic medical image segmentation.
Affiliation(s)
- Ziang Zhang
- Faculty of Robot Science and Engineering, Northeastern University, 110004, Shenyang, Liaoning Province, China
- Chengdong Wu
- Faculty of Robot Science and Engineering, Northeastern University, 110004, Shenyang, Liaoning Province, China
- Sonya Coleman
- School of Computing, Engineering and Intelligent Systems, Ulster University, Londonderry, BT48 7JL, Northern Ireland, United Kingdom
- Dermot Kerr
- School of Computing, Engineering and Intelligent Systems, Ulster University, Londonderry, BT48 7JL, Northern Ireland, United Kingdom
175. Yang T, Wu T, Li L, Zhu C. SUD-GAN: Deep Convolution Generative Adversarial Network Combined with Short Connection and Dense Block for Retinal Vessel Segmentation. J Digit Imaging 2020; 33:946-957. [PMID: 32323089] [PMCID: PMC7522149] [DOI: 10.1007/s10278-020-00339-9]
Abstract
Since the morphology of retinal blood vessels plays a key role in ophthalmological disease diagnosis, retinal vessel segmentation is an indispensable step for the screening and diagnosis of retinal diseases from fundus images. In this paper, a deep convolutional adversarial network combined with short connections and dense blocks, named SUD-GAN, is proposed to separate blood vessels from fundus images. The generator adopts a U-shaped encoder-decoder structure and adds short connection blocks between convolution layers to prevent gradient dispersion caused by the deep convolutional network. The discriminator is composed entirely of convolution blocks, and a dense connection structure is added to the middle part of the convolutional network to strengthen the propagation of features and enhance the network's discrimination ability. The proposed method is evaluated on two publicly available databases, DRIVE and STARE. The results show that the proposed method achieves state-of-the-art sensitivity and specificity (0.8340 and 0.9820 on DRIVE, and 0.8334 and 0.9897 on STARE), and can detect more tiny vessels and locate the edges of blood vessels more accurately.
Affiliation(s)
- Tiejun Yang
- Key Laboratory of Grain Information Processing and Control (Henan University of Technology), Ministry of Education, Zhengzhou, 450001 China
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, 450001 China
- Tingting Wu
- College of Information Science and Technology, Henan University of Technology, Zhengzhou, 450001 China
- Lei Li
- College of Information Science and Technology, Henan University of Technology, Zhengzhou, 450001 China
- Chunhua Zhu
- College of Information Science and Technology, Henan University of Technology, Zhengzhou, 450001 China
176. Zhou Y, Yen GG, Yi Z. Evolutionary Compression of Deep Neural Networks for Biomedical Image Segmentation. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:2916-2929. [PMID: 31536016] [DOI: 10.1109/tnnls.2019.2933879]
Abstract
Biomedical image segmentation is lately dominated by deep neural networks (DNNs) due to their surpassing expert-level performance. However, the existing DNN models for biomedical image segmentation are generally highly parameterized, which severely impede their deployment on real-time platforms and portable devices. To tackle this difficulty, we propose an evolutionary compression method (ECDNN) to automatically discover efficient DNN architectures for biomedical image segmentation. Different from the existing studies, ECDNN can optimize network loss and number of parameters simultaneously during the evolution, and search for a set of Pareto-optimal solutions in a single run, which is useful for quantifying the tradeoff in satisfying different objectives, and flexible for compressing DNN when preference information is uncertain. In particular, a set of novel genetic operators is proposed for automatically identifying less important filters over the whole network. Moreover, a pruning operator is designed for eliminating convolutional filters from layers involved in feature map concatenation, which is commonly adopted in DNN architectures for capturing multi-level features from biomedical images. Experiments carried out on compressing DNN for retinal vessel and neuronal membrane segmentation tasks show that ECDNN can not only improve the performance without any retraining but also discover efficient network architectures that well maintain the performance. The superiority of the proposed method is further validated by comparison with the state-of-the-art methods.
177. A Multi-Scale Feature Fusion Method Based on U-Net for Retinal Vessel Segmentation. Entropy 2020; 22:e22080811. [PMID: 33286584] [PMCID: PMC7517387] [DOI: 10.3390/e22080811]
Abstract
Computer-aided automatic segmentation of retinal blood vessels plays an important role in the diagnosis of diseases such as diabetes, glaucoma, and macular degeneration. In this paper, we propose a multi-scale feature fusion retinal vessel segmentation model based on U-Net, named MSFFU-Net. The model introduces the Inception structure into the multi-scale feature extraction encoder part, and the max-pooling index is applied during the upsampling process in the feature fusion decoder of an improved network. Skip-layer connections are used to transfer each set of feature maps generated on the encoder path to the corresponding feature maps on the decoder path. Moreover, a cost-sensitive loss function based on the Dice coefficient and cross-entropy is designed. Four transformations (rotation, mirroring, shifting and cropping) are used as data augmentation strategies, and the CLAHE algorithm is applied to image preprocessing. The proposed framework is trained and tested on DRIVE and STARE, and sensitivity (Sen), specificity (Spe), accuracy (Acc), and area under the curve (AUC) are adopted as the evaluation metrics. Detailed comparisons with the U-Net model verify the effectiveness and robustness of the proposed model. Sen of 0.7762 and 0.7721, Spe of 0.9835 and 0.9885, Acc of 0.9694 and 0.9537, and AUC values of 0.9790 and 0.9680 were achieved on the DRIVE and STARE databases, respectively. The results are also compared with other state-of-the-art methods, demonstrating that the performance of the proposed method is superior and competitive.
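The cost-sensitive loss based on the Dice coefficient and cross-entropy can be sketched as a weighted sum of a soft-Dice term and a binary cross-entropy term, as below. The equal weighting and the soft-Dice formulation are assumptions; the paper's exact cost-sensitive weighting is not reproduced.

```python
import torch
import torch.nn as nn

class DiceBCELoss(nn.Module):
    """Weighted combination of soft Dice and binary cross-entropy (sketch)."""
    def __init__(self, dice_weight: float = 0.5, eps: float = 1e-6):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.w, self.eps = dice_weight, eps

    def forward(self, logits, target):
        prob = torch.sigmoid(logits)
        inter = (prob * target).sum()
        dice = (2 * inter + self.eps) / (prob.sum() + target.sum() + self.eps)
        return self.w * (1 - dice) + (1 - self.w) * self.bce(logits, target)

loss_fn = DiceBCELoss()
logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.9).float()
print(loss_fn(logits, target).item())
```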
178. Pachade S, Porwal P, Kokare M, Giancardo L, Meriaudeau F. Retinal vasculature segmentation and measurement framework for color fundus and SLO images. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.03.001]
179. Semi-Supervised Learning Method of U-Net Deep Learning Network for Blood Vessel Segmentation in Retinal Images. Symmetry (Basel) 2020. [DOI: 10.3390/sym12071067]
Abstract
Blood vessel segmentation methods based on deep neural networks have achieved satisfactory results. However, these methods are usually supervised learning methods, which require large numbers of retinal images with high quality pixel-level ground-truth labels. In practice, the task of labeling these retinal images is very costly, financially and in human effort. To deal with these problems, we propose a semi-supervised learning method which can be used in blood vessel segmentation with limited labeled data. In this method, we use the improved U-Net deep learning network to segment the blood vessel tree. On this basis, we implement the U-Net network-based training dataset updating strategy. A large number of experiments are presented to analyze the segmentation performance of the proposed semi-supervised learning method. The experiment results demonstrate that the proposed methodology is able to avoid the problems of insufficient hand-labels, and achieve satisfactory performance.
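The training-dataset updating strategy mentioned above is in the spirit of self-training with pseudo-labels. The sketch below shows one plausible update round for a model producing single-channel vessel logits; the confidence thresholds, acceptance rule, and function name update_training_set are illustrative assumptions, not the authors' exact strategy.

```python
import torch
import torch.nn as nn

def update_training_set(model, labeled, unlabeled, threshold=0.95):
    """Promote confident predictions on unlabeled images to pseudo-labels (sketch)."""
    model.eval()
    promoted = []
    with torch.no_grad():
        for img in unlabeled:
            prob = torch.sigmoid(model(img.unsqueeze(0)))[0]       # (1, H, W)
            confident = (prob > threshold) | (prob < 1 - threshold)
            if confident.float().mean() > 0.9:                     # keep only reliable maps
                promoted.append((img, (prob > 0.5).float()))
    return labeled + promoted                                      # enlarged training set

# toy run with a 1x1 convolution standing in for the improved U-Net
toy_model = nn.Conv2d(1, 1, kernel_size=1)
unlabeled = [torch.rand(1, 32, 32) for _ in range(4)]
print(len(update_training_set(toy_model, labeled=[], unlabeled=unlabeled)))
```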
180. Hervella ÁS, Rouco J, Novo J, Ortega M. Learning the retinal anatomy from scarce annotated data using self-supervised multimodal reconstruction. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106210]
181. NFN+: A novel network followed network for retinal vessel segmentation. Neural Netw 2020; 126:153-162. [DOI: 10.1016/j.neunet.2020.02.018]
182. Retinal Blood Vessel Segmentation Using Hybrid Features and Multi-Layer Perceptron Neural Networks. Symmetry (Basel) 2020. [DOI: 10.3390/sym12060894]
Abstract
Segmentation of retinal blood vessels is the first step for several computer-aided diagnosis (CAD) systems, not only for ocular diseases such as diabetic retinopathy (DR) but also for non-ocular diseases such as hypertension, stroke and cardiovascular disease. In this paper, a supervised learning-based method, using a multi-layer perceptron neural network and a carefully selected vector of features, is proposed. In particular, for each pixel of a retinal fundus image, we construct a 24-D feature vector encoding information on the local intensity, morphological transformations, principal moments of phase congruency, Hessian, and difference-of-Gaussian values. A post-processing technique based on mathematical morphology operators is used to optimize the segmentation. Moreover, the selected feature vector captures the symmetric features that provide the final blood vessel probability as a binary map image. The proposed method is tested on three well-known datasets: Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE), and CHASE_DB1. The experimental results, both visual and quantitative, testify to the robustness of the proposed method, which achieved accuracy, sensitivity, and specificity of 0.9607, 0.7542, and 0.9843 on DRIVE; 0.9632, 0.7806, and 0.9825 on STARE; and 0.9577, 0.7585, and 0.9846 on CHASE_DB1. Furthermore, they show that the method is superior to seven similar state-of-the-art methods.
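A reduced version of the per-pixel feature construction and MLP classification described above can be sketched as follows. Only a handful of the 24 features (intensity, difference-of-Gaussian, and Hessian responses at two scales) are computed, and the scikit-learn MLP with its layer sizes is an illustrative stand-in for the authors' network.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.neural_network import MLPClassifier

def pixel_features(green: np.ndarray) -> np.ndarray:
    """Per-pixel feature vectors from the green channel (reduced sketch)."""
    feats = [green]
    for s in (1.0, 2.0):
        feats += [gaussian_filter(green, s) - gaussian_filter(green, 2 * s),  # DoG
                  gaussian_filter(green, s, order=(2, 0)),                    # Hessian dxx
                  gaussian_filter(green, s, order=(0, 2))]                    # Hessian dyy
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

# toy training run on a random "image" and label map
rng = np.random.default_rng(0)
img = rng.random((64, 64))
labels = (rng.random((64, 64)) > 0.9).astype(int).ravel()
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=200)
clf.fit(pixel_features(img), labels)
print(clf.predict(pixel_features(img)).shape)  # (4096,)
```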
183. Feng S, Zhuo Z, Pan D, Tian Q. CcNet: A cross-connected convolutional network for segmenting retinal vessels using multi-scale features. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.10.098]
184. Mou L, Chen L, Cheng J, Gu Z, Zhao Y, Liu J. Dense Dilated Network With Probability Regularized Walk for Vessel Detection. IEEE Transactions on Medical Imaging 2020; 39:1392-1403. [PMID: 31675323] [DOI: 10.1109/tmi.2019.2950051]
Abstract
The detection of retinal vessel is of great importance in the diagnosis and treatment of many ocular diseases. Many methods have been proposed for vessel detection. However, most of the algorithms neglect the connectivity of the vessels, which plays an important role in the diagnosis. In this paper, we propose a novel method for retinal vessel detection. The proposed method includes a dense dilated network to get an initial detection of the vessels and a probability regularized walk algorithm to address the fracture issue in the initial detection. The dense dilated network integrates newly proposed dense dilated feature extraction blocks into an encoder-decoder structure to extract and accumulate features at different scales. A multi-scale Dice loss function is adopted to train the network. To improve the connectivity of the segmented vessels, we also introduce a probability regularized walk algorithm to connect the broken vessels. The proposed method has been applied on three public data sets: DRIVE, STARE and CHASE_DB1. The results show that the proposed method outperforms the state-of-the-art methods in accuracy, sensitivity, specificity and also area under receiver operating characteristic curve.
185.
186. Shukla AK, Pandey RK, Pachori RB. A fractional filter based efficient algorithm for retinal blood vessel segmentation. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101883]
187. Cheng YL, Ma MN, Zhang LJ, Jin CJ, Ma L, Zhou Y. Retinal blood vessel segmentation based on Densely Connected U-Net. Mathematical Biosciences and Engineering 2020; 17:3088-3108. [PMID: 32987518] [DOI: 10.3934/mbe.2020175]
Abstract
The segmentation of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. This paper proposes a new architecture of the U-Net network for retinal blood vessel segmentation. Adding dense blocks to the U-Net network makes each layer's input come from the outputs of all previous layers, which improves the segmentation accuracy of small blood vessels. The effectiveness of the proposed method has been evaluated on two public datasets (DRIVE and CHASE_DB1). The obtained results (DRIVE: Acc = 0.9559, AUC = 0.9793; CHASE_DB1: Acc = 0.9488, AUC = 0.9785) demonstrate the better performance of the proposed method compared with the state-of-the-art methods. The results also show that our method achieves better segmentation of small blood vessels and can be helpful in evaluating related ophthalmic diseases.
Affiliation(s)
- Yin Lin Cheng
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510006, China
- Department of Medical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510006, China
- Meng Nan Ma
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510006, China
- Department of Medical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510006, China
- Liang Jun Zhang
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510006, China
- Chen Jin Jin
- Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510006, China
- Li Ma
- Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510006, China
- Yi Zhou
- Department of Medical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510006, China
188. Zhou C, Zhang X, Chen H. A new robust method for blood vessel segmentation in retinal fundus images based on weighted line detector and hidden Markov model. Computer Methods and Programs in Biomedicine 2020; 187:105231. [PMID: 31786454] [DOI: 10.1016/j.cmpb.2019.105231]
Abstract
BACKGROUND AND OBJECTIVE Automatic vessel segmentation is a crucial preliminary processing step to facilitate ophthalmologist diagnosis of some diseases. However, due to the complexity of retinal fundus images, accurate segmentation of retinal vessels remains difficult. In this paper, a new method for retinal vessel segmentation is proposed to handle two main problems: missing thin vessels and false detection in difficult regions. METHODS First, an improved line detector is proposed and used to quickly extract the major structures of vessels. Then, a hidden Markov model (HMM) is applied to effectively detect vessel centerlines, including those of thin vessels. Finally, a denoising approach is presented to remove noise, and the two types of vessels are unified to obtain the complete segmentation results. RESULTS Our method is tested on two public databases (DRIVE and STARE), and six measures, namely accuracy (Acc), sensitivity (Se), specificity (Sp), Dice coefficient (Dc), structural similarity index (SSIM) and feature similarity index (FSIM), are used to evaluate segmentation performance. The respective values of the performance measures are 0.9475, 0.7262, 0.9803, 0.7781, 0.9992 and 0.9793 for the DRIVE dataset and 0.9535, 0.7865, 0.9730, 0.7764, 0.9987 and 0.9742 for the STARE dataset. CONCLUSIONS The experimental results show that our method outperforms most published state-of-the-art methods and is better than the result of a human observer. Moreover, in terms of specificity, our proposed algorithm obtains the best score among the unsupervised methods. Meanwhile, there is excellent structural and feature similarity between our results and the ground truth according to the achieved SSIM and FSIM. Visual inspection of the segmentation results shows that the proposed method produces more accurate segmentations in difficult regions such as the optic disc and central light reflex, while detecting thin vessels effectively compared with the other methods.
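The weighted line detector itself is not reproduced in this listing; a simplified, single-scale sketch of a basic line-detector response (mean along an oriented line minus mean of the surrounding window, maximised over orientations), which the paper refines with weighting and the HMM stage, might look like this (line length and angle count are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter, convolve

def line_kernel(length, angle_deg):
    """Normalized kernel with ones along a line of the given length and orientation."""
    k = np.zeros((length, length))
    c = length // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-c, c, 2 * length):
        row = int(round(c + t * np.sin(theta)))
        col = int(round(c + t * np.cos(theta)))
        if 0 <= row < length and 0 <= col < length:
            k[row, col] = 1.0
    return k / k.sum()

def line_detector_response(green_inverted, length=15, n_angles=12):
    """Max over orientations of (mean along line - mean of surrounding window)."""
    window_mean = uniform_filter(green_inverted, size=length)
    responses = []
    for angle in np.arange(0, 180, 180 / n_angles):
        line_mean = convolve(green_inverted, line_kernel(length, angle))
        responses.append(line_mean - window_mean)
    return np.max(np.stack(responses), axis=0)
```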
Collapse
Affiliation(s)
- Chao Zhou
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082 China.
| | - Xiaogang Zhang
- College of Electrical and Information Engineering, Hunan University, Changsha, 410082 China.
| | - Hua Chen
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082 China.
| |
Collapse
|
189
|
|
190
|
Dash S, Senapati MR. Enhancing detection of retinal blood vessels by combined approach of DWT, Tyler Coye and Gamma correction. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101740] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
191
|
Khawaja A, Khan TM, Khan MAU, Nawaz SJ. A Multi-Scale Directional Line Detector for Retinal Vessel Segmentation. SENSORS (BASEL, SWITZERLAND) 2019; 19:E4949. [PMID: 31766276 PMCID: PMC6891360 DOI: 10.3390/s19224949] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/09/2019] [Revised: 11/02/2019] [Accepted: 11/08/2019] [Indexed: 11/16/2022]
Abstract
The assessment of transformations in the retinal vascular structure has strong potential for indicating a wide range of underlying ocular pathologies. Correctly identifying the retinal vessel map is a crucial step in disease identification, severity progression assessment, and appropriate treatment. Marking the vessels manually by a human expert is a tedious and time-consuming task, reinforcing the need for automated algorithms capable of quick segmentation of retinal features and any possible anomalies. Techniques based on unsupervised learning methods utilize vessel morphology to classify vessel pixels. This study proposes a directional multi-scale line detector technique for the segmentation of retinal vessels, with the prime focus on the tiny vessels that are most difficult to segment. Constructing a directional line detector and applying it to images containing only the features oriented along the detector's direction significantly improves the detection accuracy of the algorithm. The finishing step involves a binarization operation that is again directional in nature and helps achieve further performance improvements in terms of key performance indicators. The proposed method obtains a sensitivity of 0.8043, 0.8011, and 0.7974 for the Digital Retinal Images for Vessel Extraction (DRIVE), STructured Analysis of the Retina (STARE), and Child Heart And health Study in England (CHASE_DB1) datasets, respectively. These results, along with other performance enhancements demonstrated by the conducted experimental evaluation, establish the validity and applicability of directional multi-scale line detectors as a competitive framework for retinal image segmentation.
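The exact directional binarization used by this method is not given here; a rough sketch of one way a per-direction binarization could be combined, assuming one line-detector response map per orientation and Otsu thresholding (both assumptions), is:

```python
import numpy as np
from skimage.filters import threshold_otsu

def directional_binarization(directional_responses):
    """OR together per-direction binary maps, each thresholded independently.

    directional_responses: list of 2-D float arrays, one line-detector response
    map per orientation (e.g. produced at several line lengths and combined).
    """
    vessel_mask = np.zeros_like(directional_responses[0], dtype=bool)
    for response in directional_responses:
        vessel_mask |= response > threshold_otsu(response)
    return vessel_mask
```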
Collapse
Affiliation(s)
- Ahsan Khawaja
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan; (T.M.K.); (S.J.N.)
| | - Tariq M. Khan
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan; (T.M.K.); (S.J.N.)
| | | | - Syed Junaid Nawaz
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan; (T.M.K.); (S.J.N.)
| |
Collapse
|
192
|
Multiloss Function Based Deep Convolutional Neural Network for Segmentation of Retinal Vasculature into Arterioles and Venules. BIOMED RESEARCH INTERNATIONAL 2019; 2019:4747230. [PMID: 31111055 PMCID: PMC6487175 DOI: 10.1155/2019/4747230] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2018] [Revised: 02/20/2019] [Accepted: 03/20/2019] [Indexed: 02/02/2023]
Abstract
The classification of the retinal vasculature into arterioles and venules (AV) is considered the first step in the development of an automated system for analysing the association of vasculature biomarkers with disease prognosis. Most of the existing AV classification methods depend on accurate segmentation of the retinal blood vessels. Moreover, the unavailability of large-scale annotated data is a major hindrance to the application of deep learning techniques for AV classification. This paper presents an encoder-decoder based fully convolutional neural network for classification of the retinal vasculature into arterioles and venules, without requiring the preliminary step of vessel segmentation. An optimized multiloss function is used to learn the pixel-wise and segment-wise retinal vessel labels. The proposed method is trained and evaluated on DRIVE, AVRDB, and a newly created AV classification dataset, attaining 96%, 98%, and 97% accuracy, respectively. The new AV classification dataset comprises 700 annotated retinal images and will offer researchers a benchmark against which to compare their AV classification results.
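The "optimized multiloss function" is described only at a high level; one hedged way to combine a pixel-wise term with a segment-wise (region-overlap) term for background/artery/vein labels in PyTorch, with the weighting factor chosen arbitrarily, is:

```python
import torch
import torch.nn.functional as F

def av_multiloss(logits, labels, alpha=0.5, eps=1e-6):
    """Pixel-wise cross-entropy plus a soft-Dice term averaged over classes.

    logits: (N, C, H, W) raw class scores (e.g. background/artery/vein)
    labels: (N, H, W) integer class map
    """
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(labels, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + one_hot.sum(dim=(2, 3))
    dice = 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
    return alpha * ce + (1.0 - alpha) * dice
```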
Collapse
|
193
|
Liu X, Guo S, Yang B, Ma S, Zhang H, Li J, Sun C, Jin L, Li X, Yang Q, Fu Y. Automatic Organ Segmentation for CT Scans Based on Super-Pixel and Convolutional Neural Networks. J Digit Imaging 2019; 31:748-760. [PMID: 29679242 DOI: 10.1007/s10278-018-0052-4] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Accurate segmentation of specific organs from computed tomography (CT) scans is a basic and crucial task for accurate diagnosis and treatment. To avoid time-consuming manual optimization and to help physicians distinguish diseases, an automatic organ segmentation framework is presented. The framework utilizes convolutional neural networks (CNNs) to classify pixels. To reduce redundant inputs, simple linear iterative clustering (SLIC) of super-pixels and a support vector machine (SVM) classifier are introduced. To establish precise organ boundaries at the single-pixel level, pixels are classified step by step. First, SLIC is used to cut an image into grids and extract the respective digital signatures. Next, the signatures are classified by the SVM, and rough edges are acquired. Finally, a precise boundary is obtained by the CNN, which operates on patches around each pixel. The framework is applied to abdominal CT scans of livers and high-resolution computed tomography (HRCT) scans of lungs. The experimental CT scans are derived from two public datasets (Sliver 07 and a Chinese local dataset). Experimental results show that the proposed method can precisely and efficiently detect the organs. The method takes 38 s/slice for liver segmentation. The Dice coefficient of the liver segmentation results reaches 97.43%; for lung segmentation, the Dice coefficient is 97.93%. This finding demonstrates that the proposed framework is a favorable method for lung segmentation of HRCT scans.
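A minimal sketch of the superpixel-then-SVM stage follows; it is not the authors' implementation, and the feature choice, segment count, and SVM settings are assumptions:

```python
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops
from sklearn.svm import SVC

def superpixel_features(ct_slice, n_segments=400):
    """Cut a grayscale CT slice into SLIC superpixels and compute a simple signature per region."""
    segments = slic(ct_slice, n_segments=n_segments, compactness=10,
                    channel_axis=None, start_label=1)
    feats = []
    for region in regionprops(segments, intensity_image=ct_slice):
        feats.append([region.mean_intensity, region.area,
                      region.centroid[0], region.centroid[1]])
    return segments, np.array(feats)

# Training (labels marking organ-overlapping superpixels are assumed to be available):
#   svm = SVC(kernel="rbf").fit(train_feats, labels)
# The superpixels the SVM accepts give the rough edge, which a patch-based CNN then refines.
```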
Collapse
Affiliation(s)
- Xiaoming Liu
- College of Electronic Science & Engineering, Jilin University, D451 Room of Tangaoqing Building, No. 2699 of Qianjin Street, Changchun, Jilin, China
| | - Shuxu Guo
- College of Electronic Science & Engineering, Jilin University, D451 Room of Tangaoqing Building, No. 2699 of Qianjin Street, Changchun, Jilin, China
| | - Bingtao Yang
- College of Communication Engineering, Jilin University, Changchun, 130012, China
| | - Shuzhi Ma
- LUSTER LightTech Group, Beijing, China
| | - Huimao Zhang
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
| | - Jing Li
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
| | - Changjian Sun
- College of Electronic Science & Engineering, Jilin University, D451 Room of Tangaoqing Building, No. 2699 of Qianjin Street, Changchun, Jilin, China
| | - Lanyi Jin
- College of Electronic Science & Engineering, Jilin University, D451 Room of Tangaoqing Building, No. 2699 of Qianjin Street, Changchun, Jilin, China
| | - Xueyan Li
- College of Electronic Science & Engineering, Jilin University, D451 Room of Tangaoqing Building, No. 2699 of Qianjin Street, Changchun, Jilin, China.
| | - Qi Yang
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
| | - Yu Fu
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
| |
Collapse
|
194
|
Kassim YM, Maude RJ, Palaniappan K. Sensitivity of Cross-Trained Deep CNNs for Retinal Vessel Extraction. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:2736-2739. [PMID: 30440967 DOI: 10.1109/embc.2018.8512764] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Automatic segmentation of the vascular network is a critical step in quantitatively characterizing vessel remodeling in retinal images and other tissues. We propose a 14-layer deep learning architecture to extract blood vessels from fundoscopy images in the popular standard datasets DRIVE and STARE. Experimental results show that our CNN is characterized by superior identification of the foreground vessel regions. It produces results with sensitivity 10% higher than other methods when trained on the same dataset, and more than 1% higher with cross-training (trained on DRIVE, tested on STARE, and vice versa). Further, our results achieve better accuracy (>0.95) compared to state-of-the-art algorithms.
Collapse
|
195
|
Cherukuri V, G VKB, Bala R, Monga V. Deep Retinal Image Segmentation with Regularization Under Geometric Priors. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 29:2552-2567. [PMID: 31613766 DOI: 10.1109/tip.2019.2946078] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Vessel segmentation of retinal images is a key diagnostic capability in ophthalmology. This problem faces several challenges including low contrast, variable vessel size and thickness, and presence of interfering pathology such as micro-aneurysms and hemorrhages. Early approaches addressing this problem employed hand-crafted filters to capture vessel structures, accompanied by morphological post-processing. More recently, deep learning techniques have been employed with significantly enhanced segmentation accuracy. We propose a novel domain enriched deep network that consists of two components: 1) a representation network that learns geometric features specific to retinal images, and 2) a custom designed computationally efficient residual task network that utilizes the features obtained from the representation layer to perform pixel-level segmentation. The representation and task networks are jointly learned for any given training set. To obtain physically meaningful and practically effective representation filters, we propose two new constraints that are inspired by expected prior structure on these filters: 1) orientation constraint that promotes geometric diversity of curvilinear features, and 2) a data adaptive noise regularizer that penalizes false positives. Multi-scale extensions are developed to enable accurate detection of thin vessels. Experiments performed on three challenging benchmark databases under a variety of training scenarios show that the proposed prior guided deep network outperforms state of the art alternatives as measured by common evaluation metrics, while being more economical in network size and inference time.
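The orientation constraint and data-adaptive noise regularizer are only summarized above; as a loose sketch of one way to encourage geometric (orientation) diversity among representation-layer filters by penalizing pairwise similarity — the penalty form is an assumption, not the paper's exact formulation — one could write:

```python
import torch
import torch.nn.functional as F

def orientation_diversity_penalty(filters):
    """Penalize pairwise cosine similarity between first-layer filters.

    filters: (K, 1, h, w) weight tensor of the representation layer.
    The penalty is small when the filters are mutually dissimilar, which
    loosely encourages a diverse bank of oriented, curvilinear kernels.
    """
    flat = F.normalize(filters.flatten(start_dim=1), dim=1)   # (K, h*w) unit vectors
    gram = flat @ flat.t()                                     # pairwise cosine similarities
    off_diag = gram - torch.eye(gram.shape[0], device=gram.device)
    denom = gram.shape[0] * max(gram.shape[0] - 1, 1)
    return (off_diag ** 2).sum() / denom
```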
Collapse
|
196
|
DCCMED-Net: Densely connected and concatenated multi Encoder-Decoder CNNs for retinal vessel extraction from fundus images. Med Hypotheses 2019; 134:109426. [PMID: 31622926 DOI: 10.1016/j.mehy.2019.109426] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Accepted: 10/09/2019] [Indexed: 11/22/2022]
Abstract
Recent studies have shown that convolutional neural networks (CNNs) can be more accurate, more efficient, and even deeper to train if they include direct connections from the layers close to the input to those close to the output in order to transfer activation maps. Building on this observation, this study introduces a new CNN model, namely the Densely Connected and Concatenated Multi Encoder-Decoder (DCCMED) network. DCCMED contains concatenated multi encoder-decoder CNNs and connects certain layers to the corresponding input of the subsequent encoder-decoder block in a feed-forward fashion, for retinal vessel extraction from fundus images. The DCCMED model has advantageous properties such as reducing pixel vanishing and encouraging feature reuse. A patch-based data augmentation strategy is also developed for training the proposed DCCMED model, which increases the generalization ability of the network. Experiments are carried out on two publicly available datasets, namely Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE). Evaluation criteria such as sensitivity (Se), specificity (Sp), accuracy (Acc), Dice and area under the receiver operating characteristic curve (AUC) are used to verify the effectiveness of the proposed method. The obtained results are compared with several supervised and unsupervised state-of-the-art methods based on AUC scores, and demonstrate that the proposed DCCMED model yields the best performance compared with the state-of-the-art methods according to accuracy and AUC scores.
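The patch-based augmentation strategy is described only in outline; a simple sketch of random patch extraction with flips and right-angle rotations (patch size, counts, and transform set are assumptions) is:

```python
import numpy as np

def random_patches(image, mask, patch_size=48, n_patches=200, rng=None):
    """Yield randomly cropped, rotated and flipped (patch, mask) pairs from one fundus image."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    for _ in range(n_patches):
        y = rng.integers(0, h - patch_size)
        x = rng.integers(0, w - patch_size)
        p = image[y:y + patch_size, x:x + patch_size].copy()
        m = mask[y:y + patch_size, x:x + patch_size].copy()
        k = rng.integers(0, 4)                 # rotate by 0, 90, 180 or 270 degrees
        p, m = np.rot90(p, k), np.rot90(m, k)
        if rng.integers(0, 2):                 # random horizontal flip
            p, m = np.fliplr(p), np.fliplr(m)
        yield p, m
```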
Collapse
|
197
|
Yue K, Zou B, Chen Z, Liu Q. Retinal vessel segmentation using dense U-net with multiscale inputs. J Med Imaging (Bellingham) 2019; 6:034004. [PMID: 31572745 DOI: 10.1117/1.jmi.6.3.034004] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2019] [Accepted: 08/30/2019] [Indexed: 11/14/2022] Open
Abstract
A color fundus image is an image of the inner wall of the eyeball taken with a fundus camera. Doctors can observe retinal vessel changes in such images, and these changes can be used to diagnose many serious diseases such as atherosclerosis, glaucoma, and age-related macular degeneration. Automated segmentation of retinal vessels can facilitate more efficient diagnosis of these diseases. We propose an improved U-Net architecture to segment retinal vessels. A multiscale input layer and dense blocks are introduced into the conventional U-Net, so that the network can make use of richer spatial context information. The proposed method is evaluated on the public dataset DRIVE, achieving 0.8199 in sensitivity and 0.9561 in accuracy. Especially for thin blood vessels, which are difficult to detect because of their low contrast with the background pixels, the segmentation results are improved.
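A hedged sketch of the multiscale-input idea (downsampled copies of the image injected at deeper encoder stages) follows; the channel widths, depth, and pooling choices are arbitrary and for illustration only:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MultiScaleInputEncoder(nn.Module):
    """Encoder whose deeper stages also see a downsampled copy of the raw input."""
    def __init__(self, in_ch=1, widths=(32, 64, 128)):
        super().__init__()
        self.enc1 = conv_block(in_ch, widths[0])
        self.enc2 = conv_block(widths[0] + in_ch, widths[1])   # + 1/2-scale input
        self.enc3 = conv_block(widths[1] + in_ch, widths[2])   # + 1/4-scale input
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        f1 = self.enc1(x)
        x2 = F.avg_pool2d(x, 2)                      # half-resolution copy of the input
        f2 = self.enc2(torch.cat([self.pool(f1), x2], dim=1))
        x4 = F.avg_pool2d(x, 4)                      # quarter-resolution copy of the input
        f3 = self.enc3(torch.cat([self.pool(f2), x4], dim=1))
        return f1, f2, f3                            # skip features for a U-Net decoder
```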
Collapse
Affiliation(s)
- Kejuan Yue
- Central South University, School of Computer Science and Engineering, Changsha, China.,Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China.,Hunan First Normal University, School of Information Science and Engineering, Changsha, China
| | - Beiji Zou
- Central South University, School of Computer Science and Engineering, Changsha, China.,Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
| | - Zailiang Chen
- Central South University, School of Computer Science and Engineering, Changsha, China.,Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
| | - Qing Liu
- Central South University, School of Computer Science and Engineering, Changsha, China.,Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
| |
Collapse
|
198
|
Arsalan M, Owais M, Mahmood T, Cho SW, Park KR. Aiding the Diagnosis of Diabetic and Hypertensive Retinopathy Using Artificial Intelligence-Based Semantic Segmentation. J Clin Med 2019; 8:E1446. [PMID: 31514466 PMCID: PMC6780110 DOI: 10.3390/jcm8091446] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2019] [Revised: 09/04/2019] [Accepted: 09/07/2019] [Indexed: 12/13/2022] Open
Abstract
Automatic segmentation of retinal images is an important task in computer-assisted medical image analysis for the diagnosis of diseases such as hypertension, diabetic and hypertensive retinopathy, and arteriosclerosis. Among these diseases, diabetic retinopathy, a leading cause of vision loss, can be diagnosed early through the detection of retinal vessels. The manual detection of these retinal vessels is a time-consuming process that can be automated with the help of artificial intelligence and deep learning. The detection of vessels is difficult due to intensity variation and noise from non-ideal imaging. Although there are deep learning approaches for vessel segmentation, these methods require many trainable parameters, which increases network complexity. To address these issues, this paper presents a dual-residual-stream-based vessel segmentation network (Vess-Net), which is not as deep as conventional semantic segmentation networks but provides good segmentation with few trainable parameters and layers. The method takes advantage of artificial intelligence for semantic segmentation to aid the diagnosis of retinopathy. To evaluate the proposed Vess-Net method, experiments were conducted with three publicly available datasets for vessel segmentation: digital retinal images for vessel extraction (DRIVE), the Child Heart Health Study in England (CHASE-DB1), and structured analysis of retina (STARE). Experimental results show that Vess-Net achieved superior performance for all datasets, with sensitivity (Se), specificity (Sp), area under the curve (AUC), and accuracy (Acc) of 80.22%, 98.1%, 98.2%, and 96.55% for DRIVE; 82.06%, 98.41%, 98.0%, and 97.26% for CHASE-DB1; and 85.26%, 97.91%, 98.83%, and 96.97% for the STARE dataset.
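Vess-Net's exact dual-residual-stream design is not detailed in this listing; purely as an illustration of the general idea of two parallel residual streams fused together (a sketch under that assumption, not the authors' architecture), one might write:

```python
import torch
import torch.nn as nn

class DualResidualStreamBlock(nn.Module):
    """Two parallel conv streams, each with its own residual skip, fused at the end."""
    def __init__(self, channels):
        super().__init__()
        self.stream_a = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.stream_b = nn.Sequential(
            nn.Conv2d(channels, channels, 1), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        a = self.relu(x + self.stream_a(x))      # residual over the 3x3 stream
        b = self.relu(x + self.stream_b(x))      # residual over the lightweight 1x1 stream
        return self.fuse(torch.cat([a, b], dim=1))
```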
Collapse
Affiliation(s)
- Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea.
| | - Muhammad Owais
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea.
| | - Tahir Mahmood
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea.
| | - Se Woon Cho
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea.
| | - Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea.
| |
Collapse
|
199
|
Shin SY, Lee S, Yun ID, Lee KM. Deep vessel segmentation by learning graphical connectivity. Med Image Anal 2019; 58:101556. [PMID: 31536906 DOI: 10.1016/j.media.2019.101556] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2019] [Revised: 09/02/2019] [Accepted: 09/05/2019] [Indexed: 11/17/2022]
Abstract
We propose a novel deep learning based system for vessel segmentation. Existing methods using CNNs have mostly relied on local appearances learned on the regular image grid, without consideration of the graphical structure of vessel shape. Effective use of the strong relationship that exists between vessel neighborhoods can help improve vessel segmentation accuracy. To this end, we incorporate a graph neural network into a unified CNN architecture to jointly exploit both local appearances and global vessel structures. We extensively perform comparative evaluations on four retinal image datasets and a coronary artery X-ray angiography dataset, showing that the proposed method outperforms or is on par with current state-of-the-art methods in terms of average precision and the area under the receiver operating characteristic curve. Statistical significance of the performance difference between the proposed method and each comparable method is supported by a paired t-test. In addition, ablation studies support the particular choices of algorithmic detail and hyperparameter values of the proposed method. The proposed architecture is widely applicable, since it can be used to expand any type of CNN-based vessel segmentation method to enhance performance.
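The graph construction feeding the graph neural network is described only abstractly; a rough sketch of one way a vessel graph could be built from an initial CNN probability map (node sampling, threshold, and connection radius are assumptions), on which a GNN would then operate, is:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_vessel_graph(prob_map, threshold=0.5, stride=4, radius=6.0):
    """Sample candidate vessel pixels on a coarse grid and connect nearby ones.

    Returns node coordinates (K, 2), node features (K, 1) = vessel probability,
    and an edge list of index pairs for nodes within `radius` pixels of each other.
    """
    ys, xs = np.where(prob_map >= threshold)
    keep = (ys % stride == 0) & (xs % stride == 0)     # coarse grid sampling
    coords = np.stack([ys[keep], xs[keep]], axis=1).astype(float)
    feats = prob_map[ys[keep], xs[keep]][:, None]
    edges = np.array(sorted(cKDTree(coords).query_pairs(radius)))
    return coords, feats, edges
```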
Collapse
Affiliation(s)
- Seung Yeon Shin
- Department of Electrical and Computer Engineering, Automation and Systems Research Institute, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
| | - Soochahn Lee
- School of Electrical Engineering, Kookmin University, Seoul, 02707, South Korea.
| | - Il Dong Yun
- Division of Computer and Electronic Systems Engineering, Hankuk University of Foreign Studies, Yongin, 17035, South Korea
| | - Kyoung Mu Lee
- Department of Electrical and Computer Engineering, Automation and Systems Research Institute, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
| |
Collapse
|
200
|
Automatic Retinal Blood Vessel Segmentation Based on Fully Convolutional Neural Networks. Symmetry (Basel) 2019. [DOI: 10.3390/sym11091112] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Automated retinal vessel segmentation has become an important tool for disease screening and diagnosis in clinical medicine. However, most available retinal vessel segmentation methods still suffer from poor accuracy and low generalization ability. This is because the symmetrical and asymmetrical patterns between blood vessels are complicated, and the contrast between vessels and the background is relatively low due to illumination and pathology. Robust vessel segmentation of the retinal image is essential for improving the diagnosis of diseases such as vein occlusions and diabetic retinopathy, yet automated retinal vessel segmentation remains a challenging task. In this paper, we propose an automatic retinal vessel segmentation framework using deep fully convolutional neural networks (FCN), which integrates novel methods of data preprocessing, data augmentation, and fully convolutional neural networks. It is an end-to-end framework that automatically and efficiently performs retinal vessel segmentation. The framework was evaluated on three publicly available standard datasets, achieving F1 scores of 0.8321, 0.8531, and 0.8243, average accuracies of 0.9706, 0.9777, and 0.9773, and average areas under the Receiver Operating Characteristic (ROC) curve of 0.9880, 0.9923 and 0.9917 on the DRIVE, STARE, and CHASE_DB1 datasets, respectively. The experimental results show that our proposed framework achieves state-of-the-art vessel segmentation performance on all three benchmarks.
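The preprocessing steps are not enumerated in this abstract; a hedged sketch of the kind of pipeline such frameworks commonly use (green-channel extraction, CLAHE, gamma correction — the specific steps and parameters here are assumptions, not this paper's recipe) is:

```python
import cv2
import numpy as np

def preprocess_fundus(bgr_image, clip_limit=2.0, tile=(8, 8), gamma=1.2):
    """Green channel + CLAHE + gamma correction, returned as float32 in [0, 1]."""
    green = bgr_image[:, :, 1]                                  # vessels show best contrast in green
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    enhanced = clahe.apply(green)                               # local contrast enhancement
    norm = enhanced.astype(np.float32) / 255.0
    return np.power(norm, gamma)                                # gamma correction
```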
Collapse
|