51
Pang S, Du A, Yu Z, Orgun MA. 2D medical image segmentation via learning multi-scale contextual dependencies. Methods 2021; 202:40-53. [PMID: 34029714] [DOI: 10.1016/j.ymeth.2021.05.015]
Abstract
Automatic medical image segmentation plays an important role as a diagnostic aid in the identification of diseases and their treatment in clinical settings. Recently proposed methods based on Convolutional Neural Networks (CNNs) have demonstrated their potential in image processing tasks, including some medical image analysis tasks. Those methods can learn various feature representations with numerous weight-shared convolutional kernels; however, the missed diagnosis rate of regions of interest (ROIs) is still high in medical image segmentation. Two crucial factors behind this shortcoming, which have been overlooked, are the small ROIs in medical images and the limited context information captured by existing network models. In order to reduce the missed diagnosis rate of ROIs in medical images, we propose a new segmentation framework which enhances the representative capability of small ROIs (particularly in deep layers) and explicitly learns global contextual dependencies in multi-scale feature spaces. In particular, the local features and their global dependencies from each feature space are adaptively aggregated along both the spatial and the channel dimensions. Moreover, visualization comparisons of the features learned by our framework further improve the interpretability of the neural networks. Experimental results show that, in comparison to popular medical image segmentation and general image segmentation methods, our proposed framework achieves state-of-the-art performance on the liver tumor segmentation task with 91.18% Sensitivity, the COVID-19 lung infection segmentation task with 75.73% Sensitivity and the retinal vessel detection task with 82.68% Sensitivity. Moreover, it is possible to integrate (parts of) the proposed framework into most of the recently proposed fully CNN-based models, in order to improve their effectiveness in medical image segmentation tasks.
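The adaptive aggregation of local features with their global dependencies along both the spatial and the channel dimensions can be pictured with a small gating module. The following PyTorch sketch is a generic illustration of that idea under our own assumptions (module name, reduction ratio and gate design are ours), not the authors' published layer.

```python
# Generic channel- and spatial-wise feature re-weighting (illustrative only):
# a squeeze-and-excitation-style gate rescales channels, a 1x1-convolution gate
# rescales spatial positions, and the two re-weighted views are summed.
import torch
import torch.nn as nn

class ChannelSpatialGate(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global context per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),        # per-pixel importance map
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.channel_gate(x) + x * self.spatial_gate(x)

feat = torch.randn(1, 64, 32, 32)
print(ChannelSpatialGate(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```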
Affiliation(s)
- Shuchao Pang: Department of Computing, Macquarie University, North Ryde, NSW 2109, Australia.
- Anan Du: School of Electrical and Data Engineering, University of Technology Sydney, NSW 2007, Australia.
- Zhenmei Yu: School of Data and Computer Science, Shandong Women's University, Jinan 250014, China.
- Mehmet A Orgun: Department of Computing, Macquarie University, North Ryde, NSW 2109, Australia; Faculty of Information Technology, Macau University of Science and Technology, Avenida Wai Long, Taipa 999078, Macau.
52
Ma B, Xie J, Yang T, Su P, Liu R, Sun T, Zhou Y, Wang H, Feng X, Ma S, Zhao Y, Qi H. Quantification of Increased Corneal Subbasal Nerve Tortuosity in Dry Eye Disease and Its Correlation With Clinical Parameters. Transl Vis Sci Technol 2021; 10:26. [PMID: 34015103] [PMCID: PMC8142722] [DOI: 10.1167/tvst.10.6.26]
Abstract
Purpose This study quantified corneal subbasal nerve tortuosity in dry eye disease (DED) and investigated its correlation with clinical parameters by proposing an aggregated measure of tortuosity (Tagg). Methods The sample consisted of 26 eyes of patients with DED and 23 eyes of healthy volunteers, representing the dry eye group and the control group, respectively. Clinical evaluation of DED and in vivo confocal microscopy analysis of the central cornea were performed. Tagg incorporated six metrics of tortuosity. Corneal subbasal nerve images of subjects and a validation data set were analyzed using Tagg. Spearman's rank correlation was performed on Tagg and clinical parameters. Results Tagg was validated using 1501 corneal nerve images. Tagg was higher in patients with DED than in healthy volunteers (P < 0.001). Tagg was positively correlated with the ocular surface disease index (r = 0.418, P = 0.003) and negatively correlated with tear breakup time (r = -0.398, P = 0.007). There was no correlation between Tagg and visual analog scale scores, corneal fluorescein staining scores, or the Schirmer I test. Conclusions Tagg was validated for quantification of corneal subbasal nerve tortuosity and was higher in patients with DED than in healthy volunteers. A higher Tagg may be linked to ocular discomfort, visual function disturbance, and tear film instability. Translational Relevance Corneal subbasal nerve tortuosity is a potential biomarker for corneal neurobiology in DED.
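One plausible way to form an aggregated tortuosity score from several per-image metrics, and to test its association with a clinical parameter via Spearman's rank correlation as the study does, is sketched below. The z-score averaging and the synthetic arrays are our assumptions; the abstract does not state how the six metrics are combined into Tagg.

```python
# Illustrative aggregation of tortuosity metrics and Spearman correlation test
# (not the paper's exact Tagg definition).
import numpy as np
from scipy.stats import spearmanr

def aggregate_tortuosity(metric_matrix):
    """metric_matrix: (n_images, n_metrics) array of per-image tortuosity metrics."""
    z = (metric_matrix - metric_matrix.mean(axis=0)) / metric_matrix.std(axis=0)
    return z.mean(axis=1)                      # one aggregated score per image

rng = np.random.default_rng(0)
metrics = rng.normal(size=(49, 6))             # e.g. six tortuosity metrics, 49 eyes
osdi = rng.normal(size=49)                     # hypothetical clinical scores
t_agg = aggregate_tortuosity(metrics)
r, p = spearmanr(t_agg, osdi)
print(f"Spearman r = {r:.3f}, p = {p:.3f}")
```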
Affiliation(s)
- Baikai Ma: Department of Ophthalmology, Peking University Third Hospital, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing, China
- Jianyang Xie: Cixi Institute of BioMedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Tingting Yang: Department of Ophthalmology, Peking University Third Hospital, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- Pan Su: Cixi Institute of BioMedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Rongjun Liu: Department of Ophthalmology, Peking University Third Hospital, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing, China
- Tong Sun: Department of Ophthalmology, Peking University Third Hospital, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing, China
- Yifan Zhou: Department of Ophthalmology, Peking University Third Hospital, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing, China
- Haiwei Wang: Department of Ophthalmology, Peking University Third Hospital, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing, China; Department of Ophthalmology, Fuxing Hospital, Capital Medical University, Beijing, China
- Xue Feng: Department of Ophthalmology, Peking University Third Hospital, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing, China; Department of Ophthalmology, Beijing Moslem People's Hospital, Beijing, China
- Siyi Ma: Department of Ophthalmology, Peking University Third Hospital, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing, China
- Yitian Zhao: Cixi Institute of BioMedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Hong Qi: Department of Ophthalmology, Peking University Third Hospital, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing, China
53
Zhou Y, Chen Z, Shen H, Zheng X, Zhao R, Duan X. A refined equilibrium generative adversarial network for retinal vessel segmentation. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.06.143]
54
Hu J, Wang H, Wang J, Wang Y, He F, Zhang J. SA-Net: A scale-attention network for medical image segmentation. PLoS One 2021; 16:e0247388. [PMID: 33852577] [PMCID: PMC8046243] [DOI: 10.1371/journal.pone.0247388]
Abstract
Semantic segmentation of medical images provides an important cornerstone for subsequent tasks of image analysis and understanding. With rapid advancements in deep learning methods, conventional U-Net segmentation networks have been applied in many fields. Based on exploratory experiments, features at multiple scales have been found to be of great importance for the segmentation of medical images. In this paper, we propose a scale-attention deep learning network (SA-Net), which extracts features of different scales in a residual module and uses an attention module to enforce the scale-attention capability. SA-Net can better learn multi-scale features and achieve more accurate segmentation for different medical images. In addition, this work validates the proposed method across multiple datasets. The experimental results show that SA-Net achieves excellent performance in vessel detection in retinal images, lung segmentation, artery/vein (A/V) classification in retinal images, and blastocyst segmentation. To facilitate SA-Net utilization by the scientific community, the code implementation will be made publicly available.
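A scale-attention residual block in the spirit described above can be sketched as parallel dilated convolutions whose concatenated responses are re-weighted by a learned channel gate before the residual addition. The dilation rates, gate design and class name below are illustrative assumptions, not the published SA-Net implementation.

```python
# Illustrative scale-attention residual block (not the authors' code).
import torch
import torch.nn as nn

class ScaleAttentionBlock(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations
        )
        fused = channels * len(dilations)
        self.attention = nn.Sequential(         # channel gate over the fused scales
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused, fused, 1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(fused, channels, 1)

    def forward(self, x):
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)
        multi = multi * self.attention(multi)   # scale-wise re-weighting
        return x + self.project(multi)          # residual connection

print(ScaleAttentionBlock(32)(torch.randn(1, 32, 48, 48)).shape)  # [1, 32, 48, 48]
```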
Affiliation(s)
- Jingfei Hu: School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Hua Wang: School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Jie Wang: School of Computer Science and Engineering, Beihang University, Beijing, China
- Yunqi Wang: School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China
- Fang He: Hefei Innovation Research Institute, Beihang University, Hefei, China
- Jicong Zhang: School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China
55
Ashir AM, Ibrahim S, Abdulghani M, Ibrahim AA, Anwar MS. Diabetic Retinopathy Detection Using Local Extrema Quantized Haralick Features with Long Short-Term Memory Network. Int J Biomed Imaging 2021; 2021:6618666. [PMID: 33953736] [PMCID: PMC8068542] [DOI: 10.1155/2021/6618666]
Abstract
Diabetic retinopathy is one of the leading diseases affecting the eyes. Lack of early detection and treatment can lead to total blindness in the diseased eyes. Recently, numerous researchers have attempted to produce automatic diabetic retinopathy detection techniques to supplement the diagnosis and early treatment of diabetic retinopathy symptoms. In this manuscript, a new approach is proposed. It utilizes features extracted from the fundus image using local extrema information with quantized Haralick features. The quantized features encode not only the textural Haralick features but also exploit the multiresolution information of numerous symptoms in diabetic retinopathy. A Long Short-Term Memory network together with the local extrema pattern provides a probabilistic approach to analyze each segment of the image with higher precision, which helps to suppress false positive occurrences. The proposed approach analyzes the retinal vasculature and hard-exudate symptoms of diabetic retinopathy on two different public datasets. The experimental results, evaluated using performance metrics such as specificity, accuracy, and sensitivity, reveal promising indices. Likewise, comparison with related state-of-the-art research highlights the validity of the proposed method, which performs better than most of the methods used for comparison.
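Quantized Haralick texture features of the kind mentioned above can be computed from a grey-level co-occurrence matrix; the sketch below shows a standard scikit-image pipeline, not the paper's local-extrema variant, and the patch size, grey-level count and property list are our assumptions (scikit-image >= 0.19 is assumed for these function names).

```python
# Illustrative quantized Haralick features from a grey-level co-occurrence matrix.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(patch, levels=16):
    q = np.floor(patch.astype(float) / 256 * levels).astype(np.uint8)  # quantize grey levels
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(haralick_features(patch).shape)  # (8,) = 4 properties x 2 angles
```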
Affiliation(s)
- Abubakar M. Ashir: Department of Computer Engineering, Tishk International University, Erbil, KRD, Iraq
- Salisu Ibrahim: Department of Mathematic Education, Tishk International University, Erbil, KRD, Iraq
- Mohammed Abdulghani: Department of Computer Engineering, Tishk International University, Erbil, KRD, Iraq
- Mohammed S. Anwar: Department of Computer Engineering, Tishk International University, Erbil, KRD, Iraq
56
Dharmawan DA. Assessing fairness in performance evaluation of publicly available retinal blood vessel segmentation algorithms. J Med Eng Technol 2021; 45:351-360. [PMID: 33843422] [DOI: 10.1080/03091902.2021.1906342]
Abstract
In the literature, various algorithms have been proposed for automatically extracting blood vessels from retinal images. In general, they are developed and evaluated using several publicly available datasets such as the DRIVE and STARE datasets. For performance evaluation, metrics such as Sensitivity, Specificity, and Accuracy have been widely used. However, not all methods in the literature have been fairly evaluated and compared against their counterparts. In particular, for some publicly available algorithms the performance is measured only over the area inside the field of view (FOV) of each retinal image, while the rest use the complete image for performance evaluation. Therefore, comparing the performance of methods in the latter group with that of methods in the former group may lead to a misleading justification. This study aims to assess fairness in the performance evaluation of various publicly available retinal blood vessel segmentation algorithms. The study yields several meaningful results: (i) a guideline for assessing fairness in the performance evaluation of retinal vessel segmentation algorithms, (ii) a more proper performance comparison of retinal vessel segmentation algorithms in the literature, and (iii) a suggestion regarding the use of performance evaluation metrics that will not lead to misleading comparison and justification.
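The fairness issue the paper raises comes down to whether pixels outside the field of view (FOV) are included when the confusion-matrix metrics are computed, since the dark background outside the FOV inflates specificity and accuracy. A minimal sketch of FOV-aware metric computation follows (variable names are ours, not from the paper).

```python
# Sensitivity, specificity and accuracy for a binary vessel map, optionally
# restricted to the FOV mask (illustrative helper).
import numpy as np

def seg_metrics(pred, truth, fov_mask=None):
    if fov_mask is not None:                  # evaluate inside the FOV only
        pred, truth = pred[fov_mask], truth[fov_mask]
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```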
57
Mookiah MRK, Hogg S, MacGillivray T, Trucco E. On the quantitative effects of compression of retinal fundus images on morphometric vascular measurements in VAMPIRE. Comput Methods Programs Biomed 2021; 202:105969. [PMID: 33631639] [DOI: 10.1016/j.cmpb.2021.105969]
Abstract
BACKGROUND AND OBJECTIVES This paper reports a quantitative analysis of the effects of joint photographic experts group (JPEG) image compression of retinal fundus camera images on automatic vessel segmentation and on morphometric vascular measurements derived from it, including vessel width, tortuosity and fractal dimension. METHODS Measurements are computed with vascular assessment and measurement platform for images of the retina (VAMPIRE), a specialized software application adopted in many international studies on retinal biomarkers. For reproducibility, we use three public archives of fundus images (digital retinal images for vessel extraction (DRIVE), automated retinal image analyzer (ARIA), high-resolution fundus (HRF)). We generate compressed versions of original images in a range of representative levels. RESULTS We compare the resulting vessel segmentations with ground truth maps and morphological measurements of the vascular network with those obtained from the original (uncompressed) images. We assess the segmentation quality with sensitivity, specificity, accuracy, area under the curve and Dice coefficient. We assess the agreement between VAMPIRE measurements from compressed and uncompressed images with correlation, intra-class correlation and Bland-Altman analysis. CONCLUSIONS Results suggest that VAMPIRE width-related measurements (central retinal artery equivalent (CRAE), central retinal vein equivalent (CRVE), arteriolar-venular width ratio (AVR)), the fractal dimension (FD) and arteriolar tortuosity have excellent agreement with those from the original images, remaining substantially stable even for strong loss of quality (20% of the original), suggesting the suitability of VAMPIRE in association studies with compressed images.
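The core of the experiment can be pictured as re-saving each fundus image at several JPEG quality levels and comparing a vascular measurement against the value from the uncompressed original. In the sketch below the synthetic image and the `measure` stand-in are placeholders, not VAMPIRE code.

```python
# Illustrative compression experiment: deviation of a measurement from its
# uncompressed baseline across JPEG quality levels.
import numpy as np
from PIL import Image

def measure(img):                              # stand-in for a VAMPIRE-style measurement
    return float(np.asarray(img, dtype=float).mean())

array = (np.random.rand(584, 565, 3) * 255).astype(np.uint8)   # synthetic "fundus" image
original = Image.fromarray(array)
baseline = measure(original)

for quality in (100, 80, 60, 40, 20):          # representative compression levels
    original.save("compressed.jpg", quality=quality)
    value = measure(Image.open("compressed.jpg"))
    print(quality, round(value - baseline, 4))  # deviation from the uncompressed value
```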
58
Wang C, Zhao Z, Yu Y. Fine retinal vessel segmentation by combining Nest U-net and patch-learning. Soft Comput 2021. [DOI: 10.1007/s00500-020-05552-w]
59
Pang S, Du A, Orgun MA, Wang Y, Yu Z. Tumor attention networks: Better feature selection, better tumor segmentation. Neural Netw 2021; 140:203-222. [PMID: 33780873] [DOI: 10.1016/j.neunet.2021.03.006]
Abstract
Compared with the traditional analysis of computed tomography scans, automatic liver tumor segmentation can supply precise tumor volumes and reduce the inter-observer variability in estimating the tumor size and the tumor burden, which could further assist physicians to make better therapeutic choices for hepatic diseases and to monitor treatment. Among current mainstream segmentation approaches, multi-layer and multi-kernel convolutional neural networks (CNNs) have attracted much attention in diverse biomedical/medical image segmentation tasks with remarkable performance. However, an arbitrary stacking of feature maps makes CNNs quite inconsistent in imitating the cognition and the visual attention of human beings for a specific visual task. To mitigate the lack of a reasonable feature selection mechanism in CNNs, we exploit a novel and effective network architecture, called Tumor Attention Networks (TA-Net), for mining adaptive features by embedding Tumor Attention layers with multi-functional modules to assist the liver tumor segmentation task. In particular, each tumor attention layer can adaptively highlight valuable tumor features and suppress unrelated ones among feature maps from 3D and 2D perspectives. Moreover, an analysis of visualization results illustrates the effectiveness of our tumor attention modules and the interpretability of CNNs for liver tumor segmentation. Furthermore, we explore different arrangements of skip connections in information fusion. A thorough ablation study is also conducted to illustrate the effects of different attention strategies for hepatic tumors. The results of extensive experiments demonstrate that the proposed TA-Net increases the liver tumor segmentation performance with a lower computation cost and a small parameter overhead over the state-of-the-art methods, under various evaluation metrics on clinical benchmark data. In addition, two additional medical image datasets are used to evaluate the generalization capability of TA-Net, including a comparison with general semantic segmentation methods and a non-tumor segmentation task. All the program codes have been released at https://github.com/shuchao1212/TA-Net.
Affiliation(s)
- Shuchao Pang: Department of Computing, Macquarie University, Sydney, NSW 2109, Australia.
- Anan Du: School of Electrical and Data Engineering, University of Technology Sydney, NSW 2007, Australia.
- Mehmet A Orgun: Department of Computing, Macquarie University, Sydney, NSW 2109, Australia; Faculty of Information Technology, Macau University of Science and Technology, Avenida Wai Long, Taipa 999078, Macau, China.
- Yunyun Wang: Department of Anesthesiology, China-Japan Union Hospital of Jilin University, Changchun 130012, China.
- Zhenmei Yu: School of Data and Computer Science, Shandong Women's University, Jinan 250014, China.
60
Ni J, Wu J, Tong J, Wei M, Chen Z. SSCA-Net: Simultaneous Self- and Channel-Attention Neural Network for Multiscale Structure-Preserving Vessel Segmentation. Biomed Res Int 2021; 2021:6622253. [PMID: 33860043] [PMCID: PMC8026298] [DOI: 10.1155/2021/6622253]
Abstract
Vessel segmentation is a fundamental, yet not well-solved, problem in medical image analysis, due to the complicated geometrical and topological structures of human vessels. Unlike existing rule-based and conventional learning-based techniques, which hardly capture the location of tiny vessel structures or perceive their global spatial structures, we propose the Simultaneous Self- and Channel-attention Neural Network (termed SSCA-Net) to solve the multiscale structure-preserving vessel segmentation (MSVS) problem. SSCA-Net differs from conventional neural networks in modeling image global contexts, showing more power to understand global semantic information through its self- and channel-attention (SCA) mechanism and offering high performance in segmenting vessels with multiscale structures (e.g., DSC: 96.21% and MIoU: 92.70% on the intracranial vessel dataset). Specifically, the SCA module is designed and embedded in the feature decoding stage to learn SCA features at different layers, in which the self-attention is used to obtain the position information of the features themselves, and the channel attention is designed to guide the shallow features to obtain global feature information. To evaluate the effectiveness of our SSCA-Net, we compare it with several state-of-the-art methods on three well-known vessel segmentation benchmark datasets. Qualitative and quantitative results demonstrate clear improvements of our method over the state of the art in terms of preserving vessel details and global spatial structures.
Affiliation(s)
- Jiajia Ni: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China; College of Internet of Things Engineering, Hohai University, Changzhou, China
- Jianhuang Wu: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
- Jing Tong: College of Internet of Things Engineering, Hohai University, Changzhou, China
- Mingqiang Wei: Nanjing University of Aeronautics and Astronautics, China
- Zhengming Chen: College of Internet of Things Engineering, Hohai University, Changzhou, China
61
Ma Y, Hao H, Xie J, Fu H, Zhang J, Yang J, Wang Z, Liu J, Zheng Y, Zhao Y. ROSE: A Retinal OCT-Angiography Vessel Segmentation Dataset and New Model. IEEE Trans Med Imaging 2021; 40:928-939. [PMID: 33284751] [DOI: 10.1109/tmi.2020.3042802]
Abstract
Optical Coherence Tomography Angiography (OCTA) is a non-invasive imaging technique that has been increasingly used to image the retinal vasculature at capillary-level resolution. However, automated segmentation of retinal vessels in OCTA has been under-studied due to various challenges such as low capillary visibility and high vessel complexity, despite its significance in understanding many vision-related diseases. In addition, there is no publicly available OCTA dataset with manually graded vessels for training and validation of segmentation algorithms. To address these issues, for the first time in the field of retinal image analysis we construct a dedicated Retinal OCTA SEgmentation dataset (ROSE), which consists of 229 OCTA images with vessel annotations at either centerline level or pixel level. This dataset, with the source code, has been released for public access to assist researchers in the community in undertaking research on related topics. Secondly, we introduce a novel split-based coarse-to-fine vessel segmentation network for OCTA images (OCTA-Net), with the ability to detect thick and thin vessels separately. In OCTA-Net, a split-based coarse segmentation module is first utilized to produce a preliminary confidence map of vessels, and a split-based refined segmentation module is then used to optimize the shape/contour of the retinal microvasculature. We perform a thorough evaluation of state-of-the-art vessel segmentation models and our OCTA-Net on the constructed ROSE dataset. The experimental results demonstrate that our OCTA-Net yields better vessel segmentation performance in OCTA than both traditional and other deep learning methods. In addition, we provide a fractal dimension analysis of the segmented microvasculature, and the statistical analysis demonstrates significant differences between the healthy control and Alzheimer's Disease groups. This supports the view that analysis of the retinal microvasculature may offer a new scheme for studying various neurodegenerative diseases.
62
Bai R, Jiang S, Sun H, Yang Y, Li G. Deep Neural Network-Based Semantic Segmentation of Microvascular Decompression Images. Sensors (Basel) 2021; 21:1167. [PMID: 33562275] [PMCID: PMC7915571] [DOI: 10.3390/s21041167]
Abstract
Image semantic segmentation has been applied increasingly widely in the fields of satellite remote sensing, medical treatment, intelligent transportation, and virtual reality. In the medical field, however, the study of cerebral vessel and cranial nerve segmentation based on true-color medical images is urgently needed and has good research and development prospects. We have extended the current state-of-the-art semantic-segmentation network DeepLabv3+ and used it as the basic framework. First, the feature distillation block (FDB) was introduced into the encoder structure to refine the extracted features. In addition, the atrous spatial pyramid pooling (ASPP) module was added to the decoder structure to enhance the retention of feature and boundary information. The proposed model was trained by fine-tuning and optimizing the relevant parameters. Experimental results show that the encoder structure has better performance in feature refinement, improving target boundary segmentation precision and retaining more feature information. Our method achieves a segmentation accuracy of 75.73%, which is 3% better than DeepLabv3+.
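An atrous spatial pyramid pooling (ASPP) module of the kind added to the decoder can be sketched as parallel dilated convolutions fused by a 1x1 convolution; the dilation rates and channel counts below are illustrative, not the authors' configuration.

```python
# Illustrative ASPP module (rates and widths are assumptions).
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3 if r > 1 else 1,
                          padding=r if r > 1 else 0, dilation=r),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

print(ASPP(256, 64)(torch.randn(1, 256, 16, 16)).shape)  # torch.Size([1, 64, 16, 16])
```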
Affiliation(s)
- Ruifeng Bai: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Shan Jiang: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China (correspondence; Tel.: +86-187-4401-2663)
- Haijiang Sun: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Yifan Yang: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Guiju Li: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
63
Xie H, Tang C, Zhang W, Shen Y, Lei Z. Multi-scale retinal vessel segmentation using encoder-decoder network with squeeze-and-excitation connection and atrous spatial pyramid pooling. Appl Opt 2021; 60:239-249. [PMID: 33448945] [DOI: 10.1364/ao.409512]
Abstract
The segmentation of blood vessels in retinal images is crucial to the diagnosis of many diseases. We propose a deep learning method for vessel segmentation based on an encoder-decoder network combined with squeeze-and-excitation connections and atrous spatial pyramid pooling. In our implementation, the atrous spatial pyramid pooling allows the network to capture features at multiple scales, and the high-level semantic information is combined with low-level features through the encoder-decoder architecture to generate segmentations. Meanwhile, the squeeze-and-excitation connections in the proposed network can adaptively recalibrate features according to the relationships between different channels of features. The proposed network can achieve precise segmentation of retinal vessels without hand-crafted features or specific post-processing. The performance of our model is evaluated in terms of visual effects and quantitative evaluation metrics on two publicly available datasets of retinal images, the Digital Retinal Images for Vessel Extraction and Structured Analysis of the Retina datasets, with comparison to 12 representative methods. Furthermore, the proposed network is applied to vessel segmentation on local retinal images, which demonstrates promising prospects for application in medical practice.
64
Samuel PM, Veeramalai T. VSSC Net: Vessel Specific Skip chain Convolutional Network for blood vessel segmentation. Comput Methods Programs Biomed 2021; 198:105769. [PMID: 33039919] [DOI: 10.1016/j.cmpb.2020.105769]
Abstract
BACKGROUND AND OBJECTIVE Deep learning techniques are instrumental in developing network models that aid in the early diagnosis of life-threatening diseases. To screen and diagnose retinal fundus and coronary blood vessel disorders, the most important step is the proper segmentation of the blood vessels. METHODS This paper aims to segment the blood vessels from both coronary angiogram and retinal fundus images using a single VSSC Net after performing image-specific preprocessing. The VSSC Net uses two vessel extraction layers with added supervision on top of the base VGG-16 network. The vessel extraction layers comprise vessel-specific convolutional blocks to localize the blood vessels, skip chain convolutional layers to enable rich feature propagation, and a unique feature map summation. Supervision is associated with the two vessel extraction layers using a separate loss/sigmoid function. Finally, the weighted fusion of the individual loss/sigmoid functions produces the desired blood vessel probability map, which is then binary segmented and validated for performance. RESULTS The VSSC Net shows improved accuracy values on the standard retinal and coronary angiogram datasets. The computational time required to segment the blood vessels is 0.2 seconds using a GPU. Moreover, the vessel extraction layer uses a low parameter count of 0.4 million parameters to accurately segment the blood vessels. CONCLUSION The proposed VSSC Net, which segments blood vessels from both retinal fundus images and coronary angiograms, can be used for the early diagnosis of vessel disorders. Moreover, it could aid the physician in analyzing the blood vessel structure of images obtained from multiple imaging sources.
Affiliation(s)
- Pearl Mary Samuel: School of Electronics Engineering, Vellore Institute of Technology, Vellore, India.
65
Retinal blood vessels segmentation using classical edge detection filters and the neural network. Inform Med Unlocked 2021. [DOI: 10.1016/j.imu.2021.100521]
66
Rodrigues EO, Conci A, Liatsis P. ELEMENT: Multi-Modal Retinal Vessel Segmentation Based on a Coupled Region Growing and Machine Learning Approach. IEEE J Biomed Health Inform 2020; 24:3507-3519. [PMID: 32750920] [DOI: 10.1109/jbhi.2020.2999257]
Abstract
Vascular structures in the retina contain important information for the detection and analysis of ocular diseases, including age-related macular degeneration, diabetic retinopathy and glaucoma. Commonly used modalities in diagnosis of these diseases are fundus photography, scanning laser ophthalmoscope (SLO) and fluorescein angiography (FA). Typically, retinal vessel segmentation is carried out either manually or interactively, which makes it time consuming and prone to human errors. In this research, we propose a new multi-modal framework for vessel segmentation called ELEMENT (vEsseL sEgmentation using Machine lEarning and coNnecTivity). This framework consists of feature extraction and pixel-based classification using region growing and machine learning. The proposed features capture complementary evidence based on grey level and vessel connectivity properties. The latter information is seamlessly propagated through the pixels at the classification phase. ELEMENT reduces inconsistencies and speeds up the segmentation throughput. We analyze and compare the performance of the proposed approach against state-of-the-art vessel segmentation algorithms in three major groups of experiments, for each of the ocular modalities. Our method produced higher overall performance, with an overall accuracy of 97.40%, compared to 25 of the 26 state-of-the-art approaches, including six works based on deep learning, evaluated on the widely known DRIVE fundus image dataset. In the case of the STARE, CHASE-DB, VAMPIRE FA, IOSTAR SLO and RC-SLO datasets, the proposed framework outperformed all of the state-of-the-art methods with accuracies of 98.27%, 97.78%, 98.34%, 98.04% and 98.35%, respectively.
67
Wang D, Haytham A, Pottenburgh J, Saeedi O, Tao Y. Hard Attention Net for Automatic Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2020; 24:3384-3396. [DOI: 10.1109/jbhi.2020.3002985]
68
Mookiah MRK, Hogg S, MacGillivray TJ, Prathiba V, Pradeepa R, Mohan V, Anjana RM, Doney AS, Palmer CNA, Trucco E. A review of machine learning methods for retinal blood vessel segmentation and artery/vein classification. Med Image Anal 2020; 68:101905. [PMID: 33385700] [DOI: 10.1016/j.media.2020.101905]
Abstract
The eye affords a unique opportunity to inspect a rich part of the human microvasculature non-invasively via retinal imaging. Retinal blood vessel segmentation and classification are prime steps for the diagnosis and risk assessment of microvascular and systemic diseases. A high volume of techniques based on deep learning have been published in recent years. In this context, we review 158 papers published between 2012 and 2020, focussing on methods based on machine and deep learning (DL) for automatic vessel segmentation and classification for fundus camera images. We divide the methods into various classes by task (segmentation or artery-vein classification), technique (supervised or unsupervised, deep and non-deep learning, hand-crafted methods) and more specific algorithms (e.g. multiscale, morphology). We discuss advantages and limitations, and include tables summarising results at-a-glance. Finally, we attempt to assess the quantitative merit of DL methods in terms of accuracy improvement compared to other methods. The results allow us to offer our views on the outlook for vessel segmentation and classification for fundus camera images.
Affiliation(s)
- Stephen Hogg: VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
- Tom J MacGillivray: VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh EH16 4SB, UK
- Vijayaraghavan Prathiba: Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Rajendra Pradeepa: Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Viswanathan Mohan: Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Ranjit Mohan Anjana: Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Alexander S Doney: Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee DD1 9SY, UK
- Colin N A Palmer: Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee DD1 9SY, UK
- Emanuele Trucco: VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
69
Automatic Drusen Segmentation for Age-Related Macular Degeneration in Fundus Images Using Deep Learning. Electronics 2020. [DOI: 10.3390/electronics9101617]
Abstract
Drusen are the main hallmark used to detect age-related macular degeneration (AMD). Ophthalmologists can evaluate the condition of AMD based on drusen in fundus images. However, in the early stage of AMD the drusen areas are usually small and vague, which makes the drusen segmentation task challenging. Moreover, because fundus images have high resolution, it is hard to accurately predict the drusen areas with deep learning models. In this paper, we propose a multi-scale deep learning model for drusen segmentation. By exploiting both local and global information, we can improve the performance, especially for early-stage AMD cases.
70
Palanivel DA, Natarajan S, Gopalakrishnan S. Retinal vessel segmentation using multifractal characterization. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106439]
71
Zhao Y, Zhang J, Pereira E, Zheng Y, Su P, Xie J, Zhao Y, Shi Y, Qi H, Liu J, Liu Y. Automated Tortuosity Analysis of Nerve Fibers in Corneal Confocal Microscopy. IEEE Trans Med Imaging 2020; 39:2725-2737. [PMID: 32078542] [DOI: 10.1109/tmi.2020.2974499]
Abstract
Precise characterization and analysis of corneal nerve fiber tortuosity are of great importance in facilitating the examination and diagnosis of many eye-related diseases. In this paper we propose a fully automated method for image-level tortuosity estimation, comprising image enhancement, exponential curvature estimation, and tortuosity level classification. The image enhancement component is based on an extended Retinex model, which not only corrects imbalanced illumination and improves image contrast, but also models noise explicitly to aid removal of imaging noise. Afterwards, we take advantage of exponential curvature estimation in the 3D space of positions and orientations to measure curvature directly on the enhanced images, rather than relying on the explicit segmentation and skeletonization steps of a conventional pipeline, which usually accumulate pre-processing errors. The proposed method has been applied to two corneal nerve microscopy datasets for the estimation of a tortuosity level for each image. The experimental results show that it performs better than several selected state-of-the-art methods. Furthermore, we have performed manual tortuosity-level gradings of 403 corneal nerve microscopy images, and this dataset has been released for public access to facilitate other researchers in the community in carrying out further research on the same and related topics.
72
Mohammedhasan M, Uğuz H. A New Deeply Convolutional Neural Network Architecture for Retinal Blood Vessel Segmentation. Int J Pattern Recognit Artif Intell 2020. [DOI: 10.1142/s0218001421570019]
Abstract
This paper proposes a new deep Convolutional Neural Network (CNN) architecture for segmenting retinal blood vessels automatically from fundus images. Automatic segmentation plays a substantial role in computer-aided diagnosis of retinal diseases; it is of considerable significance because eye diseases, as well as some other systemic diseases, give rise to perceivable pathologic changes. Retinal blood vessel segmentation is challenging because of the excessive changes in the morphology of the vessels on a noisy background. Previous deep learning-based supervised methods suffer from insufficient use of low-level features, which are advantageous in semantic segmentation tasks. The proposed architecture makes use of both high-level and low-level features to segment retinal blood vessels. Its contribution concentrates on two important factors: the first is a highly modularized network architecture of aggregated residual connections, which allows the learned layers of a shallower model to be reused and additional layers to be developed as identity mappings; the second is improved utilization of computing resources within the network, achieved through a skillfully crafted design that increases the depth and width of the network while keeping its computational budget stable. Experimental results show the effectiveness of using aggregated residual connections in segmenting retinal vessels more accurately and clearly. Compared with the best existing methods, the proposed method outperformed them on different measures, produced fewer false positives on fine vessels, and traced clearer vessel lines with sufficient detail, much like the human annotator.
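An aggregated residual unit in the ResNeXt style that the abstract alludes to can be sketched with a grouped convolution; the cardinality, bottleneck width and placement below are illustrative assumptions, not the paper's exact design.

```python
# Illustrative aggregated residual unit: cardinality via grouped convolution,
# identity mapping via the residual addition.
import torch
import torch.nn as nn

class AggregatedResidualUnit(nn.Module):
    def __init__(self, channels, cardinality=8, bottleneck=64):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Conv2d(channels, bottleneck, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, bottleneck, 3, padding=1, groups=cardinality),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, 1),
        )

    def forward(self, x):
        return torch.relu(x + self.transform(x))

print(AggregatedResidualUnit(32)(torch.randn(1, 32, 64, 64)).shape)  # [1, 32, 64, 64]
```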
Affiliation(s)
- Mali Mohammedhasan: Department of Computer Engineering, Selçuk Üniversitesi, Selçuklu, Konya 42130, Turkey
- Harun Uğuz: Department of Computer Engineering, Selçuk Üniversitesi, Selçuklu, Konya 42130, Turkey
73
Wu X, Gao D, Borroni D, Madhusudhan S, Jin Z, Zheng Y. Cooperative Low-Rank Models for Removing Stripe Noise From OCTA Images. IEEE J Biomed Health Inform 2020; 24:3480-3490. [PMID: 32750910] [DOI: 10.1109/jbhi.2020.2997381]
Abstract
Optical coherence tomography angiography (OCTA) is an emerging non-invasive imaging technique for imaging the microvasculature of the eye based on phase variance or amplitude decorrelation derived from repeated OCT images of the same tissue area. Stripe noise occurs during the OCTA acquisition process due to involuntary movement of the eye. To remove the stripe noise (or 'destripe') effectively, we propose two novel image decomposition models that simultaneously destripe all the OCTA images of the same eye cooperatively: the cooperative uniformity destriping (CUD) model and the cooperative similarity destriping (CSD) model. Both models handle stripe noise with a low-rank constraint but in different ways: the CUD model assumes that the stripe noise is identical across all the layers, while the CSD model assumes that the stripe noise at different layers differs and has to be considered in the model. Compared to the CUD model, CSD is a more general solution for real OCTA images. An efficient solution (CSD+) is developed for the CSD model to reduce its computational complexity. The models were extensively evaluated against state-of-the-art methods on both synthesized and real OCTA datasets. The experiments demonstrated the effectiveness of the CSD and CSD+ models in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), showed that CSD+ is twice as fast as CSD, and confirmed their beneficial effect on the vessel segmentation of OCTA images. We expect our models will become a powerful tool for clinical applications.
74
Zhang Z, Wu C, Coleman S, Kerr D. DENSE-INception U-net for medical image segmentation. Comput Methods Programs Biomed 2020; 192:105395. [PMID: 32163817] [DOI: 10.1016/j.cmpb.2020.105395]
Abstract
BACKGROUND AND OBJECTIVE Convolutional neural networks (CNNs) play an important role in the field of medical image segmentation. Among many kinds of CNNs, the U-net architecture is one of the most famous fully convolutional network architectures for medical semantic segmentation tasks. Recent work shows that the U-net network can be made substantially deeper, resulting in improved performance on segmentation tasks. Though adding more layers directly into the network is a popular way to make a network deeper, it may lead to gradient vanishing or redundant computation during training. METHODS A novel CNN architecture is proposed that integrates the Inception-Res module and a densely connecting convolutional module into the U-net architecture. The proposed network model consists of the following parts: firstly, the Inception-Res block is designed to increase the width of the network by replacing the standard convolutional layers; secondly, the Dense-Inception block is designed to extract features and make the network deeper without additional parameters; thirdly, the down-sampling block is adopted to reduce the size of feature maps to accelerate learning, and the up-sampling block is used to resize the feature maps. RESULTS The proposed model is tested on blood vessel segmentation in retinal images, lung segmentation of CT data from the benchmark Kaggle datasets, and MRI brain tumor segmentation from MICCAI BraTS 2017. The experimental results show that the proposed method provides better performance on these tasks compared with state-of-the-art algorithms, reaching an average Dice score of 0.9857 for lung segmentation, 0.9582 for blood vessel segmentation, and 0.9867 for brain tumor segmentation. CONCLUSIONS The experiments highlight that combining the Inception module with dense connections in the U-Net architecture is a promising approach for semantic medical image segmentation.
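The reported results are average Dice scores; a minimal sketch of the Dice coefficient for binary masks (a standard definition, not code from the paper) follows.

```python
# Dice coefficient for two binary masks.
import numpy as np

def dice(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((8, 8), int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), int); b[3:7, 3:7] = 1
print(round(dice(a, b), 4))  # 0.5625: 9 overlapping pixels over 16 + 16 mask pixels
```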
Affiliation(s)
- Ziang Zhang: Faculty of Robot Science and Engineering, Northeastern University, 110004, Shenyang, Liaoning Province, China.
- Chengdong Wu: Faculty of Robot Science and Engineering, Northeastern University, 110004, Shenyang, Liaoning Province, China.
- Sonya Coleman: School of Computing, Engineering and Intelligent Systems, Ulster University, Londonderry, BT48 7JL, Northern Ireland, United Kingdom.
- Dermot Kerr: School of Computing, Engineering and Intelligent Systems, Ulster University, Londonderry, BT48 7JL, Northern Ireland, United Kingdom.
75
Yang T, Wu T, Li L, Zhu C. SUD-GAN: Deep Convolution Generative Adversarial Network Combined with Short Connection and Dense Block for Retinal Vessel Segmentation. J Digit Imaging 2020; 33:946-957. [PMID: 32323089] [PMCID: PMC7522149] [DOI: 10.1007/s10278-020-00339-9]
Abstract
Since the morphology of retinal blood vessels plays a key role in the diagnosis of ophthalmological diseases, retinal vessel segmentation is an indispensable step for the screening and diagnosis of retinal diseases with fundus images. In this paper, a deep convolutional adversarial network combined with short connections and dense blocks, named SUD-GAN, is proposed to separate blood vessels from fundus images. The generator adopts a U-shaped encode-decode structure and adds a short-connection block between convolution layers to prevent the gradient dispersion caused by a deep convolutional network. The discriminator is composed entirely of convolution blocks, and a dense connection structure is added to the middle part of the convolutional network to strengthen the propagation of features and enhance the network's discrimination ability. The proposed method is evaluated on two publicly available databases, DRIVE and STARE. The results show that the proposed method achieves state-of-the-art sensitivity and specificity, which were 0.8340 and 0.9820 on DRIVE and 0.8334 and 0.9897 on STARE, respectively, and that it can detect more tiny vessels and locate the edges of blood vessels more accurately.
Affiliation(s)
- Tiejun Yang: Key Laboratory of Grain Information Processing and Control (Henan University of Technology), Ministry of Education, Zhengzhou, 450001 China; School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, 450001 China
- Tingting Wu: College of Information Science and Technology, Henan University of Technology, Zhengzhou, 450001 China
- Lei Li: College of Information Science and Technology, Henan University of Technology, Zhengzhou, 450001 China
- Chunhua Zhu: College of Information Science and Technology, Henan University of Technology, Zhengzhou, 450001 China
76
Mao X, Zhao Y, Chen B, Ma Y, Gu Z, Gu S, Yang J, Cheng J, Liu J. Deep Learning with Skip Connection Attention for Choroid Layer Segmentation in OCT Images. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1641-1645. [PMID: 33018310] [DOI: 10.1109/embc44109.2020.9175631]
Abstract
Since the thickness and shape of the choroid layer are indicators for the diagnosis of several ophthalmic diseases, choroid layer segmentation is an important task, and it presents many challenges. In this paper, in view of the lack of context information caused by ambiguous boundaries, and the resulting inconsistent predictions for targets of the same category that stem from missing context information or from large regions, a novel Skip Connection Attention (SCA) module integrated into a U-shape architecture is proposed to improve the precision of choroid layer segmentation in Optical Coherence Tomography (OCT) images. The main function of the SCA module is to capture the global context at the highest level in order to provide the decoder with stage-by-stage guidance, to extract more context information, and to generate more consistent predictions for targets of the same class. By integrating the SCA module into the U-Net and CE-Net, we show that the module improves the accuracy of choroid layer segmentation.
77
Ni J, Wu J, Tong J, Chen Z, Zhao J. GC-Net: Global context network for medical image segmentation. Comput Methods Programs Biomed 2020; 190:105121. [PMID: 31623863] [DOI: 10.1016/j.cmpb.2019.105121]
Abstract
BACKGROUND AND OBJECTIVE Medical image segmentation plays an important role in many clinical applications such as disease diagnosis, surgery planning, and computer-assisted therapy. However, it is a very challenging task due to varying image quality, complex shapes of objects, and the existence of outliers. Recently, researchers have presented deep learning methods to segment medical images. However, these methods often use the high-level features of the convolutional neural network directly, or the high-level features combined with the shallow features, thus ignoring the role of global context features for the segmentation task. Consequently, they have limited capability across extensive medical segmentation tasks. The purpose of this work is to devise a neural network with global context feature information for accomplishing medical image segmentation of different tasks. METHODS The proposed global context network (GC-Net) consists of two components: feature encoding and decoding modules. We use multiple convolution and batch normalization layers in the encoding module. The decoding module, in turn, is formed by a proposed global context attention (GCA) block and a squeeze-and-excitation pyramid pooling (SEPP) block. The GCA module connects low-level and high-level features to produce more representative features, while the SEPP module increases the size of the receptive field and the ability of multi-scale feature fusion. Moreover, a weighted cross-entropy loss is designed to better balance the segmented and non-segmented regions. RESULTS The proposed GC-Net is validated on three publicly available datasets and one local dataset. The tested medical segmentation tasks include segmentation of intracranial blood vessels, retinal vessels, cell contours, and lungs. Experiments demonstrate that our network outperforms state-of-the-art methods on several commonly used evaluation metrics. CONCLUSION Medical segmentation of different tasks can be accurately and effectively achieved by devising a deep convolutional neural network with a global context attention mechanism.
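A class-weighted cross-entropy of the kind used to balance segmented and non-segmented regions can be sketched as follows; the inverse-frequency weighting is a common choice and is our assumption, not necessarily the paper's exact scheme.

```python
# Illustrative weighted binary cross-entropy: sparse foreground pixels are
# up-weighted by the inverse foreground frequency of the batch.
import torch
import torch.nn.functional as F

def weighted_bce(logits, target):
    pos_frac = target.float().mean().clamp(min=1e-6)
    pos_weight = (1.0 - pos_frac) / pos_frac
    return F.binary_cross_entropy_with_logits(logits, target.float(),
                                              pos_weight=pos_weight)

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) < 0.1).float()   # ~10% foreground
print(weighted_bce(logits, target).item())
```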
Affiliation(s)
- Jiajia Ni: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China; College of Internet of Things Engineering, Hohai University, Changzhou, China
- Jianhuang Wu: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China.
- Jing Tong: College of Internet of Things Engineering, Hohai University, Changzhou, China
- Zhengming Chen: College of Internet of Things Engineering, Hohai University, Changzhou, China
- Junping Zhao: Institute of Medical Informatics, Chinese PLA General Hospital, China
78
|
Semi-Supervised Learning Method of U-Net Deep Learning Network for Blood Vessel Segmentation in Retinal Images. Symmetry (Basel) 2020. [DOI: 10.3390/sym12071067] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Blood vessel segmentation methods based on deep neural networks have achieved satisfactory results. However, these methods are usually supervised, requiring large numbers of retinal images with high-quality pixel-level ground-truth labels, and labeling such images is very costly in both money and human effort. To deal with these problems, we propose a semi-supervised learning method that can be used for blood vessel segmentation with limited labeled data. In this method, we use an improved U-Net deep learning network to segment the blood vessel tree and, on this basis, implement a U-Net-based training dataset updating strategy. A large number of experiments are presented to analyze the segmentation performance of the proposed semi-supervised learning method. The experimental results demonstrate that the proposed methodology is able to cope with the shortage of hand-labeled data and achieves satisfactory performance.
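The dataset-updating strategy described above is a form of self-training. The sketch below shows one generic way such a loop can be organized; the `model.fit`/`model.predict` interface, the confidence score, and the threshold are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal self-training sketch, assuming `model` is a U-Net-style segmenter
# exposing fit()/predict() helpers; names and thresholds are illustrative.
import numpy as np

def self_training(model, labeled_imgs, labeled_masks, unlabeled_imgs,
                  rounds: int = 3, conf_thresh: float = 0.95):
    train_x, train_y = list(labeled_imgs), list(labeled_masks)
    for _ in range(rounds):
        model.fit(np.stack(train_x), np.stack(train_y))        # train on current set
        remaining = []
        for img in unlabeled_imgs:
            prob = model.predict(img[None])[0]                  # per-pixel probabilities
            confidence = np.abs(prob - 0.5).mean() * 2          # crude confidence score
            if confidence >= conf_thresh:
                train_x.append(img)                             # adopt the pseudo-label
                train_y.append((prob > 0.5).astype(np.float32))
            else:
                remaining.append(img)
        unlabeled_imgs = remaining                              # shrink the unlabeled pool
    return model
```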
79
Yu L, Qin Z, Zhuang T, Ding Y, Qin Z, Raymond Choo KK. A framework for hierarchical division of retinal vascular networks. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.11.113] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
80
Yan Q, Chen B, Hu Y, Cheng J, Gong Y, Yang J, Liu J, Zhao Y. Speckle reduction of OCT via super resolution reconstruction and its application on retinal layer segmentation. Artif Intell Med 2020; 106:101871. [DOI: 10.1016/j.artmed.2020.101871] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Revised: 02/17/2020] [Accepted: 05/02/2020] [Indexed: 10/24/2022]
81
Ding L, Bawany MH, Kuriyan AE, Ramchandran RS, Wykoff CC, Sharma G. A Novel Deep Learning Pipeline for Retinal Vessel Detection In Fluorescein Angiography. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 29:10.1109/TIP.2020.2991530. [PMID: 32396087 PMCID: PMC7648732 DOI: 10.1109/tip.2020.2991530] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
While recent advances in deep learning have significantly advanced the state of the art for vessel detection in color fundus (CF) images, success in detecting vessels in fluorescein angiography (FA) has been stymied by the lack of labeled ground truth datasets. We propose a novel pipeline to detect retinal vessels in FA images using deep neural networks (DNNs) that reduces the effort required for generating labeled ground truth data by combining two key components: cross-modality transfer and human-in-the-loop learning. The cross-modality transfer exploits concurrently captured CF and fundus FA images. Binary vessel maps are first detected from CF images with a pre-trained neural network and are then geometrically registered with, and transferred to, FA images via robust parametric chamfer alignment to a preliminary FA vessel detection obtained with an unsupervised technique. Using the transferred vessels as initial ground truth labels for deep learning, the human-in-the-loop approach progressively improves the quality of the ground truth labeling by iterating between deep learning and labeling. The approach significantly reduces manual labeling effort while increasing engagement. We highlight several important considerations for the proposed methodology and validate the performance on three datasets. Experimental results demonstrate that the proposed pipeline significantly reduces the annotation effort and that the resulting deep learning methods outperform prior FA vessel detection methods by a significant margin. A new public dataset, RECOVERY-FA19, is introduced that includes high-resolution ultra-widefield images and accurately labeled ground truth binary vessel maps.
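A central quantity in chamfer alignment is the chamfer distance between two binary vessel maps. The snippet below is a small sketch of its symmetric form using a Euclidean distance transform; the search over parametric transforms that performs the actual registration is omitted.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Mean symmetric chamfer distance between two binary masks."""
    # Distance from every pixel to the nearest foreground pixel of the other mask.
    dist_to_b = distance_transform_edt(~mask_b.astype(bool))
    dist_to_a = distance_transform_edt(~mask_a.astype(bool))
    a_pts, b_pts = mask_a.astype(bool), mask_b.astype(bool)
    return 0.5 * (dist_to_b[a_pts].mean() + dist_to_a[b_pts].mean())

# Example with two slightly shifted synthetic "vessels":
a = np.zeros((64, 64), bool); a[30, 10:50] = True
b = np.zeros((64, 64), bool); b[32, 10:50] = True
print(chamfer_distance(a, b))   # ~2.0 pixels
```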
82
Mao J, Luo Y, Liu L, Lao J, Shao Y, Zhang M, Zhang C, Sun M, Shen L. Automated diagnosis and quantitative analysis of plus disease in retinopathy of prematurity based on deep convolutional neural networks. Acta Ophthalmol 2020; 98:e339-e345. [PMID: 31559701 DOI: 10.1111/aos.14264] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Accepted: 09/06/2019] [Indexed: 12/24/2022]
Abstract
BACKGROUND The purpose of this study was to develop an automated diagnosis and quantitative analysis system for plus disease. The system not only provides a diagnostic decision but also performs quantitative analysis of the typical pathological features of the disease, which helps physicians make the best judgement and communicate their decisions. METHODS The deep learning network provided segmentation of the retinal vessels and the optic disc (OD). Based on the vessel segmentation, plus disease was classified, and tortuosity, width, fractal dimension and vessel density were evaluated automatically. RESULTS The trained network achieved a sensitivity of 95.1% with 97.8% specificity for the diagnosis of plus disease. For detection of preplus or worse, the sensitivity and specificity were 92.4% and 97.4%. The quadratic weighted kappa was 0.9244. The tortuosities for the normal, preplus and plus groups were 3.61 ± 0.08, 5.95 ± 1.57 and 10.67 ± 0.50 (10⁴ cm⁻³). The widths of the blood vessels were 63.46 ± 0.39, 67.21 ± 0.70 and 68.89 ± 0.75 μm. The fractal dimensions were 1.18 ± 0.01, 1.22 ± 0.01 and 1.26 ± 0.02. The vessel densities were 1.39 ± 0.03, 1.60 ± 0.01 and 1.64 ± 0.09%. All values were statistically different among the groups. After treatment of plus disease with ranibizumab injection, quantitative analysis showed significant changes in the pathological features. CONCLUSIONS Our system achieved high accuracy in the diagnosis of plus disease in retinopathy of prematurity and provided a quantitative analysis of the dynamic features of disease progression. This automated system can assist physicians by providing a classification decision with auxiliary quantitative evaluation of the typical pathological features of the disease.
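As a rough illustration of one of the reported features, the sketch below computes a simple arc-to-chord tortuosity index for an ordered vessel centreline; the paper's tortuosity definition (reported in 10⁴ cm⁻³) is more elaborate, so this is only meant to convey the idea.

```python
import numpy as np

def arc_chord_tortuosity(points: np.ndarray) -> float:
    """points: (N, 2) ordered centreline coordinates; straight vessel -> 1.0."""
    seg = np.diff(points, axis=0)
    arc_length = np.sqrt((seg ** 2).sum(axis=1)).sum()
    chord_length = np.linalg.norm(points[-1] - points[0])
    return arc_length / max(chord_length, 1e-9)

# A wavy synthetic centreline has tortuosity greater than 1.
t = np.linspace(0, np.pi * 4, 200)
wavy = np.stack([t, 0.5 * np.sin(t)], axis=1)
print(arc_chord_tortuosity(wavy))   # > 1
```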
Affiliation(s)
- Jianbo Mao: Eye Hospital of Wenzhou Medical University, Wenzhou Medical University, Wenzhou, China
- Yuhao Luo: Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, China
- Lei Liu: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
- Jimeng Lao: Eye Hospital of Wenzhou Medical University, Wenzhou Medical University, Wenzhou, China
- Yirun Shao: Eye Hospital of Wenzhou Medical University, Wenzhou Medical University, Wenzhou, China
- Min Zhang: Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, China
- Caiyun Zhang: Eye Hospital of Wenzhou Medical University, Wenzhou Medical University, Wenzhou, China
- Mingzhai Sun: Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, China; Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, China
- Lijun Shen: Eye Hospital of Wenzhou Medical University, Wenzhou Medical University, Wenzhou, China
83
Mou L, Chen L, Cheng J, Gu Z, Zhao Y, Liu J. Dense Dilated Network With Probability Regularized Walk for Vessel Detection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1392-1403. [PMID: 31675323 DOI: 10.1109/tmi.2019.2950051] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The detection of retinal vessels is of great importance in the diagnosis and treatment of many ocular diseases. Many methods have been proposed for vessel detection. However, most of the algorithms neglect the connectivity of the vessels, which plays an important role in diagnosis. In this paper, we propose a novel method for retinal vessel detection. The proposed method includes a dense dilated network that produces an initial detection of the vessels and a probability regularized walk algorithm that addresses the fracture issue in the initial detection. The dense dilated network integrates newly proposed dense dilated feature extraction blocks into an encoder-decoder structure to extract and accumulate features at different scales. A multi-scale Dice loss function is adopted to train the network. To improve the connectivity of the segmented vessels, we also introduce a probability regularized walk algorithm to connect the broken vessels. The proposed method has been applied to three public data sets: DRIVE, STARE and CHASE_DB1. The results show that the proposed method outperforms the state-of-the-art methods in accuracy, sensitivity, specificity and area under the receiver operating characteristic curve.
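The abstract mentions a multi-scale Dice loss. Below is a minimal sketch of one way such a loss can be assembled, by evaluating the Dice loss on side outputs at several resolutions and averaging; tensor shapes and the equal weighting of scales are assumptions rather than the published formulation.

```python
import torch
import torch.nn.functional as F

def dice_loss(prob: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def multiscale_dice_loss(probs: list, target: torch.Tensor) -> torch.Tensor:
    """probs: list of predicted probability maps at decreasing resolutions."""
    losses = []
    for p in probs:
        t = F.interpolate(target, size=p.shape[-2:], mode="nearest")  # match each scale
        losses.append(dice_loss(p, t))
    return torch.stack(losses).mean()

# Example: full-, half- and quarter-resolution side outputs.
target = (torch.rand(1, 1, 128, 128) > 0.9).float()
probs = [torch.rand(1, 1, s, s) for s in (128, 64, 32)]
loss = multiscale_dice_loss(probs, target)
```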
84
Shukla AK, Pandey RK, Pachori RB. A fractional filter based efficient algorithm for retinal blood vessel segmentation. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101883] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
85
Zhou C, Zhang X, Chen H. A new robust method for blood vessel segmentation in retinal fundus images based on weighted line detector and hidden Markov model. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 187:105231. [PMID: 31786454 DOI: 10.1016/j.cmpb.2019.105231] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2019] [Revised: 11/08/2019] [Accepted: 11/17/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic vessel segmentation is a crucial preliminary processing step to facilitate ophthalmologist diagnosis in some diseases. However, due to the complexity of retinal fundus images, accurate segmentation of retinal vessels remains difficult. In this paper, a new method for retinal vessel segmentation is proposed to handle two main problems: missing thin vessels and false detection in difficult regions. METHODS First, an improved line detector is proposed and used to quickly extract the major structures of vessels. Then, a Hidden Markov model (HMM) is applied to effectively detect vessel centerlines, including those of thin vessels. Finally, a denoising approach is presented to remove noise, and the two types of vessels are unified to obtain the complete segmentation results. RESULTS Our method is tested on two public databases (the DRIVE and STARE databases), and six measures, namely accuracy (Acc), sensitivity (Se), specificity (Sp), Dice coefficient (Dc), structural similarity index (SSIM) and feature similarity index (FSIM), are used to evaluate our segmentation performance. The respective values of the performance measures are 0.9475, 0.7262, 0.9803, 0.7781, 0.9992 and 0.9793 for the DRIVE dataset and 0.9535, 0.7865, 0.9730, 0.7764, 0.9987 and 0.9742 for the STARE dataset. CONCLUSIONS The experimental results show that our method outperforms most published state-of-the-art methods and is better than the result of a human observer. Moreover, in terms of specificity, our proposed algorithm obtains the best score among the unsupervised methods. Meanwhile, there are excellent structural and feature similarities between our results and the ground truth according to the achieved SSIM and FSIM values. Visual inspection of the segmentation results shows that the proposed method produces more accurate segmentations in difficult regions such as the optic disc and central light reflex, while detecting thin vessels effectively compared with the other methods.
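For intuition about the line-detector component, the sketch below implements a basic multi-orientation line detector, where the response at each pixel is the largest mean intensity along short oriented segments minus the local window mean; the weighting scheme of the paper's improved detector and the HMM stage are not reproduced.

```python
import numpy as np
from scipy.ndimage import rotate, convolve, uniform_filter

def line_detector(img: np.ndarray, length: int = 15, n_angles: int = 12) -> np.ndarray:
    base = np.zeros((length, length))
    base[length // 2, :] = 1.0 / length                  # horizontal line kernel
    local_mean = uniform_filter(img, size=length)        # mean of the square window
    responses = []
    for k in range(n_angles):
        kernel = rotate(base, angle=180.0 * k / n_angles, reshape=False, order=1)
        kernel /= kernel.sum()                           # keep it a mean filter
        responses.append(convolve(img, kernel) - local_mean)
    return np.max(np.stack(responses), axis=0)           # best orientation per pixel

# Usage: vessels are dark on the green channel, so the detector is typically
# applied to the inverted green channel, e.g. line_detector(1.0 - green_channel).
```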
Affiliation(s)
- Chao Zhou: College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082 China
- Xiaogang Zhang: College of Electrical and Information Engineering, Hunan University, Changsha, 410082 China
- Hua Chen: College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410082 China
86
Zhao Y, Xie J, Zhang H, Zheng Y, Zhao Y, Qi H, Zhao Y, Su P, Liu J, Liu Y. Retinal Vascular Network Topology Reconstruction and Artery/Vein Classification via Dominant Set Clustering. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:341-356. [PMID: 31283498 DOI: 10.1109/tmi.2019.2926492] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The estimation of vascular network topology in complex networks is important in understanding the relationship between vascular changes and a wide spectrum of diseases. Automatic classification of the retinal vascular trees into arteries and veins is of direct assistance to the ophthalmologist in terms of diagnosis and treatment of eye disease. However, it is challenging due to their projective ambiguity and subtle changes in appearance, contrast, and geometry in the imaging process. In this paper, we propose a novel method that is capable of making the artery/vein (A/V) distinction in retinal color fundus images based on vascular network topological properties. To this end, we adapt the concept of dominant set clustering and formalize the retinal blood vessel topology estimation and the A/V classification as a pairwise clustering problem. The graph is constructed through image segmentation, skeletonization, and identification of significant nodes. The edge weight is defined as the inverse Euclidean distance between its two end points in the feature space of intensity, orientation, curvature, diameter, and entropy. The reconstructed vascular network is classified into arteries and veins based on their intensity and morphology. The proposed approach has been applied to five public databases, namely INSPIRE, IOSTAR, VICAVR, DRIVE, and WIDE, and achieved high accuracies of 95.1%, 94.2%, 93.8%, 91.1%, and 91.0%, respectively. Furthermore, we have made manual annotations of the blood vessel topologies for INSPIRE, IOSTAR, VICAVR, and DRIVE datasets, and these annotations are released for public access so as to facilitate researchers in the community.
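To make the clustering formulation more tangible, the sketch below builds the pairwise affinity matrix from inverse Euclidean distances in a node feature space and runs the standard replicator-dynamics iteration used for dominant set clustering; feature extraction from the vascular graph is assumed to have been done elsewhere.

```python
import numpy as np

def affinity_matrix(features: np.ndarray) -> np.ndarray:
    """features: (N, D) node features -> (N, N) affinities (inverse Euclidean distance)."""
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    A = 1.0 / (dist + 1e-9)
    np.fill_diagonal(A, 0.0)        # dominant-set formulations use zero self-affinity
    return A

def dominant_set(A: np.ndarray, iters: int = 1000, tol: float = 1e-8) -> np.ndarray:
    """Replicator-dynamics iteration; large entries of x mark the dominant set."""
    x = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(iters):
        x_new = x * (A @ x)
        x_new /= x_new.sum()
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x

# Toy usage with random node features (intensity, orientation, curvature, diameter, entropy):
feats = np.random.rand(30, 5)
membership = dominant_set(affinity_matrix(feats))
```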
87
Shang Q, Zhao Y, Chen Z, Hao H, Li F, Zhang X, Liu J. Automated Iris Segmentation from Anterior Segment OCT Images with Occludable Angles via Local Phase Tensor. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:4745-4749. [PMID: 31946922 DOI: 10.1109/embc.2019.8857336] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Morphological changes in the iris are one of the major causes of angle-closure glaucoma, and an anteriorly-bowed iris may be further associated with greater risk of disease progression from primary angle-closure suspect (PACS) to chronic primary angle-closure glaucoma (CPCAG). Consequently, the automated detection of abnormalities in the iris region is of great importance in the management of glaucoma. In this paper, we present a new method for the extraction of the iris region using a local phase tensor-based curvilinear structure enhancement method, and apply it to anterior segment optical coherence tomography (AS-OCT) imagery in the presence of an occludable iridocorneal angle. The proposed method is evaluated on a dataset of 200 anterior chamber angle (ACA) images, and the experimental results show that it outperforms existing state-of-the-art methods in applicability, effectiveness, and accuracy.
88
Zhao R, Zhao Y, Chen Z, Zhao Y, Yang J, Hu Y, Cheng J, Liu J. Speckle Reduction in Optical Coherence Tomography via Super-Resolution Reconstruction. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:5589-5592. [PMID: 31947122 DOI: 10.1109/embc.2019.8856445] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Reducing speckle noise in optical coherence tomography (OCT) images of the human retina is a fundamental step toward better visualization and analysis in retinal imaging, and thus supports the examination, diagnosis and treatment of many eye diseases. In this study, we propose a new method for speckle reduction in OCT images using super-resolution technology. It merges multiple images of the same scene acquired with sub-pixel movements and restores the missing signals in each pixel, which significantly improves image quality. The proposed method is evaluated on a dataset of 20 OCT volumes (5120 images) using the mean square error, the peak signal-to-noise ratio and the mean structural similarity index, with high-quality line-scan images as reference. The experimental results show that the proposed method outperforms existing state-of-the-art approaches in applicability, effectiveness, and accuracy.
89
Zhao H, Sun Y, Li H. Retinal vascular junction detection and classification via deep neural networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 183:105096. [PMID: 31586789 DOI: 10.1016/j.cmpb.2019.105096] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/15/2019] [Revised: 09/09/2019] [Accepted: 09/25/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVES The retinal fundus contains intricate vascular trees, some of which mutually intersect and overlap. The intersections and overlaps of retinal vessels represent vascular junctions (i.e. bifurcations and crossovers) in 2D retinal images. These junctions are important for analyzing vascular diseases and tracking the morphology of vessels. In this paper, we propose a two-stage pipeline to detect and classify junction points. METHODS In the detection stage, an RCNN-based Junction Proposal Network is used to search for potential bifurcation and crossover locations directly on color retinal images, followed by a Junction Refinement Network that eliminates false detections. In the classification stage, the detected junction points are identified as crossovers or bifurcations using the proposed Junction Classification Network, which shares the same model structure with the refinement network. RESULTS Our approach achieves 70% and 60% F1-scores on the DRIVE and IOSTAR datasets respectively, outperforming the state-of-the-art methods by 4.5% and 1.7%, with high and balanced precision and recall values. CONCLUSIONS This paper proposes a new junction detection and classification method which operates directly on color retinal images without any vessel segmentation or skeleton preprocessing. The superior performance demonstrates the effectiveness of our approach.
Affiliation(s)
- He Zhao: School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Yun Sun: School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Huiqi Li: School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
90
91
Strisciuglio N, Azzopardi G, Petkov N. Robust Inhibition-Augmented Operator for Delineation of Curvilinear Structures. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 28:5852-5866. [PMID: 31247549 DOI: 10.1109/tip.2019.2922096] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Delineation of curvilinear structures in images is an important basic step of several image processing applications, such as segmentation of roads or rivers in aerial images, vessels or stained membranes in medical images, and cracks in pavements and roads, among others. Existing methods suffer from insufficient robustness to noise. In this paper, we propose a novel operator for the detection of curvilinear structures in images, which we demonstrate to be robust to various types of noise and effective in several applications. We call it RUSTICO, which stands for RobUST Inhibition-augmented Curvilinear Operator. It is inspired by push-pull inhibition in the visual cortex and takes as input the responses of two trainable B-COSFIRE filters of opposite polarity. The output of RUSTICO consists of a magnitude map and an orientation map. We carried out experiments on a dataset of synthetic stimuli with noise drawn from different distributions, as well as on several benchmark datasets of retinal fundus images, pavement cracks, and aerial images, and on a new dataset of rose bushes used for automatic gardening. We evaluated the performance of RUSTICO with a metric that considers the structural properties of line networks (connectivity, area, and length) and demonstrated that RUSTICO outperforms many existing methods with high statistical significance. RUSTICO exhibits high robustness to noise and texture.
92
Yin XX, Irshad S, Zhang Y. Artery/vein classification of retinal vessels using classifiers fusion. Health Inf Sci Syst 2019; 7:26. [PMID: 31749960 DOI: 10.1007/s13755-019-0090-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2019] [Accepted: 10/28/2019] [Indexed: 11/28/2022] Open
Abstract
Morphological changes in retinal blood vessels indicate cardiovascular diseases, and those diseases in turn lead to ocular complications such as hypertensive retinopathy. One of the significant clinical findings related to this ocular abnormality is alteration of vessel width. The classification of retinal vessels into arteries and veins in eye fundus images is a relevant task for the automatic assessment of vascular changes. This paper presents an approach to this problem based on feature ranking strategies and a multiple-classifier decision-combination scheme that is specifically adapted for artery/vein classification. Three databases are used: a local dataset of 44 images and two publicly available databases, INSPIRE-AVR containing 40 images and VICAVR containing 58 images. The local database also contains images with pathologically diseased structures. The performance of the proposed system is assessed by comparing the experimental results with the gold standard estimations as well as with the results of previous methodologies, achieving promising classification performance with overall accuracies of 90.45%, 93.90% and 87.82% in retinal blood vessel separation for the local, INSPIRE-AVR and VICAVR datasets, respectively.
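A minimal sketch of a decision-combination (classifier fusion) step is given below using scikit-learn's soft voting over three base classifiers; the features, the classifier choices, and the synthetic labels are placeholders rather than the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X = np.random.rand(200, 10)            # per-segment features (e.g. colour, width)
y = np.random.randint(0, 2, 200)       # 0 = vein, 1 = artery (synthetic labels)

fusion = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True))],
    voting="soft")                     # average the predicted class probabilities
fusion.fit(X, y)
print(fusion.predict(X[:5]))
```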
Affiliation(s)
- Xiao-Xia Yin: Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, 510006 China
- Samra Irshad: Institute for Sustainable Industries and Liveable Cities, Victoria University, Melbourne, Australia
- Yanchun Zhang: Institute for Sustainable Industries and Liveable Cities, Victoria University, Melbourne, Australia
93
Hau SC, Devarajan K, Ang M. Anterior Segment Optical Coherence Tomography Angiography and Optical Coherence Tomography in the Evaluation of Episcleritis and Scleritis. Ocul Immunol Inflamm 2019; 29:362-369. [PMID: 31714864 DOI: 10.1080/09273948.2019.1682617] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Purpose: To evaluate the feasibility of using anterior segment optical coherence tomography (AS-OCT) and AS-OCT angiography (AS-OCTA) in assessing patients with episcleritis and scleritis. Methods: Degree of vascularity [vessel density index (VDI)], measured with AS-OCTA, and sclera thickness [conjunctiva epithelium (CE), conjunctiva/episclera complex (CEC), and episclera/sclera complex (ESC)], measured with AS-OCT, were compared. Results: A total of 37 eyes (13 episcleritis, 11 scleritis, 13 controls) were analyzed. VDI was lowest for controls across the various tissue depths (p < .001). Episcleritis versus scleritis revealed a significant difference in VDI at the ESC (38.1 ± 11.4% vs 46.4 ± 6.4%; p = .03). Mean sclera thickness was lower in controls for CE (p < .001) and CEC (p < .001) but not for ESC (p = .54). Conclusions: The degree of vascularity and tissue thickness differed between episcleritis, scleritis and controls. AS-OCTA and AS-OCT may potentially be useful in evaluating patients with scleral inflammation.
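For reference, a vessel density index of the kind used here can be reduced to the fraction of vessel pixels inside a region of interest of a binarised angiography slab, expressed as a percentage; the sketch below illustrates this, with the threshold and ROI being assumptions.

```python
import numpy as np

def vessel_density_index(octa_slab: np.ndarray, roi_mask: np.ndarray,
                         threshold: float) -> float:
    """Percentage of ROI pixels classified as vessel after thresholding."""
    vessel = octa_slab > threshold
    return 100.0 * vessel[roi_mask].mean()

# Example: a synthetic slab with ~40% of values above the threshold inside a circular ROI.
img = np.random.rand(128, 128)
yy, xx = np.mgrid[:128, :128]
roi = (yy - 64) ** 2 + (xx - 64) ** 2 < 50 ** 2
print(vessel_density_index(img, roi, threshold=0.6))   # ~40
```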
Affiliation(s)
- Scott C Hau: NIHR Moorfields Clinical Research Facility, Moorfields Eye Hospital, London, UK
- Kavya Devarajan: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Marcus Ang: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Department of Ophthalmology and Visual Sciences, Duke-National University of Singapore Medical School, Singapore
94
Mao J, Luo Y, Chen K, Lao J, Chen L, Shao Y, Zhang C, Sun M, Shen L. New grading criterion for retinal haemorrhages in term newborns based on deep convolutional neural networks. Clin Exp Ophthalmol 2019; 48:220-229. [PMID: 31648403 DOI: 10.1111/ceo.13670] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2019] [Revised: 10/16/2019] [Accepted: 10/16/2019] [Indexed: 11/29/2022]
Abstract
BACKGROUND To define a new quantitative grading criterion for retinal haemorrhages in term newborns based on the segmentation results of a deep convolutional neural network. METHODS We constructed a dataset of 1543 retina images acquired from 847 term newborns, and developed a deep convolutional neural network to segment retinal haemorrhages, blood vessels and optic discs and locate the macular region. Based on the ratio of areas of retinal haemorrhage to optic disc, and the location of retinal haemorrhages relative to the macular region, we defined a new criterion to grade the degree of retinal haemorrhages in term newborns. RESULTS The F1 scores of the proposed network for segmenting retinal haemorrhages, blood vessels and optic discs were 0.84, 0.73 and 0.94, respectively. Compared with two commonly used retinal haemorrhage grading criteria, this new method is more accurate, objective and quantitative, with the relative location of the retinal haemorrhages to the macula as an important factor. CONCLUSIONS Based on a deep convolutional neural network, we can segment retinal haemorrhages, blood vessels and optic disc with high accuracy. The proposed grading criterion considers not only the area of the haemorrhages but also the locations relative to the macular region. It provides a more objective and comprehensive evaluation criterion. The developed deep convolutional neural network offers an end-to-end solution that can assist doctors to grade retinal haemorrhages in term newborns.
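The two inputs to the grading criterion, the haemorrhage-to-disc area ratio and macular involvement, can be computed directly from the segmentation masks, as in the hedged sketch below; the actual cut-off values that map these inputs to a grade are defined in the paper and are not reproduced here.

```python
import numpy as np

def haemorrhage_grade_inputs(haem_mask: np.ndarray, disc_mask: np.ndarray,
                             macula_mask: np.ndarray):
    """Return (area ratio of haemorrhage to optic disc, macular involvement flag)."""
    disc_area = max(int(disc_mask.sum()), 1)              # avoid division by zero
    area_ratio = float(haem_mask.sum()) / disc_area
    involves_macula = bool((haem_mask & macula_mask).any())
    return area_ratio, involves_macula

# Toy example with synthetic binary masks:
h = np.zeros((64, 64), bool); h[10:14, 10:14] = True
d = np.zeros((64, 64), bool); d[30:40, 30:40] = True
m = np.zeros((64, 64), bool); m[5:20, 5:20] = True
print(haemorrhage_grade_inputs(h, d, m))   # (0.16, True)
```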
Affiliation(s)
- Jianbo Mao: Eye Hospital of Wenzhou Medical University, Wenzhou Medical University, Wenzhou, China
- Yuhao Luo: Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, China
- Kun Chen: Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, China
- Jimeng Lao: Eye Hospital of Wenzhou Medical University, Wenzhou Medical University, Wenzhou, China
- Ling'an Chen: Department of Automation, University of Science and Technology of China, Hefei, China
- Yirun Shao: Eye Hospital of Wenzhou Medical University, Wenzhou Medical University, Wenzhou, China
- Caiyun Zhang: Eye Hospital of Wenzhou Medical University, Wenzhou Medical University, Wenzhou, China
- Mingzhai Sun: Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, China; Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, China
- Lijun Shen: Eye Hospital of Wenzhou Medical University, Wenzhou Medical University, Wenzhou, China
95
Cherukuri V, G VKB, Bala R, Monga V. Deep Retinal Image Segmentation with Regularization Under Geometric Priors. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 29:2552-2567. [PMID: 31613766 DOI: 10.1109/tip.2019.2946078] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Vessel segmentation of retinal images is a key diagnostic capability in ophthalmology. This problem faces several challenges including low contrast, variable vessel size and thickness, and presence of interfering pathology such as micro-aneurysms and hemorrhages. Early approaches addressing this problem employed hand-crafted filters to capture vessel structures, accompanied by morphological post-processing. More recently, deep learning techniques have been employed with significantly enhanced segmentation accuracy. We propose a novel domain enriched deep network that consists of two components: 1) a representation network that learns geometric features specific to retinal images, and 2) a custom designed computationally efficient residual task network that utilizes the features obtained from the representation layer to perform pixel-level segmentation. The representation and task networks are jointly learned for any given training set. To obtain physically meaningful and practically effective representation filters, we propose two new constraints that are inspired by expected prior structure on these filters: 1) orientation constraint that promotes geometric diversity of curvilinear features, and 2) a data adaptive noise regularizer that penalizes false positives. Multi-scale extensions are developed to enable accurate detection of thin vessels. Experiments performed on three challenging benchmark databases under a variety of training scenarios show that the proposed prior guided deep network outperforms state of the art alternatives as measured by common evaluation metrics, while being more economical in network size and inference time.
96
Yan Q, Zhao Y, Zheng Y, Liu Y, Zhou K, Frangi A, Liu J. Automated retinal lesion detection via image saliency analysis. Med Phys 2019; 46:4531-4544. [PMID: 31381173 DOI: 10.1002/mp.13746] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Revised: 07/11/2019] [Accepted: 07/22/2019] [Indexed: 01/02/2023] Open
Abstract
BACKGROUND AND OBJECTIVE The detection of abnormalities such as lesions or leakage in retinal images is an important health informatics task for automated early diagnosis of diabetic and malarial retinopathy and other eye diseases, in order to prevent blindness and common systemic conditions. In this work, we propose a novel retinal lesion detection method by adapting the concept of saliency. METHODS Retinal images are first segmented into superpixels, and two new saliency feature representations, uniqueness and compactness, are then derived to represent the superpixels. The pixel-level saliency is then estimated from these superpixel saliency values via a bilateral filter. The extracted saliency features form a matrix for low-rank analysis to achieve saliency detection. The precise contour of a lesion is finally extracted from the generated saliency map after removing confounding structures such as blood vessels, the optic disc, and the fovea. The main novelty of this method is that it is an effective tool for detecting different abnormalities at the pixel level from different modalities of retinal images, without the need to tune parameters. RESULTS To evaluate its effectiveness, we have applied our method to seven public datasets of diabetic and malarial retinopathy covering four different types of lesions: exudates, hemorrhages, microaneurysms, and leakage. The evaluation was undertaken at the pixel level, lesion level, or image level according to ground truth availability in these datasets. CONCLUSIONS The experimental results show that the proposed method outperforms existing state-of-the-art methods in applicability, effectiveness, and accuracy.
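As an illustration of the superpixel "uniqueness" idea, the sketch below segments an RGB image with SLIC and scores each superpixel by its summed colour distance to all other superpixels; the mean-RGB descriptor, the SLIC parameters, and the omission of the compactness, bilateral-filtering, and low-rank steps are simplifications of the method described above.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_uniqueness(image: np.ndarray, n_segments: int = 300) -> np.ndarray:
    """image: (H, W, 3) float RGB in [0, 1] -> pixel-level uniqueness map in [0, 1]."""
    labels = slic(image, n_segments=n_segments, start_label=0)
    n = labels.max() + 1
    means = np.array([image[labels == k].mean(axis=0) for k in range(n)])   # (n, 3)
    # Uniqueness of a superpixel = summed colour distance to all other superpixels.
    dists = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=-1)
    uniqueness = dists.sum(axis=1)
    uniqueness = (uniqueness - uniqueness.min()) / (np.ptp(uniqueness) + 1e-9)
    return uniqueness[labels]           # project back to a pixel-level saliency map

# Usage: saliency = superpixel_uniqueness(rgb_image)  # rgb_image: float array in [0, 1]
```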
Affiliation(s)
- Qifeng Yan: University of Chinese Academy of Sciences, Beijing, 100049, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China
- Yitian Zhao: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China
- Yalin Zheng: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; Department of Eye and Vision Science, University of Liverpool, Liverpool, L7 8TX, UK
- Yonghuai Liu: Department of Computer Science, Edge Hill University, Ormskirk, L39 4QP, UK
- Kang Zhou: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Alejandro Frangi: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; School of Computing, University of Leeds, Leeds, S2 9JT, UK
- Jiang Liu: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
97
Gu Z, Cheng J, Fu H, Zhou K, Hao H, Zhao Y, Zhang T, Gao S, Liu J. CE-Net: Context Encoder Network for 2D Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2281-2292. [PMID: 30843824 DOI: 10.1109/tmi.2019.2903562] [Citation(s) in RCA: 774] [Impact Index Per Article: 129.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Medical image segmentation is an important step in medical image analysis. With the rapid development of convolutional neural networks in image processing, deep learning has been used for medical image segmentation tasks such as optic disc segmentation, blood vessel detection, lung segmentation, and cell segmentation. Previously, U-Net based approaches have been proposed; however, their consecutive pooling and strided convolutional operations lead to the loss of some spatial information. In this paper, we propose a context encoder network (CE-Net) to capture more high-level information and preserve spatial information for 2D medical image segmentation. CE-Net mainly contains three major components: a feature encoder module, a context extractor, and a feature decoder module. We use the pretrained ResNet block as the fixed feature extractor. The context extractor module is formed by a newly proposed dense atrous convolution block and a residual multi-kernel pooling block. We applied the proposed CE-Net to different 2D medical image segmentation tasks. Comprehensive results show that the proposed method outperforms the original U-Net method and other state-of-the-art methods for optic disc segmentation, vessel detection, lung segmentation, cell contour segmentation, and retinal optical coherence tomography layer segmentation.
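To give a concrete flavour of a dilated-convolution context extractor, the sketch below shows a block of parallel atrous convolutions whose outputs are summed with the identity; channel counts, dilation rates, and wiring are assumptions and do not reproduce CE-Net's published dense atrous convolution block.

```python
import torch
import torch.nn as nn

class AtrousContextBlock(nn.Module):
    """Parallel dilated 3x3 convolutions fused by summation (illustrative only)."""

    def __init__(self, channels: int, rates=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch sees a different receptive field; summing them with the
        # identity mixes multi-scale context without changing the resolution.
        out = x
        for branch in self.branches:
            out = out + self.relu(branch(x))
        return out

x = torch.randn(1, 256, 32, 32)
print(AtrousContextBlock(256)(x).shape)   # torch.Size([1, 256, 32, 32])
```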
98
Zhang Y, Lian J, Rong L, Jia W, Li C, Zheng Y. Even faster retinal vessel segmentation via accelerated singular value decomposition. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04505-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
99
Yue K, Zou B, Chen Z, Liu Q. Retinal vessel segmentation using dense U-net with multiscale inputs. J Med Imaging (Bellingham) 2019; 6:034004. [PMID: 31572745 DOI: 10.1117/1.jmi.6.3.034004] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2019] [Accepted: 08/30/2019] [Indexed: 11/14/2022] Open
Abstract
A color fundus image is an image of the inner wall of the eyeball taken with a fundus camera. Doctors can observe retinal vessel changes in the image, and these changes can be used to diagnose many serious diseases such as atherosclerosis, glaucoma, and age-related macular degeneration. Automated segmentation of retinal vessels can facilitate more efficient diagnosis of these diseases. We propose an improved U-net architecture to segment retinal vessels. Multiscale input layer and dense block are introduced into the conventional U-net, so that the network can make use of richer spatial context information. The proposed method is evaluated on the public dataset DRIVE, achieving 0.8199 in sensitivity and 0.9561 in accuracy. Especially for thin blood vessels, which are difficult to detect because of their low contrast with the background pixels, the segmentation results have been improved.
Affiliation(s)
- Kejuan Yue: Central South University, School of Computer Science and Engineering, Changsha, China; Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China; Hunan First Normal University, School of Information Science and Engineering, Changsha, China
- Beiji Zou: Central South University, School of Computer Science and Engineering, Changsha, China; Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Zailiang Chen: Central South University, School of Computer Science and Engineering, Changsha, China; Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Qing Liu: Central South University, School of Computer Science and Engineering, Changsha, China; Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
100
Arsalan M, Owais M, Mahmood T, Cho SW, Park KR. Aiding the Diagnosis of Diabetic and Hypertensive Retinopathy Using Artificial Intelligence-Based Semantic Segmentation. J Clin Med 2019; 8:E1446. [PMID: 31514466 PMCID: PMC6780110 DOI: 10.3390/jcm8091446] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2019] [Revised: 09/04/2019] [Accepted: 09/07/2019] [Indexed: 12/13/2022] Open
Abstract
Automatic segmentation of retinal images is an important task in computer-assisted medical image analysis for the diagnosis of diseases such as hypertension, diabetic and hypertensive retinopathy, and arteriosclerosis. Among these diseases, diabetic retinopathy, which is the leading cause of retinal detachment, can be diagnosed early through the detection of retinal vessels. The manual detection of these retinal vessels is a time-consuming process that can be automated with the help of artificial intelligence and deep learning. The detection of vessels is difficult due to intensity variation and noise from non-ideal imaging. Although there are deep learning approaches for vessel segmentation, these methods require many trainable parameters, which increase the network complexity. To address these issues, this paper presents a dual-residual-stream-based vessel segmentation network (Vess-Net), which is not as deep as conventional semantic segmentation networks, but provides good segmentation with few trainable parameters and layers. The method takes advantage of artificial intelligence for semantic segmentation to aid the diagnosis of retinopathy. To evaluate the proposed Vess-Net method, experiments were conducted with three publicly available datasets for vessel segmentation: digital retinal images for vessel extraction (DRIVE), the Child Heart and Health Study in England (CHASE-DB1), and structured analysis of the retina (STARE). Experimental results show that Vess-Net achieved superior performance on all datasets, with sensitivity (Se), specificity (Sp), area under the curve (AUC), and accuracy (Acc) of 80.22%, 98.1%, 98.2%, and 96.55% for DRIVE; 82.06%, 98.41%, 98.0%, and 97.26% for CHASE-DB1; and 85.26%, 97.91%, 98.83%, and 96.97% for the STARE dataset.
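The sensitivity, specificity, and accuracy values quoted above are derived from pixel-wise confusion counts; the short helper below shows how such metrics are typically computed from a predicted binary vessel map and its ground truth (AUC additionally requires the soft probability map and is omitted).

```python
import numpy as np

def seg_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Pixel-wise sensitivity, specificity, and accuracy for a binary segmentation."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt); tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt); fn = np.sum(~pred & gt)
    return {
        "sensitivity": tp / max(tp + fn, 1),     # Se: recall of vessel pixels
        "specificity": tn / max(tn + fp, 1),     # Sp: recall of background pixels
        "accuracy": (tp + tn) / pred.size,       # Acc
    }

# Example with random masks:
print(seg_metrics(np.random.rand(64, 64) > 0.5, np.random.rand(64, 64) > 0.5))
```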
Affiliation(s)
- Muhammad Arsalan: Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Muhammad Owais: Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Tahir Mahmood: Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Se Woon Cho: Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Kang Ryoung Park: Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea