201
Retinal Blood Vessel Segmentation Using Hybrid Features and Multi-Layer Perceptron Neural Networks. Symmetry (Basel) 2020. [DOI: 10.3390/sym12060894]
Abstract
Segmentation of retinal blood vessels is the first step in several computer-aided diagnosis (CAD) systems, not only for ocular diseases such as diabetic retinopathy (DR) but also for non-ocular diseases such as hypertension, stroke, and cardiovascular disease. In this paper, a supervised learning-based method using a multi-layer perceptron neural network and a carefully selected vector of features is proposed. In particular, for each pixel of a retinal fundus image, we construct a 24-D feature vector encoding information on local intensity, morphological transformations, principal moments of phase congruency, Hessian, and difference-of-Gaussian values. A post-processing technique based on mathematical morphology operators is used to refine the segmentation. Moreover, the selected feature vector captures the symmetric features that yield the final blood vessel probability as a binary map image. The proposed method is tested on three well-known datasets: Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE), and CHASE_DB1. The experimental results, both visual and quantitative, testify to the robustness of the proposed method, which achieved accuracy, sensitivity, and specificity of 0.9607, 0.7542, and 0.9843 on DRIVE; 0.9632, 0.7806, and 0.9825 on STARE; and 0.9577, 0.7585, and 0.9846 on CHASE_DB1. Furthermore, they show that the method compares favourably with seven similar state-of-the-art methods.
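The pixel-wise feature construction can be illustrated with a minimal numpy sketch. This is not the paper's 24-D vector: it stacks just two of the named feature families (raw local intensity and a difference-of-Gaussian response), with the Gaussian blur implemented from scratch, and all names are illustrative.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalised to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur via 1-D convolutions along each axis."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def pixel_features(img, sigmas=(1.0, 2.0)):
    """Stack per-pixel features: raw intensity plus one difference-of-Gaussian
    channel per adjacent sigma pair (a tiny stand-in for the paper's 24-D vector)."""
    chans = [img.astype(float)]
    blurred = [blur(img.astype(float), s) for s in sigmas]
    for a, b in zip(blurred, blurred[1:]):
        chans.append(a - b)
    return np.stack(chans, axis=-1)  # shape (H, W, n_features)

img = np.random.rand(32, 32)
feats = pixel_features(img)
print(feats.shape)  # (32, 32, 2)
```

Each pixel's feature row would then be fed to the classifier (here an MLP) to produce a vessel probability.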
202
Feng S, Zhuo Z, Pan D, Tian Q. CcNet: A cross-connected convolutional network for segmenting retinal vessels using multi-scale features. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.10.098]
203
Hao D, Ding S, Qiu L, Lv Y, Fei B, Zhu Y, Qin B. Sequential vessel segmentation via deep channel attention network. Neural Netw 2020; 128:172-187. [PMID: 32447262] [DOI: 10.1016/j.neunet.2020.05.005]
Abstract
Accurately segmenting contrast-filled vessels from X-ray coronary angiography (XCA) image sequences is an essential step in the diagnosis and therapy of coronary artery disease. However, automatic vessel segmentation is particularly challenging due to overlapping structures, low contrast, and complex, dynamic background artifacts in XCA images. This paper develops a novel encoder-decoder deep network architecture that exploits several contextual frames of a 2D+t image sequence, in a sliding window centered at the current frame, to segment the 2D vessel mask of that frame. The architecture combines temporal-spatial feature extraction in the encoder, feature fusion in the skip-connection layers, and a channel attention mechanism in the decoder. In the encoder, a series of 3D convolutional layers hierarchically extract temporal-spatial features. Skip-connection layers then fuse the temporal-spatial feature maps and deliver them to the corresponding decoder stages. To discriminate vessel features from the complex, noisy backgrounds of XCA images, the decoder uses channel attention blocks to refine the intermediate feature maps from the skip-connection layers before decoding them in 2D to produce the segmented vessel masks. Furthermore, a Dice loss function is used to train the network, tackling the class imbalance in the XCA data caused by the wide distribution of complex background artifacts. Extensive comparisons with state-of-the-art algorithms demonstrate the proposed method's superior performance in terms of both quantitative metrics and visual validation. To facilitate reproducible research in the XCA community, we publicly release our dataset and source code at https://github.com/Binjie-Qin/SVS-net.
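The Dice loss used to counter class imbalance can be sketched directly. Below is a minimal numpy version of the standard soft Dice loss; the paper's exact formulation (smoothing, batching) may differ.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary mask: 1 - 2|P∩T| / (|P| + |T|).
    `pred` holds probabilities in [0, 1]; `target` is a 0/1 mask."""
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

target = np.zeros((8, 8)); target[2:5, 2:5] = 1    # 9 vessel pixels out of 64
perfect = soft_dice_loss(target, target)           # 0.0: perfect overlap
empty = soft_dice_loss(np.zeros((8, 8)), target)   # ~1.0: nothing predicted
print(round(perfect, 4), round(empty, 4))
```

Unlike pixel-wise cross-entropy, the loss ignores the large true-negative background, which is why it suits sparse vessel masks.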
Affiliation(s)
- Dongdong Hao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Song Ding
- Department of Cardiology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200127, China
- Linwei Qiu
- School of Astronautics, Beihang University, Beijing 100191, China
- Yisong Lv
- School of Continuing Education, Shanghai Jiao Tong University, Shanghai 200240, China
- Baowei Fei
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, University of Texas at Dallas, Richardson, TX 75080, USA
- Yueqi Zhu
- Department of Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai Jiao Tong University, 600 Yi Shan Road, Shanghai 200233, China
- Binjie Qin
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
204
Ding L, Bawany MH, Kuriyan AE, Ramchandran RS, Wykoff CC, Sharma G. A Novel Deep Learning Pipeline for Retinal Vessel Detection in Fluorescein Angiography. IEEE Trans Image Process 2020; 29. [PMID: 32396087] [PMCID: PMC7648732] [DOI: 10.1109/TIP.2020.2991530]
Abstract
While recent advances in deep learning have significantly advanced the state of the art for vessel detection in color fundus (CF) images, success in detecting vessels in fluorescein angiography (FA) has been stymied by the lack of labeled ground truth datasets. We propose a novel pipeline to detect retinal vessels in FA images using deep neural networks (DNNs) that reduces the effort required for generating labeled ground truth data by combining two key components: cross-modality transfer and human-in-the-loop learning. The cross-modality transfer exploits concurrently captured CF and fundus FA images. Binary vessel maps are first detected from CF images with a pre-trained neural network and then geometrically registered with, and transferred to, FA images via robust parametric chamfer alignment to a preliminary FA vessel detection obtained with an unsupervised technique. Using the transferred vessels as initial ground truth labels, the human-in-the-loop approach progressively improves the quality of the labeling by iterating between deep learning and labeling. The approach significantly reduces manual labeling effort while increasing engagement. We highlight several important considerations for the proposed methodology and validate the performance on three datasets. Experimental results demonstrate that the proposed pipeline significantly reduces the annotation effort and that the resulting deep learning methods outperform existing FA vessel detection methods by a significant margin. A new public dataset, RECOVERY-FA19, is introduced that includes high-resolution ultra-widefield images and accurately labeled ground truth binary vessel maps.
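Chamfer alignment scores a registration by nearest-neighbour distances between the foreground pixels of two vessel maps. The sketch below computes a brute-force symmetric chamfer distance as an illustration of the quantity being minimised; the paper uses a robust parametric variant, and the function name is ours.

```python
import numpy as np

def chamfer_distance(mask_a, mask_b):
    """Symmetric chamfer distance between the foreground pixels of two binary
    masks: mean nearest-neighbour distance in both directions. Brute force,
    fine for small masks; practical pipelines use a distance transform."""
    pa = np.argwhere(mask_a)                                  # (|A|, 2) pixel coords
    pb = np.argwhere(mask_b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)  # all pairs
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = np.zeros((16, 16), bool); a[8, 2:14] = True    # horizontal vessel stroke
b = np.zeros((16, 16), bool); b[10, 2:14] = True   # same stroke shifted down by 2
print(chamfer_distance(a, a), chamfer_distance(a, b))  # 0.0 2.0
```

Minimising this score over the parameters of a geometric transform aligns the CF-derived vessel map with the preliminary FA detection.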
205
Abstract
Ophthalmology is a core medical field of broad interest. Retinal examination is a commonly performed diagnostic procedure used to inspect the interior of the eye and screen for pathological symptoms. Although various types of eye examination exist, in many cases it is difficult to accurately identify the retinal condition of the patient because the test image resolution is very low owing to the simple acquisition methods used. In this paper, we propose an image synthesis approach that reconstructs the vessel image from past retinal image data using the multilayer perceptron concept of artificial neural networks. The proposed approach can convert vessel images into vessel-centered images with clearer identification, even for low-resolution retinal images. To verify the proposed approach, we examined whether high-resolution vessel images could be extracted from low-resolution images through a statistical analysis of high- and low-resolution images taken from the same patient.
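As a rough illustration of the multilayer-perceptron idea, the sketch below runs a forward pass of a small, untrained MLP that maps a flattened low-resolution patch to a larger output patch. The layer sizes and activations are hypothetical, chosen only to show the shapes involved; they are not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights, biases):
    """Forward pass of a fully connected MLP: tanh hidden units and a
    sigmoid output, so outputs land in (0, 1) like pixel intensities."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ W + b)
    W, b = weights[-1], biases[-1]
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

# map an 8x8 low-resolution patch (64 inputs) to a 16x16 patch (256 outputs)
sizes = [64, 128, 256]
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

low_res = rng.random((5, 64))             # a batch of 5 flattened patches
high_res = mlp_forward(low_res, weights, biases)
print(high_res.shape)  # (5, 256)
```

Training such a network on paired low/high-resolution patches from the same patients is what would make the mapping meaningful; the weights here are random.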
206
Mou L, Chen L, Cheng J, Gu Z, Zhao Y, Liu J. Dense Dilated Network With Probability Regularized Walk for Vessel Detection. IEEE Trans Med Imaging 2020; 39:1392-1403. [PMID: 31675323] [DOI: 10.1109/TMI.2019.2950051]
Abstract
The detection of retinal vessels is of great importance in the diagnosis and treatment of many ocular diseases. Many methods have been proposed for vessel detection; however, most neglect the connectivity of the vessels, which plays an important role in diagnosis. In this paper, we propose a novel method for retinal vessel detection comprising a dense dilated network that produces an initial detection of the vessels and a probability regularized walk algorithm that addresses fractures in the initial detection. The dense dilated network integrates newly proposed dense dilated feature extraction blocks into an encoder-decoder structure to extract and accumulate features at different scales, and a multi-scale Dice loss function is adopted to train the network. To improve the connectivity of the segmented vessels, the probability regularized walk algorithm reconnects the broken vessels. The proposed method has been applied to three public datasets: DRIVE, STARE, and CHASE_DB1. The results show that it outperforms state-of-the-art methods in accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve.
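Dilated convolution is the scale-expanding ingredient of the dense dilated blocks: spacing the taps by a dilation rate grows the receptive field without adding weights. A minimal 1-D numpy illustration (not the paper's block), with illustrative names:

```python
import numpy as np

def dilated_conv1d(x, w, rate):
    """1-D dilated convolution with no padding: the kernel taps are spaced
    `rate` samples apart, so a 3-tap kernel at rate r spans 2*r + 1 samples."""
    span = (len(w) - 1) * rate                    # receptive field minus one
    out = np.empty(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(w[k] * x[i + k * rate] for k in range(len(w)))
    return out

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, w, rate=1))  # 3-sample sums: starts 3, 6, 9, ...
print(dilated_conv1d(x, w, rate=3))  # same 3 weights now span 7 samples
```

Stacking branches with increasing rates (as a dense dilated block does in 2D) lets the same parameter budget see both thin and thick vessels.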
207
Yuan F, Dai N, Tian S, Zhang B, Sun Y, Yu Q, Liu H. Personalized design technique for the dental occlusal surface based on conditional generative adversarial networks. Int J Numer Method Biomed Eng 2020; 36:e3321. [PMID: 32043311] [DOI: 10.1002/cnm.3321]
Abstract
Tooth defects are a frequently occurring condition in the dental clinic. However, traditional manual restoration of a defective tooth requires an especially long treatment time, and dental computer-aided design and manufacturing (CAD/CAM) systems fail to restore the personalized anatomical features of natural teeth. To address the shortcomings of existing methods, this article proposes an intelligent network model for designing the tooth crown surface based on conditional generative adversarial networks. The dataset for training the network model is constructed by generating depth maps of 3D tooth models acquired with an intraoral scanner. Through adversarial training, the network model generates the tooth occlusal surface under the constraints of the spatial occlusal relationship, a perceptual loss, and an occlusal groove filter loss. Finally, we assess the quality of the generated occlusal surface and its occlusal relationship with the opposing tooth. The experimental results demonstrate that our method can automatically reconstruct the personalized anatomical features of the occlusal surface and shorten treatment time while restoring the full functionality of the defective tooth.
Affiliation(s)
- Fulai Yuan
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Ning Dai
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Sukun Tian
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Bei Zhang
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Yuchun Sun
- Peking University School and Hospital of Stomatology, Beijing, People's Republic of China
- Qing Yu
- Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, People's Republic of China
- Hao Liu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
208
Abstract
We present a method for virtual staining for morphological analysis of individual biological cells based on stain-free digital holography, allowing clinicians and biologists to visualize and analyze cells as if they had been chemically stained. Our approach provides numerous advantages: it 1) circumvents the possible toxicity of staining materials, 2) saves time and resources, 3) reduces inter- and intra-lab variability, 4) allows concurrent staining of different types of cells with multiple virtual stains, and 5) provides ideal conditions for real-time analysis, such as rapid stain-free imaging flow cytometry. The proposed method is shown to be accurate, repeatable, and nonsubjective; hence, it bears great potential to become a common tool in clinical settings and biological research. Many medical and biological protocols for analyzing individual biological cells involve morphological evaluation based on cell staining, designed to enhance imaging contrast and enable clinicians and biologists to differentiate between various cell organelles. However, cell staining is not always allowed in certain medical procedures; in other cases, staining may be time-consuming or expensive to implement. Staining protocols may also be operator-sensitive, leading to varying analytical results, artificial imaging artifacts, or false heterogeneity. We present a deep-learning approach, called HoloStain, which converts images of isolated biological cells acquired by holographic microscopy without staining into their virtually stained counterparts. We demonstrate this approach for human sperm cells, as there is a well-established protocol and global standardization for characterizing the morphology of stained human sperm cells for fertility evaluation, whereas staining might be cytotoxic and thus is not allowed during human in vitro fertilization (IVF). After a training process, the deep neural network can take images of unseen sperm cells retrieved from holograms acquired without staining and convert them to their stain-like counterparts. We obtained a fivefold recall improvement in the analysis results, demonstrating the advantage of virtual staining for sperm cell analysis. With the introduction of simple holographic imaging methods in clinical settings, the proposed method has great potential to become a common practice in human IVF procedures, as well as to significantly simplify and radically change other cell analyses and techniques, such as imaging flow cytometry.
209
Islam MM, Poly TN, Walther BA, Yang HC, Li YC. Artificial Intelligence in Ophthalmology: A Meta-Analysis of Deep Learning Models for Retinal Vessels Segmentation. J Clin Med 2020; 9:E1018. [PMID: 32260311] [PMCID: PMC7231106] [DOI: 10.3390/jcm9041018]
Abstract
BACKGROUND AND OBJECTIVE Accurate retinal vessel segmentation is often considered a reliable biomarker for the diagnosis and screening of various diseases, including cardiovascular, diabetic, and ophthalmologic diseases. Recently, deep learning (DL) algorithms have demonstrated high performance in segmenting retinal images, which may enable fast and lifesaving diagnoses. To our knowledge, there is no systematic review of the current work in this research area; therefore, we performed a systematic review with meta-analysis of relevant studies to quantify the performance of DL algorithms in retinal vessel segmentation. METHODS A systematic search of EMBASE, PubMed, Google Scholar, Scopus, and Web of Science was conducted for studies published between 1 January 2000 and 15 January 2020, following the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) procedure. A DL-based study design was mandatory for inclusion. Two authors independently screened all titles and abstracts against predefined inclusion and exclusion criteria, and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool was used to assess risk of bias and applicability. RESULTS Thirty-one studies were included in the systematic review; however, only 23 met the inclusion criteria for the meta-analysis. DL showed high performance on four publicly available databases, achieving an average area under the ROC curve of 0.96, 0.97, 0.96, and 0.94 on the DRIVE, STARE, CHASE_DB1, and HRF databases, respectively. The pooled sensitivity for the DRIVE, STARE, CHASE_DB1, and HRF databases was 0.77, 0.79, 0.78, and 0.81, respectively, and the pooled specificity was 0.97, 0.97, 0.97, and 0.92, respectively. CONCLUSION Our findings show that DL algorithms had high sensitivity and specificity for segmenting retinal vessels from digital fundus images. The future role of DL algorithms in retinal vessel segmentation is promising, especially for countries with limited access to healthcare. More comprehensive studies and global efforts are needed to evaluate the cost-effectiveness of DL-based tools for retinal disease screening worldwide.
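Pooling per-study proportions such as sensitivity is commonly done by inverse-variance weighting on the logit scale. The sketch below shows a fixed-effect version on hypothetical counts; the review's pooled estimates may come from a different (e.g. random-effects) model, so this is only an illustration of the general mechanism.

```python
import numpy as np

def pooled_proportion(successes, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale: weight each study's logit by the reciprocal of its variance
    1/s + 1/(n - s), then map the pooled logit back through the sigmoid."""
    s = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    logit = np.log(s / (n - s))
    w = 1.0 / (1.0 / s + 1.0 / (n - s))          # inverse-variance weights
    pooled_logit = (w * logit).sum() / w.sum()
    return 1.0 / (1.0 + np.exp(-pooled_logit))

# hypothetical per-study true-positive counts out of evaluated vessel pixels
sens = pooled_proportion([77, 79, 81], [100, 100, 100])
print(round(sens, 3))
```

With identical study sizes the pooled value lands near the middle of the per-study sensitivities, as expected.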
Affiliation(s)
- Md. Mohaimenul Islam
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; (M.M.I.); (T.N.P.); (H.C.Y.)
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Tahmina Nasrin Poly
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Bruno Andreas Walther
- Department of Biological Sciences, National Sun Yat-Sen University, Gushan District, Kaohsiung City 804, Taiwan
- Hsuan Chia Yang
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Yu-Chuan (Jack) Li
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Department of Dermatology, Wan Fang Hospital, Taipei 110, Taiwan
- TMU Research Center of Cancer Translational Medicine, Taipei Medical University, Taipei 110, Taiwan
210
Tajbakhsh N, Jeyaseelan L, Li Q, Chiang JN, Wu Z, Ding X. Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Med Image Anal 2020; 63:101693. [PMID: 32289663] [DOI: 10.1016/j.media.2020.101693]
Abstract
The medical imaging literature has witnessed remarkable progress in high-performing segmentation models based on convolutional neural networks. Despite the new performance highs, the recent advanced segmentation models still require large, representative, and high quality annotated datasets. However, rarely do we have a perfect training dataset, particularly in the field of medical imaging, where data and annotations are both expensive to acquire. Recently, a large body of research has studied the problem of medical image segmentation with imperfect datasets, tackling two major dataset limitations: scarce annotations where only limited annotated data is available for training, and weak annotations where the training data has only sparse annotations, noisy annotations, or image-level annotations. In this article, we provide a detailed review of the solutions above, summarizing both the technical novelties and empirical results. We further compare the benefits and requirements of the surveyed methodologies and provide our recommended solutions. We hope this survey article increases the community awareness of the techniques that are available to handle imperfect medical image segmentation datasets.
Affiliation(s)
- Qian Li
- VoxelCloud, Inc., United States
211
Hervella ÁS, Rouco J, Novo J, Penedo MG, Ortega M. Deep multi-instance heatmap regression for the detection of retinal vessel crossings and bifurcations in eye fundus images. Comput Methods Programs Biomed 2020; 186:105201. [PMID: 31783244] [DOI: 10.1016/j.cmpb.2019.105201]
Abstract
BACKGROUND AND OBJECTIVES The analysis of the retinal vasculature plays an important role in the diagnosis of many ocular and systemic diseases. In this context, accurate detection of vessel crossings and bifurcations is an important requirement for the automated extraction of relevant biomarkers. We propose a novel approach that addresses the simultaneous detection of vessel crossings and bifurcations in eye fundus images. METHOD We formulate the detection of vessel crossings and bifurcations in eye fundus images as multi-instance heatmap regression. In particular, a deep neural network is trained to predict multi-instance heatmaps that model the likelihood of a pixel being a landmark location. This approach makes predictions on full images and integrates the detection and distinction of the vascular landmarks into a single step. RESULTS The proposed method is validated on two public reference datasets that include detailed annotations for vessel crossings and bifurcations in eye fundus images. The experiments show that the proposed method performs satisfactorily, achieving F-scores of 74.23% and 70.90% for the detection of crossings and bifurcations, respectively, in color fundus images, and that it outperforms previous works by a significant margin. CONCLUSIONS The proposed multi-instance heatmap regression successfully exploits the potential of modern deep learning algorithms for the simultaneous detection of retinal vessel crossings and bifurcations. This results in a significant improvement over previous methods, which will further facilitate the automated analysis of the retinal vasculature in many pathological conditions.
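The regression targets of such a network are typically built by placing a Gaussian bump at each landmark. A minimal numpy sketch of a multi-instance heatmap follows; the paper's exact target construction (sigma, combination rule) may differ.

```python
import numpy as np

def landmark_heatmap(shape, points, sigma=2.0):
    """Multi-instance heatmap: one Gaussian bump per landmark, combined with
    a pixel-wise max so overlapping bumps keep a peak of exactly 1 at each
    landmark location."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    heat = np.zeros(shape)
    for (y, x) in points:
        bump = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
        heat = np.maximum(heat, bump)
    return heat

h = landmark_heatmap((32, 32), [(8, 8), (20, 25)])
print(h.shape, h[8, 8], h[20, 25])  # peak of 1.0 at both landmarks
```

The network then regresses such maps from the fundus image, and local maxima of the prediction are read out as crossing/bifurcation candidates.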
Affiliation(s)
- Álvaro S Hervella
- CITIC-Research Center of Information and Communication Technologies, Universidade da Coruña, A Coruña, Spain; Department of Computer Science, Universidade da Coruña, A Coruña, Spain.
- José Rouco
- CITIC-Research Center of Information and Communication Technologies, Universidade da Coruña, A Coruña, Spain; Department of Computer Science, Universidade da Coruña, A Coruña, Spain
- Jorge Novo
- CITIC-Research Center of Information and Communication Technologies, Universidade da Coruña, A Coruña, Spain; Department of Computer Science, Universidade da Coruña, A Coruña, Spain
- Manuel G Penedo
- CITIC-Research Center of Information and Communication Technologies, Universidade da Coruña, A Coruña, Spain; Department of Computer Science, Universidade da Coruña, A Coruña, Spain
- Marcos Ortega
- CITIC-Research Center of Information and Communication Technologies, Universidade da Coruña, A Coruña, Spain; Department of Computer Science, Universidade da Coruña, A Coruña, Spain
212
Cerebrovascular segmentation from TOF-MRA using model- and data-driven method via sparse labels. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.092]
213
Zhang M, Zhang C, Wu X, Cao X, Young GS, Chen H, Xu X. A neural network approach to segment brain blood vessels in digital subtraction angiography. Comput Methods Programs Biomed 2020; 185:105159. [PMID: 31710990] [PMCID: PMC7518214] [DOI: 10.1016/j.cmpb.2019.105159]
Abstract
BACKGROUND AND OBJECTIVE Cerebrovascular diseases (CVDs) affect a large number of patients and often have devastating outcomes. The hallmarks of CVDs are abnormalities of the brain blood vessels, including protrusions, narrowing, widening, and bifurcation of the vessels. CVDs are often diagnosed by digital subtraction angiography (DSA), yet interpreting DSA is challenging, as one must carefully examine each brain blood vessel. The objective of this work is to develop a computerized analysis approach for automated segmentation of brain blood vessels. METHODS We present a U-Net-based deep learning approach, combined with pre-processing, to track and segment brain blood vessels in DSA images. We compared the results of the deep learning approach with manually marked ground truth using accuracy, sensitivity, specificity, and Dice coefficient. RESULTS The proposed approach achieved an accuracy of 0.978 (standard deviation 0.00796), a sensitivity of 0.76 (standard deviation 0.096), a specificity of 0.994 (standard deviation 0.0036), and an average Dice coefficient of 0.8268 (standard deviation 0.052). CONCLUSIONS Our findings show that the deep learning approach can achieve satisfactory performance as a computer-aided analysis tool to assist clinicians in diagnosing CVDs.
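The four reported metrics all derive from the confusion matrix between the predicted and ground-truth masks. A minimal numpy implementation on toy masks (function name ours):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Accuracy, sensitivity, specificity, and Dice from two binary masks,
    computed from the pixel-wise confusion matrix."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # vessel pixels correctly found
    tn = np.sum(~pred & ~truth)   # background correctly rejected
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }

truth = np.zeros((10, 10)); truth[4:6, :] = 1    # 20 vessel pixels
pred = np.zeros((10, 10)); pred[4:6, 0:5] = 1    # half of them found, no FPs
m = segmentation_metrics(pred, truth)
print(m["sensitivity"], m["dice"])  # 0.5 and 2/3
```

Note how the large background inflates accuracy (0.9 here) even when half the vessel is missed, which is why sensitivity and Dice are reported alongside it.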
Affiliation(s)
- Min Zhang
- Departments of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA 02115, USA
- Chen Zhang
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Xian Wu
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Xinhua Cao
- Departments of Radiology, Boston Children’s Hospital and Harvard Medical School, Boston, MA 02115, USA
- Geoffrey S. Young
- Departments of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA 02115, USA
- Huai Chen
- Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, Guangdong 510120, China
- Corresponding authors. (X. Xu)
- Xiaoyin Xu
- Departments of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA 02115, USA
- Corresponding authors. (X. Xu)
214
Guo Y, Peng Y. BSCN: bidirectional symmetric cascade network for retinal vessel segmentation. BMC Med Imaging 2020; 20:20. [PMID: 32070306] [PMCID: PMC7029442] [DOI: 10.1186/s12880-020-0412-7]
Abstract
Background Retinal blood vessel segmentation has important guiding significance for the analysis and diagnosis of cardiovascular diseases such as hypertension and diabetes. However, traditional manual segmentation of retinal blood vessels is not only time-consuming and laborious but also cannot guarantee diagnostic accuracy and efficiency. It is therefore especially important to develop a computer-aided method for automatic and accurate retinal vessel segmentation. Methods In order to extract the contours of blood vessels of different diameters and thereby achieve fine segmentation of retinal vessels, we propose a Bidirectional Symmetric Cascade Network (BSCN), in which each layer is supervised by vessel contour labels of a specific diameter scale instead of training all layers with one general ground truth. In addition, to enrich the multi-scale feature representation of retinal blood vessels, we propose the Dense Dilated Convolution Module (DDCM), which extracts retinal vessel features of different diameters by adjusting the dilation rate in the dilated convolution branches and generates two vessel contour predictions, one per direction. All dense dilated convolution module outputs are fused to obtain the final vessel segmentation results. Results We experimented on four datasets, DRIVE, STARE, HRF, and CHASE_DB1, and the proposed method reaches accuracies of 0.9846/0.9872/0.9856/0.9889 and AUCs of 0.9874/0.9941/0.9882/0.9874 on DRIVE, STARE, HRF, and CHASE_DB1, respectively. Conclusions The experimental results show that, compared with state-of-the-art methods, the proposed method is robust: it not only avoids adverse interference from the lesion background but also accurately detects tiny blood vessels at intersections.
Affiliation(s)
- Yanfei Guo
- College of Information Science and Engineering,Shandong University of Science and Technology, Shandong, Qingdao 266590, China
| | - Yanjun Peng
- College of Information Science and Engineering,Shandong University of Science and Technology, Shandong, Qingdao 266590, China. .,Shandong Province Key Laboratory of Wisdom Mining Information Technology, Shandong University of Science and Technology, Shandong, Qingdao 266590, China.
215
Tensor-cut: A tensor-based graph-cut blood vessel segmentation method and its application to renal artery segmentation. Med Image Anal 2020; 60:101623. [DOI: 10.1016/j.media.2019.101623] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2019] [Revised: 11/18/2019] [Accepted: 11/25/2019] [Indexed: 11/19/2022]
216
Guan Q, Huang Y. Multi-label chest X-ray image classification via category-wise residual attention learning. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2018.10.027] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
217
Yi X, Adams S, Babyn P, Elnajmi A. Automatic Catheter and Tube Detection in Pediatric X-ray Images Using a Scale-Recurrent Network and Synthetic Data. J Digit Imaging 2020; 33:181-190. [PMID: 30972586 PMCID: PMC7064683 DOI: 10.1007/s10278-019-00201-7] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022] Open
Abstract
Catheters are commonly inserted life-supporting devices. Because serious complications can arise from malpositioned catheters, X-ray images are used to assess the position of a catheter immediately after placement. Previous computer vision approaches to detecting catheters on X-ray images were either rule-based or only capable of processing a limited number or type of catheters projecting over the chest. With the resurgence of deep learning, supervised training approaches are beginning to show promising results. However, dense annotation maps are required, and the work of a human annotator is difficult to scale. In this work, we propose an automatic approach for the detection of catheters and tubes on pediatric X-ray images. We propose a simple way of synthesizing catheters on X-ray images to generate a training dataset, exploiting the fact that catheters are essentially tubular structures with various cross-sectional profiles. Further, we develop a UNet-style segmentation network with a recurrent module that can process inputs at multiple scales and iteratively refine the detection result. Trained on adult chest X-rays, the proposed network exhibits promising detection results on pediatric chest/abdomen X-rays in terms of both precision and recall, with Fβ = 0.8. The approach described in this work may contribute to the development of clinical systems that detect and assess the placement of catheters on X-ray images. This may provide a solution to triage and prioritize X-ray images with potentially malpositioned catheters for a radiologist's urgent review and help automate radiology reporting.
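The synthesis step exploits the tubular geometry of catheters; a minimal sketch of drawing one synthetic catheter onto a blank background follows. The polynomial centerline and the `width` and `contrast` parameters are hypothetical illustrative choices, not the authors' exact procedure:

```python
import numpy as np

def render_catheter(shape, coeffs, width=2.0, contrast=0.4):
    """Overlay a tubular structure along the polynomial centerline y = poly(x).

    A parabolic distance-to-centerline profile gives the structure a soft,
    tube-like cross section, mimicking a catheter's radiographic appearance.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    center = np.polyval(coeffs, xs)          # centerline row at every column
    dist = np.abs(ys - center)               # vertical distance to centerline
    profile = np.clip(1.0 - (dist / width) ** 2, 0.0, 1.0)  # cross section
    return contrast * profile                # additive brightness map

background = np.zeros((64, 64))              # stand-in for a real X-ray
synthetic = background + render_catheter((64, 64), coeffs=[0.01, 0.2, 10.0])
```

In the paper's pipeline the overlay would be composited onto real adult chest X-rays with varied profiles; here the background is an empty array just to keep the sketch self-contained.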
Affiliation(s)
- X Yi
- College of Medicine, University of Saskatchewan, Saskatoon, SK, Canada.
- Scott Adams
- College of Medicine, University of Saskatchewan, Saskatoon, SK, Canada
- Paul Babyn
- College of Medicine, University of Saskatchewan, Saskatoon, SK, Canada
- Abdul Elnajmi
- College of Medicine, University of Saskatchewan, Saskatoon, SK, Canada
218
Samber DD, Ramachandran S, Sahota A, Naidu S, Pruzan A, Fayad ZA, Mani V. Segmentation of carotid arterial walls using neural networks. World J Radiol 2020; 12:1-9. [PMID: 31988700 PMCID: PMC6928332 DOI: 10.4329/wjr.v12.i1.1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Revised: 10/11/2019] [Accepted: 11/20/2019] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Automated, accurate, objective, and quantitative medical image segmentation has remained a challenging goal in computer science since its inception. This study applies the technique of convolutional neural networks (CNNs) to the task of segmenting carotid arteries to aid in the assessment of pathology. AIM To investigate CNN's utility as an ancillary tool for researchers who require accurate segmentation of carotid vessels. METHODS An expert reader delineated vessel wall boundaries on 4422 axial T2-weighted magnetic resonance images of bilateral carotid arteries from 189 subjects with clinically evident atherosclerotic disease. A portion of this dataset was used to train two CNNs (one to segment the vessel lumen and the other to segment the vessel wall) with the remaining portion used to test the algorithm's efficacy by comparing CNN segmented images with those of an expert reader. RESULTS Overall quantitative assessment between automated and manual segmentations was determined by computing the DICE coefficient for each pair of segmented images in the test dataset for each CNN applied. The average DICE coefficient for the test dataset (CNN segmentations compared to expert's segmentations) was 0.96 for the lumen and 0.87 for the vessel wall. Pearson correlation values and the intra-class correlation coefficient (ICC) were computed for the lumen (Pearson = 0.98, ICC = 0.98) and vessel wall (Pearson = 0.88, ICC = 0.86) segmentations. Bland-Altman plots of area measurements for the CNN and expert readers indicate good agreement with a mean bias of 1%-8%. CONCLUSION Although the technique produces reasonable results that are on par with expert human assessments, our application requires human supervision and monitoring to ensure consistent results. We intend to deploy this algorithm as part of a software platform to lessen researchers' workload to more quickly obtain reliable results.
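The DICE (Dice) coefficient used for the quantitative assessment compares the overlap of two binary masks; a minimal NumPy version, assuming boolean masks of equal shape:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:                 # both masks empty: define as perfect overlap
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])   # toy "CNN" segmentation
b = np.array([[1, 0, 0], [0, 1, 1]])   # toy expert segmentation
print(dice_coefficient(a, b))          # 2*2 / (3+3) ≈ 0.667
```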
Affiliation(s)
- Daniel D Samber
- Translational and Molecular Imaging Institute (TMII), Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Sarayu Ramachandran
- Translational and Molecular Imaging Institute (TMII), Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Anoop Sahota
- Translational and Molecular Imaging Institute (TMII), Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Sonum Naidu
- Translational and Molecular Imaging Institute (TMII), Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Alison Pruzan
- Translational and Molecular Imaging Institute (TMII), Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Zahi A Fayad
- Translational and Molecular Imaging Institute (TMII), Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
- Venkatesh Mani
- Translational and Molecular Imaging Institute (TMII), Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
219
Pan L, Shi F, Xiang D, Yu K, Duan L, Zheng J, Chen X. OCTRexpert: A Feature-based 3D Registration Method for Retinal OCT Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 29:3885-3897. [PMID: 31995490 DOI: 10.1109/tip.2020.2967589] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Medical image registration can be used for studying longitudinal and cross-sectional data, quantitatively monitoring disease progression and guiding computer assisted diagnosis and treatments. However, deformable registration, which enables more precise and quantitative comparison, has not been well developed for retinal optical coherence tomography (OCT) images. This paper proposes a new 3D registration approach for retinal OCT data called OCTRexpert. To the best of our knowledge, the proposed algorithm is the first full 3D registration approach for retinal OCT images which can be applied to longitudinal OCT images for both normal and serious pathological subjects. In this approach, a pre-processing method is first performed to remove eye motion artifact and then a novel design-detection-deformation strategy is applied for the registration. In the design step, a set of features is designed for each voxel in the image. In the detection step, active voxels are selected and the point-to-point correspondences between the subject and template images are established. In the deformation step, the image is hierarchically deformed according to the detected correspondences in a multi-resolution fashion. The proposed method is evaluated on a dataset with longitudinal OCT images from 20 healthy subjects and 4 subjects diagnosed with serious Choroidal Neovascularization (CNV). Experimental results show that the proposed registration algorithm consistently yields statistically significant improvements in both Dice similarity coefficient and the average unsigned surface error compared with the other registration methods.
220
Hua CH, Huynh-The T, Lee S. Retinal Vessel Segmentation using Round-wise Features Aggregation on Bracket-shaped Convolutional Neural Networks. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:36-39. [PMID: 31945839 DOI: 10.1109/embc.2019.8856552] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
With the recent advent of deep learning in medical image processing, the retinal blood vessel segmentation topic has been comprehensively handled by numerous research works. However, since the ratio between the numbers of vessel and background pixels is heavily imbalanced, many attempts utilized patches augmented from original fundus images along with fully convolutional networks to address this pixel-wise labeling problem, which incurs significant computational cost. In this paper, a method using Round-wise Features Aggregation on Bracket-shaped convolutional neural networks (RFA-BNet) is proposed to remove the need for patch augmentation while efficiently handling the irregular and diverse representation of retinal vessels. Particularly, given raw fundus images, typical feature maps extracted from a pretrained backbone network are fed to a bracket-shaped decoder, wherein middle-scale features are continuously exploited round by round. Then, the highest-resolution decoded maps of each round are aggregated, enabling the model to flexibly learn various degrees of embedded semantic detail while retaining proper annotations of thin and small vessels. Finally, the proposed approach showed its effectiveness in terms of sensitivity (0.7932), specificity (0.9741), accuracy (0.9511), and AUROC (0.9732) on the DRIVE dataset.
221
Liu X, Du T, Zhang H, Sun C. Detection and Classification of Chronic Total Occlusion lesions using Deep Learning. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:828-831. [PMID: 31946023 DOI: 10.1109/embc.2019.8856696] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Cardiovascular disease (CVD) is one of the diseases with the highest mortality rate in modern society, while chronic total occlusion (CTO) is the initial factor that influences the success rate of percutaneous coronary intervention (PCI), one of the most common treatments for CVD. In this work, novel deep convolutional neural networks (CNNs) are proposed to automatically detect the entry point of a CTO and classify its morphology from coronary angiography. Specifically, a feature pyramid network (FPN) module and model fusion are applied in the detection network, and data augmentation and an attentive regularization loss with a reciprocative learning algorithm are used in the classification network. Extracted from coronary angiography, the dataset consists of 2059 cases annotated by professional cardiologists. Experiment results show that the recall of CTO detection reaches up to 89.3%, and the sensitivity and specificity of CTO classification reach up to 94.5% and 89.1%, respectively.
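The reported recall, sensitivity, and specificity all derive from confusion-matrix counts; as a quick reminder of those definitions (the counts below are made up for illustration, not the paper's data):

```python
def detection_metrics(tp, fp, tn, fn):
    """Recall (= sensitivity) and specificity from confusion-matrix counts."""
    recall = tp / (tp + fn)          # fraction of true positives found
    specificity = tn / (tn + fp)     # fraction of negatives correctly rejected
    return recall, specificity

# hypothetical counts chosen to reproduce an 89.3% / 89.1% split
recall, spec = detection_metrics(tp=893, fp=109, tn=891, fn=107)
print(round(recall, 3), round(spec, 3))  # 0.893 0.891
```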
222
223
Nalepa J, Ribalta Lorenzo P, Marcinkiewicz M, Bobek-Billewicz B, Wawrzyniak P, Walczak M, Kawulok M, Dudzik W, Kotowski K, Burda I, Machura B, Mrukwa G, Ulrych P, Hayball MP. Fully-automated deep learning-powered system for DCE-MRI analysis of brain tumors. Artif Intell Med 2020; 102:101769. [DOI: 10.1016/j.artmed.2019.101769] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2018] [Revised: 10/28/2019] [Accepted: 11/20/2019] [Indexed: 02/01/2023]
224
Ye F, Yin S, Li M, Li Y, Zhong J. In-vivo full-field measurement of microcirculatory blood flow velocity based on intelligent object identification. JOURNAL OF BIOMEDICAL OPTICS 2020; 25:1-11. [PMID: 31970945 PMCID: PMC6975132 DOI: 10.1117/1.jbo.25.1.016003] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/28/2019] [Accepted: 12/30/2019] [Indexed: 05/09/2023]
Abstract
Microcirculation plays a crucial role in delivering oxygen and nutrients to living tissues and in removing metabolic wastes from the human body. Monitoring the velocity of blood flow in microcirculation is essential for assessing various diseases, such as diabetes, cancer, and critical illnesses. Because of the complex morphological pattern of the capillaries, both in-vivo capillary identification and blood flow velocity measurement by conventional optical capillaroscopy are challenging. Thus, we focused on developing an in-vivo optical microscope for capillary imaging, and we propose an in-vivo full-field flow velocity measurement method based on intelligent object identification. The proposed method realizes full-field blood flow velocity measurements in microcirculation by employing a deep neural network to automatically identify and distinguish capillaries from images. In addition, a spatiotemporal diagram analysis is used for flow velocity calculation. In-vivo experiments were conducted, and the images and videos of capillaries were collected for analysis. We demonstrated that the proposed method is highly accurate in performing full-field blood flow velocity measurements in microcirculation. Further, because this method is simple and inexpensive, it can be effectively employed in clinics.
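Spatiotemporal diagram analysis estimates velocity from the slope of the streak a moving blood cell traces in a position-versus-time image; a generic least-squares sketch of that step (the sampling rate and velocity below are invented, not the paper's data):

```python
import numpy as np

def velocity_from_st_diagram(times, positions):
    """Estimate flow velocity as the least-squares slope of the streak a
    moving cell traces in a space-time (spatiotemporal) diagram."""
    slope, _intercept = np.polyfit(times, positions, deg=1)
    return slope

# synthetic streak: a cell moving at 0.5 mm/s sampled every 0.1 s
t = np.arange(0.0, 1.0, 0.1)
x = 0.5 * t + 0.02                      # position along the capillary (mm)
print(velocity_from_st_diagram(t, x))   # ≈ 0.5 (mm/s)
```

In practice the (time, position) samples would be extracted from the streak pixels of the diagram rather than generated analytically.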
Affiliation(s)
- Fei Ye
- Jinan University, Department of Optoelectronic Engineering, Guangzhou, China
- Songchao Yin
- Sun Yat-sen University, Third Affiliated Hospital, Department of Dermatology, Guangzhou, China
- Meirong Li
- Sun Yat-sen University, Third Affiliated Hospital, Department of Dermatology, Guangzhou, China
- Yujie Li
- Sun Yat-sen University, Sixth Affiliated Hospital, Reproductive Medicine Center, Guangzhou, China
- Jingang Zhong
- Jinan University, Department of Optoelectronic Engineering, Guangzhou, China
- Address all correspondence to Jingang Zhong, E-mail:
225
Liu S, Hong J, Lu X, Jia X, Lin Z, Zhou Y, Liu Y, Zhang H. Joint optic disc and cup segmentation using semi-supervised conditional GANs. Comput Biol Med 2019; 115:103485. [DOI: 10.1016/j.compbiomed.2019.103485] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2019] [Revised: 10/01/2019] [Accepted: 10/04/2019] [Indexed: 11/25/2022]
226
Wang Y, Dou H, Hu X, Zhu L, Yang X, Xu M, Qin J, Heng PA, Wang T, Ni D. Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2768-2778. [PMID: 31021793 DOI: 10.1109/tmi.2019.2913184] [Citation(s) in RCA: 78] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Automatic prostate segmentation in transrectal ultrasound (TRUS) images is of essential importance for image-guided prostate interventions and treatment planning. However, developing such automatic solutions remains very challenging due to the missing/ambiguous boundary and inhomogeneous intensity distribution of the prostate in TRUS, as well as the large variability in prostate shapes. This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN). Our attention module utilizes the attention mechanism to selectively leverage the multi-level features integrated from different layers to refine the features at each individual layer, suppressing the non-prostate noise at shallow layers of the CNN and injecting more prostate detail into the features at deep layers. Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance. The proposed attention mechanism is a general strategy to aggregate multi-level deep features and has the potential to be used for other medical image segmentation tasks. The code is publicly available at https://github.com/wulalago/DAF3D.
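The attention module's core operation, weighting multi-level features before fusing them, can be caricatured in NumPy; in the actual network the scores are learned and the maps come from CNN layers, whereas here both are fixed toy values:

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_fuse(feature_levels, scores):
    """Fuse same-shaped feature maps from different CNN depths with
    per-level attention weights (softmax over the level axis)."""
    stack = np.stack(feature_levels)             # (levels, H, W)
    weights = softmax(np.asarray(scores))        # one weight per level
    return np.tensordot(weights, stack, axes=1)  # weighted sum over levels

shallow = np.ones((4, 4)) * 1.0   # stand-in for noisy shallow-layer features
deep = np.ones((4, 4)) * 3.0      # stand-in for semantically rich deep features
fused = attentive_fuse([shallow, deep], scores=[0.0, 0.0])
print(fused[0, 0])                # equal weights → (1 + 3) / 2 = 2.0
```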
227
Choudhary P, Hazra A. Chest disease radiography in twofold: using convolutional neural networks and transfer learning. EVOLVING SYSTEMS 2019. [DOI: 10.1007/s12530-019-09316-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
228
Sengupta S, Singh A, Leopold HA, Gulati T, Lakshminarayanan V. Ophthalmic diagnosis using deep learning with fundus images - A critical review. Artif Intell Med 2019; 102:101758. [PMID: 31980096 DOI: 10.1016/j.artmed.2019.101758] [Citation(s) in RCA: 71] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2019] [Revised: 11/04/2019] [Accepted: 11/05/2019] [Indexed: 12/23/2022]
Abstract
An overview of the applications of deep learning for ophthalmic diagnosis using retinal fundus images is presented. We describe various retinal image datasets that can be used for deep learning purposes. Applications of deep learning for segmentation of optic disk, optic cup, blood vessels as well as detection of lesions are reviewed. Recent deep learning models for classification of diseases such as age-related macular degeneration, glaucoma, and diabetic retinopathy are also discussed. Important critical insights and future research directions are given.
Affiliation(s)
- Sourya Sengupta
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Ontario, Canada; Department of Systems Design Engineering, University of Waterloo, Ontario, Canada.
- Amitojdeep Singh
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Ontario, Canada; Department of Systems Design Engineering, University of Waterloo, Ontario, Canada
- Henry A Leopold
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Ontario, Canada; Department of Systems Design Engineering, University of Waterloo, Ontario, Canada
- Tanmay Gulati
- Department of Computer Science and Engineering, Manipal Institute of Technology, India
- Vasudevan Lakshminarayanan
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Ontario, Canada; Department of Systems Design Engineering, University of Waterloo, Ontario, Canada
229
Singh N, Kaur L, Singh K. Segmentation of retinal blood vessels based on feature-oriented dictionary learning and sparse coding using ensemble classification approach. J Med Imaging (Bellingham) 2019; 6:044006. [DOI: 10.1117/1.jmi.6.4.044006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2018] [Accepted: 11/04/2019] [Indexed: 11/14/2022] Open
Affiliation(s)
- Navdeep Singh
- Punjabi University, Department of Computer Science and Engineering, Patiala, Punjab
- Lakhwinder Kaur
- Punjabi University, Department of Computer Science and Engineering, Patiala, Punjab
- Kuldeep Singh
- Malaviya National Institute of Technology, Jaipur, Rajasthan
230
Chudzik P, Al-Diri B, Caliva F, Hunter A. DISCERN: Generative Framework for Vessel Segmentation using Convolutional Neural Network and Visual Codebook. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:5934-5937. [PMID: 30441687 DOI: 10.1109/embc.2018.8513604] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
This paper presents a novel two-stage vessel segmentation framework applied to retinal fundus images. In the first stage a convolutional neural network (CNN) is used to correlate an image patch with a corresponding ground truth reduced using Totally Random Trees Embedding. In the second stage training patches are forward propagated through the CNN to create a visual codebook. The codebook is used to build a generative nearest-neighbour search space that can be queried by feature vectors created through forward propagating previously-unseen patches through the CNN. The proposed framework is able to generate segmentation patches that were not seen during training. Evaluated on publicly available datasets (DRIVE, STARE), it demonstrated better performance than state-of-the-art methods in terms of multiple evaluation metrics. The accuracy, robustness, speed, and simplicity of the proposed framework demonstrate its suitability for automated vessel segmentation.
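The second-stage lookup reduces to nearest-neighbour search in a codebook of feature vectors; a minimal sketch with a toy 2-D codebook (in the real framework each codeword indexes a stored segmentation patch, and the vectors are high-dimensional CNN features):

```python
import numpy as np

def nearest_codeword(codebook, query):
    """Return the index of the codebook entry closest (L2) to `query`."""
    dists = np.linalg.norm(codebook - query, axis=1)
    return int(np.argmin(dists))

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])  # toy feature vectors
print(nearest_codeword(codebook, np.array([0.9, 1.2])))    # 1
```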
231
Kassim YM, Maude RJ, Palaniappan K. Sensitivity of Cross-Trained Deep CNNs for Retinal Vessel Extraction. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:2736-2739. [PMID: 30440967 DOI: 10.1109/embc.2018.8512764] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Automatic segmentation of the vascular network is a critical step in quantitatively characterizing vessel remodeling in retinal images and other tissues. We proposed a deep learning architecture consisting of 14 layers to extract blood vessels in fundoscopy images for the popular standard datasets DRIVE and STARE. Experimental results show that our CNN is characterized by superior identification of foreground vessel regions. It produces results with sensitivity 10% higher than other methods when trained on the same dataset, and more than 1% higher with cross training (trained on DRIVE, tested on STARE, and vice versa). Further, our results reach accuracy above 0.95, on par with state-of-the-art algorithms.
232
Toliušis R, Kurasova O, Bernatavičienė J. Semantic Segmentation of Eye Fundus Images Using Convolutional Neural Networks. INFORMACIJOS MOKSLAI 2019. [DOI: 10.15388/im.2019.85.20] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
This article reviews the problems of eye fundus analysis and the semantic segmentation algorithms used to distinguish the eye vessels and the optic disc. Various diseases, such as glaucoma, hypertension, diabetic retinopathy, macular degeneration, etc., can be diagnosed through changes and anomalies of the vessels and the optic disc. Convolutional neural networks, especially the U-Net architecture, are well-suited for semantic segmentation. A number of U-Net modifications have been recently developed that deliver excellent performance results.
233
Zhu C, Song F, Wang Y, Dong H, Guo Y, Liu J. Breast cancer histopathology image classification through assembling multiple compact CNNs. BMC Med Inform Decis Mak 2019; 19:198. [PMID: 31640686 PMCID: PMC6805574 DOI: 10.1186/s12911-019-0913-x] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2019] [Accepted: 09/09/2019] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Breast cancer causes hundreds of thousands of deaths each year worldwide. Early-stage diagnosis and treatment can significantly reduce the mortality rate. However, traditional manual diagnosis imposes an intense workload, and diagnostic errors become more likely as pathologists' work is prolonged. Automatic histopathology image recognition plays a key role in speeding up diagnosis and improving its quality. METHODS In this work, we propose a breast cancer histopathology image classification method that assembles multiple compact Convolutional Neural Networks (CNNs). First, a hybrid CNN architecture is designed, which contains a global model branch and a local model branch. Through local voting and two-branch information merging, our hybrid model obtains stronger representation ability. Second, by embedding the proposed Squeeze-Excitation-Pruning (SEP) block into our hybrid model, channel importance can be learned and the redundant channels are removed. The proposed channel pruning scheme decreases the risk of overfitting and produces higher accuracy at the same model size. Finally, with different data partitions and compositions, we build multiple models and assemble them to further enhance the model generalization ability. RESULTS Experimental results show that on the public BreaKHis dataset, our proposed hybrid model achieves performance comparable with the state of the art. By adopting the multi-model assembling scheme, our method outperforms the state of the art in both patient-level and image-level accuracy on the BACH dataset. CONCLUSIONS We propose a novel compact breast cancer histopathology image classification scheme by assembling multiple compact hybrid CNNs. The proposed scheme achieves promising results for the breast cancer image classification task. Our method can be used in breast cancer auxiliary diagnostic scenarios, and it can reduce the workload of pathologists as well as improve the quality of diagnosis.
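The SEP block's squeeze-gate-prune sequence can be sketched statically in NumPy; in the paper the gate weights are learned during training and pruning is driven by those learned importances, whereas here the weights are fixed toy values:

```python
import numpy as np

def squeeze_excitation_prune(features, gate_w, keep):
    """Toy Squeeze-Excitation-Pruning step on a (C, H, W) feature map:
    squeeze each channel to a scalar, gate channels through a weight
    vector + sigmoid, then keep only the `keep` strongest channels."""
    squeezed = features.mean(axis=(1, 2))            # (C,) global average pool
    gates = 1.0 / (1.0 + np.exp(-gate_w * squeezed)) # sigmoid excitation
    order = np.argsort(gates)[::-1][:keep]           # channel importance ranking
    kept = np.sort(order)                            # preserve channel order
    return features[kept] * gates[kept, None, None]  # re-weight surviving channels

feats = np.stack([np.full((2, 2), v) for v in [0.1, 2.0, 1.0]])  # 3 channels
gate_w = np.array([1.0, 1.0, 1.0])                               # toy gate weights
pruned = squeeze_excitation_prune(feats, gate_w, keep=2)
print(pruned.shape)  # (2, 2, 2): weakest channel removed
```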
Affiliation(s)
- Chuang Zhu
- The Center for Data Science, the Beijing Key Laboratory of Network System Architecture and Convergence, the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Xitucheng Road, Beijing, China
- Fangzhou Song
- The Center for Data Science, the Beijing Key Laboratory of Network System Architecture and Convergence, the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Xitucheng Road, Beijing, China
- Ying Wang
- The Department of Pathology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Gongren Tiyuchang Nanlu, Beijing, China
- Huihui Dong
- The Center for Data Science, the Beijing Key Laboratory of Network System Architecture and Convergence, the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Xitucheng Road, Beijing, China
- Yao Guo
- The Center for Data Science, the Beijing Key Laboratory of Network System Architecture and Convergence, the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Xitucheng Road, Beijing, China
- Jun Liu
- The Center for Data Science, the Beijing Key Laboratory of Network System Architecture and Convergence, the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Xitucheng Road, Beijing, China
234
Cherukuri V, G VKB, Bala R, Monga V. Deep Retinal Image Segmentation with Regularization Under Geometric Priors. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 29:2552-2567. [PMID: 31613766 DOI: 10.1109/tip.2019.2946078] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Vessel segmentation of retinal images is a key diagnostic capability in ophthalmology. This problem faces several challenges including low contrast, variable vessel size and thickness, and presence of interfering pathology such as micro-aneurysms and hemorrhages. Early approaches addressing this problem employed hand-crafted filters to capture vessel structures, accompanied by morphological post-processing. More recently, deep learning techniques have been employed with significantly enhanced segmentation accuracy. We propose a novel domain enriched deep network that consists of two components: 1) a representation network that learns geometric features specific to retinal images, and 2) a custom designed computationally efficient residual task network that utilizes the features obtained from the representation layer to perform pixel-level segmentation. The representation and task networks are jointly learned for any given training set. To obtain physically meaningful and practically effective representation filters, we propose two new constraints that are inspired by expected prior structure on these filters: 1) orientation constraint that promotes geometric diversity of curvilinear features, and 2) a data adaptive noise regularizer that penalizes false positives. Multi-scale extensions are developed to enable accurate detection of thin vessels. Experiments performed on three challenging benchmark databases under a variety of training scenarios show that the proposed prior guided deep network outperforms state of the art alternatives as measured by common evaluation metrics, while being more economical in network size and inference time.
235
Gu Z, Cheng J, Fu H, Zhou K, Hao H, Zhao Y, Zhang T, Gao S, Liu J. CE-Net: Context Encoder Network for 2D Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2281-2292. [PMID: 30843824 DOI: 10.1109/tmi.2019.2903562] [Citation(s) in RCA: 774] [Impact Index Per Article: 129.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Medical image segmentation is an important step in medical image analysis. With the rapid development of a convolutional neural network in image processing, deep learning has been used for medical image segmentation, such as optic disc segmentation, blood vessel detection, lung segmentation, cell segmentation, and so on. Previously, U-net based approaches have been proposed. However, the consecutive pooling and strided convolutional operations led to the loss of some spatial information. In this paper, we propose a context encoder network (CE-Net) to capture more high-level information and preserve spatial information for 2D medical image segmentation. CE-Net mainly contains three major components: a feature encoder module, a context extractor, and a feature decoder module. We use the pretrained ResNet block as the fixed feature extractor. The context extractor module is formed by a newly proposed dense atrous convolution block and a residual multi-kernel pooling block. We applied the proposed CE-Net to different 2D medical image segmentation tasks. Comprehensive results show that the proposed method outperforms the original U-Net method and other state-of-the-art methods for optic disc segmentation, vessel detection, lung segmentation, cell contour segmentation, and retinal optical coherence tomography layer segmentation.
Collapse
|
236
|
Yue K, Zou B, Chen Z, Liu Q. Retinal vessel segmentation using dense U-net with multiscale inputs. J Med Imaging (Bellingham) 2019; 6:034004. [PMID: 31572745 DOI: 10.1117/1.jmi.6.3.034004] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2019] [Accepted: 08/30/2019] [Indexed: 11/14/2022] Open
Abstract
A color fundus image is an image of the inner wall of the eyeball taken with a fundus camera. Doctors can observe retinal vessel changes in such images, and these changes can be used to diagnose many serious diseases such as atherosclerosis, glaucoma, and age-related macular degeneration. Automated segmentation of retinal vessels can make the diagnosis of these diseases more efficient. We propose an improved U-Net architecture to segment retinal vessels. A multiscale input layer and dense blocks are introduced into the conventional U-Net so that the network can make use of richer spatial context information. The proposed method is evaluated on the public DRIVE dataset, achieving a sensitivity of 0.8199 and an accuracy of 0.9561. Segmentation results are especially improved for thin blood vessels, which are difficult to detect because of their low contrast with the background pixels.
Collapse
Affiliation(s)
- Kejuan Yue
- Central South University, School of Computer Science and Engineering, Changsha, China; Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China; Hunan First Normal University, School of Information Science and Engineering, Changsha, China
| | - Beiji Zou
- Central South University, School of Computer Science and Engineering, Changsha, China; Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
| | - Zailiang Chen
- Central South University, School of Computer Science and Engineering, Changsha, China; Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
| | - Qing Liu
- Central South University, School of Computer Science and Engineering, Changsha, China; Central South University, Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
| |
Collapse
|
237
|
Wang C, Roth HR, Kitasaka T, Oda M, Hayashi Y, Yoshino Y, Yamamoto T, Sassa N, Goto M, Mori K. Precise estimation of renal vascular dominant regions using spatially aware fully convolutional networks, tensor-cut and Voronoi diagrams. Comput Med Imaging Graph 2019; 77:101642. [PMID: 31525543 DOI: 10.1016/j.compmedimag.2019.101642] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2018] [Revised: 06/07/2019] [Accepted: 07/23/2019] [Indexed: 10/26/2022]
Abstract
This paper presents a new approach for precisely estimating the renal vascular dominant region using a Voronoi diagram. To provide computer-assisted diagnostics for the pre-surgical simulation of partial nephrectomy surgery, we must obtain information on the renal arteries and the renal vascular dominant regions. We propose a fully automatic segmentation method that combines a neural network and tensor-based graph-cut methods to precisely extract the kidney and renal arteries. First, we use a convolutional neural network to localize the kidney regions and extract tiny renal arteries with a tensor-based graph-cut method. Then we generate a Voronoi diagram to estimate the renal vascular dominant regions based on the segmented kidney and renal arteries. The accuracy of kidney segmentation in 27 cases with 8-fold cross validation reached a Dice score of 95%. The accuracy of renal artery segmentation in 8 cases obtained a centerline overlap ratio of 80%. Each partition region corresponds to a renal vascular dominant region. The final dominant-region estimation accuracy achieved a Dice coefficient of 80%. A clinical application showed the potential of our proposed estimation approach in a real clinical surgical environment. Further validation using large-scale database is our future work.
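The Voronoi step above amounts to assigning every kidney voxel to its nearest artery point. A minimal numpy sketch of that discrete Voronoi partition, assuming artery centerline points are available as seeds (coordinates and shapes here are illustrative, not the paper's pipeline):

```python
import numpy as np

def voronoi_dominant_regions(voxels, artery_points):
    """Discrete Voronoi partition: label each voxel with the index of
    its nearest artery point.

    voxels        : (N, 3) array of voxel coordinates inside a mask.
    artery_points : (M, 3) array of artery centerline points (seeds).
    Returns an (N,) array of seed indices; voxels sharing a seed index
    form one dominant region.
    """
    # Pairwise squared distances via broadcasting, shape (N, M).
    d2 = ((voxels[:, None, :] - artery_points[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```

Grouping voxels by the returned index yields one region per artery branch; for large volumes a KD-tree query would replace the dense distance matrix.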
Collapse
Affiliation(s)
- Chenglong Wang
- Graduate School of Information Science, Nagoya University, Nagoya, Japan.
| | - Holger R Roth
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
| | | | - Masahiro Oda
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
| | - Yuichiro Hayashi
- Graduate School of Informatics, Nagoya University, Nagoya, Japan
| | - Yasushi Yoshino
- Nagoya University Graduate School of Medicine, Nagoya, Japan
| | | | - Naoto Sassa
- Nagoya University Graduate School of Medicine, Nagoya, Japan
| | - Momokazu Goto
- Nagoya University Graduate School of Medicine, Nagoya, Japan
| | - Kensaku Mori
- Graduate School of Informatics, Nagoya University, Nagoya, Japan; Information Technology Center, Nagoya University, Japan; Research Center for Medical Bigdata, National Institute of Informatics, Japan.
| |
Collapse
|
238
|
Oda M, Roth HR, Kitasaka T, Misawa K, Fujiwara M, Mori K. Abdominal artery segmentation method from CT volumes using fully convolutional neural network. Int J Comput Assist Radiol Surg 2019; 14:2069-2081. [PMID: 31493112 DOI: 10.1007/s11548-019-02062-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2018] [Accepted: 08/27/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE: This paper presents a fully automated abdominal artery segmentation method for CT volumes. Three-dimensional (3D) blood vessel structure information is important for diagnosis and treatment, and information about blood vessels (including arteries) can be used in patient-specific surgical planning and intra-operative navigation. Because blood vessels show large inter-patient variation in branching patterns and positions, a patient-specific segmentation method is necessary. Although deep learning-based segmentation methods achieve good accuracy on large organs, small structures such as blood vessels are not well segmented. We propose a deep learning-based abdominal artery segmentation method and, because arteries are among the small structures that are difficult to segment, introduce an original training sample generation method and a three-plane segmentation approach to improve accuracy. METHOD: Our method segments abdominal arteries from an abdominal CT volume with a fully convolutional network (FCN). To segment small arteries, we employ a 2D patch-based segmentation method and an area imbalance reduced training patch generation (AIRTPG) method, which adjusts the imbalance between patches containing artery regions and patches without them; these methods improved the segmentation accuracy of small artery regions. Furthermore, we introduce a three-plane approach to obtain clear 3D segmentation results from the 2D patch-based processes: three segmentation passes using patches generated on the axial, coronal, and sagittal planes are combined into a single 3D result. RESULTS: Evaluation of the proposed method on 20 abdominal CT volumes yielded average F-measure, precision, and recall rates of 87.1%, 85.8%, and 88.4%, respectively. This outperforms our previous automated FCN-based segmentation method and is competitive with previous blood vessel segmentation methods for 3D volumes. CONCLUSIONS: We developed an FCN-based abdominal artery segmentation method. The 2D patch-based and AIRTPG methods effectively segmented artery regions, and the three-plane approach generated good 3D segmentation results.
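One simple reading of the three-plane combination step is a per-voxel majority vote over the axial, coronal, and sagittal predictions. The paper does not spell out its fusion rule, so this is a hedged sketch of that interpretation:

```python
import numpy as np

def three_plane_vote(axial, coronal, sagittal):
    """Fuse three binary segmentations of the same volume by majority
    vote: a voxel is foreground when at least two of the three
    per-plane predictions agree.

    All three inputs are binary arrays of identical shape; the fusion
    rule itself is an assumption, not taken from the paper.
    """
    stacked = np.stack([axial, coronal, sagittal]).astype(np.uint8)
    return (stacked.sum(axis=0) >= 2).astype(np.uint8)
```

Voting suppresses spurious responses that appear in only one plane's 2D patches while keeping structures confirmed by at least two viewing directions.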
Collapse
Affiliation(s)
- Masahiro Oda
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, Japan.
| | - Holger R Roth
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, Japan
| | - Takayuki Kitasaka
- School of Information Science, Aichi Institute of Technology, 1247 Yachigusa, Yakusa-cho, Toyota, Aichi, Japan
| | - Kazunari Misawa
- Aichi Cancer Center Hospital, 1-1 Kanokoden, Chikusa-ku, Nagoya, Aichi, Japan
| | - Michitaka Fujiwara
- Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, Japan
| | - Kensaku Mori
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, Japan; Research Center for Medical Bigdata, National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan
| |
Collapse
|
239
|
Shin SY, Lee S, Yun ID, Lee KM. Deep vessel segmentation by learning graphical connectivity. Med Image Anal 2019; 58:101556. [PMID: 31536906 DOI: 10.1016/j.media.2019.101556] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2019] [Revised: 09/02/2019] [Accepted: 09/05/2019] [Indexed: 11/17/2022]
Abstract
We propose a novel deep-learning-based system for vessel segmentation. Existing CNN-based methods have mostly relied on local appearances learned on the regular image grid, without considering the graphical structure of vessel shape. Effective use of the strong relationships between vessel neighborhoods can help improve segmentation accuracy. To this end, we incorporate a graph neural network into a unified CNN architecture to jointly exploit both local appearances and global vessel structure. Extensive comparative evaluations on four retinal image datasets and a coronary artery X-ray angiography dataset show that the proposed method outperforms or is on par with current state-of-the-art methods in terms of average precision and area under the receiver operating characteristic curve. Statistical significance of the performance differences between the proposed method and each compared method is assessed with paired t-tests. In addition, ablation studies support the particular choices of algorithmic detail and hyperparameter values. The proposed architecture is widely applicable, since it can extend any CNN-based vessel segmentation method to enhance performance.
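The core mechanism a graph neural network adds over a plain CNN is message passing along vessel connectivity rather than over the pixel grid. A minimal sketch of one mean-aggregation step (a generic GNN layer, not the paper's specific architecture):

```python
import numpy as np

def gnn_step(adjacency, features):
    """One mean-aggregation message-passing step on a vessel graph.

    adjacency : (V, V) binary adjacency matrix over vessel nodes.
    features  : (V, F) per-node feature matrix (e.g. CNN activations
                sampled at candidate vessel points).
    Each node averages its own feature with its neighbours', which is
    how connectivity information propagates between linked vessel
    segments.
    """
    a_hat = adjacency + np.eye(adjacency.shape[0])   # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)           # per-node degree
    return (a_hat / deg) @ features                  # row-normalised average
```

Stacking a few such steps lets evidence for "vessel here" flow along connected branches, complementing the purely local CNN appearance cues.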
Collapse
Affiliation(s)
- Seung Yeon Shin
- Department of Electrical and Computer Engineering, Automation and Systems Research Institute, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
| | - Soochahn Lee
- School of Electrical Engineering, Kookmin University, Seoul, 02707, South Korea.
| | - Il Dong Yun
- Division of Computer and Electronic Systems Engineering, Hankuk University of Foreign Studies, Yongin, 17035, South Korea
| | - Kyoung Mu Lee
- Department of Electrical and Computer Engineering, Automation and Systems Research Institute, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
| |
Collapse
|
240
|
Computer aided detection of deep inferior epigastric perforators in computed tomography angiography scans. Comput Med Imaging Graph 2019; 77:101648. [PMID: 31476532 DOI: 10.1016/j.compmedimag.2019.101648] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2018] [Revised: 08/09/2019] [Accepted: 08/12/2019] [Indexed: 12/09/2022]
Abstract
The deep inferior epigastric artery perforator (DIEAP) flap is the most common free flap used for breast reconstruction after a mastectomy. It uses the skin and fat of the lower abdomen to build a new breast mound, either at the time of the mastectomy or in a second surgery. The operation requires preoperative imaging studies to evaluate the branches - the perforators - that irrigate the tissue used to reconstruct the breast mound; these branches will support tissue viability after the microsurgical ligation of the inferior epigastric vessels to the receptor vessels in the thorax. Usually, each perforator is manually identified and characterized by the imaging team on a computed tomography angiography (CTA), which subsequently draws a map identifying the best vascular support for the reconstruction. In the current work we propose a semi-automatic methodology that aims to reduce the time and subjectivity inherent in manual annotation. In 21 CTAs from patients proposed for breast reconstruction with DIEAP flaps, the subcutaneous region of each perforator was extracted by means of a tracking procedure, whereas the intramuscular portion was detected through a minimum-cost approach; both were compared with the radiologist's manual annotation. Results showed that the semi-automatic procedure correctly detected the course of the DIEAPs with minimal error (average errors of 0.64 and 0.50 mm for the subcutaneous and intramuscular paths, respectively) and in little time. The methodology is a promising tool for the automatic detection of perforators in CTA and can help spare human resources and reduce subjectivity in this task.
Collapse
|
241
|
Automatic Retinal Blood Vessel Segmentation Based on Fully Convolutional Neural Networks. Symmetry (Basel) 2019. [DOI: 10.3390/sym11091112] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Automated retinal vessel segmentation has become an important tool for disease screening and diagnosis in clinical medicine. However, most available methods still suffer from poor accuracy and low generalization ability, because the symmetrical and asymmetrical patterns between blood vessels are complicated and the contrast between vessel and background is relatively low due to illumination and pathology. Robust vessel segmentation of the retinal image is essential for improving the diagnosis of diseases such as vein occlusions and diabetic retinopathy, yet it remains a challenging task. In this paper, we propose an automatic retinal vessel segmentation framework using deep fully convolutional neural networks (FCN), which integrates novel methods of data preprocessing, data augmentation, and fully convolutional network design. It is an end-to-end framework that performs retinal vessel segmentation automatically and efficiently. Evaluated on three publicly available standard datasets, the framework achieves F1 scores of 0.8321, 0.8531, and 0.8243, average accuracies of 0.9706, 0.9777, and 0.9773, and average areas under the Receiver Operating Characteristic (ROC) curve of 0.9880, 0.9923, and 0.9917 on the DRIVE, STARE, and CHASE_DB1 datasets, respectively. These experimental results show that the proposed framework achieves state-of-the-art vessel segmentation performance on all three benchmarks.
Collapse
|
242
|
Tang P, Liang Q, Yan X, Xiang S, Sun W, Zhang D, Coppola G. Efficient skin lesion segmentation using separable-Unet with stochastic weight averaging. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 178:289-301. [PMID: 31416556 DOI: 10.1016/j.cmpb.2019.07.005] [Citation(s) in RCA: 61] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2019] [Revised: 07/04/2019] [Accepted: 07/04/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Efficient segmentation of skin lesions in dermoscopy images can improve the classification accuracy of skin diseases, providing a powerful tool for dermatologists examining pigmented skin lesions. Segmentation is challenging, however, due to the low contrast of lesions in captured images, fuzzy and indistinct lesion boundaries, the large inter-class variation of melanomas, the existence of artifacts, etc. In this work, an efficient and accurate melanoma region segmentation method is proposed for computer-aided diagnostic systems. METHOD A skin lesion segmentation (SLS) method based on the separable-Unet with stochastic weight averaging is proposed. The Separable-Unet framework takes advantage of the separable convolutional block and the U-Net architecture, capturing context feature-channel correlations and higher-level semantic information to enhance the pixel-level discriminative representation capability of fully convolutional networks (FCN). Further, considering that over-fitting is a local-optimum (or sub-optimum) problem, a scheme based on stochastic weight averaging is introduced, which can reach a broader optimum and better generalization. RESULTS The proposed method is evaluated on three publicly available datasets. The experiments showed that the proposed approach segmented skin lesions with average Dice coefficients and Jaccard indices of 93.03% and 89.25% on the International Skin Imaging Collaboration (ISIC) 2016 Skin Lesion Challenge (SLC) dataset, 86.93% and 79.26% on the ISIC 2017 SLC, and 94.13% and 89.40% on the PH2 dataset, respectively. Compared with other state-of-the-art methods, the proposed approach outperforms them for SLS on both melanoma and non-melanoma cases. Segmenting a potential lesion in a dermoscopy image takes less than 0.05 s of processing time, roughly 30 times faster than the second-best method (by Jaccard index) on the ISIC 2017 dataset with the same hardware configuration. CONCLUSIONS Using the separable convolutional block and U-Net architecture with a stochastic weight averaging strategy yields better pixel-level discriminative representations. Moreover, the considerably reduced computation time suggests that the proposed approach is suitable for practical computer-aided diagnosis systems, while providing improved segmentation performance.
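Stochastic weight averaging itself is a small amount of arithmetic: weights sampled along the training trajectory are folded into a running average, which empirically lands in a flatter, better-generalising region of the loss surface. A minimal numpy sketch of that running average (the update rule is standard SWA; anything network-specific is omitted):

```python
import numpy as np

def swa_update(w_avg, w_new, n_models):
    """Fold one more weight snapshot into the SWA running average.

    n_models is the number of snapshots already averaged, so the new
    snapshot receives weight 1 / (n_models + 1).
    """
    return w_avg + (w_new - w_avg) / (n_models + 1)

def swa_average(weight_snapshots):
    """Average a sequence of weight vectors via repeated swa_update
    calls, as done over checkpoints collected during training."""
    w_avg = np.asarray(weight_snapshots[0], dtype=float)
    for n, w in enumerate(weight_snapshots[1:], start=1):
        w_avg = swa_update(w_avg, np.asarray(w, dtype=float), n)
    return w_avg
```

In practice each snapshot is the full flattened parameter vector of the network, collected at the end of each cycle of a cyclical learning-rate schedule.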
Collapse
Affiliation(s)
- Peng Tang
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China; Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Hunan University, Changsha 410082, China; National Engineering Laboratory for Robot Vision Perception and Control, Hunan University, Changsha 410082, China
| | - Qiaokang Liang
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China; Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Hunan University, Changsha 410082, China; National Engineering Laboratory for Robot Vision Perception and Control, Hunan University, Changsha 410082, China.
| | - Xintong Yan
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
| | - Shao Xiang
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China; Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Hunan University, Changsha 410082, China; National Engineering Laboratory for Robot Vision Perception and Control, Hunan University, Changsha 410082, China
| | - Wei Sun
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China; Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Hunan University, Changsha 410082, China; National Engineering Laboratory for Robot Vision Perception and Control, Hunan University, Changsha 410082, China
| | - Dan Zhang
- Department of Mechanical Engineering, York University, Toronto, ON M3J 1P3, Canada
| | - Gianmarc Coppola
- Faculty of Engineering and Applied Science, University of Ontario Institute of Technology, Oshawa, ON L1H 7K4, Canada
| |
Collapse
|
243
|
Noh KJ, Park SJ, Lee S. Scale-space approximated convolutional neural networks for retinal vessel segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 178:237-246. [PMID: 31416552 DOI: 10.1016/j.cmpb.2019.06.030] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/20/2019] [Revised: 06/15/2019] [Accepted: 06/28/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Retinal fundus images are widely used to diagnose retinal diseases and can potentially be used for early diagnosis and prevention of chronic vascular diseases and diabetes. While various automatic retinal vessel segmentation methods using deep learning have been proposed, they are mostly based on common CNN structures developed for other tasks such as classification. METHODS We present a novel and simple multi-scale convolutional neural network (CNN) structure for retinal vessel segmentation. We first provide a theoretical analysis of existing multi-scale structures based on signal processing. In previous structures, multi-scale representations are achieved through downsampling by subsampling and decimation. By incorporating scale-space theory, we propose a simple yet effective multi-scale structure for CNNs using upsampling, which we term scale-space approximated CNN (SSANet). Based on further analysis of the effects of the SSA structure within a CNN, we also incorporate residual blocks, resulting in a multi-scale CNN that outperforms current state-of-the-art methods. RESULTS Quantitative evaluations are presented as the area-under-curve (AUC) of the receiver operating characteristic (ROC) curve and the precision-recall curve, as well as accuracy, for four publicly available datasets, namely DRIVE, STARE, CHASE_DB1, and HRF. For the CHASE_DB1 set, the SSANet achieves state-of-the-art AUC value of 0.9916 for the ROC curve. An ablative analysis is presented to analyze the contribution of different components of the SSANet to the performance improvement. CONCLUSIONS The proposed retinal SSANet achieves state-of-the-art or comparable accuracy across publicly available datasets, especially improving segmentation for thin vessels, vessel junctions, and central vessel reflexes.
Collapse
Affiliation(s)
- Kyoung Jin Noh
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Gyeonggi-do 13620, South Korea
| | - Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Gyeonggi-do 13620, South Korea.
| | - Soochahn Lee
- School of Electrical Engineering, Kookmin University, Seongbuk-gu, Seoul 02707, South Korea.
| |
Collapse
|
244
|
Zaffino P, Pernelle G, Mastmeyer A, Mehrtash A, Zhang H, Kikinis R, Kapur T, Francesca Spadea M. Fully automatic catheter segmentation in MRI with 3D convolutional neural networks: application to MRI-guided gynecologic brachytherapy. Phys Med Biol 2019; 64:165008. [PMID: 31272095 DOI: 10.1088/1361-6560/ab2f47] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
External-beam radiotherapy followed by high dose rate (HDR) brachytherapy is the standard of care for treating gynecologic cancers. The enhanced soft-tissue contrast provided by magnetic resonance imaging (MRI) makes it a valuable imaging modality for diagnosing and treating these cancers. However, in contrast to computed tomography (CT) imaging, the appearance of the brachytherapy catheters, through which radiation sources are later inserted to reach the cancerous tissue, often varies across images. This paper reports, for the first time, a deep-learning-based method for fully automatic segmentation of multiple closely spaced brachytherapy catheters in intraoperative MRI. The data comprise 50 gynecologic cancer patients treated by MRI-guided HDR brachytherapy, with a single intraoperative MRI used per patient. The 826 catheters in the images were manually segmented by an expert radiation physicist who is also a trained radiation oncologist; the number of catheters per patient ranged between 10 and 35. A deep 3D convolutional neural network (CNN) model was developed and trained. To make the learning process more robust, the network was trained five times, each time on a different combination of the patients; each test case was then processed by the five networks, and the final segmentation was generated by voting over the five candidate segmentations. Four-fold validation was executed and all patients were segmented. An average distance error of 2.0 ± 3.4 mm was achieved; false positive and false negative catheter rates were 6.7% and 1.5%, respectively, and the average Dice score was 0.60 ± 0.17. The algorithm is available in the open-source software platform 3D Slicer, allowing for wide-scale testing and research discussion. In conclusion, to the best of our knowledge, fully automatic segmentation of multiple closely spaced catheters from intraoperative MR images was achieved for the first time in gynecologic brachytherapy.
Collapse
Affiliation(s)
- Paolo Zaffino
- Department of Experimental and Clinical Medicine, Magna Graecia University, 88100, Catanzaro, Italy. Author to whom any correspondence should be addressed
| | | | | | | | | | | | | | | |
Collapse
|
245
|
Aslani S, Dayan M, Storelli L, Filippi M, Murino V, Rocca MA, Sona D. Multi-branch convolutional neural network for multiple sclerosis lesion segmentation. Neuroimage 2019; 196:1-15. [DOI: 10.1016/j.neuroimage.2019.03.068] [Citation(s) in RCA: 72] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2018] [Revised: 03/23/2019] [Accepted: 03/28/2019] [Indexed: 11/26/2022] Open
|
246
|
Cunefare D, Huckenpahler AL, Patterson EJ, Dubra A, Carroll J, Farsiu S. RAC-CNN: multimodal deep learning based automatic detection and classification of rod and cone photoreceptors in adaptive optics scanning light ophthalmoscope images. BIOMEDICAL OPTICS EXPRESS 2019; 10:3815-3832. [PMID: 31452977 PMCID: PMC6701534 DOI: 10.1364/boe.10.003815] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Revised: 06/26/2019] [Accepted: 06/29/2019] [Indexed: 05/03/2023]
Abstract
Quantification of the human rod and cone photoreceptor mosaic in adaptive optics scanning light ophthalmoscope (AOSLO) images is useful for the study of various retinal pathologies. Subjective and time-consuming manual grading has remained the gold standard for evaluating these images, and no well-validated automatic methods for detecting individual rods have been developed. We present a novel deep-learning-based automatic method, called the rod and cone CNN (RAC-CNN), for detecting and classifying rods and cones in multimodal AOSLO images. We test our method on images from healthy subjects as well as subjects with achromatopsia over a range of retinal eccentricities, and show that it is on par with human grading for detecting rods and cones.
Collapse
Affiliation(s)
- David Cunefare
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Alison L. Huckenpahler
- Department of Cell Biology, Neurobiology, & Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
| | - Emily J. Patterson
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
| | - Alfredo Dubra
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
| | - Joseph Carroll
- Department of Cell Biology, Neurobiology, & Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
| | - Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
| |
Collapse
|
247
|
Feng-Ping A, Zhi-Wen L. Medical image segmentation algorithm based on feedback mechanism convolutional neural network. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101589] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
|
248
|
Tang P, Liang Q, Yan X, Zhang D, Coppola G, Sun W. Multi-proportion channel ensemble model for retinal vessel segmentation. Comput Biol Med 2019; 111:103352. [DOI: 10.1016/j.compbiomed.2019.103352] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2019] [Revised: 07/07/2019] [Accepted: 07/07/2019] [Indexed: 10/26/2022]
|
249
|
|
250
|
Deep learning based computer-aided diagnosis systems for diabetic retinopathy: A survey. Artif Intell Med 2019; 99:101701. [DOI: 10.1016/j.artmed.2019.07.009] [Citation(s) in RCA: 95] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2018] [Revised: 07/19/2019] [Accepted: 07/26/2019] [Indexed: 02/06/2023]
|