101
Xia KJ, Yin HS, Zhang YD. Deep Semantic Segmentation of Kidney and Space-Occupying Lesion Area Based on SCNN and ResNet Models Combined with SIFT-Flow Algorithm. J Med Syst 2018; 43:2. [PMID: 30456668] [DOI: 10.1007/s10916-018-1116-1]
Abstract
Renal segmentation is one of the most fundamental and challenging tasks in computer-aided diagnosis systems. To overcome the shortcomings of deep-network-based automatic kidney segmentation on abdominal CT images, a two-stage semantic segmentation of the kidney and space-occupying lesion area based on SCNN and ResNet models combined with SIFT-flow transformation is proposed, divided into two stages: image retrieval and semantic segmentation. To facilitate image retrieval, a metric-learning-based approach is first adopted to construct a deep convolutional neural network using SCNN and ResNet to extract image features and minimize the impact of interfering factors, so that abdominal CT images of the same view can be represented consistently under different imaging conditions. SIFT-flow transformation is then introduced; it uses an MRF to fuse label information, prior spatial information, and smoothing information to establish a dense pixel-matching relationship, so that semantics can be transferred from known images to the target image to obtain the semantic segmentation of the kidney and space-occupying lesion area. To validate the effectiveness and efficiency of the proposed method, experiments were conducted on a self-established CT dataset focused on the kidney, in which most cases contain tumors inside the kidney and abnormally deformed kidney shapes. The experimental results show, qualitatively and quantitatively, that kidney segmentation accuracy is greatly improved and that tumors occupying only a small proportion of the image are also segmented well. The algorithm has likewise achieved good results in clinical verification and is suitable for intelligent medical equipment applications.
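The retrieval stage described above amounts, at its core, to nearest-neighbor search in a learned embedding space. A minimal sketch of that final matching step (function and variable names are illustrative, not from the paper; the learned SCNN/ResNet features are assumed to be precomputed):

```python
import numpy as np

def retrieve_nearest(query_feat, gallery_feats):
    """Return the index of the gallery image whose embedding is
    closest to the query embedding (Euclidean distance)."""
    gallery_feats = np.asarray(gallery_feats, dtype=float)
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    return int(np.argmin(dists))
```

In the paper's pipeline, the retrieved image then serves as the source whose labels are transferred to the target via SIFT flow.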
Affiliation(s)
- Kai-Jian Xia: School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, 221116, Jiangsu, China; Changshu Affiliated Hospital of Soochow University (Changshu No.1 People's Hospital), Changshu, 215500, Jiangsu, China
- Hong-Sheng Yin: School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, 221116, Jiangsu, China
- Yu-Dong Zhang: Department of Informatics, University of Leicester, Leicester, LE1 7RH, UK
102
To MNN, Vu DQ, Turkbey B, Choyke PL, Kwak JT. Deep dense multi-path neural network for prostate segmentation in magnetic resonance imaging. Int J Comput Assist Radiol Surg 2018; 13:1687-1696. [PMID: 30088208] [PMCID: PMC6177294] [DOI: 10.1007/s11548-018-1841-4]
Abstract
PURPOSE We propose a 3D convolutional neural network approach to segmenting the prostate in MR images. METHODS A 3D deep dense multi-path convolutional neural network that follows the encoder-decoder design is proposed. The encoder is built upon densely connected layers that learn a high-level feature representation of the prostate. The decoder interprets the features and predicts the whole prostate volume using a residual layout and grouped convolution. A set of sub-volumes of MR images, centered at the prostate, is generated and fed into the proposed network for training. The performance of the proposed network is compared to previously reported approaches. RESULTS Two independent datasets were employed to assess the proposed network. In quantitative evaluations, the proposed network achieved Dice coefficients of 95.11 and 89.01 for the two datasets. The segmentation results were robust to variations in MR images. In comparison experiments, the segmentation performance of the proposed network was comparable to previously reported approaches. In qualitative evaluations, the segmentation results of the proposed network matched well with the ground truth provided by human experts. CONCLUSIONS The proposed network is capable of segmenting the prostate accurately and robustly. This approach can be applied to other types of medical images.
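The Dice coefficients reported here (and throughout this reference list) compare a predicted mask against a ground-truth mask. A minimal sketch of the metric on binary masks (illustrative, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total
```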
Affiliation(s)
- Minh Nguyen Nhat To: Department of Computer Science and Engineering, Sejong University, Seoul, 05006, South Korea
- Dang Quoc Vu: Department of Computer Science and Engineering, Sejong University, Seoul, 05006, South Korea
- Baris Turkbey: Molecular Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Peter L Choyke: Molecular Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Jin Tae Kwak: Department of Computer Science and Engineering, Sejong University, Seoul, 05006, South Korea
103
Yang H, Sun J, Li H, Wang L, Xu Z. Neural multi-atlas label fusion: Application to cardiac MR images. Med Image Anal 2018; 49:60-75. [DOI: 10.1016/j.media.2018.07.009]
104
Feng DD, Fulham M. Multi-view collaborative segmentation for prostate MRI images. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2017:3529-3532. [PMID: 29060659] [DOI: 10.1109/embc.2017.8037618]
Abstract
Prostate delineation from MR images is a long-standing challenge, partially due to appearance variations across patients and with disease progression. To address these challenges, our collaborative method takes the computed multiple label-relevance maps as multiple views for learning the optimal boundary delineation. In our method, we first extracted multiple label-relevance maps that represent the affinities between each unlabeled pixel and the pre-defined labels, avoiding the selection of handcrafted features. These maps were then incorporated into a collaborative clustering that learns adaptive weights for an optimal segmentation, which overcomes the sensitivity to seed selection. The segmentation results were evaluated on 22 prostate MRI patient studies with respect to the Dice similarity coefficient (DSC), absolute relative volume difference (ARVD), and average symmetric surface distance (ASSD, mm). The results and a t-test demonstrated that the proposed method improved segmentation accuracy and robustness, and that the improvement was statistically significant.
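Of the three evaluation metrics above, the absolute relative volume difference (ARVD) is the simplest to state: the absolute difference between segmented and reference volumes, relative to the reference volume. A hedged sketch (illustrative names; voxel volume defaults to 1, and the scaling cancels out anyway):

```python
import numpy as np

def absolute_relative_volume_difference(pred, truth, voxel_volume=1.0):
    """ARVD: |V_seg - V_ref| / V_ref, as a fraction (×100 for percent)."""
    v_pred = np.count_nonzero(pred) * voxel_volume
    v_ref = np.count_nonzero(truth) * voxel_volume
    return abs(v_pred - v_ref) / v_ref
```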
105
Lu X, Xie Q, Zha Y, Wang D. Fully automatic liver segmentation combining multi-dimensional graph cut with shape information in 3D CT images. Sci Rep 2018; 8:10700. [PMID: 30013150] [PMCID: PMC6048104] [DOI: 10.1038/s41598-018-28787-y]
Abstract
Liver segmentation is an essential procedure in computer-assisted surgery, radiotherapy, and volume measurement. It remains a challenging task to extract liver tissue from 3D CT images owing to nearby organs with similar intensities. In this paper, an automatic approach integrating multi-dimensional features into graph cut refinement is developed and validated. Multi-atlas segmentation is utilized to estimate the coarse shape of the liver on the target image. The unsigned distance field based on the initial shape is then calculated throughout the whole image, which enables automatic graph construction during the refinement procedure. Finally, multi-dimensional features and shape constraints are embedded into the graph cut framework, and the optimal liver region can be precisely detected at minimal cost. The proposed technique is evaluated on 40 CT scans obtained from two public databases, Sliver07 and 3Dircadb1. The Sliver07 dataset is used as the training set for parameter learning. On the 3Dircadb1 dataset, the average volume overlap reaches 94%. The experimental results indicate that the proposed method is able to reach the desired liver boundary and has potential value for clinical application.
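The unsigned distance field used for automatic graph construction can be obtained with a Euclidean distance transform: inside voxels take their distance to the nearest background voxel, outside voxels their distance to the nearest foreground voxel. A sketch assuming SciPy is available (illustrative, not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def unsigned_distance_field(mask):
    """Unsigned Euclidean distance from every voxel to the mask boundary.

    distance_transform_edt gives each nonzero voxel its distance to the
    nearest zero voxel, so the two transforms cover inside and outside;
    at any voxel exactly one of the two terms is nonzero.
    """
    mask = np.asarray(mask, dtype=bool)
    inside = ndimage.distance_transform_edt(mask)
    outside = ndimage.distance_transform_edt(~mask)
    return inside + outside
```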
Affiliation(s)
- Xuesong Lu: College of Biomedical Engineering, South-Central University for Nationalities, Wuhan, 430074, P. R. China
- Qinlan Xie: College of Biomedical Engineering, South-Central University for Nationalities, Wuhan, 430074, P. R. China
- Yunfei Zha: Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, 430060, P. R. China
- Defeng Wang: Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China; School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing, China; Research Centre for Medical Image Computing, Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China
106
Shahedi M, Cool DW, Bauman GS, Bastian-Jordan M, Fenster A, Ward AD. Accuracy Validation of an Automated Method for Prostate Segmentation in Magnetic Resonance Imaging. J Digit Imaging 2018; 30:782-795. [PMID: 28342043] [DOI: 10.1007/s10278-017-9964-7]
Abstract
Three-dimensional (3D) manual segmentation of the prostate on magnetic resonance imaging (MRI) is a laborious and time-consuming task that is subject to inter-observer variability. In this study, we developed a fully automatic segmentation algorithm for T2-weighted endorectal prostate MRI and evaluated its accuracy within different regions of interest using a set of complementary error metrics. Our dataset contained 42 T2-weighted endorectal MR images from prostate cancer patients. The prostate was manually segmented by one observer on all of the images and by two other observers on a subset of 10 images. The algorithm first coarsely localizes the prostate in the image using a template matching technique. It then defines the prostate surface using shape and appearance information learned from a set of training images. To evaluate the algorithm, we assessed the error metric values in the context of measured inter-observer variability and compared performance to that of our previously published semi-automatic approach. The automatic algorithm needed an average execution time of ∼60 s to segment the prostate in 3D. When compared to a single-observer reference standard, the automatic algorithm has an average mean absolute distance of 2.8 mm, Dice similarity coefficient of 82%, recall of 82%, precision of 84%, and volume difference of 0.5 cm3 in the mid-gland. Concordant with other studies, accuracy was highest in the mid-gland and lower in the apex and base. The loss of accuracy with respect to the semi-automatic algorithm was less than the measured inter-observer variability in manual segmentation for the same task.
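The coarse localization step is template matching; one standard formulation slides the template over the image and keeps the position with the highest normalized cross-correlation. A brute-force 2D sketch (illustrative; the paper does not specify this exact similarity measure):

```python
import numpy as np

def ncc_localize(image, template):
    """Position (top-left corner) of maximum normalized cross-correlation."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            win = image[i:i + th, j:j + tw]
            w = (win - win.mean()) / (win.std() + 1e-12)
            score = (w * t).mean()  # in [-1, 1]; 1 means a perfect match
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos
```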
Affiliation(s)
- Maysam Shahedi: Baines Imaging Research Laboratory, London Regional Cancer Program, A3-123A, 790 Commissioners Rd E, London, ON, N6A 4L6, Canada; Robarts Research Institute, The University of Western Ontario, London, ON, Canada; Graduate Program in Biomedical Engineering, The University of Western Ontario, London, ON, Canada
- Derek W Cool: Robarts Research Institute, The University of Western Ontario, London, ON, Canada; The Department of Medical Imaging, The University of Western Ontario, London, ON, Canada
- Glenn S Bauman: Baines Imaging Research Laboratory, London Regional Cancer Program, A3-123A, 790 Commissioners Rd E, London, ON, N6A 4L6, Canada; The Department of Medical Biophysics, The University of Western Ontario, London, ON, Canada; The Department of Oncology, The University of Western Ontario, London, ON, Canada
- Matthew Bastian-Jordan: The Department of Medical Imaging, The University of Western Ontario, London, ON, Canada
- Aaron Fenster: Robarts Research Institute, The University of Western Ontario, London, ON, Canada; Graduate Program in Biomedical Engineering, The University of Western Ontario, London, ON, Canada; The Department of Medical Imaging, The University of Western Ontario, London, ON, Canada; The Department of Medical Biophysics, The University of Western Ontario, London, ON, Canada
- Aaron D Ward: Baines Imaging Research Laboratory, London Regional Cancer Program, A3-123A, 790 Commissioners Rd E, London, ON, N6A 4L6, Canada; Graduate Program in Biomedical Engineering, The University of Western Ontario, London, ON, Canada; The Department of Medical Biophysics, The University of Western Ontario, London, ON, Canada; The Department of Oncology, The University of Western Ontario, London, ON, Canada
107
Meyer P, Noblet V, Mazzara C, Lallement A. Survey on deep learning for radiotherapy. Comput Biol Med 2018; 98:126-146. [PMID: 29787940] [DOI: 10.1016/j.compbiomed.2018.05.018]
Abstract
More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but it can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully used in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, addressing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then review the published works on deep learning methods that can be applied to radiotherapy, classified into seven categories related to the patient workflow, and provide some insights into potential future applications. We have attempted to make this paper accessible to both the radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications.
Affiliation(s)
- Philippe Meyer: Department of Medical Physics, Paul Strauss Center, Strasbourg, France
108
Qin W, Wu J, Han F, Yuan Y, Zhao W, Ibragimov B, Gu J, Xing L. Superpixel-based and boundary-sensitive convolutional neural network for automated liver segmentation. Phys Med Biol 2018; 63:095017. [PMID: 29633960] [PMCID: PMC5983385] [DOI: 10.1088/1361-6560/aabd19]
Abstract
Segmentation of the liver in abdominal computed tomography (CT) is an important step in radiation therapy planning for hepatocellular carcinoma. In practice, fully automatic liver segmentation remains challenging because of the low soft-tissue contrast between the liver and its surrounding organs, and because of the liver's highly deformable shape. The purpose of this work is to develop a novel superpixel-based and boundary-sensitive convolutional neural network (SBBS-CNN) pipeline for automated liver segmentation. The entire CT image was first partitioned into superpixel regions, in which nearby pixels with similar CT numbers were aggregated. Secondly, we converted the conventional binary segmentation into a multinomial classification by labeling the superpixels into three classes: interior liver, liver boundary, and non-liver background. By doing this, the boundary region of the liver was explicitly identified and highlighted for the subsequent classification. Thirdly, we computed an entropy-based saliency map for each CT volume and leveraged this map to guide the sampling of image patches over the superpixels. In this way, more patches were extracted from informative regions (e.g. the liver boundary with irregular changes) and fewer from homogeneous regions. Finally, a deep CNN pipeline was built and trained to predict the probability map of the liver boundary. We tested the proposed algorithm in a cohort of 100 patients. With 10-fold cross-validation, the SBBS-CNN achieved a mean Dice similarity coefficient of 97.31 ± 0.36% and an average symmetric surface distance of 1.77 ± 0.49 mm. Moreover, it showed superior performance in comparison with state-of-the-art methods, including U-Net, pixel-based CNN, active contour, level-set and graph-cut algorithms. SBBS-CNN provides an accurate and effective tool for automated liver segmentation, and the proposed framework is envisioned to be directly applicable to other medical image segmentation scenarios.
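The three-class relabeling of superpixels described above can be stated simply: a superpixel whose pixels are all liver is interior, one that mixes liver and non-liver pixels straddles the boundary, and the rest are background. A sketch on a binary training mask (illustrative names, not the authors' code; the superpixel map itself would come from an over-segmentation algorithm such as SLIC):

```python
import numpy as np

INTERIOR, BOUNDARY, BACKGROUND = 0, 1, 2

def label_superpixels(liver_mask, superpixels):
    """Relabel each superpixel as interior liver, liver boundary, or background."""
    liver_mask = np.asarray(liver_mask, dtype=bool)
    labels = {}
    for sp in np.unique(superpixels):
        inside = liver_mask[superpixels == sp]
        if inside.all():
            labels[sp] = INTERIOR      # purely liver pixels
        elif inside.any():
            labels[sp] = BOUNDARY      # mixed: straddles the liver edge
        else:
            labels[sp] = BACKGROUND    # purely non-liver pixels
    return labels
```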
Affiliation(s)
- Wenjian Qin: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China; Medical Physics Division in the Department of Radiation Oncology, Stanford University, Palo Alto, CA 94305, United States of America; University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
109
Lv J, Chen K, Yang M, Zhang J, Wang X. Reconstruction of undersampled radial free-breathing 3D abdominal MRI using stacked convolutional auto-encoders. Med Phys 2018; 45:2023-2032. [PMID: 29574939] [DOI: 10.1002/mp.12870]
Abstract
PURPOSE Free-breathing three-dimensional (3D) abdominal imaging is a challenging task for MRI, as respiratory motion severely degrades image quality. One of the most promising self-navigation techniques is the 3D golden-angle radial stack-of-stars (SOS) sequence, which has advantages in terms of speed, resolution, and allowing free breathing. However, streaking artifacts are still clearly observed in reconstructed images when undersampling is applied. This work presents a novel reconstruction approach based on a stacked convolutional auto-encoder (SCAE) network to solve this problem. METHODS Thirty healthy volunteers participated in our experiment. To build the dataset, reference and artifact-affected images were reconstructed using 451 golden-angle spokes and the first 20, 40, or 90 golden-angle spokes, corresponding to acceleration rates of 31.4, 15.7, and 6.98, respectively. In the training step, we trained the SCAE by feeding it patches from artifact-affected images; the SCAE outputs patches of the corresponding reference images. In the testing step, we applied the trained SCAE to map each input artifact-affected patch to the corresponding reference image patch. RESULTS The SCAE-based reconstructions at acceleration rates of 6.98 and 15.7 show nearly the same quality as the reference images, and the calculation time is below 1 s. Moreover, the proposed approach preserves important features, such as lesions not present in the training set. CONCLUSION The preliminary results demonstrate the feasibility of the proposed SCAE-based strategy for correcting the streaking artifacts of undersampled free-breathing 3D abdominal MRI with a negligible reconstruction time.
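The patch-wise training and testing scheme relies on extracting overlapping patches and then reassembling the network's output patches into a full image, averaging where patches overlap. A sketch of that bookkeeping in 2D (illustrative; the SCAE itself is omitted):

```python
import numpy as np

def extract_patches(img, size, stride):
    """Collect square patches and their top-left positions."""
    patches, positions = [], []
    h, w = img.shape
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            patches.append(img[i:i + size, j:j + size])
            positions.append((i, j))
    return patches, positions

def reassemble(patches, positions, shape, size):
    """Average overlapping patch predictions back into a full image."""
    out = np.zeros(shape)
    weight = np.zeros(shape)
    for p, (i, j) in zip(patches, positions):
        out[i:i + size, j:j + size] += p
        weight[i:i + size, j:j + size] += 1
    return out / np.maximum(weight, 1)
```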
Affiliation(s)
- Jun Lv: Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China
- Kun Chen: Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China
- Ming Yang: Vusion Tech Ltd. Co, Hefei, 230031, China
- Jue Zhang: Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China; College of Engineering, Peking University, Beijing, 100871, China
- Xiaoying Wang: Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China; Department of Radiology, Peking University First Hospital, Beijing, 100034, China
110
Tian Z, Liu L, Zhang Z, Fei B. PSNet: prostate segmentation on MRI based on a convolutional neural network. J Med Imaging (Bellingham) 2018; 5:021208. [PMID: 29376105] [PMCID: PMC5771127] [DOI: 10.1117/1.jmi.5.2.021208]
Abstract
Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.
Affiliation(s)
- Zhiqiang Tian: Xi'an Jiaotong University, School of Software Engineering, Xi'an, China; Emory University School of Medicine, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Lizhi Liu: Emory University School of Medicine, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Zhenfeng Zhang: The Second Hospital of Guangzhou Medical University, Department of Radiology, Guangzhou, China
- Baowei Fei: Emory University School of Medicine, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States; Georgia Institute of Technology and Emory University, Wallace H. Coulter Department of Biomedical Engineering, Atlanta, Georgia, United States; Winship Cancer Institute of Emory University, Atlanta, Georgia, United States; Emory University, Department of Mathematics and Computer Science, Atlanta, Georgia, United States
111
Tian P, Qi L, Shi Y, Zhou L, Gao Y, Shen D. A novel image-specific transfer approach for prostate segmentation in MR images. Proc IEEE Int Conf Acoust Speech Signal Process (ICASSP) 2018; 2018:806-810. [PMID: 30636936] [PMCID: PMC6328258] [DOI: 10.1109/icassp.2018.8461716]
Abstract
Prostate segmentation in magnetic resonance (MR) images is a significant yet challenging task for prostate cancer treatment. Most existing works attempt to design a global classifier for all MR images, which neglects the discrepancy of images across different patients. To this end, we propose a novel transfer approach for prostate segmentation in MR images. First, an image-specific classifier is built for each training image. Second, a pair of dictionaries and a mapping matrix are jointly obtained by a novel Semi-Coupled Dictionary Transfer Learning (SCDTL). Finally, the classifiers on the source domain can be selectively transferred to the target domain (i.e. testing images) by the dictionaries and the mapping matrix. The evaluation demonstrates that our approach has a competitive performance compared with state-of-the-art transfer learning methods. Moreover, the proposed transfer approach outperforms the conventional deep neural network based method.
Affiliation(s)
- Pinzhuo Tian: State Key Laboratory for Novel Software Technology, Nanjing University, China
- Lei Qi: State Key Laboratory for Novel Software Technology, Nanjing University, China
- Yinghuan Shi: State Key Laboratory for Novel Software Technology, Nanjing University, China
- Luping Zhou: School of Computing and Information Technology, University of Wollongong, Australia
- Yang Gao: State Key Laboratory for Novel Software Technology, Nanjing University, China
- Dinggang Shen: Department of Radiology and BRIC, UNC Chapel Hill, USA
112
Feng Z, Nie D, Wang L, Shen D. Semi-supervised learning for pelvic MR image segmentation based on multi-task residual fully convolutional networks. Proc IEEE Int Symp Biomed Imaging 2018; 2018:885-888. [PMID: 30344892] [PMCID: PMC6193482] [DOI: 10.1109/isbi.2018.8363713]
Abstract
Accurate segmentation of pelvic organs from magnetic resonance (MR) images plays an important role in image-guided radiotherapy. However, it is a challenging task due to inconsistent organ appearances and large shape variations. The fully convolutional network (FCN) has recently achieved state-of-the-art performance in medical image segmentation, but it requires a large amount of labeled data for training, which is usually difficult to obtain in real situations. To address these challenges, we propose a deep learning based semi-supervised learning framework. Specifically, we first train an initial multi-task residual fully convolutional network (FCN) on a limited number of labeled MRI data. Based on the initially trained FCN, unlabeled new data can be automatically segmented, and reasonable segmentations (after manual/automatic checking) can be included in the training data to fine-tune the network. This step can be repeated to progressively improve the training of the network, until no reasonable segmentations of new data can be included. Experimental results demonstrate the effectiveness of the proposed progressive semi-supervised learning strategy, as well as its advantage in terms of accuracy.
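The progressive loop described above is classic self-training: fit on the labeled data, pseudo-label the unlabeled pool, keep only the segmentations that pass a quality check, and repeat until nothing new is accepted. A framework-level sketch (illustrative; `model` and `is_reasonable` are stand-ins for the multi-task FCN and the manual/automatic check):

```python
def self_training(model, labeled, unlabeled, is_reasonable, max_rounds=10):
    """Progressive semi-supervised training loop (framework sketch).

    `model` must provide fit(pairs) and predict(x); `is_reasonable`
    stands in for the quality check on a candidate segmentation.
    """
    train_set = list(labeled)      # (image, label) pairs
    pool = list(unlabeled)         # images without labels
    for _ in range(max_rounds):
        model.fit(train_set)
        accepted, still_pending = [], []
        for x in pool:
            y_hat = model.predict(x)
            if is_reasonable(x, y_hat):
                accepted.append((x, y_hat))   # keep the pseudo-label
            else:
                still_pending.append(x)
        if not accepted:
            break                  # no reasonable new segmentations: stop
        train_set.extend(accepted)
        pool = still_pending
    return model, train_set
```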
Affiliation(s)
- Zishun Feng: Department of Automation, Tsinghua University; Department of Radiology and BRIC, UNC-Chapel Hill
- Dong Nie: Department of Computer Science, UNC-Chapel Hill; Department of Radiology and BRIC, UNC-Chapel Hill
- Li Wang: Department of Radiology and BRIC, UNC-Chapel Hill
113
Lian C, Zhang J, Liu M, Zong X, Hung SC, Lin W, Shen D. Multi-channel multi-scale fully convolutional network for 3D perivascular spaces segmentation in 7T MR images. Med Image Anal 2018. [PMID: 29518675] [DOI: 10.1016/j.media.2018.02.009]
Abstract
Accurate segmentation of perivascular spaces (PVSs) is an important step in the quantitative study of PVS morphology. However, since PVSs are thin tubular structures with relatively low contrast, and the number of PVSs is often large, manual delineation of PVSs is challenging and time-consuming. Although several automatic/semi-automatic methods, especially traditional learning-based approaches, have been proposed for segmentation of 3D PVSs, their performance often depends on hand-crafted image features and on sophisticated preprocessing operations prior to segmentation (e.g., specially defined regions-of-interest (ROIs)). In this paper, a novel fully convolutional neural network (FCN) requiring neither hand-crafted features nor pre-defined ROIs is proposed for efficient segmentation of PVSs. Particularly, the original T2-weighted 7T magnetic resonance (MR) images are first filtered via a non-local Haar-transform-based line singularity representation method to enhance the thin tubular structures. Both the original and enhanced MR images are used as multi-channel inputs to complementarily provide detailed image information and enhanced tubular structural information for the localization of PVSs. Multi-scale features are then automatically learned to characterize the spatial associations between PVSs and adjacent brain tissues. Finally, the produced PVS probability maps are recursively fed back into the network as an additional input channel to provide auxiliary contextual information for further refining the segmentation results. The proposed multi-channel multi-scale FCN has been evaluated on 7T brain MR images scanned from 20 subjects. The experimental results show its superior performance compared with several state-of-the-art methods.
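The recursive refinement step, feeding the produced probability map back into the network as an additional input channel, can be sketched independently of the FCN itself (illustrative; `segment_fn` stands in for the trained network, and the probability channel starts at zero):

```python
import numpy as np

def recursive_refine(segment_fn, channels, n_iters=2):
    """Recursively feed the probability map back as an extra input channel.

    `channels` is a list of same-shaped input images (e.g. original and
    line-enhanced); `segment_fn` maps a stacked (C+1, H, W) input to a
    (H, W) probability map.
    """
    prob = np.zeros(channels[0].shape)
    for _ in range(n_iters):
        stacked = np.stack(list(channels) + [prob], axis=0)
        prob = segment_fn(stacked)
    return prob
```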
Affiliation(s)
- Chunfeng Lian: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Jun Zhang: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mingxia Liu: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Xiaopeng Zong: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Sheng-Che Hung: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Weili Lin: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, South Korea
114
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42:60-88. [PMID: 28778026] [DOI: 10.1016/j.media.2017.07.005]
Affiliation(s)
- Geert Litjens
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands.
- Thijs Kooi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Francesco Ciompi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Mohsen Ghafoorian
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
115
Wang X, Yang W, Weinreb J, Han J, Li Q, Kong X, Yan Y, Ke Z, Luo B, Liu T, Wang L. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning. Sci Rep 2017; 7:15415. [PMID: 29133818 PMCID: PMC5684419 DOI: 10.1038/s41598-017-15720-y] [Citation(s) in RCA: 94] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2016] [Accepted: 10/31/2017] [Indexed: 01/11/2023] Open
Abstract
Prostate cancer (PCa) has been a major cause of death since ancient times, as documented in imaging of an Egyptian Ptolemaic mummy. PCa detection is critical to personalized medicine, and its appearance varies considerably on MRI scans. A total of 172 patients with 2,602 morphologic images (axial 2D T2-weighted imaging) of the prostate were obtained. A deep learning approach with a deep convolutional neural network (DCNN) and a non-deep learning approach with SIFT image features and bag-of-words (BoW), a representative method for image recognition and analysis, were used to distinguish pathologically confirmed PCa patients from patients with prostate benign conditions (BCs) such as prostatitis or benign prostatic hyperplasia (BPH). In fully automated detection of PCa patients, deep learning had a statistically higher area under the receiver operating characteristic curve (AUC) than non-deep learning (P = 0.0007). The AUCs were 0.84 (95% CI 0.78-0.89) for the deep learning method and 0.70 (95% CI 0.63-0.77) for the non-deep learning method. Our results suggest that deep learning with a DCNN is superior to non-deep learning with SIFT image features and the BoW model for fully automated differentiation of PCa patients from prostate BC patients. Our deep learning method is extensible to image modalities such as MR imaging, CT, and PET of other organs.
Affiliation(s)
- Xinggang Wang
- Department of Radiology, Tongji Hospital, Huazhong University of Science and Technology, Jiefang Road 1095, 430030, Wuhan, China
- School of Electronics Information and Communications, Huazhong University of Science and Technology, Luoyu Road 1037, Wuhan, Hubei, 430074, China
- Wei Yang
- Department of Nutrition and Food Hygiene, MOE Key Lab of Environment and Health, Hubei Key Laboratory of Food Nutrition and Safety, School of Public Health, Tongji Medical College, Huazhong University of Science and Technology, Hangkong Road 13, 430030, Wuhan, China
- Jeffrey Weinreb
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, 208042, Connecticut, USA
- Juan Han
- Department of Maternal and Child and Adolescent & Department of Epidemiology and Biostatistics, School of Public Health, Tongji Medical College, Huazhong University of Science and Technology, Hangkong Road 13, 430030, Wuhan, China
- Qiubai Li
- Program in Cellular and Molecular Medicine, Boston Children's Hospital, Boston, MA, 02115, USA
- Xiangchuang Kong
- Department of Radiology, Union Hospital, Huazhong University of Science and Technology, Jiefang Road 1277, 430022, Wuhan, China
- Yongluan Yan
- School of Electronics Information and Communications, Huazhong University of Science and Technology, Luoyu Road 1037, Wuhan, Hubei, 430074, China
- Zan Ke
- Department of Radiology, Tongji Hospital, Huazhong University of Science and Technology, Jiefang Road 1095, 430030, Wuhan, China
- Bo Luo
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Luoyu Road 1037, 430074, Wuhan, China
- Tao Liu
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Luoyu Road 1037, 430074, Wuhan, China
- Liang Wang
- Department of Radiology, Tongji Hospital, Huazhong University of Science and Technology, Jiefang Road 1095, 430030, Wuhan, China.
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science & Technology, Jie-Fang-Da-Dao 1095, Wuhan, 430030, P.R. China.
116

117
Shi Y, Yang W, Gao Y, Shen D. Does Manual Delineation only Provide the Side Information in CT Prostate Segmentation? Med Image Comput Comput Assist Interv 2017; 10435:692-700. [PMID: 30035275 PMCID: PMC6054464 DOI: 10.1007/978-3-319-66179-7_79] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Prostate segmentation, for accurate prostate localization in CT images, is regarded as a crucial yet challenging task. Due to inevitable factors (e.g., low contrast, large appearance and shape changes), the central problem is how to learn an informative feature representation to distinguish the prostate from non-prostate regions. We address this challenging feature learning by leveraging the manual delineation as guidance: the manual delineation not only indicates the category of patches, but also helps enhance the appearance of the prostate. This is realized by the proposed cascaded deep domain adaptation (CDDA) model. Specifically, CDDA constructs several consecutive source domains by overlaying a mask of the manual delineation on the original CT images with different mask ratios. Upon these source domains, a convnet guides increasingly transferable feature learning up to the target domain. In particular, we implement two typical variants: patch-to-scalar (CDDA-CNN) and patch-to-patch (CDDA-FCN). We also theoretically analyze the generalization error bound of CDDA. Experimental results show the promise of our method.
Affiliation(s)
- Yinghuan Shi
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Wanqi Yang
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- School of Computer Science, Nanjing Normal University, Nanjing, China
- Yang Gao
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Dinggang Shen
- Department of Radiology and BRIC, UNC Chapel Hill, Chapel Hill, USA
118
Xu Y, Yan K, Kim J, Wang X, Li C, Su L, Yu S, Xu X, Feng DD. Dual-stage deep learning framework for pigment epithelium detachment segmentation in polypoidal choroidal vasculopathy. Biomed Opt Express 2017; 8:4061-4076. [PMID: 28966847 PMCID: PMC5611923 DOI: 10.1364/boe.8.004061] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/15/2017] [Revised: 07/29/2017] [Accepted: 08/07/2017] [Indexed: 05/13/2023]
Abstract
Worldwide, polypoidal choroidal vasculopathy (PCV) is a common vision-threatening exudative maculopathy, and pigment epithelium detachment (PED) is an important clinical characteristic. Thus, precise and efficient PED segmentation is necessary for PCV clinical diagnosis and treatment. We propose a dual-stage learning framework via deep neural networks (DNN) for automated PED segmentation in PCV patients to avoid issues associated with manual PED segmentation (subjectivity, manual segmentation errors, and high time consumption). The optical coherence tomography scans of fifty patients were quantitatively evaluated with different algorithms and by clinicians. The dual-stage DNN outperformed existing PED segmentation methods on all segmentation accuracy parameters, including true positive volume fraction (85.74 ± 8.69%), Dice similarity coefficient (85.69 ± 8.08%), positive predictive value (86.02 ± 8.99%), and false positive volume fraction (0.38 ± 0.18%). The dual-stage DNN provides accurate quantitative PED information, works with multiple types of PEDs, and agrees well with manual delineation, suggesting that it is a potential automated assistant for PCV management.
Affiliation(s)
- Yupeng Xu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080, China; Shanghai Key Laboratory of Fundus Disease, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080, China
- Ke Yan
- Biomedical and Multimedia Information Technology (BMIT) Research Group, The University of Sydney, Sydney, NSW, 2006, Australia
- Jinman Kim
- Biomedical and Multimedia Information Technology (BMIT) Research Group, The University of Sydney, Sydney, NSW, 2006, Australia
- Xiuying Wang
- Biomedical and Multimedia Information Technology (BMIT) Research Group, The University of Sydney, Sydney, NSW, 2006, Australia
- Changyang Li
- Biomedical and Multimedia Information Technology (BMIT) Research Group, The University of Sydney, Sydney, NSW, 2006, Australia
- Li Su
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080, China; Shanghai Key Laboratory of Fundus Disease, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080, China
- Suqin Yu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080, China; Shanghai Key Laboratory of Fundus Disease, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080, China
- Xun Xu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080, China; Shanghai Key Laboratory of Fundus Disease, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080, China
- Dagan David Feng
- Biomedical and Multimedia Information Technology (BMIT) Research Group, The University of Sydney, Sydney, NSW, 2006, Australia
119
Cheng R, Roth HR, Lay N, Lu L, Turkbey B, Gandler W, McCreedy ES, Pohida T, Pinto PA, Choyke P, McAuliffe MJ, Summers RM. Automatic magnetic resonance prostate segmentation by deep learning with holistically nested networks. J Med Imaging (Bellingham) 2017; 4:041302. [PMID: 28840173 DOI: 10.1117/1.jmi.4.4.041302] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2017] [Accepted: 05/22/2017] [Indexed: 11/14/2022] Open
Abstract
Accurate automatic segmentation of the prostate in magnetic resonance images (MRI) is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise and the similar signal intensity of tissues around the prostate boundary prevent traditional segmentation methods from achieving high accuracy. We investigate both patch-based and holistic (image-to-image) deep-learning methods for segmentation of the prostate. First, we introduce a patch-based convolutional network that aims to refine the prostate contour given an initialization. Second, we propose a method for end-to-end prostate segmentation by integrating holistically nested edge detection with fully convolutional networks. Holistically nested networks (HNN) automatically learn a hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 250 patients in fivefold cross-validation. The proposed enhanced HNN model achieves a mean ± standard deviation Dice similarity coefficient (DSC) of [Formula: see text] and a mean Jaccard similarity coefficient (IoU) of [Formula: see text], computed without trimming any end slices. The proposed holistic model significantly ([Formula: see text]) outperforms a patch-based AlexNet model by 9% in DSC and 13% in IoU. Overall, the method achieves state-of-the-art performance compared with other MRI prostate segmentation methods in the literature.
Affiliation(s)
- Ruida Cheng
- Imaging Sciences Laboratory, Center of Information Technology, NIH, Bethesda, Maryland, United States
- Holger R Roth
- Imaging Biomarkers and CAD Laboratory, Clinical Center, NIH, Bethesda, Maryland, United States
- Nathan Lay
- Imaging Biomarkers and CAD Laboratory, Clinical Center, NIH, Bethesda, Maryland, United States
- Le Lu
- Imaging Biomarkers and CAD Laboratory, Clinical Center, NIH, Bethesda, Maryland, United States
- Baris Turkbey
- Molecular Imaging Program, NCI, Bethesda, Maryland, United States
- William Gandler
- Imaging Sciences Laboratory, Center of Information Technology, NIH, Bethesda, Maryland, United States
- Evan S McCreedy
- Imaging Sciences Laboratory, Center of Information Technology, NIH, Bethesda, Maryland, United States
- Tom Pohida
- Computational Bioscience and Engineering Laboratory, Center of Information Technology, NIH, Bethesda, Maryland, United States
- Peter A Pinto
- Center for Cancer Research, Urologic Oncology Branch, Bethesda, Maryland, United States
- Peter Choyke
- Molecular Imaging Program, NCI, Bethesda, Maryland, United States
- Matthew J McAuliffe
- Imaging Sciences Laboratory, Center of Information Technology, NIH, Bethesda, Maryland, United States
- Ronald M Summers
- Imaging Biomarkers and CAD Laboratory, Clinical Center, NIH, Bethesda, Maryland, United States
120
Medical image classification via multiscale representation learning. Artif Intell Med 2017; 79:71-78. [PMID: 28701276 DOI: 10.1016/j.artmed.2017.06.009] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2017] [Revised: 05/19/2017] [Accepted: 06/20/2017] [Indexed: 10/19/2022]
Abstract
Multiscale structure is an essential attribute of natural images. Similarly, there exist scaling phenomena in medical images, and therefore a wide range of observation scales would be useful for medical imaging measurements. The present work proposes a multiscale representation learning method via sparse autoencoder networks to capture the intrinsic scales in medical images for the classification task. We obtain multiscale feature detectors with sparse autoencoders having different receptive field sizes, and then generate feature maps by convolution. This strategy characterizes variously sized structures in medical imaging better than a single-scale version. Subsequently, the Fisher vector technique is used to encode the extracted features into a fixed-length image representation, which provides richer high-order statistics and enhances the descriptiveness and discriminative ability of the feature representation. We carry out experiments on the IRMA-2009 medical collection and the mammographic patch dataset. The extensive experimental results demonstrate that the proposed method has superior performance.
121
Abstract
This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.
Affiliation(s)
- Dinggang Shen
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Guorong Wu
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
122
Wan X, Zhao C. Local receptive field constrained stacked sparse autoencoder for classification of hyperspectral images. J Opt Soc Am A Opt Image Sci Vis 2017; 34:1011-1020. [PMID: 29036085 DOI: 10.1364/josaa.34.001011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/26/2017] [Accepted: 04/20/2017] [Indexed: 06/07/2023]
Abstract
As a competitive machine learning algorithm, the stacked sparse autoencoder (SSA) has achieved outstanding popularity for exploiting high-level features in classification of hyperspectral images (HSIs). In the standard SSA architecture, the nodes between adjacent layers are fully connected and need to be iteratively fine-tuned during the pretraining stage; however, nodes of earlier layers that lie further away are less likely to have a dense correlation with a given node of subsequent layers. Therefore, to reduce the classification error and increase the learning rate, this paper proposes a general framework of locally connected SSA: a biologically inspired local receptive field (LRF) constrained SSA architecture is employed to simultaneously characterize the local correlations of spectral features and extract high-level feature representations of hyperspectral data. In addition, the appropriate receptive field constraint is concurrently updated by measuring the spatial distances from neighboring nodes to the corresponding node. Finally, an efficient random forest classifier is cascaded to the last hidden layer of the SSA architecture as a benchmark classifier. Experimental results on two real HSI datasets demonstrate that the proposed hierarchical LRF-constrained stacked sparse autoencoder and random forest (SSARF) yields encouraging results relative to competing methods, with overall accuracy improvements of 0.72%-10.87% on the Indian Pines dataset and 0.74%-7.90% on the Kennedy Space Center dataset; moreover, it requires less running time than similar SSARF-based methods.
123
Bibault JE, Burgun A, Giraud P. [Artificial intelligence applied to radiotherapy]. Cancer Radiother 2017; 21:239-243. [DOI: 10.1016/j.canrad.2016.09.021] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2016] [Revised: 09/21/2016] [Accepted: 09/28/2016] [Indexed: 02/04/2023]
124
Wang C, Elazab A, Wu J, Hu Q. Lung nodule classification using deep feature fusion in chest radiography. Comput Med Imaging Graph 2017; 57:10-18. [DOI: 10.1016/j.compmedimag.2016.11.004] [Citation(s) in RCA: 72] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2016] [Revised: 11/08/2016] [Accepted: 11/10/2016] [Indexed: 11/28/2022]
125
Chen H, Wu X, Tao G, Peng Q. Automatic content understanding with cascaded spatial–temporal deep framework for capsule endoscopy videos. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.06.077] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
126
Ma L, Guo R, Zhang G, Tade F, Schuster DM, Nieh P, Master V, Fei B. Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion. Proc SPIE Int Soc Opt Eng 2017; 10133. [PMID: 30220767 DOI: 10.1117/12.2255755] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method that combines a deep learning method with multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to exclude irrelevant regions. Then, we use convolutional neural networks (CNN) to learn deep features for distinguishing prostate pixels from non-prostate pixels in order to obtain preliminary segmentation results. CNNs can automatically learn deep features adapted to the data, unlike handcrafted features. Finally, we select several similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% as compared to the manual segmentation. The deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
Affiliation(s)
- Ling Ma
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- School of Computer Science, Beijing Institute of Technology
- Rongrong Guo
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Guoyi Zhang
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Funmilayo Tade
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- David M Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Peter Nieh
- Department of Urology, Emory University, Atlanta, GA
- Viraj Master
- Department of Urology, Emory University, Atlanta, GA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Winship Cancer Institute of Emory University, Atlanta, GA
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA
127
Abstract
Deep learning implemented in a collaborative cloud-based platform empowers ophthalmologists in the diagnosis of congenital cataracts.
Affiliation(s)
- Qian Wang
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA