351
Wu Z, Guo Y, Park SH, Gao Y, Dong P, Lee SW, Shen D. Robust brain ROI segmentation by deformation regression and deformable shape model. Med Image Anal 2017;43:198-213. [PMID: 29149715] [DOI: 10.1016/j.media.2017.11.001]
Abstract
We propose a robust and efficient learning-based deformable model for segmenting regions of interest (ROIs) from structural MR brain images. Unlike conventional deformable-model-based methods, which deform a shape model locally around the initialization location, we learn an image-based regressor that guides the deformable model to fit the target ROI. Specifically, given any voxel in a new image, the image-based regressor predicts the displacement vector from this voxel towards the boundary of the target ROI, which can be used to guide the deformable segmentation. By predicting displacement vector maps for the whole image, our deformable model can use multiple non-boundary predictions jointly and iteratively drive the initial shape model to converge onto the target ROI boundary, which makes it more robust to local prediction errors and to initialization. In addition, by introducing a prior shape model, our segmentation avoids the isolated segmentations that often occur in multi-atlas-based methods. To learn an image-based regressor for displacement vector prediction, we adopt the following novel strategies in the learning procedure: (1) a joint classification and regression random forest is proposed to learn an image-based regressor together with an ROI classifier in a multi-task manner; (2) high-level context features are extracted from intermediate (estimated) displacement vector and classification maps to enforce the relationship between predicted displacement vectors at neighboring voxels. To validate our method, we compare it with state-of-the-art multi-atlas-based methods and other learning-based methods on three public brain MR datasets. The results consistently show that our method is better in terms of both segmentation accuracy and computational efficiency.
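As a rough illustration of the guidance idea (not the authors' implementation: the random-forest regressor and the shape-prior regularization are omitted, and all names are hypothetical), the sketch below iteratively moves shape-model vertices along a predicted displacement-vector map:

```python
import numpy as np

def evolve_shape(vertices, displacement_field, n_iters=50, step=0.5):
    """Move shape-model vertices along a regressed displacement field.

    vertices: (N, 3) array of shape-model points in voxel coordinates.
    displacement_field: (X, Y, Z, 3) array; entry [x, y, z] is the predicted
    vector from voxel (x, y, z) towards the target ROI boundary.
    """
    for _ in range(n_iters):
        # Nearest-voxel lookup of the regressed boundary displacement.
        idx = np.clip(np.round(vertices).astype(int), 0,
                      np.array(displacement_field.shape[:3]) - 1)
        d = displacement_field[idx[:, 0], idx[:, 1], idx[:, 2]]
        vertices = vertices + step * d  # small steps for stable convergence
    return vertices
```

Because every voxel carries a boundary prediction, vertices that start far from the boundary still receive a useful update, which is what makes such a scheme robust to initialization.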
Affiliation(s)
- Zhengwang Wu: IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA
- Yanrong Guo: IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA
- Sang Hyun Park: Department of Robotics Engineering, DGIST, Republic of Korea
- Yaozong Gao: IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA
- Pei Dong: IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA
- Seong-Whan Lee: Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Dinggang Shen: IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
352
Mohseni Salehi SS, Erdogmus D, Gholipour A. Auto-Context Convolutional Neural Network (Auto-Net) for Brain Extraction in Magnetic Resonance Imaging. IEEE Trans Med Imaging 2017;36:2319-2330. [PMID: 28678704] [PMCID: PMC5715475] [DOI: 10.1109/TMI.2017.2721362]
Abstract
Brain extraction, or whole-brain segmentation, is an important first step in many neuroimage analysis pipelines. The accuracy and robustness of brain extraction are therefore crucial for the accuracy of the entire brain analysis process. State-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and the query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learns 3-D image information without the need for computationally expensive 3-D convolutions, and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information, along with the original image patches, to learn the local shape and connectedness of the brain and extract it from non-brain tissue. The brain extraction results obtained from our CNNs are superior to recently reported results in the literature on two publicly available benchmark datasets, LPBA40 and OASIS, on which we obtained Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm on the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) datasets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), which performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.
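The auto-context loop itself is simple to express. Below is a minimal 2-D, single-channel sketch (our own toy stand-in, not Auto-Net: the paper uses multi-window patches, three orthogonal pathways or a U-net, and separately trained per-iteration networks):

```python
import torch
import torch.nn as nn

class ContextStep(nn.Module):
    """Toy per-iteration network: image + posterior channels in, posterior out."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

def auto_context_predict(image, steps):
    """image: (B, 1, H, W); steps: one trained network per auto-context pass.
    Each pass sees the image concatenated with the previous posterior map."""
    posterior = torch.full_like(image, 0.5)  # uninformative initial context
    for net in steps:
        posterior = net(torch.cat([image, posterior], dim=1))
    return posterior

# Usage sketch: steps = [ContextStep(2) for _ in range(3)], trained sequentially.
```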
353
Qayyum A, Anwar SM, Awais M, Majid M. Medical image retrieval using deep convolutional neural network. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.05.025]
354
Moeskops P, de Bresser J, Kuijf HJ, Mendrik AM, Biessels GJ, Pluim JPW, Išgum I. Evaluation of a deep learning approach for the segmentation of brain tissues and white matter hyperintensities of presumed vascular origin in MRI. Neuroimage Clin 2017;17:251-262. [PMID: 29159042] [PMCID: PMC5683197] [DOI: 10.1016/j.nicl.2017.10.007]
Abstract
Automatic segmentation of brain tissues and white matter hyperintensities of presumed vascular origin (WMH) in MRI of older patients is widely described in the literature. Although brain abnormalities and motion artefacts are common in this age group, most segmentation methods are not evaluated in a setting that includes them. In the present study, our tissue segmentation method for brain MRI was extended and evaluated for additional WMH segmentation. Furthermore, our method was evaluated in two large cohorts with a realistic variation in brain abnormalities and motion artefacts. The method uses a multi-scale convolutional neural network with a T1-weighted image, a T2-weighted fluid attenuated inversion recovery (FLAIR) image and a T1-weighted inversion recovery (IR) image as input. The method automatically segments white matter (WM), cortical grey matter (cGM), basal ganglia and thalami (BGT), cerebellum (CB), brain stem (BS), lateral ventricular cerebrospinal fluid (lvCSF), peripheral cerebrospinal fluid (pCSF), and WMH. Our method was evaluated quantitatively with images publicly available from the MRBrainS13 challenge (n = 20), quantitatively and qualitatively in relatively healthy older subjects (n = 96), and qualitatively in patients from a memory clinic (n = 110). The method can accurately segment WMH (overall Dice coefficient in the MRBrainS13 data of 0.67) without compromising performance for tissue segmentations (overall Dice coefficients in the MRBrainS13 data of 0.87 for WM, 0.85 for cGM, 0.82 for BGT, 0.93 for CB, 0.92 for BS, 0.93 for lvCSF, 0.76 for pCSF). Furthermore, the automatic WMH volumes showed a high correlation with manual WMH volumes (Spearman's ρ = 0.83 for relatively healthy older subjects). In both cohorts, our method produced reliable segmentations (as determined by a human observer) in most images (relatively healthy/memory clinic: tissues 88%/77% reliable, WMH 85%/84% reliable) despite various degrees of brain abnormalities and motion artefacts. In conclusion, this study shows that a convolutional neural network-based segmentation method can accurately segment brain tissues and WMH in MR images of older patients with varying degrees of brain abnormalities and motion artefacts.
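For reference, the per-label Dice overlaps reported above can be computed as in this short sketch (the function name is ours):

```python
import numpy as np

def dice_per_class(seg, ref, labels):
    """Dice overlap between automatic (seg) and reference (ref) label volumes."""
    scores = {}
    for lab in labels:  # e.g. WM, cGM, BGT, CB, BS, lvCSF, pCSF, WMH
        a, b = seg == lab, ref == lab
        denom = a.sum() + b.sum()
        scores[lab] = 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan
    return scores
```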
Affiliation(s)
- Pim Moeskops: Image Sciences Institute, University Medical Center Utrecht and Utrecht University, The Netherlands; Medical Image Analysis, Department of Biomedical Engineering, Eindhoven University of Technology, The Netherlands
- Jeroen de Bresser: Department of Radiology, University Medical Center Utrecht, The Netherlands
- Hugo J Kuijf: Image Sciences Institute, University Medical Center Utrecht and Utrecht University, The Netherlands
- Adriënne M Mendrik: Image Sciences Institute, University Medical Center Utrecht and Utrecht University, The Netherlands
- Geert Jan Biessels: Department of Neurology, University Medical Center Utrecht, The Netherlands
- Josien P W Pluim: Medical Image Analysis, Department of Biomedical Engineering, Eindhoven University of Technology, The Netherlands
- Ivana Išgum: Image Sciences Institute, University Medical Center Utrecht and Utrecht University, The Netherlands
355
Dou Q, Yu L, Chen H, Jin Y, Yang X, Qin J, Heng PA. 3D deeply supervised network for automated segmentation of volumetric medical images. Med Image Anal 2017;41:40-54. [DOI: 10.1016/j.media.2017.05.001]
356
Refining diagnosis of Parkinson's disease with deep learning-based interpretation of dopamine transporter imaging. Neuroimage Clin 2017;16:586-594. [PMID: 28971009] [PMCID: PMC5610036] [DOI: 10.1016/j.nicl.2017.09.010]
Abstract
Dopaminergic degeneration is a pathologic hallmark of Parkinson's disease (PD), and it can be assessed by dopamine transporter imaging such as FP-CIT SPECT. Until now, such imaging has been routinely interpreted by human readers, which can introduce interobserver variability and result in inconsistent diagnoses. In this study, we developed a deep learning-based FP-CIT SPECT interpretation system to refine the imaging diagnosis of PD. This system, trained on SPECT images of PD patients and normal controls, shows high classification accuracy, comparable with experts' evaluation aided by quantification results. Its high accuracy was validated in an independent cohort composed of patients with PD and nonparkinsonian tremor. In addition, we showed that some patients clinically diagnosed with PD who have scans without evidence of dopaminergic deficit (SWEDD), an atypical subgroup of PD, could be reclassified by our automated system. Our results suggest that the deep learning-based model can accurately interpret FP-CIT SPECT and overcome the variability of human evaluation. It could aid the imaging diagnosis of patients with uncertain Parkinsonism and provide objective patient group classification, particularly for SWEDD, in further clinical studies. In summary: a deep learning-based FP-CIT SPECT interpretation model was developed; it could overcome interobserver variability; its accuracy in discriminating PD from normal was comparable to the clinical standard; it also showed high accuracy in differentiating PD from nonparkinsonian tremor; and clinical follow-up showed that SWEDD cases could be reclassified as PD by the model.
357
358
Claessens NHP, Kelly CJ, Counsell SJ, Benders MJNL. Neuroimaging, cardiovascular physiology, and functional outcomes in infants with congenital heart disease. Dev Med Child Neurol 2017;59:894-902. [PMID: 28542743] [DOI: 10.1111/dmcn.13461]
Abstract
This review integrates data on brain dysmaturation and acquired brain injury using fetal and neonatal magnetic resonance imaging (MRI), including the contribution of cardiovascular physiology to differences in brain development, and the relationship between brain abnormalities and subsequent neurological impairments in infants with congenital heart disease (CHD). The antenatal and neonatal periods are critical for optimal brain development; the developing brain is particularly vulnerable to haemodynamic disturbances during this time. Altered cerebral perfusion and decreased cerebral oxygen delivery in the antenatal period can affect functional and structural brain development, while postnatal haemodynamic fluctuations may cause additional injury. In critical CHD, brain dysmaturation and acquired brain injury result from a combination of underlying cardiovascular pathology and surgery performed in the neonatal period. MRI findings in infants with CHD can be used to evaluate potential clinical risk factors for brain abnormalities and to aid prediction of functional outcomes at an early stage. In addition, information on the timing of brain dysmaturation and acquired brain injury in CHD has the potential to be used when developing strategies to optimize neurodevelopment.
Affiliation(s)
- Nathalie H P Claessens: Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, the Netherlands
- Christopher J Kelly: Centre for the Developing Brain, Division of Imaging Sciences and Biomedical Engineering, King's College London, London, UK
- Serena J Counsell: Centre for the Developing Brain, Division of Imaging Sciences and Biomedical Engineering, King's College London, London, UK
- Manon J N L Benders: Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, the Netherlands
359
Fang L, Zhang L, Nie D, Cao X, Bahrami K, He H, Shen D. Brain Image Labeling Using Multi-atlas Guided 3D Fully Convolutional Networks. Patch-Based Techniques in Medical Imaging (PATCH-MI 2017, held in conjunction with MICCAI 2017) 2017;10530:12-19. [PMID: 29104969] [PMCID: PMC5669261] [DOI: 10.1007/978-3-319-67434-6_2]
Abstract
Automatic labeling of anatomical structures in brain images plays an important role in neuroimaging analysis. Among all methods, multi-atlas-based segmentation methods are widely used because of their robustness in propagating prior label information. However, non-linear registration is always needed, which is time-consuming. Alternatively, patch-based methods have been proposed to relax the requirement of image registration, but the labeling is often determined by the target image information alone, without direct assistance from the atlases. To address these limitations, in this paper we propose a multi-atlas guided 3D fully convolutional network (FCN) for brain image labeling. Specifically, multi-atlas-based guidance is incorporated during the network learning. This boosts the discriminative power of the FCN, which eventually contributes to accurate prediction. Experiments show that the use of multi-atlas guidance improves brain labeling performance.
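One plausible way to hand atlas guidance to a 3D FCN, sketched below under our own assumptions (the paper's exact input construction may differ), is to stack aligned atlas intensity and label patches with the target patch as extra input channels:

```python
import torch

def build_guided_input(target_patch, atlas_intensity_patches, atlas_label_patches):
    """Stack a target patch with aligned atlas intensity and label patches
    along the channel axis so a 3D FCN can exploit the atlas guidance.

    target_patch: (1, D, H, W); each atlas patch: (1, D, H, W).
    Returns a (1 + 2 * n_atlases, D, H, W) tensor.
    """
    channels = [target_patch]
    for intensity, label in zip(atlas_intensity_patches, atlas_label_patches):
        channels += [intensity, label.float()]
    return torch.cat(channels, dim=0)
```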
Affiliation(s)
- Longwei Fang: Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Lichi Zhang: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dong Nie: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Xiaohuan Cao: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; School of Automation, Northwestern Polytechnical University, Xi'an, China
- Khosro Bahrami: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Huiguang He: Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
360
Wang S, Zhou M, Liu Z, Liu Z, Gu D, Zang Y, Dong D, Gevaert O, Tian J. Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation. Med Image Anal 2017;40:172-183. [PMID: 28688283] [PMCID: PMC5661888] [DOI: 10.1016/j.media.2017.06.014]
Abstract
Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the visual similarity between nodules and their surroundings make robust nodule segmentation difficult. In this study, we propose a data-driven model, termed the Central Focused Convolutional Neural Network (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the influence of its neighboring voxels varies according to their spatial locations. We model this phenomenon with a novel central pooling layer that retains more information around the center of the voxel patch, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling scheme to facilitate model training, in which training samples are selected according to their degree of segmentation difficulty. The proposed method was extensively evaluated on the public LIDC dataset, comprising 893 nodules, and on an independent dataset of 74 nodules from Guangdong General Hospital (GDGH). CF-CNN achieved superior segmentation performance, with average Dice scores of 82.15% and 80.02% on the two datasets, respectively. Moreover, we compared our results with the inter-radiologist consistency on the LIDC dataset, showing a difference in average Dice score of only 1.98%.
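The weighted-sampling component is easy to picture in isolation; here is a minimal sketch (our own, with a hypothetical per-patch difficulty score):

```python
import numpy as np

def sample_training_patches(patches, difficulty, n, rng=None):
    """Draw patches with probability proportional to an estimated segmentation
    difficulty, so hard examples are seen more often during training."""
    rng = rng or np.random.default_rng()
    p = np.asarray(difficulty, dtype=float)
    p = p / p.sum()  # normalize difficulty scores into a distribution
    idx = rng.choice(len(patches), size=n, replace=True, p=p)
    return [patches[i] for i in idx]
```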
Affiliation(s)
- Shuo Wang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Mu Zhou: Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University, CA 94305, USA
- Zaiyi Liu: Guangdong General Hospital, Guangzhou, Guangdong 510080, China
- Zhenyu Liu: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Dongsheng Gu: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yali Zang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Di Dong: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Olivier Gevaert: Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University, CA 94305, USA
- Jie Tian: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China; Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China
361
Akkus Z, Galimzianova A, Hoogi A, Rubin DL, Erickson BJ. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. J Digit Imaging 2017;30:449-459. [PMID: 28577131] [PMCID: PMC5537095] [DOI: 10.1007/s10278-017-9983-4]
Abstract
Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As deep learning architectures become more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First, we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.
Affiliation(s)
- Zeynettin Akkus: Radiology Informatics Lab, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Alfiia Galimzianova: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Assaf Hoogi: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Daniel L Rubin: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Bradley J Erickson: Radiology Informatics Lab, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
362
Ghafoorian M, Karssemeijer N, Heskes T, van Uden IWM, Sanchez CI, Litjens G, de Leeuw FE, van Ginneken B, Marchiori E, Platel B. Location Sensitive Deep Convolutional Neural Networks for Segmentation of White Matter Hyperintensities. Sci Rep 2017;7:5110. [PMID: 28698556] [PMCID: PMC5505987] [DOI: 10.1038/s41598-017-05300-5]
Abstract
The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNNs) have had huge successes in computer vision, but they lack the natural ability to incorporate anatomical location in their decision-making process, hindering success in some medical image analysis tasks. In this paper, to integrate anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. We observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features, as well as CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared with 0.805 for an independent human observer. The performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06).
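A minimal sketch of one explicit-location variant (our own simplification, not the paper's architecture): spatial features are concatenated with the convolutional features before the fully connected layers.

```python
import torch
import torch.nn as nn

class LocationAwareNet(nn.Module):
    """Patch classifier that appends explicit location features (e.g.
    normalized coordinates or distances to landmarks) to the CNN features."""
    def __init__(self, n_loc_feats=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Sequential(
            nn.Linear(32 + n_loc_feats, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, patch, loc):
        f = self.conv(patch).flatten(1)             # (B, 32) image features
        return self.fc(torch.cat([f, loc], dim=1))  # (B, 2) class logits
```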
Affiliation(s)
- Mohsen Ghafoorian: Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands; Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Nico Karssemeijer: Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Tom Heskes: Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
- Inge W M van Uden: Donders Institute for Brain, Cognition and Behaviour, Department of Neurology, Radboud University Medical Center, Nijmegen, The Netherlands
- Clara I Sanchez: Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Geert Litjens: Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Frank-Erik de Leeuw: Donders Institute for Brain, Cognition and Behaviour, Department of Neurology, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken: Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Elena Marchiori: Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
- Bram Platel: Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
363
Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol 2017;10:257-273. [PMID: 28689314] [DOI: 10.1007/s12194-017-0406-5]
Abstract
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what changed in machine learning before and after the introduction of deep learning, (2) what the source of the power of deep learning is, (3) two major deep-learning models: the massive-training artificial neural network (MTANN) and the convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the direct learning of image data without object segmentation or feature extraction; this is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML), which includes deep learning, has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient to develop, had higher performance, and required fewer training cases than CNNs. Deep learning, or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.
Affiliation(s)
- Kenji Suzuki: Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, 3440 South Dearborn Street, Chicago, IL, 60616, USA; World Research Hub Initiative (WRHI), Tokyo Institute of Technology, Tokyo, Japan
364
Valverde S, Cabezas M, Roura E, González-Villà S, Pareto D, Vilanova JC, Ramió-Torrentà L, Rovira À, Oliver A, Lladó X. Improving automated multiple sclerosis lesion segmentation with a cascaded 3D convolutional neural network approach. Neuroimage 2017;155:159-168. [DOI: 10.1016/j.neuroimage.2017.04.034]
365
Shen D, Wu G, Suk HI. Deep Learning in Medical Image Analysis. Annu Rev Biomed Eng 2017;19:221-248.
Abstract
This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.
Affiliation(s)
- Dinggang Shen: Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Guorong Wu: Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599
- Heung-Il Suk: Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
366
Dora L, Agrawal S, Panda R, Abraham A. State-of-the-Art Methods for Brain Tissue Segmentation: A Review. IEEE Rev Biomed Eng 2017. [PMID: 28622675] [DOI: 10.1109/RBME.2017.2715350]
Abstract
Brain tissue segmentation is one of the most sought-after research areas in medical image processing. It provides detailed quantitative brain analysis for accurate disease diagnosis, detection, and classification of abnormalities, and it plays an essential role in discriminating healthy tissues from lesion tissues. Accurate disease diagnosis and treatment planning therefore depend largely on the performance of the segmentation method used. In this review, we study recent advances in brain tissue segmentation methods and their state of the art in neuroscience research. The review also highlights the major challenges faced during brain tissue segmentation. An effective comparison is made among state-of-the-art brain tissue segmentation methods, and a study of some of the validation measures used to evaluate different segmentation methods is also included. The segmentation methodologies and experiments presented in this review are encouraging enough to attract researchers working in this field.
367
Yan L, Guo Y, Qi J, Zhu Q, Gu L, Zheng C, Lin T, Lu Y, Zeng Z, Yu S, Zhu S, Zhou X, Zhang X, Du Y, Yao Z, Lu Y, Liu X. Iodine and freeze-drying enhanced high-resolution MicroCT imaging for reconstructing 3D intraneural topography of human peripheral nerve fascicles. J Neurosci Methods 2017. [PMID: 28634148] [DOI: 10.1016/j.jneumeth.2017.06.009]
Abstract
BACKGROUND: The precise annotation and accurate identification of the topography of fascicles to their end organs are prerequisites for studying human peripheral nerves.
NEW METHOD: In this study, we present a feasible imaging method that acquires 3D high-resolution (HR) topography of peripheral nerve fascicles using an iodine and freeze-drying (IFD) micro-computed tomography (microCT) method to greatly increase the contrast of fascicle images.
RESULTS: The enhanced microCT imaging method can facilitate the reconstruction of high-contrast HR fascicle images, fascicle segmentation and extraction, feature analysis, and the tracing of fascicle topography to the end organs, which define fascicle functions.
COMPARISON WITH EXISTING METHODS: The complex intraneural aggregation and distribution of fascicles is typically assessed using histological techniques or MR imaging to acquire coarse axial three-dimensional (3D) maps. However, the disadvantages of histological techniques (static, axial manual registration, and data instability) and MR imaging (low resolution) limit these applications in reconstructing the topography of nerve fascicles.
CONCLUSIONS: Thus, enhanced microCT is a new technique for acquiring 3D intraneural topography of human peripheral nerve fascicles, both to improve our understanding of neurobiological principles and to guide accurate repair in the clinic. Additionally, the 3D microstructure data can be used as a biofabrication model, which in turn can be used to fabricate scaffolds to repair long nerve gaps.
Affiliation(s)
- Liwei Yan: Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, PR China; Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangdong, Guangzhou 510080, PR China
- Yongze Guo: School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510080, PR China; Guangdong Province Key Laboratory of Computational Science, Guangzhou 510080, PR China
- Jian Qi: Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, PR China; Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangdong, Guangzhou 510080, PR China
- Qingtang Zhu: Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, PR China; Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangdong, Guangzhou 510080, PR China
- Liqiang Gu: Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, PR China; Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangdong, Guangzhou 510080, PR China
- Canbin Zheng: Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, PR China; Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangdong, Guangzhou 510080, PR China
- Tao Lin: Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, PR China; Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangdong, Guangzhou 510080, PR China
- Yutong Lu: National Supercomputer Center in Guangzhou, Sun Yat-sen University, Guangzhou 510080, PR China
- Zitao Zeng: School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510080, PR China; Guangdong Province Key Laboratory of Computational Science, Guangzhou 510080, PR China
- Sha Yu: School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510080, PR China; Guangdong Province Key Laboratory of Computational Science, Guangzhou 510080, PR China
- Shuang Zhu: Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, PR China; Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangdong, Guangzhou 510080, PR China
- Xiang Zhou: Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, PR China; Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangdong, Guangzhou 510080, PR China
- Xi Zhang: National Supercomputer Center in Guangzhou, Sun Yat-sen University, Guangzhou 510080, PR China
- Yunfei Du: National Supercomputer Center in Guangzhou, Sun Yat-sen University, Guangzhou 510080, PR China
- Zhi Yao: Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, PR China; Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangdong, Guangzhou 510080, PR China
- Yao Lu: School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510080, PR China; Guangdong Province Key Laboratory of Computational Science, Guangzhou 510080, PR China
- Xiaolin Liu: Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, PR China; Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangdong, Guangzhou 510080, PR China
368
Lee JG, Jun S, Cho YW, Lee H, Kim GB, Seo JB, Kim N. Deep Learning in Medical Imaging: General Overview. Korean J Radiol 2017;18:570-584. [PMID: 28670152] [PMCID: PMC5447633] [DOI: 10.3348/kjr.2017.18.4.570]
Abstract
The artificial neural network (ANN), a machine learning technique inspired by the human neuronal synapse system, was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems encountered when training deep architectures, a lack of computing power, and, primarily, the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced owing to the availability of big data, enhanced computing power with current graphics processing units, and novel algorithms to train deep neural networks. Recent studies of this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging.
Affiliation(s)
- June-Goo Lee: Biomedical Engineering Research Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea
- Sanghoon Jun: Department of Radiology, Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea; Department of Convergence Medicine, Biomedical Engineering Research Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea
- Young-Won Cho: Department of Radiology, Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea; Department of Convergence Medicine, Biomedical Engineering Research Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea
- Hyunna Lee: Department of Radiology, Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea; Department of Convergence Medicine, Biomedical Engineering Research Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea
- Guk Bae Kim: Department of Radiology, Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea; Department of Convergence Medicine, Biomedical Engineering Research Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea
- Joon Beom Seo: Department of Radiology, Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea
- Namkug Kim: Department of Radiology, Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea; Department of Convergence Medicine, Biomedical Engineering Research Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Korea
369
Zhang L, Nogues I, Summers RM, Liu S, Yao J. DeepPap: Deep Convolutional Networks for Cervical Cell Classification. IEEE J Biomed Health Inform 2017;21:1633-1643. [PMID: 28541229] [DOI: 10.1109/JBHI.2017.2705583]
Abstract
Automation-assisted cervical screening via Pap smear or liquid-based cytology (LBC) is a highly effective cell-imaging-based cancer detection tool, in which cells are partitioned into "abnormal" and "normal" categories. However, the success of most traditional classification methods relies on the presence of accurate cell segmentations. Despite sixty years of research in this field, accurate segmentation remains a challenge in the presence of cell clusters and pathologies. Moreover, previous classification methods are built only upon the extraction of hand-crafted features, such as morphology and texture. This paper addresses these limitations by proposing a method to directly classify cervical cells, without prior segmentation, based on deep features, using convolutional neural networks (ConvNets). First, the ConvNet is pretrained on a natural image dataset. It is subsequently fine-tuned on a cervical cell dataset consisting of adaptively resampled image patches coarsely centered on the nuclei. In the testing phase, aggregation is used to average the prediction scores of a similar set of image patches. The proposed method is evaluated on both Pap smear and LBC datasets. Results show that our method outperforms previous algorithms in classification accuracy (98.3%), area under the curve (0.99), and especially specificity (98.3%) when applied to the Herlev benchmark Pap smear dataset and evaluated using five-fold cross-validation. Similarly superior performance is achieved on the HEMLBC (H&E stained manual LBC) dataset. Our method is promising for the development of automation-assisted reading systems in primary cervical screening.
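The test-time aggregation step can be sketched as follows (our own minimal version, assuming a classifier that returns logits over the classes):

```python
import torch

@torch.no_grad()
def aggregate_prediction(model, patches):
    """Average softmax scores over a set of patches coarsely centered on the
    same nucleus; patches: (N, C, H, W) tensor of resampled crops."""
    model.eval()
    scores = torch.softmax(model(patches), dim=1)  # (N, n_classes)
    return scores.mean(dim=0)                      # aggregated class scores
```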
370
Prediction of cognitive and motor outcome of preterm infants based on automatic quantitative descriptors from neonatal MR brain images. Sci Rep 2017;7:2163. [PMID: 28526882] [PMCID: PMC5438406] [DOI: 10.1038/s41598-017-02307-w]
Abstract
This study investigates the predictive ability of automatic quantitative brain MRI descriptors for the identification of infants with low cognitive and/or motor outcome at 2-3 years chronological age. MR brain images of 173 patients were acquired at 30 weeks postmenstrual age (PMA) (n = 86) and 40 weeks PMA (n = 153) between 2008 and 2013. Eight tissue volumes and measures of cortical morphology were automatically computed. A support vector machine classifier was employed to identify infants who exhibit low cognitive and/or motor outcome (<85) at 2-3 years chronological age as assessed by the Bayley scales. Based on the images acquired at 30 weeks PMA, the automatic identification resulted in an area under the receiver operating characteristic curve (AUC) of 0.78 for low cognitive outcome, and an AUC of 0.80 for low motor outcome. Identification based on the change in the descriptors between 30 and 40 weeks PMA (n = 66) resulted in an AUC of 0.80 for low cognitive outcome and an AUC of 0.85 for low motor outcome. This study demonstrates the feasibility of identifying preterm infants at risk of cognitive and motor impairments on the basis of descriptors automatically computed from images acquired at 30 and 40 weeks PMA.
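A skeletal version of such a descriptor-based classification pipeline might look as follows (a sketch under our own assumptions: the kernel, cross-validation setup, and data are placeholders, not the study's protocol):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# X: one row per infant (tissue volumes + cortical morphology descriptors);
# y: 1 if the Bayley cognitive (or motor) score is < 85, else 0.
rng = np.random.default_rng(0)
X = rng.random((86, 10))    # placeholder descriptors for illustration
y = rng.integers(0, 2, 86)  # placeholder outcomes

clf = SVC(kernel="linear", probability=True)
probs = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("AUC:", roc_auc_score(y, probs))
```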
371
Huang L, Xia W, Zhang B, Qiu B, Gao X. MSFCN-multiple supervised fully convolutional networks for the osteosarcoma segmentation of CT images. Comput Methods Programs Biomed 2017;143:67-74. [PMID: 28391820] [DOI: 10.1016/j.cmpb.2017.02.013]
Abstract
BACKGROUND AND OBJECTIVE: Automatic osteosarcoma tumor segmentation on computed tomography (CT) images is a challenging problem, as tumors have large spatial and structural variability. In this study, an automatic tumor segmentation method based on a fully convolutional network with multiple supervised side-output layers (MSFCN) is presented.
METHODS: Image normalization is applied as a pre-processing step to decrease the differences among images. Within the fully convolutional network, supervised side-output layers were added at three depths to guide multi-scale feature learning as a contracting structure, capturing both local and global image features. Multiple feature channels are used in the up-sampling portion to capture more context information, ensuring accurate segmentation of tumors with low contrast against the surrounding soft tissue. The results of all side outputs are fused to determine the final tumor boundaries.
RESULTS: A quantitative comparison against 405 manual osteosarcoma segmentations from the CT images showed that the average Dice similarity coefficient (DSC), average sensitivity, average Hammoude distance (HM), and F1-measure were 87.80%, 86.88%, 19.81%, and 0.908, respectively. Compared with other learning-based algorithms (the fully convolutional network (FCN), the U-Net method, and the holistically-nested edge detection (HED) method), MSFCN had the best performance in terms of DSC, sensitivity, HM, and F1-measure.
CONCLUSION: The results indicate that the proposed algorithm contributes to fast and accurate delineation of tumor boundaries, which could potentially assist doctors in making more precise treatment plans.
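The side-output idea can be condensed into a few lines; the 2-D sketch below is our own stand-in (the paper's network is deeper and differently parameterized):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SideOutputFCN(nn.Module):
    """FCN with supervised side-output heads at three depths; side maps are
    upsampled to input size and fused for the final tumor boundary."""
    def __init__(self):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.b2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.b3 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.s1, self.s2, self.s3 = nn.Conv2d(16, 1, 1), nn.Conv2d(32, 1, 1), nn.Conv2d(64, 1, 1)
        self.fuse = nn.Conv2d(3, 1, 1)

    def forward(self, x):
        f1 = self.b1(x); f2 = self.b2(f1); f3 = self.b3(f2)
        size = x.shape[2:]
        sides = [F.interpolate(s(f), size=size, mode="bilinear", align_corners=False)
                 for s, f in zip((self.s1, self.s2, self.s3), (f1, f2, f3))]
        fused = self.fuse(torch.cat(sides, dim=1))
        return sides, fused  # each side output and the fusion are supervised
```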
Affiliation(s)
- Lin Huang: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; University of Science and Technology of China, Hefei, China
- Wei Xia: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Bo Zhang: Second Affiliated Hospital of Soochow University, Suzhou, China
- Bensheng Qiu: University of Science and Technology of China, Hefei, China
- Xin Gao: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
372
Mehta R, Majumdar A, Sivaswamy J. BrainSegNet: a convolutional neural network architecture for automated segmentation of human brain structures. J Med Imaging (Bellingham) 2017;4:024003. [PMID: 28439524] [PMCID: PMC5397775] [DOI: 10.1117/1.JMI.4.2.024003]
Abstract
Automated segmentation of cortical and noncortical human brain structures has hitherto been approached using nonrigid registration followed by label fusion. We propose an alternative approach using a convolutional neural network (CNN) that classifies each voxel into one of many structures. Four different kinds of two-dimensional and three-dimensional intensity patches are extracted for each voxel, providing local and global (context) information to the CNN. The proposed approach is evaluated on five publicly available datasets that differ in the number of labels per volume. The obtained mean Dice coefficient varied according to the number of labels, for example, it is [Formula: see text] and [Formula: see text] for the datasets with the fewest (32) and the most (134) labels, respectively. These figures are marginally better than or on par with those obtained with current state-of-the-art methods on nearly all datasets, at reduced computational time. The consistently good performance of the proposed method across datasets, and the fact that no registration is required, make it attractive for many applications where reduced computational time is necessary.
Affiliation(s)
- Raghav Mehta: Centre for Visual Information Technology (CVIT), International Institute of Information Technology - Hyderabad (IIIT-H), Hyderabad, India
- Aabhas Majumdar: Centre for Visual Information Technology (CVIT), International Institute of Information Technology - Hyderabad (IIIT-H), Hyderabad, India
- Jayanthi Sivaswamy: Centre for Visual Information Technology (CVIT), International Institute of Information Technology - Hyderabad (IIIT-H), Hyderabad, India
373
Ghafoorian M, Karssemeijer N, Heskes T, Bergkamp M, Wissink J, Obels J, Keizer K, de Leeuw FE, van Ginneken B, Marchiori E, Platel B. Deep multi-scale location-aware 3D convolutional neural networks for automated detection of lacunes of presumed vascular origin. Neuroimage Clin 2017;14:391-399. [PMID: 28271039] [PMCID: PMC5322213] [DOI: 10.1016/j.nicl.2017.01.033]
Abstract
Lacunes of presumed vascular origin (lacunes) are associated with an increased risk of stroke, gait impairment, and dementia, and are a primary imaging feature of small vessel disease. Quantification of lacunes may be of great importance for elucidating the mechanisms behind neurodegenerative disorders and is recommended as part of study standards for small vessel disease research. However, due to the varying appearance of lacunes across brain regions and the existence of other similar-looking structures, such as perivascular spaces, manual annotation is a difficult, laborious, and subjective task, which can potentially be greatly improved by reliable and consistent computer-aided detection (CAD) routines. In this paper, we propose an automated two-stage method using deep convolutional neural networks (CNNs). We show that this method performs well and can considerably benefit readers. We first use a fully convolutional neural network to detect initial candidates. In the second step, we employ a 3D CNN as a false-positive reduction tool. As location information is important for the analysis of candidate structures, we further equip the network with contextual information using multi-scale analysis and the integration of explicit location features. We trained, validated, and tested our networks on a large dataset of 1075 cases obtained from two different studies. Subsequently, we conducted an observer study with four trained observers and compared our method with them using free-response operating characteristic analysis. On a test set of 111 cases, the resulting CAD system exhibits performance similar to that of the trained human observers and achieves a sensitivity of 0.974 with 0.13 false positives per slice. A feasibility study also showed that a trained human observer would benefit considerably when aided by the CAD system.
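Stage one of such a pipeline, reduced to its bare bones (our own sketch; we assume prob_map is the output of the trained candidate-detection network):

```python
import numpy as np
from scipy import ndimage

def extract_candidates(prob_map, threshold=0.5):
    """Threshold a candidate probability map and return one centroid per
    connected component; stage two (not shown) re-classifies each candidate
    with a multi-scale, location-aware 3D CNN."""
    mask = prob_map > threshold
    labeled, n = ndimage.label(mask)  # connected-component analysis
    centroids = ndimage.center_of_mass(prob_map, labeled, list(range(1, n + 1)))
    return [tuple(map(int, c)) for c in centroids]
```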
Affiliation(s)
- Mohsen Ghafoorian: Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands; Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Nico Karssemeijer: Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Tom Heskes: Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
- Mayra Bergkamp: Donders Institute for Brain, Cognition and Behaviour, Department of Neurology, Radboud University Medical Center, Nijmegen, The Netherlands
- Joost Wissink: Donders Institute for Brain, Cognition and Behaviour, Department of Neurology, Radboud University Medical Center, Nijmegen, The Netherlands
- Jiri Obels: Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Karlijn Keizer: Donders Institute for Brain, Cognition and Behaviour, Department of Neurology, Radboud University Medical Center, Nijmegen, The Netherlands
- Frank-Erik de Leeuw: Donders Institute for Brain, Cognition and Behaviour, Department of Neurology, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken: Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Elena Marchiori: Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
- Bram Platel: Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
374
Yu L, Guo Y, Wang Y, Yu J, Chen P. Segmentation of Fetal Left Ventricle in Echocardiographic Sequences Based on Dynamic Convolutional Neural Networks. IEEE Trans Biomed Eng 2017;64:1886-1895. [PMID: 28113289] [DOI: 10.1109/TBME.2016.2628401]
Abstract
Segmentation of the fetal left ventricle (LV) in echocardiographic sequences is important for further quantitative analysis of fetal cardiac function. However, gross image inhomogeneities and random fetal movements make the segmentation a challenging problem. In this paper, a dynamic convolutional neural network (CNN) based on multiscale information and fine-tuning is proposed for fetal LV segmentation. The CNN is pretrained with a large amount of labeled training data. During segmentation, the first frame of each echocardiographic sequence is delineated manually. The dynamic CNN is then fine-tuned, by deep tuning with the first frame and shallow tuning with the remaining frames, to adapt to the individual fetus. Additionally, to separate the connection region between the LV and the left atrium (LA), a matching approach consisting of block matching and line matching is used to track the mitral valve (MV) base points. The proposed method is compared with an active contour model (ACM), a dynamical appearance model (DAM), and a fixed multiscale CNN method. Experimental results on 51 echocardiographic sequences show that the segmentation results agree well with the ground truth, especially in cases with leakage, blurry boundaries, and subject-to-subject variations. The CNN architecture can be simple, and the dynamic fine-tuning is efficient.
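The deep-versus-shallow tuning split described above can be expressed as a simple freeze/unfreeze policy: all layers adapt on the manually delineated first frame, and only the last layers adapt on subsequent frames. The helper below is a minimal PyTorch sketch under our own assumptions, not the authors' code; set_tuning_mode and the shallow_depth split point are hypothetical.

# Hypothetical sketch of dynamic fine-tuning: "deep" tuning updates every
# layer, "shallow" tuning updates only the last `shallow_depth` layers.
import torch.nn as nn

def set_tuning_mode(model: nn.Sequential, mode: str, shallow_depth: int = 2):
    layers = list(model.children())
    for i, layer in enumerate(layers):
        if mode == "deep":
            trainable = True                  # first frame: adapt all layers
        else:                                 # "shallow": only the last layers
            trainable = i >= len(layers) - shallow_depth
        for p in layer.parameters():
            p.requires_grad = trainable

# Usage on a sequence:
#   set_tuning_mode(cnn, "deep")     # fine-tune on the delineated first frame
#   set_tuning_mode(cnn, "shallow")  # cheap per-frame tuning for the rest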
Collapse
|
375
|
Whole Brain Segmentation and Labeling from CT Using Synthetic MR Images. MACHINE LEARNING IN MEDICAL IMAGING 2017. [DOI: 10.1007/978-3-319-67389-9_34] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
376
|
Adversarial Training and Dilated Convolutions for Brain MRI Segmentation. DEEP LEARNING IN MEDICAL IMAGE ANALYSIS AND MULTIMODAL LEARNING FOR CLINICAL DECISION SUPPORT 2017. [DOI: 10.1007/978-3-319-67558-9_7] [Citation(s) in RCA: 65] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
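For orientation, adversarial training for segmentation, as named in this entry, typically pairs the segmenter with a discriminator that tries to tell predicted label maps from ground-truth ones, so the segmenter is pushed toward globally plausible outputs. The following is a speculative PyTorch sketch of that general recipe under our own assumptions, not the authors' method; the function name, the loss weight lam, and all network interfaces are hypothetical.

# Speculative sketch: one adversarial training step for a segmenter S and
# a discriminator D that sees (image, label map) pairs.
import torch
import torch.nn.functional as F

def adversarial_seg_step(S, D, opt_S, opt_D, image, gt_onehot, lam=0.1):
    logits = S(image)
    pred = torch.softmax(logits, dim=1)       # (B, C, H, W) soft label map
    # Discriminator step: real = ground truth, fake = detached prediction.
    d_real = D(torch.cat([image, gt_onehot], dim=1))
    d_fake = D(torch.cat([image, pred.detach()], dim=1))
    loss_D = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
    # Segmenter step: voxel-wise loss plus fooling the (frozen-in-effect)
    # discriminator; only opt_S steps, so D's gradients here are discarded.
    d_fake = D(torch.cat([image, pred], dim=1))
    loss_S = (F.cross_entropy(logits, gt_onehot.argmax(dim=1))
              + lam * F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)))
    opt_S.zero_grad(); loss_S.backward(); opt_S.step()
    return loss_S.item(), loss_D.item()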
|
377
|
Wolterink JM, Leiner T, Viergever MA, Išgum I. Dilated Convolutional Neural Networks for Cardiovascular MR Segmentation in Congenital Heart Disease. RECONSTRUCTION, SEGMENTATION, AND ANALYSIS OF MEDICAL IMAGES 2017. [DOI: 10.1007/978-3-319-52280-7_9] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
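For readers unfamiliar with the dilated convolutions used in this and the previous entry: stacking 3x3 convolutions with exponentially increasing dilation grows the receptive field exponentially with depth while preserving full resolution, avoiding the pooling and upsampling of encoder-decoder designs. The snippet below is an illustrative PyTorch sketch, not the authors' network; all channel counts and the class count are arbitrary.

# Illustrative dilated-convolution stack for dense per-pixel prediction.
import torch
import torch.nn as nn

layers, in_ch = [], 1
for d in [1, 2, 4, 8, 16]:                  # receptive field: 3, 7, 15, 31, 63
    layers += [nn.Conv2d(in_ch, 32, kernel_size=3, dilation=d, padding=d),
               nn.ReLU()]
    in_ch = 32
layers.append(nn.Conv2d(32, 4, kernel_size=1))   # 4 illustrative tissue classes
net = nn.Sequential(*layers)

x = torch.randn(1, 1, 128, 128)             # one toy-sized 2D MR slice
print(net(x).shape)                         # (1, 4, 128, 128): full resolution kept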
|
378
|
Tversky Loss Function for Image Segmentation Using 3D Fully Convolutional Deep Networks. MACHINE LEARNING IN MEDICAL IMAGING 2017. [DOI: 10.1007/978-3-319-67389-9_44] [Citation(s) in RCA: 233] [Impact Index Per Article: 29.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
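The Tversky loss named in this entry generalizes the Dice loss by weighting false positives and false negatives separately, TI = TP / (TP + alpha * FP + beta * FN); alpha = beta = 0.5 recovers Dice, and weighting false negatives more heavily trades precision for recall, which matters for small structures. Below is a minimal soft (probabilistic) PyTorch sketch; the default weights are illustrative, not the paper's chosen values.

# Minimal soft Tversky loss for (B, C, ...) probability / one-hot tensors.
import torch

def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-6):
    # alpha weights false positives, beta weights false negatives;
    # alpha = beta = 0.5 recovers the familiar soft Dice loss.
    dims = tuple(range(2, pred.dim()))       # sum over spatial dimensions
    tp = (pred * target).sum(dims)
    fp = (pred * (1 - target)).sum(dims)
    fn = ((1 - pred) * target).sum(dims)
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1 - tversky.mean()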
|
379
|
Integrating Online and Offline Three-Dimensional Deep Learning for Automated Polyp Detection in Colonoscopy Videos. IEEE J Biomed Health Inform 2016; 21:65-75. [PMID: 28114049 DOI: 10.1109/jbhi.2016.2637004] [Citation(s) in RCA: 105] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way to support colorectal cancer prevention and diagnosis. Traditional manual screening is time consuming, operator dependent, and error prone; hence, an automated detection approach is in high demand in clinical practice. However, automated polyp detection is very challenging due to high intraclass variations in polyp size, color, shape, and texture, and low interclass variations between polyps and hard mimics. In this paper, we propose a novel offline and online three-dimensional (3-D) deep learning integration framework that leverages a 3-D fully convolutional network (3D-FCN) to tackle this challenging problem. Compared with previous methods employing hand-crafted features or a 2-D convolutional neural network, the 3D-FCN is capable of learning more representative spatio-temporal features from colonoscopy videos and hence has more powerful discrimination capability. More importantly, we propose a novel online learning scheme to deal with the problem of limited training data by harnessing the specific information of an input video in the learning process. We integrate offline and online learning to effectively reduce the number of false positives generated by the offline network and further improve detection performance. Extensive experiments on the dataset of the MICCAI 2015 Challenge on Polyp Detection demonstrated the superior performance of our method compared with competing approaches.
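One way to read the offline/online integration above is as follows: an offline-trained network scores each clip, a per-video copy is fine-tuned on the offline network's most confident predictions, and the two scores are fused. This is a speculative sketch of that idea in PyTorch-style code, not the paper's algorithm; detect_with_online_learning and the confidence thresholds are hypothetical.

# Speculative sketch: fusing an offline 3D network with a per-video
# online copy fine-tuned on the offline network's confident outputs.
import copy
import torch

def detect_with_online_learning(offline_net, clips, optimizer_fn,
                                hi=0.9, lo=0.1):
    online_net = copy.deepcopy(offline_net)     # start from offline weights
    opt = optimizer_fn(online_net.parameters())
    detections = []
    for clip in clips:                          # short 3D clip from this video
        with torch.no_grad():
            p_off = torch.sigmoid(offline_net(clip))
        # Confident offline predictions become online training samples.
        if p_off.max() > hi or p_off.max() < lo:
            target = (p_off > 0.5).float()
            loss = torch.nn.functional.binary_cross_entropy(
                torch.sigmoid(online_net(clip)), target)
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            p_on = torch.sigmoid(online_net(clip))
        detections.append(0.5 * (p_off + p_on)) # fuse the two scores
    return detections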
Collapse
|
380
|
Choi H, Jin KH. Fast and robust segmentation of the striatum using deep convolutional neural networks. J Neurosci Methods 2016; 274:146-153. [DOI: 10.1016/j.jneumeth.2016.10.007] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2016] [Revised: 09/26/2016] [Accepted: 10/10/2016] [Indexed: 10/20/2022]
|
381
|
Lekadir K, Galimzianova A, Betriu A, Del Mar Vila M, Igual L, Rubin DL, Fernandez E, Radeva P, Napel S. A Convolutional Neural Network for Automatic Characterization of Plaque Composition in Carotid Ultrasound. IEEE J Biomed Health Inform 2016; 21:48-55. [PMID: 27893402 DOI: 10.1109/jbhi.2016.2631401] [Citation(s) in RCA: 99] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Characterization of carotid plaque composition, more specifically the amounts of lipid core, fibrous tissue, and calcified tissue, is an important task for the identification of plaques that are prone to rupture, and thus for early risk estimation of cardiovascular and cerebrovascular events. Due to its low cost and wide availability, carotid ultrasound has the potential to become the modality of choice for plaque characterization in clinical practice. However, its significant image noise, coupled with the small size of the plaques and their complex appearance, makes it difficult for automated techniques to discriminate between the different plaque constituents. In this paper, we propose to address this challenging problem by exploiting the unique capabilities of the emerging deep learning framework. More specifically, and unlike existing works, which require the a priori definition of specific imaging features or thresholding values, we propose to build a convolutional neural network (CNN) that automatically extracts from the images the information that is optimal for identifying the different plaque constituents. We used approximately 90 000 patches extracted from a database of images and corresponding expert plaque characterizations to train and validate the proposed CNN. The results of cross-validation experiments show a correlation of about 0.90 with the clinical assessment for the estimation of lipid core, fibrous cap, and calcified tissue areas, indicating the potential of deep learning for the challenging task of automatic characterization of plaque composition in carotid ultrasound.
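The patch-based formulation described above can be sketched as follows: small grayscale patches centered on pixels inside the plaque are classified into the three constituents, and per-class pixel counts then give area estimates. A minimal PyTorch sketch under our own assumptions; PlaquePatchCNN, the 17-pixel patch size, and the layer sizes are hypothetical, not the authors' architecture.

# Hypothetical patch classifier: one small ultrasound patch in, one of
# three plaque-constituent classes out.
import torch
import torch.nn as nn

class PlaquePatchCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),       # lipid, fibrous, calcified
        )

    def forward(self, x):
        return self.net(x)

# Constituent areas follow from counting pixels assigned to each class.
model = PlaquePatchCNN()
patches = torch.randn(64, 1, 17, 17)        # 64 patches from one plaque
probs = torch.softmax(model(patches), dim=1)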
Collapse
|
382
|
Deep Learning for Multi-task Medical Image Segmentation in Multiple Modalities. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2016 2016. [DOI: 10.1007/978-3-319-46723-8_55] [Citation(s) in RCA: 128] [Impact Index Per Article: 14.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
|