301. Fu Y, Mazur TR, Wu X, Liu S, Chang X, Lu Y, Li HH, Kim H, Roach MC, Henke L, Yang D. A novel MRI segmentation method using CNN-based correction network for MRI-guided adaptive radiotherapy. Med Phys 2018; 45:5129-5137. [PMID: 30269345] [DOI: 10.1002/mp.13221]
Abstract
PURPOSE To expedite the contouring process for MRI-guided adaptive radiotherapy (MR-IGART), a convolutional neural network (CNN) deep-learning (DL) model is proposed to accurately segment the liver, kidneys, stomach, bowel, and duodenum in 3D MR images. METHODS Images and structure contours for 120 patients were collected retrospectively. Treatment sites included pancreas, liver, stomach, adrenal gland, and prostate. The proposed DL model contains a voxel-wise label prediction CNN and a correction network consisting of two sub-networks. The prediction CNN and each sub-network of the correction network include a dense block of twelve densely connected convolutional layers. The correction network was designed to improve the voxel-wise labeling accuracy of the CNN by learning and enforcing implicit anatomical constraints in the segmentation process. Each sub-network learns to fix the erroneous classifications of the preceding network by taking as input both the original images and the softmax probability maps generated by that network. The parameters of each sub-network were trained independently using piecewise training. The model was trained on 100 datasets, validated on 10 datasets, and tested on the remaining 10 datasets. Dice coefficient and Hausdorff distance (HD) were calculated to evaluate the segmentation accuracy. RESULTS The proposed DL model segmented the organs with good accuracy. The correction network outperformed the conditional random field (CRF), the most comparable method, which is usually applied as a post-processing step. For the 10 test patients, the average Dice coefficients were 95.3 ± 0.73, 93.1 ± 2.22, 85.0 ± 3.75, 86.6 ± 2.69, and 65.5 ± 8.90 for liver, kidneys, stomach, bowel, and duodenum, respectively. The mean HDs were 5.41 ± 2.34, 6.23 ± 4.59, 6.88 ± 4.89, 5.90 ± 4.05, and 7.99 ± 6.84 mm, respectively. Manually correcting the automatic segmentation results was four times as fast as manual contouring from scratch. CONCLUSION The proposed method can automatically segment the liver, kidneys, stomach, bowel, and duodenum in 3D MR images with good accuracy, and is useful for expediting manual contouring in MR-IGART.
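The cascade interface described above, in which each correction sub-network receives the original image together with the softmax probability maps of the preceding stage, can be sketched in a few lines. The following is a minimal PyTorch illustration of that interface only; the toy blocks stand in for the paper's twelve-layer dense blocks, and all names and sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TinyDenseBlock(nn.Module):
    """Stand-in for the paper's 12-layer dense block (toy: 2 conv layers)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, out_ch, kernel_size=3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class CorrectionCascade(nn.Module):
    """Prediction CNN followed by correction sub-networks; each stage sees
    the original image concatenated with the previous stage's softmax maps."""
    def __init__(self, n_classes=6, n_correction_stages=2):
        super().__init__()
        self.predict = TinyDenseBlock(1, n_classes)
        self.corrections = nn.ModuleList(
            TinyDenseBlock(1 + n_classes, n_classes)
            for _ in range(n_correction_stages))

    def forward(self, image):
        probs = torch.softmax(self.predict(image), dim=1)
        for stage in self.corrections:
            # input: original image + previous stage's class probabilities
            logits = stage(torch.cat([image, probs], dim=1))
            probs = torch.softmax(logits, dim=1)
        return probs

model = CorrectionCascade()            # 6 classes: 5 organs + background
x = torch.randn(1, 1, 16, 32, 32)      # (batch, channel, D, H, W) toy MR volume
print(model(x).shape)                  # torch.Size([1, 6, 16, 32, 32])
```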
Affiliation(s)
- Yabo Fu: Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, St. Louis, MO 63110, USA
- Thomas R Mazur: Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, St. Louis, MO 63110, USA
- Xue Wu: Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, St. Louis, MO 63110, USA
- Shi Liu: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Xiao Chang: Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, St. Louis, MO 63110, USA
- Yonggang Lu: Department of Radiology, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- H Harold Li: Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, St. Louis, MO 63110, USA
- Hyun Kim: Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, St. Louis, MO 63110, USA
- Michael C Roach: Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, St. Louis, MO 63110, USA
- Lauren Henke: Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, St. Louis, MO 63110, USA
- Deshan Yang: Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, St. Louis, MO 63110, USA
302. Khagi B, Kwon GR. Pixel-Label-Based Segmentation of Cross-Sectional Brain MRI Using Simplified SegNet Architecture-Based CNN. J Healthc Eng 2018; 2018:3640705. [PMID: 30510671] [PMCID: PMC6230419] [DOI: 10.1155/2018/3640705]
Abstract
The proposed approach uses a deep neural network to segment an MRI image of heterogeneously distributed pixels into specific classes, assigning a label to each pixel. The segmentation is applied to a preprocessed MRI image, and the trained network can then be used on other test images. Because labels are considered expensive assets in supervised training, fewer training images and training labels are used to obtain optimal accuracy. To validate the performance of the proposed approach, an experiment was conducted on test images (available in the same database) that were not part of the training; the results are of good visual quality in terms of segmentation and closely resemble the ground-truth images. The average Dice similarity index for the test images is approximately 0.8 and the Jaccard similarity measure approximately 0.6, which compares favorably with other methods. This implies that the proposed method can be used to obtain segmentations that are nearly as good as the manually segmented ground truth.
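The two overlap metrics reported above are straightforward to compute. A minimal NumPy sketch on hypothetical binary masks follows; it also checks the fixed algebraic relation Dice = 2J/(1+J), which is why a Dice of about 0.8 pairs with a Jaccard of roughly 0.6-0.67.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity index: 2|A n B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def jaccard(pred, truth):
    """Jaccard index: |A n B| / |A u B|."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

# toy example with random binary masks
rng = np.random.default_rng(0)
pred = rng.random((128, 128)) > 0.5
truth = rng.random((128, 128)) > 0.5
d, j = dice(pred, truth), jaccard(pred, truth)
print(f"Dice={d:.3f}, Jaccard={j:.3f}")
assert abs(d - 2 * j / (1 + j)) < 1e-9   # D = 2J/(1+J) always holds
```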
Affiliation(s)
- Bijen Khagi: Department of Information and Communication Engineering, Chosun University, 375 Seosuk-Dong, Dong-Gu, Gwangju 501-759, Republic of Korea
- Goo-Rak Kwon: Department of Information and Communication Engineering, Chosun University, 375 Seosuk-Dong, Dong-Gu, Gwangju 501-759, Republic of Korea
303. Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp 2018; 2:35. [PMID: 30353365] [PMCID: PMC6199205] [DOI: 10.1186/s41747-018-0061-6]
Abstract
One of the most promising areas of health innovation is the application of artificial intelligence (AI), primarily in medical imaging. This article provides basic definitions of terms such as “machine/deep learning” and analyses the integration of AI into radiology. Publications on AI have increased drastically, from about 100–150 per year in 2007–2008 to 700–800 per year in 2016–2017. Magnetic resonance imaging and computed tomography collectively account for more than 50% of current articles. Neuroradiology appears in about one-third of the papers, followed by musculoskeletal, cardiovascular, breast, urogenital, lung/thorax, and abdomen, each representing 6–9% of articles. With an irreversible increase in the amount of data and the possibility of using AI to identify findings either detectable or not by the human eye, radiology is now moving from a subjective perceptual skill to a more objective science. Radiologists, who were at the forefront of the digital era in medicine, can guide the introduction of AI into healthcare. Yet they will not be replaced, because radiology includes communication of diagnosis, consideration of patients’ values and preferences, medical judgment, quality assurance, education, policy-making, and interventional procedures. The higher efficiency provided by AI will allow radiologists to perform more value-added tasks, becoming more visible to patients and playing a vital role in multidisciplinary clinical teams.
Affiliation(s)
- Filippo Pesapane: Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Marina Codari: Unit of Radiology, IRCCS Policlinico San Donato, Via Morandi 30, 20097 San Donato Milanese, Milan, Italy
- Francesco Sardanelli: Unit of Radiology, IRCCS Policlinico San Donato, Via Morandi 30, 20097 San Donato Milanese, Milan, Italy; Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Morandi 30, 20097 San Donato Milanese, Milan, Italy
304. Yang H, Sun J, Li H, Wang L, Xu Z. Neural multi-atlas label fusion: Application to cardiac MR images. Med Image Anal 2018; 49:60-75. [DOI: 10.1016/j.media.2018.07.009]
305. Mazo C, Bernal J, Trujillo M, Alegre E. Transfer learning for classification of cardiovascular tissues in histological images. Comput Methods Programs Biomed 2018; 165:69-76. [PMID: 30337082] [DOI: 10.1016/j.cmpb.2018.08.006]
Abstract
BACKGROUND AND OBJECTIVE Automatic classification of healthy tissues and organs from histology images is an open problem, mainly due to the lack of automated tools. Solutions in this regard have potential in educational medicine and medical practice. Some preliminary advances have been made using image processing techniques and classical supervised learning. Given the breakthrough performance of deep learning in various areas, we present an approach to automatically recognise and classify fundamental tissues and organs using Convolutional Neural Networks (CNNs). METHODS We adapt four popular CNN architectures (ResNet, VGG19, VGG16, and Inception) to this problem through transfer learning. The resulting models are evaluated in three stages. First, all the transferred networks are compared to each other. Second, the best fine-tuned model is compared to an ad hoc 2D multi-path model to outline the importance of transfer learning. Third, the same model is evaluated against the state-of-the-art method, a cascade SVM using LBP-based descriptors, to contrast a traditional machine learning approach with a representation learning one. The evaluation task consists of accurately separating six classes: smooth muscle of the elastic artery, smooth muscle of the large vein, smooth muscle of the muscular artery, cardiac muscle, loose connective tissue, and light regions. The networks are tuned on 6000 blocks of 100 × 100 pixels and tested on 7500 blocks. RESULTS Our proposal yields F-score values between 0.717 and 0.928. The highest and lowest performances are for cardiac muscle and smooth muscle of the large vein, respectively. The main issue limiting classification scores for the latter class is its similarity to the elastic artery; however, this confusion is evident during manual annotation as well. Our algorithm improves the F-score by between 0.080 and 0.220 compared to the state-of-the-art machine learning approach. CONCLUSIONS We conclude that it is possible to classify healthy cardiovascular tissues and organs automatically using CNNs and that deep learning holds great promise for improving tissue and organ classification. We have made our training and test sets, models, and source code publicly available to the research community.
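The transfer-learning recipe in METHODS (take an ImageNet-pretrained backbone, replace the classifier head, fine-tune on 100 × 100 tissue blocks) looks roughly like the sketch below in PyTorch/torchvision, using VGG16 as one of the four backbones the authors mention. This is not the authors' code: the frozen layers, optimizer, and learning rate are illustrative assumptions, and the weights call (torchvision ≥ 0.13) downloads ImageNet weights on first use.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6   # elastic artery, large vein, muscular artery,
                  # cardiac muscle, loose connective tissue, light regions

# Load an ImageNet-pretrained backbone and replace the classifier head.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False          # freeze convolutional features
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# one toy training step on a fake batch of 100x100 RGB tissue blocks
x = torch.randn(8, 3, 100, 100)
y = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```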
Affiliation(s)
- Claudia Mazo: University College Dublin, CeADAR: Centre for Applied Data Analytics Research, School of Computer Science, Dublin, Ireland
- Jose Bernal: Universitat de Girona, Institute of Computer Vision and Robotics, Girona, Spain
- Maria Trujillo: Universidad del Valle, Computer and Systems Engineering School, Cali, Colombia
- Enrique Alegre: Universidad de León, Industrial and Informatics Engineering School, León, Spain
306. Tong N, Gou S, Yang S, Ruan D, Sheng K. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks. Med Phys 2018; 45:4558-4567. [PMID: 30136285] [DOI: 10.1002/mp.13147]
Abstract
PURPOSE Intensity modulated radiation therapy (IMRT) is commonly employed for treating head and neck (H&N) cancer with uniform tumor dose and conformal critical organ sparing. Accurate delineation of organs-at-risk (OARs) on H&N CT images is thus essential to treatment quality. Manual contouring used in current clinical practice is tedious, time-consuming, and can produce inconsistent results. Existing automated segmentation methods are challenged by the substantial inter-patient anatomical variation and low CT soft tissue contrast. To overcome these challenges, we developed a novel automated H&N OARs segmentation method that combines a fully convolutional neural network (FCNN) with a shape representation model (SRM). METHODS Based on manually segmented H&N CT, the SRM and FCNN were trained in two steps: (a) the SRM learned the latent shape representation of H&N OARs from the training dataset; (b) the pre-trained SRM, with fixed parameters, was used to constrain the FCNN training. The combined segmentation network was then used to delineate nine OARs, including the brainstem, optic chiasm, mandible, optic nerves, parotids, and submandibular glands, on unseen H&N CT images. Twenty-two and ten H&N CT scans provided by the Public Domain Database for Computational Anatomy (PDDCA) were utilized for training and validation, respectively. Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average surface distance (ASD), and 95% maximum surface distance (95%SD) were calculated to quantitatively evaluate the segmentation accuracy of the proposed method. The proposed method was compared with an active appearance model that won the 2015 MICCAI H&N Segmentation Grand Challenge on the same dataset, and with an atlas-based method and a deep learning method evaluated on different patient datasets. RESULTS Average DSCs of 0.870 (brainstem), 0.583 (optic chiasm), 0.937 (mandible), 0.653 (left optic nerve), 0.689 (right optic nerve), 0.835 (left parotid), 0.832 (right parotid), 0.755 (left submandibular), and 0.813 (right submandibular) were achieved. The segmentation results are consistently superior to those of atlas-based and statistical-shape-based methods, as well as a patch-wise convolutional neural network method. Once the networks are trained offline, the average time to segment all nine OARs on an unseen CT scan is 9.5 s. CONCLUSION Experiments on clinical datasets of H&N patients demonstrated the effectiveness of the proposed deep neural network segmentation method for multi-organ segmentation on volumetric CT scans. The accuracy and robustness of the segmentation were further increased by incorporating shape priors using the SRM. The proposed method showed competitive performance and took less time to segment multiple organs than state-of-the-art methods.
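One plausible way to realize the two-step scheme above (pre-train the SRM, then freeze it and use it to constrain FCNN training) is to penalize predictions that the frozen shape model cannot reproduce as a plausible shape. The PyTorch sketch below is an assumption-laden toy, not the paper's exact loss, architecture, or coupling; all modules, sizes, and the 0.5 weighting are illustrative.

```python
import torch
import torch.nn as nn

class TinyFCNN(nn.Module):          # segmentation network (toy)
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_classes, 3, padding=1))
    def forward(self, x): return self.net(x)

class TinySRM(nn.Module):           # shape representation model (toy autoencoder)
    def __init__(self, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(n_classes, 8, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.ConvTranspose3d(8, n_classes, 4, stride=2, padding=1)
    def forward(self, x): return self.dec(self.enc(x))

n_classes = 10                      # 9 OARs + background
fcnn, srm = TinyFCNN(n_classes), TinySRM(n_classes)
# Step (a): pre-train the SRM on ground-truth label maps (omitted), then freeze it.
for p in srm.parameters():
    p.requires_grad = False

ce = nn.CrossEntropyLoss()
opt = torch.optim.Adam(fcnn.parameters(), lr=1e-3)

img = torch.randn(1, 1, 16, 32, 32)
gt = torch.randint(0, n_classes, (1, 16, 32, 32))
gt_onehot = nn.functional.one_hot(gt, n_classes).permute(0, 4, 1, 2, 3).float()

# Step (b): train the FCNN with the frozen SRM acting as a shape prior;
# predictions the SRM cannot map back to a plausible shape are penalized.
logits = fcnn(img)
probs = torch.softmax(logits, dim=1)
shape_term = nn.functional.mse_loss(srm(probs), gt_onehot)
loss = ce(logits, gt) + 0.5 * shape_term
loss.backward()
opt.step()
print(float(loss))
```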
Affiliation(s)
- Nuo Tong: Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi 710071, China; Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA 90095, USA
- Shuiping Gou: Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi 710071, China
- Shuyuan Yang: Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi 710071, China
- Dan Ruan: Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA 90095, USA
- Ke Sheng: Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA 90095, USA
307. Bernal J, Kushibar K, Asfaw DS, Valverde S, Oliver A, Martí R, Lladó X. Deep convolutional neural networks for brain image analysis on magnetic resonance imaging: a review. Artif Intell Med 2018; 95:64-81. [PMID: 30195984] [DOI: 10.1016/j.artmed.2018.08.008]
Abstract
In recent years, deep convolutional neural networks (CNNs) have shown record-shattering performance in a variety of computer vision problems, such as visual object recognition, detection, and segmentation. These methods have also been utilised in the medical image analysis domain for lesion segmentation, anatomical segmentation, and classification. We present an extensive literature review of CNN techniques applied to brain magnetic resonance imaging (MRI) analysis, focusing on the architectures, pre-processing, data-preparation, and post-processing strategies used in these works. The aim of this study is three-fold. Our primary goal is to report how different CNN architectures have evolved, discuss state-of-the-art strategies, condense their results obtained on public datasets, and examine their pros and cons. Second, this paper is intended to be a detailed reference for research activity in deep CNNs for brain MRI analysis. Finally, we present a perspective on the future of CNNs, in which we hint at some of the research directions for subsequent years.
Affiliation(s)
- Jose Bernal: Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain
- Kaisar Kushibar: Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain
- Daniel S Asfaw: Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain
- Sergi Valverde: Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain
- Arnau Oliver: Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain
- Robert Martí: Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain
- Xavier Lladó: Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain
308. A systematic review of structural MRI biomarkers in autism spectrum disorder: A machine learning perspective. Int J Dev Neurosci 2018; 71:68-82. [DOI: 10.1016/j.ijdevneu.2018.08.010]
309. Sanroma G, Benkarim OM, Piella G, Lekadir K, Hahner N, Eixarch E, González Ballester MA. Learning to combine complementary segmentation methods for fetal and 6-month infant brain MRI segmentation. Comput Med Imaging Graph 2018; 69:52-59. [PMID: 30176518] [DOI: 10.1016/j.compmedimag.2018.08.007]
Abstract
Segmentation of brain structures during the pre-natal and early post-natal periods is the first step for subsequent analysis of brain development. Segmentation techniques can be roughly divided into two families. The first, which we denote registration-based techniques, relies on initial estimates derived by registration to one (or several) templates. The second family, denoted learning-based techniques, relates imaging (and spatial) features to their corresponding anatomical labels. Each approach has its own qualities, and the two are complementary. In this paper, we explore two ensembling strategies, namely stacking and cascading, to combine the strengths of both families. We present experiments on segmentation of 6-month infant brains and of a cohort of fetuses with isolated non-severe ventriculomegaly (INSVM). INSVM is diagnosed when the ventricles are mildly enlarged and no other anomalies are apparent; prognosis is difficult based solely on the degree of ventricular enlargement. In order to find markers for a more reliable prognosis, we use the resulting segmentations to find abnormalities in the cortical folding of INSVM fetuses. Segmentation results show that either combination strategy outperforms all of the individual methods, demonstrating that it is possible to learn systematic combinations that lead to an overall improvement. In particular, the cascading strategy outperforms stacking, obtaining the top 5, 7, and 13 results (out of 21 teams) for the segmentation of white matter, gray matter, and cerebrospinal fluid in the iSeg2017 MICCAI Segmentation Challenge. The resulting segmentations reveal that INSVM fetuses have a less convoluted cortex. This points to cortical folding abnormalities as potential markers of later neurodevelopmental outcomes.
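The stacking idea (a meta-learner weighs the outputs of complementary base segmenters) can be illustrated on synthetic per-voxel probabilities; everything below is simulated, and in practice the meta-classifier would be fit on held-out data rather than in-sample. Cascading would instead feed the base outputs (plus the image) into a second-level segmenter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 5000
truth = rng.integers(0, 2, n_voxels)          # toy binary tissue label per voxel

# Simulated foreground probabilities from two complementary base methods:
p_registration = np.clip(truth + rng.normal(0, 0.45, n_voxels), 0, 1)
p_learning = np.clip(truth + rng.normal(0, 0.40, n_voxels), 0, 1)

# Stacking: a meta-classifier learns how to weigh the base methods' outputs.
X = np.column_stack([p_registration, p_learning])
meta = LogisticRegression().fit(X, truth)
fused = meta.predict(X)

for name, pred in [("registration", p_registration > 0.5),
                   ("learning", p_learning > 0.5),
                   ("stacked", fused.astype(bool))]:
    acc = (pred == truth.astype(bool)).mean()
    print(f"{name:>12}: accuracy {acc:.3f}")
```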
Affiliation(s)
- Gerard Sanroma: Universitat Pompeu Fabra, Dept. of Information and Communication Technologies, Tànger 122-140, 08018 Barcelona, Spain
- Oualid M Benkarim: Universitat Pompeu Fabra, Dept. of Information and Communication Technologies, Tànger 122-140, 08018 Barcelona, Spain
- Gemma Piella: Universitat Pompeu Fabra, Dept. of Information and Communication Technologies, Tànger 122-140, 08018 Barcelona, Spain
- Karim Lekadir: Universitat Pompeu Fabra, Dept. of Information and Communication Technologies, Tànger 122-140, 08018 Barcelona, Spain
- Nadine Hahner: Fetal i+D Fetal Medicine Research Center, BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), IDIBAPS, University of Barcelona, Spain
- Elisenda Eixarch: Fetal i+D Fetal Medicine Research Center, BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), IDIBAPS, University of Barcelona, Spain
- Miguel A González Ballester: Universitat Pompeu Fabra, Dept. of Information and Communication Technologies, Tànger 122-140, 08018 Barcelona, Spain; ICREA, Pg. Lluis Companys 23, 08010 Barcelona, Spain
310. Deep Learning and Medical Diagnosis: A Review of Literature. Multimodal Technologies and Interaction 2018. [DOI: 10.3390/mti2030047]
Abstract
In this review, the application of deep learning to medical diagnosis is addressed, based on a thorough analysis of scientific articles applying deep neural networks in the medical field. More than 300 research articles were obtained, and after several selection steps, 46 were examined in more detail. The results indicate that convolutional neural networks (CNNs) are the most widely represented architecture in deep learning for medical image analysis. Furthermore, the findings show that the application of deep learning technology is widespread, with most applications focused on bioinformatics, medical diagnosis, and similar fields.
311. Jurtz VI, Johansen AR, Nielsen M, Almagro Armenteros JJ, Nielsen H, Sønderby CK, Winther O, Sønderby SK. An introduction to deep learning on biological sequence data: examples and solutions. Bioinformatics 2018; 33:3685-3690. [PMID: 28961695] [DOI: 10.1093/bioinformatics/btx531]
Abstract
Motivation Deep neural network architectures such as convolutional and long short-term memory (LSTM) networks have become increasingly popular machine learning tools in recent years. The availability of greater computational resources, more data, new algorithms for training deep models, and easy-to-use libraries for implementing and training neural networks are the drivers of this development. The use of deep learning has been especially successful in image recognition, and the development of tools, applications, and code examples has mostly centered on this field rather than on biology. Results Here, we aim to further the development of deep learning methods within biology by providing application examples and ready-to-apply, adaptable code templates. Using these examples, we illustrate how architectures consisting of convolutional and LSTM neural networks can be designed and trained relatively easily to state-of-the-art performance on three biological sequence problems: prediction of subcellular localization, protein secondary structure, and the binding of peptides to MHC Class II molecules. Availability and implementation All implementations and datasets are available online to the scientific community at https://github.com/vanessajurtz/lasagne4bio. Contact skaaesonderby@gmail.com. Supplementary information Supplementary data are available at Bioinformatics online.
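A minimal PyTorch sketch of the convolution-plus-LSTM family the paper demonstrates follows (their published templates at the URL above are Lasagne/Theano); the sequence length, channel counts, and number of classes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConvLSTMClassifier(nn.Module):
    """Convolution over one-hot residues, then a bidirectional LSTM, then a
    dense classifier: the architecture family applied to subcellular
    localization, secondary structure, and MHC-II binding prediction."""
    def __init__(self, n_residues=20, n_classes=10, conv_ch=32, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_residues, conv_ch, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):              # x: (batch, length, n_residues) one-hot
        h = torch.relu(self.conv(x.transpose(1, 2)))   # -> (batch, conv_ch, length)
        out, _ = self.lstm(h.transpose(1, 2))          # -> (batch, length, 2*hidden)
        return self.fc(out[:, -1, :])                  # last position -> logits

# toy batch: 4 protein sequences of length 100, 20 amino-acid one-hot channels
seqs = torch.nn.functional.one_hot(torch.randint(0, 20, (4, 100)), 20).float()
model = ConvLSTMClassifier()
print(model(seqs).shape)               # torch.Size([4, 10])
```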
Affiliation(s)
- Morten Nielsen: Department of Bio and Health Informatics; Instituto de Investigaciones Biotecnológicas, Universidad Nacional de San Martín, Buenos Aires, Argentina
- Ole Winther: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark; Department of Biology, University of Copenhagen, Copenhagen, Denmark
312. Ma Z, Wu X, Song Q, Luo Y, Wang Y, Zhou J. Automated nasopharyngeal carcinoma segmentation in magnetic resonance images by combination of convolutional neural networks and graph cut. Exp Ther Med 2018; 16:2511-2521. [PMID: 30210602] [PMCID: PMC6122541] [DOI: 10.3892/etm.2018.6478]
Abstract
Accurate and reliable segmentation of nasopharyngeal carcinoma (NPC) in medical images is an important task for clinical applications, including radiotherapy. However, NPC exhibits large variations in lesion size and shape, inhomogeneous intensities within the tumor, and intensities similar to those of nearby tissues, making its segmentation challenging. The present study proposes a novel automated NPC segmentation method for magnetic resonance (MR) images that combines a deep convolutional neural network (CNN) model and a 3-dimensional (3D) graph cut-based method in a two-stage manner. First, a multi-view deep CNN-based segmentation is performed: a voxel-wise initial segmentation is generated by integrating the inferred classifications of three trained single-view CNNs. Instead of using the CNN classification results directly as the final segmentation, the proposed method refines the initial segmentation with a 3D graph cut-based method. Specifically, the probability response map obtained with the multi-view CNN is used to calculate the region cost, which represents the likelihood of a voxel being assigned to the tumor or non-tumor class. Structural information in 3D from the original MR images is used to calculate the boundary cost, which measures the intensity difference between neighboring voxels in 3D. The proposed method was evaluated on T1-weighted images of 30 NPC patients using leave-one-out cross-validation. The experimental results demonstrate that the proposed method is effective and accurate for NPC segmentation.
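The two cost terms follow directly from the abstract: a unary (region) cost from the CNN probability map and a pairwise (boundary) cost from intensity differences between 3D neighbors. Below is a NumPy sketch with toy data and an assumed Gaussian boundary weighting (sigma is illustrative); in practice the costs would be passed to a max-flow/min-cut solver (e.g., the PyMaxflow library) to obtain the refined segmentation.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 16, 16)).astype(np.float32)    # toy 3D MR volume
prob = rng.random((8, 16, 16)).astype(np.float32)   # CNN tumor probability map

eps, sigma = 1e-6, 0.1

# Region (unary) cost: negative log-likelihood from the multi-view CNN output.
cost_tumor = -np.log(prob + eps)
cost_background = -np.log(1.0 - prob + eps)

# Boundary (pairwise) cost for one 3D neighbor direction (here: +z);
# the weight is large when neighboring intensities are similar, so the
# minimum cut prefers to pass along intensity edges.
diff = img[1:, :, :] - img[:-1, :, :]
w_z = np.exp(-diff ** 2 / (2.0 * sigma ** 2))

print(cost_tumor.shape, cost_background.shape, w_z.shape)
```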
Affiliation(s)
- Zongqing Ma: College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, P.R. China
- Xi Wu: School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan 610225, P.R. China
- Qi Song: CuraCloud Corp., Seattle, WA 98104, USA
- Yong Luo: Department of Head and Neck and Mammary Oncology, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, P.R. China
- Yan Wang: College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, P.R. China
- Jiliu Zhou: College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, P.R. China; School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan 610225, P.R. China
313. Wu J, Mazur TR, Ruan S, Lian C, Daniel N, Lashmett H, Ochoa L, Zoberi I, Anastasio MA, Gach HM, Mutic S, Thomas M, Li H. A deep Boltzmann machine-driven level set method for heart motion tracking using cine MRI images. Med Image Anal 2018; 47:68-80. [PMID: 29679848] [PMCID: PMC6501847] [DOI: 10.1016/j.media.2018.03.015]
Abstract
Heart motion tracking for radiation therapy treatment planning can enable effective motion management strategies to minimize radiation-induced cardiotoxicity. However, automatic heart motion tracking is challenging due to factors that include the complex spatial relationship between the heart and its neighboring structures, dynamic changes in heart shape, and limited image contrast, resolution, and volume coverage. In this study, we developed and evaluated a deep generative shape model-driven level set method to address these challenges. The proposed heart motion tracking method makes use of a heart shape model that characterizes the statistical variations in heart shapes present in a training data set. This shape model was established by training a three-layered deep Boltzmann machine (DBM) to characterize both local and global heart shape variations. During the tracking phase, a distance-regularized level-set evolution (DRLSE) method was applied to delineate the heart contour on each frame of a cine MRI image sequence. The trained shape model was embedded into the DRLSE method as a shape prior term to constrain the evolving contour to reach the desired heart boundary. Frame-by-frame heart motion tracking was achieved by iteratively mapping the heart contour obtained for one frame to the next frame as a reliable initialization, and then performing a level-set evolution. The performance of the proposed motion tracking method was demonstrated using thirty-eight coronal cine MRI image sequences.
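The frame-by-frame tracking loop (segment frame t, reuse the resulting contour to initialize frame t+1) can be illustrated with an off-the-shelf active contour standing in for the paper's DBM-constrained DRLSE. The sketch below uses scikit-image's morphological Chan-Vese (scikit-image ≥ 0.17) on a synthetic drifting-disk sequence; it demonstrates only the propagation strategy, not the authors' shape model or energy.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese, disk_level_set

rng = np.random.default_rng(0)

# toy cine sequence: a bright disk drifting to the right over 5 frames
frames = []
yy, xx = np.mgrid[0:64, 0:64]
for t in range(5):
    frame = 0.2 * rng.random((64, 64))
    frame[(yy - 32) ** 2 + (xx - (20 + 3 * t)) ** 2 < 100] += 1.0
    frames.append(frame)

# frame-by-frame tracking: the previous contour initializes the next evolution
mask = disk_level_set((64, 64), center=(32, 20), radius=12)
for t, frame in enumerate(frames):
    mask = morphological_chan_vese(frame, 30, init_level_set=mask)
    print(f"frame {t}: segmented area = {mask.sum()} px")
```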
Affiliation(s)
- Jian Wu: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Thomas R Mazur: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Su Ruan: Laboratoire LITIS (EA 4108), Equipe Quantif, University of Rouen, Rouen 76183, France
- Chunfeng Lian: Laboratoire LITIS (EA 4108), Equipe Quantif, University of Rouen, Rouen 76183, France
- Nalini Daniel: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Hilary Lashmett: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Laura Ochoa: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Imran Zoberi: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Mark A Anastasio: Department of Biomedical Engineering, Washington University, St. Louis, MO 63110, USA
- H Michael Gach: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Sasa Mutic: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Maria Thomas: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
- Hua Li: Department of Radiation Oncology, Washington University, St. Louis, MO 63110, USA
314. Chen J, Zhang H, Zhang W, Du X, Zhang Y, Li S. Correlated Regression Feature Learning for Automated Right Ventricle Segmentation. IEEE J Transl Eng Health Med 2018; 6:1800610. [PMID: 30057864] [PMCID: PMC6061487] [DOI: 10.1109/jtehm.2018.2804947]
Abstract
Accurate segmentation of the right ventricle (RV) from cardiac magnetic resonance (MR) images can help clinicians robustly quantify clinical indices such as the ejection fraction. In this paper, we develop a regression convolutional neural network (RegressionCNN) that combines a holistic regression model with a convolutional neural network (CNN) to determine the coordinates of the RV boundary points directly and simultaneously. In our approach, the fully connected layers of the CNN serve as the holistic regression model that performs RV segmentation, and the feature maps extracted by the convolutional layers are flattened into a 1-D vector that connects to this regression model. This connection allows the optimization algorithm to continually refine the convolutional layers so that the holistic regression model is learned directly during training, rather than separating feature extraction from regression model learning. RegressionCNN can therefore learn convolutional features that are optimally correlated with the RV regression segmentation task, reducing the latent mismatch between feature extraction and the subsequent regression model learning. We evaluated the performance of RegressionCNN on cardiac MR images acquired from 145 human subjects at two clinical centers. The results show that RegressionCNN's segmentations are highly correlated with manual delineation (average boundary correlation coefficient of 0.9827) and consistent with it (average Dice metric of 0.8351). Hence, RegressionCNN can be an effective way to segment the RV from cardiac MR images accurately and automatically.
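The architecture reduces to: convolutional feature maps, flattened to a 1-D vector, feeding fully connected layers that output all boundary coordinates at once, trained end-to-end. A toy PyTorch sketch follows; the layer sizes, point count, and loss are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class RegressionCNN(nn.Module):
    """Convolutional layers extract features; their flattened maps feed fully
    connected layers that regress all boundary-point coordinates at once."""
    def __init__(self, n_points=40):
        super().__init__()
        self.n_points = n_points
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(          # the holistic regression model
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, 2 * n_points),
        )

    def forward(self, x):
        return self.regressor(self.features(x)).view(-1, self.n_points, 2)

model = RegressionCNN()
mri = torch.randn(4, 1, 64, 64)              # toy batch of cardiac MR slices
target = torch.rand(4, 40, 2)                # normalized boundary coordinates
loss = nn.functional.mse_loss(model(mri), target)
loss.backward()                              # features and regressor train jointly
print(loss.item())
```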
Affiliation(s)
- Jun Chen: School of Computer Science and Technology, Anhui University, Hefei 230601, China
- Heye Zhang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Weiwei Zhang: School of Computer Science and Technology, Anhui University, Hefei 230601, China
- Xiuquan Du: School of Computer Science and Technology, Anhui University, Hefei 230601, China
- Yanping Zhang: School of Computer Science and Technology, Anhui University, Hefei 230601, China
- Shuo Li: Department of Medical Imaging, Western University, London, ON N6A 3K7, Canada
315. Hammernik K, Klatzer T, Kobler E, Recht MP, Sodickson DK, Pock T, Knoll F. Learning a variational network for reconstruction of accelerated MRI data. Magn Reson Med 2018; 79:3055-3071. [PMID: 29115689] [PMCID: PMC5902683] [DOI: 10.1002/mrm.26977]
Abstract
PURPOSE To allow fast and high-quality reconstruction of clinical accelerated multi-coil MR data by learning a variational network that combines the mathematical structure of variational models with deep learning. THEORY AND METHODS Generalized compressed sensing reconstruction, formulated as a variational model, is embedded in an unrolled gradient descent scheme. All parameters of this formulation, including the prior model defined by filter kernels and activation functions as well as the data term weights, are learned during an offline training procedure. The learned model can then be applied online to previously unseen data. RESULTS The variational network approach was evaluated on a clinical knee imaging protocol for different acceleration factors and sampling patterns using retrospectively and prospectively undersampled data. The variational network reconstructions outperformed standard reconstruction algorithms, as verified by quantitative error measures and a clinical reader study for regular sampling at acceleration factor 4. CONCLUSION Variational network reconstructions preserve the natural appearance of MR images as well as pathologies that were not included in the training data set. Owing to its high computational performance (a reconstruction time of 193 ms on a single graphics card) and the omission of parameter tuning once the network is trained, this new approach to image reconstruction can easily be integrated into the clinical workflow.
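The unrolled structure can be sketched compactly: T gradient steps on the data term of a masked Fourier model, each combined with a small learned regularizer and stage-specific weights. The single-coil toy below is a heavily simplified stand-in for the paper's multi-coil variational network (the learned filter/activation parameterization is replaced by a tiny CNN); all sizes and the sampling pattern are illustrative.

```python
import torch
import torch.nn as nn

class UnrolledVarNet(nn.Module):
    """T unrolled steps: x <- x - lam_t * A^H(Ax - y) - R_t(x), where A is a
    masked Fourier operator and each R_t is a small learned regularizer."""
    def __init__(self, steps=4):
        super().__init__()
        self.lam = nn.Parameter(torch.full((steps,), 0.5))
        self.reg = nn.ModuleList(
            nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 2, 3, padding=1))
            for _ in range(steps))

    def forward(self, y, mask):
        x = torch.fft.ifft2(y)                    # zero-filled initialization
        for lam, reg in zip(self.lam, self.reg):
            grad_data = torch.fft.ifft2(mask * (torch.fft.fft2(x) - y))
            xr = torch.stack([x.real, x.imag], dim=1)      # (B, 2, H, W)
            prior = reg(xr)
            x = x - lam * grad_data - torch.complex(prior[:, 0], prior[:, 1])
        return x.abs()

B, H, W = 1, 64, 64
mask = (torch.rand(H, W) < 0.33).to(torch.complex64)   # random undersampling
kspace = mask * torch.fft.fft2(torch.randn(B, H, W, dtype=torch.complex64))
recon = UnrolledVarNet()(kspace, mask)
print(recon.shape)                                      # torch.Size([1, 64, 64])
```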
Affiliation(s)
- Kerstin Hammernik: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Teresa Klatzer: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Erich Kobler: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Michael P Recht: Center for Biomedical Imaging, Department of Radiology, NYU School of Medicine, New York, NY, United States; Center for Advanced Imaging Innovation and Research (CAIR), NYU School of Medicine, New York, NY, United States
- Daniel K Sodickson: Center for Biomedical Imaging, Department of Radiology, NYU School of Medicine, New York, NY, United States; Center for Advanced Imaging Innovation and Research (CAIR), NYU School of Medicine, New York, NY, United States
- Thomas Pock: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; Center for Vision, Automation & Control, AIT Austrian Institute of Technology GmbH, Vienna, Austria
- Florian Knoll: Center for Biomedical Imaging, Department of Radiology, NYU School of Medicine, New York, NY, United States; Center for Advanced Imaging Innovation and Research (CAIR), NYU School of Medicine, New York, NY, United States
316. Zahia S, Sierra-Sosa D, Garcia-Zapirain B, Elmaghraby A. Tissue classification and segmentation of pressure injuries using convolutional neural networks. Comput Methods Programs Biomed 2018; 159:51-58. [PMID: 29650318] [DOI: 10.1016/j.cmpb.2018.02.018]
Abstract
BACKGROUND AND OBJECTIVES This paper presents a new approach for automatic tissue classification in pressure injuries. These wounds are localized skin damage requiring frequent diagnosis and treatment; reliable and accurate systems for segmentation and tissue type identification are therefore needed to achieve better treatment results. METHODS Our proposed system is based on a Convolutional Neural Network (CNN) devoted to performing optimized segmentation of the different tissue types present in pressure injuries (granulation, slough, and necrotic tissue). A preprocessing step removes flash-light artifacts and creates a set of 5 × 5 sub-images that are used as input to the CNN. The network output classifies every sub-image of the validation set into one of the three classes studied. RESULTS The metrics used to evaluate our approach show an overall average classification accuracy of 92.01%, an average total weighted Dice Similarity Coefficient of 91.38%, and an average per-class precision of 97.31% for granulation tissue, 96.59% for necrotic tissue, and 77.90% for slough tissue. CONCLUSIONS Our system has been shown to make recognition of complicated structures in biomedical images feasible.
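The patch pipeline in METHODS (tile the photograph into 5 × 5 sub-images, classify each into one of three tissue classes) can be sketched as follows; the image, network, and sizes are toys, not the authors' architecture or data.

```python
import numpy as np
import torch
import torch.nn as nn

# Extract non-overlapping 5x5 sub-images from a (toy) wound photograph.
rng = np.random.default_rng(0)
image = rng.random((100, 100, 3)).astype(np.float32)   # stand-in for a photo
patches = (image.reshape(20, 5, 20, 5, 3)
                .transpose(0, 2, 4, 1, 3)
                .reshape(-1, 3, 5, 5))
print(patches.shape)                                   # (400, 3, 5, 5)

# A small CNN classifying every 5x5 patch: granulation / slough / necrotic.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 5 * 5, 3),
)
logits = model(torch.from_numpy(patches))
print(logits.shape)                                    # torch.Size([400, 3])
```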
Affiliation(s)
- Sofia Zahia: Department of Computer Engineering and Computer Science, Duthie Center for Engineering, University of Louisville, Louisville, KY 40292, United States; eVida research laboratory, University of Deusto, Bilbao 48007, Spain
- Daniel Sierra-Sosa: Department of Computer Engineering and Computer Science, Duthie Center for Engineering, University of Louisville, Louisville, KY 40292, United States
- Adel Elmaghraby: Department of Computer Engineering and Computer Science, Duthie Center for Engineering, University of Louisville, Louisville, KY 40292, United States
317. Wang L, Li G, Adeli E, Liu M, Wu Z, Meng Y, Lin W, Shen D. Anatomy-guided joint tissue segmentation and topological correction for 6-month infant brain MRI with risk of autism. Hum Brain Mapp 2018; 39:2609-2623. [PMID: 29516625] [PMCID: PMC5951769] [DOI: 10.1002/hbm.24027]
Abstract
Tissue segmentation of infant brain MRI in subjects at risk of autism is critically important for characterizing early brain development and identifying biomarkers. However, it is challenging due to the low tissue contrast caused by inherent ongoing myelination and maturation. In particular, at around 6 months of age, the voxel intensities of gray matter and white matter lie within similar ranges, leading to the lowest image contrast of the first postnatal year. Previous studies typically employed intensity images and tentatively estimated tissue probabilities to train a sequence of classifiers for tissue segmentation. However, the important prior knowledge of brain anatomy is largely ignored during segmentation. Consequently, segmentation accuracy remains limited and topological errors frequently occur, which significantly degrades the performance of subsequent analyses. Although topological errors can be partially handled by retrospective topological correction methods, the results may still be anatomically incorrect. To address these challenges, we propose in this article an anatomy-guided joint tissue segmentation and topological correction framework for isointense infant MRI. In particular, we adopt a signed distance map with respect to the outer cortical surface as anatomical prior knowledge and incorporate this prior into the framework to guide segmentation in ambiguous regions. Experimental results on subjects from the National Database for Autism Research demonstrate the effectiveness of the framework against topological errors, as well as some robustness to motion. Comparisons with state-of-the-art methods further demonstrate the advantages of the proposed method in terms of both segmentation accuracy and topological correctness.
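The anatomical prior here is a signed distance map, which is easy to construct from a binary mask with two Euclidean distance transforms. A small SciPy sketch follows; the sign convention (negative inside, positive outside) is a choice and may differ from the paper's.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance to the mask surface: negative inside, positive outside."""
    inside = distance_transform_edt(mask)     # distance to nearest background
    outside = distance_transform_edt(~mask)   # distance to nearest foreground
    return outside - inside

# toy "brain" mask: a sphere in a 32^3 volume
z, y, x = np.mgrid[0:32, 0:32, 0:32]
mask = (z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 < 100
sdm = signed_distance(mask)
print(sdm[16, 16, 16], sdm[0, 0, 0])   # negative at center, positive at corner
```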
Affiliation(s)
- Li Wang: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina
- Gang Li: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina
- Ehsan Adeli: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina
- Mingxia Liu: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina
- Zhengwang Wu: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina
- Yu Meng: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina; Department of Computer Science, University of North Carolina at Chapel Hill, North Carolina
- Weili Lin: MRI Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina
- Dinggang Shen: IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
318. Rachmadi MF, Valdés-Hernández MDC, Agan MLF, Di Perri C, Komura T. Segmentation of white matter hyperintensities using convolutional neural networks with global spatial information in routine clinical brain MRI with none or mild vascular pathology. Comput Med Imaging Graph 2018. [DOI: 10.1016/j.compmedimag.2018.02.002]
319. Phan TV, Sima DM, Beelen C, Vanderauwera J, Smeets D, Vandermosten M. Evaluation of methods for volumetric analysis of pediatric brain data: The childmetrix pipeline versus adult-based approaches. NeuroImage: Clinical 2018; 19:734-744. [PMID: 30003026] [PMCID: PMC6040578] [DOI: 10.1016/j.nicl.2018.05.030]
Abstract
Pediatric brain volumetric analysis based on Magnetic Resonance Imaging (MRI) is of particular interest for understanding typical brain development and for characterizing neurodevelopmental disorders at an early age. However, it has been shown that results can be biased by head motion, which is inherent to pediatric data, and by the use of methods based on adult brain data that cannot accurately model the anatomical disparity of pediatric brains. To overcome these issues, we propose childmetrix, a tool developed for the analysis of pediatric neuroimaging data that uses an age-specific atlas and a probabilistic model-based approach to segment gray matter (GM) and white matter (WM). The tool was extensively validated on 55 scans of children between 5 and 6 years old (including 13 children with developmental dyslexia) and 10 pairs of test-retest scans of children between 6 and 8 years old, and compared with two state-of-the-art methods using an adult atlas, namely icobrain (probabilistic model-based segmentation) and FreeSurfer (surface model-based segmentation). The results obtained with childmetrix showed better reproducibility of GM and WM segmentations and better robustness to head motion in the estimation of GM volume compared to FreeSurfer. Evaluated on two subjects, childmetrix showed good accuracy, with 82-84% overlap with manual segmentation for both GM and WM, thereby outperforming the adult-based methods (icobrain and FreeSurfer), especially on the subject with poor-quality data. We also demonstrate that the adult-based methods needed double the number of subjects to detect significant morphological differences between dyslexics and typical readers. Once further developed and validated, childmetrix should provide appropriate and reliable measures for the examination of children's brains.
Affiliation(s)
- Thanh Vân Phan: icometrix, Research and Development, Leuven, Belgium; Experimental Oto-rhino-laryngology, Department Neurosciences, KU Leuven, Leuven, Belgium
- Diana M Sima: icometrix, Research and Development, Leuven, Belgium
- Caroline Beelen: Parenting and Special Education Research Unit, Faculty of Psychology and Educational Science, KU Leuven, Leuven, Belgium
- Jolijn Vanderauwera: Experimental Oto-rhino-laryngology, Department Neurosciences, KU Leuven, Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Science, KU Leuven, Leuven, Belgium
- Dirk Smeets: icometrix, Research and Development, Leuven, Belgium
- Maaike Vandermosten: Experimental Oto-rhino-laryngology, Department Neurosciences, KU Leuven, Leuven, Belgium
320. Meyer P, Noblet V, Mazzara C, Lallement A. Survey on deep learning for radiotherapy. Comput Biol Med 2018; 98:126-146. [PMID: 29787940] [DOI: 10.1016/j.compbiomed.2018.05.018]
Abstract
More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process that can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully used in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, placing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then review the published work on deep learning methods that can be applied to radiotherapy, classified into seven categories related to the patient workflow, and offer some insights into potential future applications. We have attempted to make this paper accessible to both the radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications.
Affiliation(s)
- Philippe Meyer: Department of Medical Physics, Paul Strauss Center, Strasbourg, France
321. The challenge of cerebral magnetic resonance imaging in neonates: A new method using mathematical morphology for the segmentation of structures including diffuse excessive high signal intensities. Med Image Anal 2018; 48:75-94. [PMID: 29852312] [DOI: 10.1016/j.media.2018.05.003]
Abstract
Preterm birth is a multifactorial condition associated with increased morbidity and mortality. Diffuse excessive high signal intensity (DEHSI) has recently been described on T2-weighted MR sequences in this population and is thought to be associated with neuropathologies. To date, no robust and reproducible method to assess the presence of white matter hyperintensities has been developed, perhaps explaining the current controversy over their prognostic value. The aim of this paper is to propose a new semi-automated framework to detect DEHSI on neonatal brain MR images, which have a particular pattern due to the physiological lack of complete myelination of the white matter. A novel method for semi-automatic segmentation of neonatal brain structures and DEHSI, based on mathematical morphology and on max-tree representations of the images, is thus described. It is a mandatory first step toward identifying and clinically assessing homogeneous cohorts of neonates in terms of DEHSI and/or the volume of any other segmented structure. Implemented in a user-friendly interface, the method makes it straightforward to select relevant markers of the structures to be segmented and, if needed, to apply manual corrections. This method responds to the increasing need to provide medical experts with semi-automatic tools for image analysis, and it overcomes the limitations of visual analysis alone, which is prone to subjectivity and variability. Experimental results demonstrate that the method is accurate, with excellent reproducibility and very few manual corrections needed. Although the method was intended initially for images acquired at 1.5T, which corresponds to usual clinical practice, preliminary results on images acquired at 3T suggest that the proposed approach can be generalized.
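Max-tree representations support attribute filters such as area opening, in which bright connected components below an area threshold are flattened. The sketch below uses scikit-image's max-tree-based area_opening (scikit-image ≥ 0.16) on a synthetic slice to show the mechanism such frameworks build on; it is not the paper's interactive marker-based pipeline, and all sizes and thresholds are illustrative.

```python
import numpy as np
from skimage.morphology import area_opening

rng = np.random.default_rng(0)
# toy T2-like slice: smooth background plus bright "hyperintensity" blobs
img = rng.normal(0.5, 0.02, (64, 64))
img[10:13, 10:13] += 0.4        # small bright component (9 px)
img[40:52, 20:32] += 0.4        # large bright component (144 px)

# Area opening is a max-tree attribute filter: bright components whose area
# is below the threshold are flattened; larger ones survive.
filtered = area_opening(img, area_threshold=50)
residue = img - filtered        # keeps only the small bright structures
print(residue[11, 11] > 0.2, residue[45, 25] > 0.2)   # True False
```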
322. Jang H, Liu F, Zhao G, Bradshaw T, McMillan AB. Technical Note: Deep learning based MRAC using rapid ultrashort echo time imaging. Med Phys 2018; 45. [PMID: 29763997] [PMCID: PMC6443501] [DOI: 10.1002/mp.12964]
Abstract
PURPOSE In this study, we explore the feasibility of a novel framework for MR-based attenuation correction (MRAC) for PET/MR imaging based on deep learning via convolutional neural networks, which enables fully automated and robust estimation of a pseudo-CT image from ultrashort echo time (UTE), fat, and water images obtained by a rapid MR acquisition. METHODS MR images for MRAC are acquired using dual-echo ramped hybrid encoding (dRHE), where both UTE and out-of-phase echo images are obtained within a single short acquisition (35 s). Tissue labeling of air, soft tissue, and bone in the UTE image is accomplished via a deep learning network that was pre-trained with T1-weighted MR images. UTE images are used as input to the network, which was trained using labels derived from co-registered CT images. The tissue labels estimated by deep learning are refined by a conditional random field-based correction. The soft tissue labels are further separated into fat and water components using the two-point Dixon method. The estimated bone, air, fat, and water images are then assigned appropriate Hounsfield units, resulting in a pseudo-CT image for PET attenuation correction. To evaluate the proposed MRAC method, PET/MR imaging of the head was performed on eight human subjects, and Dice similarity coefficients of the estimated tissue labels and relative PET errors were evaluated through comparison to a registered CT image. RESULTS Dice coefficients for air (within the head), soft tissue, and bone labels were 0.76 ± 0.03, 0.96 ± 0.006, and 0.88 ± 0.01, respectively. In PET quantitation, the proposed MRAC method produced relative PET errors of less than 1% within most brain regions. CONCLUSION The proposed MRAC method, utilizing deep learning with transfer learning and an efficient dRHE acquisition, enables reliable PET quantitation with accurate and rapid pseudo-CT generation.
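The final step (assign Hounsfield units to the estimated air/fat/water/bone labels to form a pseudo-CT) is a simple lookup. A sketch with illustrative textbook HU values, not the paper's numbers:

```python
import numpy as np

# Label set follows the abstract (air, fat, water, bone); the HU values are
# illustrative textbook numbers, not those used in the paper.
HU = {0: -1000, 1: -90, 2: 0, 3: 1000}   # air, fat, water, bone

def pseudo_ct(labels):
    """Map an integer tissue-label volume to Hounsfield units via a LUT."""
    lut = np.array([HU[k] for k in sorted(HU)], dtype=np.int16)
    return lut[labels]

labels = np.random.default_rng(0).integers(0, 4, (8, 16, 16))  # toy labels
ct = pseudo_ct(labels)
print(ct.min(), ct.max())   # -1000 1000
```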
Affiliation(s)
- Hyungseok Jang: Department of Radiology, University of California San Diego, 200 West Arbor Drive, San Diego, California 92103-8226
- Fang Liu: Department of Radiology, University of Wisconsin School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53705-2275
- Gengyan Zhao: Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705-2275
- Tyler Bradshaw: Department of Radiology, University of Wisconsin School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53705-2275
- Alan B McMillan: Department of Radiology, University of Wisconsin School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53705-2275
323
|
Pan F, He P, Liu C, Li T, Murray A, Zheng D. Variation of the Korotkoff Stethoscope Sounds During Blood Pressure Measurement: Analysis Using a Convolutional Neural Network. IEEE J Biomed Health Inform 2018; 21:1593-1598. [PMID: 29136608 DOI: 10.1109/jbhi.2017.2703115] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Korotkoff sounds are known to change their characteristics during blood pressure (BP) measurement, resulting in some uncertainty in systolic and diastolic pressure (SBP and DBP) determinations. The aim of this study was to assess the variation of Korotkoff sounds during BP measurement by examining all stethoscope sounds associated with each heartbeat from above systole to below diastole during linear cuff deflation. Three repeat BP measurements were taken from 140 healthy subjects (age 21 to 73 years; 62 female and 78 male) by a trained observer, giving 420 measurements. During the BP measurements, the cuff pressure and stethoscope signals were simultaneously recorded digitally to a computer for subsequent analysis. Heartbeats were identified from the oscillometric cuff pressure pulses. Each detected beat was used to create a time window (1 s, 2000 samples) centered on the oscillometric pulse peak for extracting beat-by-beat stethoscope sounds. A two-dimensional time-frequency matrix was obtained for the stethoscope sounds associated with each beat, and all beats between the manually determined SBPs and DBPs were labeled as "Korotkoff." A convolutional neural network was then used to analyze consistency in the sound patterns associated with Korotkoff sounds. A 10-fold cross-validation strategy was applied to the stethoscope sounds from all 140 subjects, with the data from ten groups of 14 subjects being analyzed separately, allowing consistency to be evaluated between groups. Next, within-subject variation of the Korotkoff sounds analyzed from the three repeats was quantified, separately for each stethoscope sound beat. There was consistency between folds, with no significant differences between groups of 14 subjects (P = 0.09 to P = 0.62). Our results showed that 80.7% of beats at SBP and 69.5% at DBP were classified as Korotkoff sounds, with significant differences between adjacent beats at systole (13.1%, P = 0.001) and diastole (17.4%, P < 0.001). Results reached stability for SBP (97.8%, at the sixth beat below SBP) and DBP (98.1%, at the sixth beat above DBP) with no significant differences between adjacent beats (SBP P = 0.74; DBP P = 0.88). There were no significant differences at high cuff pressures, but at low pressures close to diastole there was a small difference (3.3%, P = 0.02). In addition, greater within-subject variability was observed at SBP (21.4%) and DBP (28.9%), with a significant difference between the two (P < 0.02). In conclusion, this study has demonstrated that Korotkoff sounds can be consistently identified during the period below SBP and above DBP, but that at systole and diastole there can be substantial variations, associated with high variation across the three repeat measurements in each subject.
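The beat-windowing step described above translates directly into code. A minimal sketch, assuming a 2 kHz sampling rate (so 1 s = 2000 samples, as in the study) and pre-detected oscillometric pulse peaks:

```python
import numpy as np

FS = 2000          # Hz; a 1-s window is then 2000 samples, as in the study
HALF = FS // 2

def beat_windows(stethoscope, peak_indices):
    """Return one fixed-length stethoscope-sound window per detected heartbeat,
    centered on the oscillometric pulse peak; truncated edge beats are skipped."""
    windows = []
    for p in peak_indices:
        lo, hi = p - HALF, p + HALF
        if lo >= 0 and hi <= len(stethoscope):
            windows.append(stethoscope[lo:hi])
    return np.stack(windows) if windows else np.empty((0, FS))

sound = np.random.randn(20 * FS)        # 20 s of synthetic audio as a stand-in
peaks = np.arange(FS, 19 * FS, FS)      # one synthetic beat per second
X = beat_windows(sound, peaks)          # shape: (n_beats, 2000), ready for the CNN
```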
Collapse
|
324
|
Eppenhof KAJ, Pluim JPW. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks. J Med Imaging (Bellingham) 2018; 5:024003. [PMID: 29750177 DOI: 10.1117/1.jmi.5.2.024003] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2017] [Accepted: 04/23/2018] [Indexed: 11/14/2022] Open
Abstract
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
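Because the training deformations are synthetic, the per-voxel training label is simply the norm of the displacement difference. A minimal sketch of that labeling step, with random fields standing in for real registrations:

```python
import numpy as np

def registration_error_map(true_disp, est_disp):
    """Per-voxel registration error: Euclidean norm of the displacement
    difference. Both inputs have shape (3, D, H, W) for a 3-D field."""
    return np.linalg.norm(true_disp - est_disp, axis=0)

true_disp = np.random.randn(3, 16, 16, 16)                   # known synthetic deformation
est_disp = true_disp + 0.1 * np.random.randn(3, 16, 16, 16)  # stand-in for a registration result
err = registration_error_map(true_disp, est_disp)            # (16, 16, 16) error map, in mm if inputs are in mm
```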
Collapse
Affiliation(s)
- Koen A J Eppenhof
- Eindhoven University of Technology, Medical Image Analysis, Department of Biomedical Engineering, Eindhoven, The Netherlands
| | - Josien P W Pluim
- Eindhoven University of Technology, Medical Image Analysis, Department of Biomedical Engineering, Eindhoven, The Netherlands.,University Medical Center Utrecht, Image Sciences Institute, Utrecht, The Netherlands
| |
Collapse
|
325
|
He JY, Wu X, Jiang YG, Peng Q, Jain R. Hookworm Detection in Wireless Capsule Endoscopy Images With Deep Learning. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:2379-2392. [PMID: 29470172 DOI: 10.1109/tip.2018.2801119] [Citation(s) in RCA: 74] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
As one of the most common human helminths, hookworm is a leading cause of maternal and child morbidity and a serious threat to human health. Recently, wireless capsule endoscopy (WCE) has been applied to automatic hookworm detection; unfortunately, this remains a challenging task. In recent years, deep convolutional neural networks (CNNs) have demonstrated impressive performance in various image and video analysis tasks. In this paper, a novel deep hookworm detection framework is proposed for WCE images, which simultaneously models the visual appearance and tubular patterns of hookworms. This is the first deep learning framework specifically designed for hookworm detection in WCE images. Two CNNs, namely an edge extraction network and a hookworm classification network, are seamlessly integrated in the proposed framework, which avoids edge feature caching and speeds up classification. Two edge pooling layers are introduced to integrate the tubular regions induced from the edge extraction network with the feature maps from the hookworm classification network, leading to enhanced feature maps emphasizing the tubular regions. Experiments conducted on one of the largest WCE datasets demonstrate the effectiveness of the proposed hookworm detection framework, which significantly outperforms state-of-the-art approaches. The high sensitivity and accuracy of the proposed method in detecting hookworms show its potential for clinical application.
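The edge-pooling idea, re-weighting classification feature maps by a tubular-edge map, can be interpreted roughly as follows. A minimal sketch; the multiplicative boost is an assumption for illustration, not the authors' exact layer:

```python
import numpy as np

def edge_pool(feature_maps, edge_map):
    """Emphasize tubular regions: feature_maps is (C, H, W), edge_map is
    (H, W) in [0, 1]. Responses on tubular edges are boosted multiplicatively
    (an illustrative interpretation of the edge pooling layer)."""
    return feature_maps * (1.0 + edge_map)

feats = np.random.rand(8, 32, 32)    # stand-in classification feature maps
edges = np.random.rand(32, 32)       # stand-in tubular-edge probability map
enhanced = edge_pool(feats, edges)   # enhanced maps passed on for classification
```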
Collapse
|
326
|
Lee H, Troschel FM, Tajmir S, Fuchs G, Mario J, Fintelmann FJ, Do S. Pixel-Level Deep Segmentation: Artificial Intelligence Quantifies Muscle on Computed Tomography for Body Morphometric Analysis. J Digit Imaging 2018; 30:487-498. [PMID: 28653123 PMCID: PMC5537099 DOI: 10.1007/s10278-017-9988-z] [Citation(s) in RCA: 120] [Impact Index Per Article: 17.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023] Open
Abstract
Pretreatment risk stratification is key for personalized medicine. While many physicians rely on an “eyeball test” to assess whether patients will tolerate major surgery or chemotherapy, “eyeballing” is inherently subjective and difficult to quantify. The concept of morphometric age derived from cross-sectional imaging has been found to correlate well with outcomes such as length of stay, morbidity, and mortality. However, determining morphometric age is time intensive and requires highly trained experts. In this study, we propose a fully automated deep learning system for the segmentation of skeletal muscle cross-sectional area (CSA) on an axial computed tomography image taken at the third lumbar vertebra. We utilized a fully automated deep segmentation model derived from an extended implementation of a fully convolutional network with weight initialization from an ImageNet pre-trained model, followed by post-processing to eliminate intramuscular fat for a more accurate analysis. The experiment was conducted by varying window level (WL), window width (WW), and bit resolution to better understand the effects of these parameters on model performance. Our best model, fine-tuned on 250 training images and ground truth labels, achieves a 0.93 ± 0.02 Dice similarity coefficient (DSC) and a 3.68 ± 2.29% difference between predicted and ground truth muscle CSA on 150 held-out test cases. Ultimately, the fully automated segmentation system can be embedded into the clinical environment to accelerate the quantification of muscle and extended to volume analysis of 3D datasets.
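The WL/WW preprocessing that the study varies is a standard CT display transform: clip HU values to [WL - WW/2, WL + WW/2] and rescale to the chosen bit depth. A minimal sketch, with illustrative soft-tissue window values:

```python
import numpy as np

def apply_window(hu, level=40, width=400, bits=8):
    """Clip a HU image to the window [level - width/2, level + width/2] and
    rescale to [0, 2**bits - 1]. Window values here are illustrative."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * (2 ** bits - 1)).astype(np.uint16)

ct_slice = np.random.randint(-1024, 2000, size=(512, 512))  # stand-in HU slice
img8 = apply_window(ct_slice)    # values in [0, 255] for an 8-bit input image
```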
Collapse
Affiliation(s)
- Hyunkwang Lee
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Fabian M. Troschel
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Shahein Tajmir
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Georg Fuchs
- Department of Radiology, Charite - Universitaetsmedizin Berlin, Chariteplatz 1, 10117 Berlin, Germany
| | - Julia Mario
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Florian J. Fintelmann
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Synho Do
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| |
Collapse
|
327
|
Huo J, Wu J, Cao J, Wang G. Supervoxel based method for multi-atlas segmentation of brain MR images. Neuroimage 2018; 175:201-214. [PMID: 29625235 DOI: 10.1016/j.neuroimage.2018.04.001] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2017] [Revised: 03/30/2018] [Accepted: 04/01/2018] [Indexed: 01/01/2023] Open
Abstract
Multi-atlas segmentation has been widely applied to the analysis of brain MR images. However, state-of-the-art techniques in multi-atlas segmentation, including both patch-based and learning-based methods, either depend strongly on pairwise registration or exhibit severe spatial inconsistency. This paper proposes a new supervoxel-based segmentation framework to address these challenges. A supervoxel is an aggregation of voxels with similar attributes that can be used to replace the voxel grid. By formulating segmentation as a tissue-labeling problem associated with maximum-a-posteriori inference in a Markov random field, the problem is solved via a graphical model with supervoxels as the nodes. In addition, a dense labeling scheme is developed to refine the supervoxel labeling results, and spatial consistency is incorporated in the proposed method. The proposed approach is robust to pairwise registration errors and computationally efficient. Extensive experimental evaluations on three publicly available brain MR datasets demonstrate the effectiveness and superior performance of the proposed approach.
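The first step, aggregating voxels into supervoxels that become graph nodes, can be sketched with an off-the-shelf method. A minimal illustration using SLIC as a stand-in supervoxel generator (assuming scikit-image >= 0.19 for the `channel_axis` argument; parameters are illustrative):

```python
import numpy as np
from skimage.segmentation import slic

volume = np.random.rand(32, 32, 32)   # stand-in for a brain MR volume

# SLIC over a 3-D grayscale volume yields supervoxels; each one would become
# a node in the MRF described above.
labels = slic(volume, n_segments=200, compactness=0.1, channel_axis=None)
n_nodes = len(np.unique(labels))      # number of supervoxel nodes in the graph
```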
Collapse
Affiliation(s)
- Jie Huo
- Department of ECE, University of Windsor, Windsor N9B 3P4, Canada.
| | - Jonathan Wu
- Department of ECE, University of Windsor, Windsor N9B 3P4, Canada; Institute of Information and Control, Hangzhou Dianzi University, Hangzhou 310018, China
| | - Jiuwen Cao
- Institute of Information and Control, Hangzhou Dianzi University, Hangzhou 310018, China
| | - Guanghui Wang
- Department of EECS, University of Kansas, Lawrence, KS 66045, USA.
| |
Collapse
|
328
|
Chen H, Dou Q, Yu L, Qin J, Heng PA. VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images. Neuroimage 2018; 170:446-455. [PMID: 28445774 DOI: 10.1016/j.neuroimage.2017.04.041] [Citation(s) in RCA: 329] [Impact Index Per Article: 47.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2016] [Revised: 03/24/2017] [Accepted: 04/18/2017] [Indexed: 01/04/2023] Open
Affiliation(s)
- Hao Chen
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
| | - Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
| | - Lequan Yu
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
Collapse
|
329
|
Drost FJ, Keunen K, Moeskops P, Claessens NHP, van Kalken F, Išgum I, Voskuil-Kerkhof ESM, Groenendaal F, de Vries LS, Benders MJNL, Termote JUM. Severe retinopathy of prematurity is associated with reduced cerebellar and brainstem volumes at term and neurodevelopmental deficits at 2 years. Pediatr Res 2018; 83:818-824. [PMID: 29320482 DOI: 10.1038/pr.2018.2] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/25/2017] [Accepted: 12/24/2017] [Indexed: 11/09/2022]
Abstract
BACKGROUND To evaluate the association between severe retinopathy of prematurity (ROP), measures of brain morphology at term-equivalent age (TEA), and neurodevelopmental outcome. METHODS Eighteen infants with severe ROP (median gestational age (GA) 25.3 weeks, range 24.6-25.9) were included in this retrospective case-control study. Each infant was matched to two extremely preterm control infants (n=36) by GA, birth weight, sex, and brain injury. T2-weighted images were obtained on a 3T magnetic resonance imaging (MRI) system at TEA. Brain volumes were computed using an automatic segmentation method. In addition, cortical folding metrics were extracted. Neurodevelopment was formally assessed at the ages of 15 and 24 months. RESULTS Infants with severe ROP had smaller cerebellar volumes (21.4±3.2 vs. 23.1±2.6 ml; P=0.04) and brainstem volumes (5.4±0.5 vs. 5.8±0.5 ml; P=0.01) compared with matched control infants. Furthermore, ROP patients showed a significantly lower developmental quotient (Griffiths Mental Development Scales) at the age of 15 months (93±15 vs. 102±10; P=0.01) and lower fine motor scores (10±3 vs. 12±2; P=0.02) on the Bayley Scales (Third Edition) at the age of 24 months. CONCLUSION Severe ROP was associated with smaller volumes of the cerebellum and brainstem and with poorer early neurodevelopmental outcome. Follow-up through childhood is needed to evaluate the long-term consequences of our findings.
Collapse
Affiliation(s)
- Femke J Drost
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht and Utrecht University, Utrecht, Netherlands
| | - Kristin Keunen
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht and Utrecht University, Utrecht, Netherlands
| | - Pim Moeskops
- Image Sciences Institute, University Medical Center Utrecht and Utrecht University, Utrecht, Netherlands
| | - Nathalie H P Claessens
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht and Utrecht University, Utrecht, Netherlands
| | - Femke van Kalken
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht and Utrecht University, Utrecht, Netherlands
| | - Ivana Išgum
- Image Sciences Institute, University Medical Center Utrecht and Utrecht University, Utrecht, Netherlands
| | | | - Floris Groenendaal
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht and Utrecht University, Utrecht, Netherlands
| | - Linda S de Vries
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht and Utrecht University, Utrecht, Netherlands
| | - Manon J N L Benders
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht and Utrecht University, Utrecht, Netherlands
| | - Jacqueline U M Termote
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht and Utrecht University, Utrecht, Netherlands
| |
Collapse
|
330
|
3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study. Neuroimage 2018; 170:456-470. [DOI: 10.1016/j.neuroimage.2017.04.039] [Citation(s) in RCA: 219] [Impact Index Per Article: 31.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2016] [Revised: 02/23/2017] [Accepted: 04/17/2017] [Indexed: 01/08/2023] Open
|
331
|
Makropoulos A, Counsell SJ, Rueckert D. A review on automatic fetal and neonatal brain MRI segmentation. Neuroimage 2018; 170:231-248. [DOI: 10.1016/j.neuroimage.2017.06.074] [Citation(s) in RCA: 100] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2016] [Revised: 03/06/2017] [Accepted: 06/26/2017] [Indexed: 01/18/2023] Open
|
332
|
Xu B, Chai Y, Galarza CM, Vu CQ, Tamrazi B, Gaonkar B, Macyszyn L, Coates TD, Lepore N, Wood JC. ORCHESTRAL FULLY CONVOLUTIONAL NETWORKS FOR SMALL LESION SEGMENTATION IN BRAIN MRI. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2018; 2018:889-892. [PMID: 30344893 PMCID: PMC6192017 DOI: 10.1109/isbi.2018.8363714] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
White matter (WM) lesion identification and segmentation have proved clinically important for diagnosis, treatment, and neurological outcomes. Convolutional neural networks (CNNs) have demonstrated success for large-lesion-load segmentation, but are not sensitive to small deep WM and sub-cortical lesions. We propose to use multi-scale, multi-supervised fully convolutional networks (FCNs) to segment small WM lesions in 22 anemic patients. The multiple scales enable us to identify the small lesions while reducing false alarms, and the multi-supervised scheme allows better management of the unbalanced data. Compared to a single FCN (Dice score ~0.31), our proposed networks achieved a Dice score of 0.78 on the testing dataset.
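The Dice score quoted here, and throughout the studies in this list, is a direct overlap measure between binary masks. A minimal sketch:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); eps guards against empty masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

pred = np.random.rand(16, 16, 16) > 0.5    # stand-in predicted lesion mask
truth = np.random.rand(16, 16, 16) > 0.5   # stand-in ground-truth mask
score = dice(pred, truth)                  # 1.0 = perfect overlap, 0.0 = none
```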
Collapse
Affiliation(s)
- Botian Xu
- CIBORG laboratory, Department of Radiology, Children's Hospital Los Angeles (CHLA)
- Department of Electrical Engineering, USC
| | - Yaqiong Chai
- CIBORG laboratory, Department of Radiology, Children's Hospital Los Angeles (CHLA)
- Department of Biomedical Engineering, University of Southern California (USC)
- Department of Radiology, CHLA
| | - Cristina M Galarza
- CIBORG laboratory, Department of Radiology, Children's Hospital Los Angeles (CHLA)
- Keck School of Medicine, USC
| | - Chau Q Vu
- Department of Biomedical Engineering, University of Southern California (USC)
- Department of Radiology, CHLA
| | | | - Bilwaj Gaonkar
- Department of Neurosurgery, David Geffen School of Medicine, University of California Los Angeles
| | - Luke Macyszyn
- Department of Neurosurgery, David Geffen School of Medicine, University of California Los Angeles
| | | | - Natasha Lepore
- CIBORG laboratory, Department of Radiology, Children's Hospital Los Angeles (CHLA)
- Department of Biomedical Engineering, University of Southern California (USC)
- Department of Radiology, CHLA
| | | |
Collapse
|
333
|
Liu F, Zhou Z, Jang H, Samsonov A, Zhao G, Kijowski R. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging. Magn Reson Med 2018; 79:2379-2391. [PMID: 28733975 PMCID: PMC6271435 DOI: 10.1002/mrm.26841] [Citation(s) in RCA: 173] [Impact Index Per Article: 24.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2017] [Revised: 05/16/2017] [Accepted: 06/24/2017] [Indexed: 02/06/2023]
Abstract
PURPOSE To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using a deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. METHODS A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high-resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirably smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set for comparison with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. RESULTS The proposed fully automated segmentation method provided good segmentation performance, with accuracy superior to most state-of-the-art methods on the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. CONCLUSION The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
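The two-stage pattern, per-pixel CNN classification followed by shape regularization, can be outlined compactly. A minimal sketch in which a morphological closing/opening stands in for the 3D simplex deformable refinement, which is well beyond a few lines:

```python
import numpy as np
from scipy import ndimage

def decode_and_smooth(probs, class_id=1, iterations=2):
    """Argmax-decode softmax output `probs` of shape (C, D, H, W) into a
    binary mask for one tissue class, then regularize it. The morphological
    step is a crude stand-in for the deformable-model refinement."""
    mask = probs.argmax(axis=0) == class_id
    closed = ndimage.binary_closing(mask, iterations=iterations)
    return ndimage.binary_opening(closed, iterations=iterations)

probs = np.random.rand(3, 16, 16, 16)   # stand-in CNN softmax volume
cartilage = decode_and_smooth(probs)    # smoothed binary mask for class 1
```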
Collapse
Affiliation(s)
- Fang Liu
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Zhaoye Zhou
- Department of Biomedical Engineering, University of Minnesota, Minneapolis, Minnesota, USA
| | - Hyungseok Jang
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Alexey Samsonov
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Gengyan Zhao
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Richard Kijowski
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| |
Collapse
|
334
|
Jin H, Li Z, Tong R, Lin L. A deep 3D residual CNN for false-positive reduction in pulmonary nodule detection. Med Phys 2018; 45:2097-2107. [PMID: 29500816 DOI: 10.1002/mp.12846] [Citation(s) in RCA: 56] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2017] [Revised: 02/13/2018] [Accepted: 02/17/2018] [Indexed: 12/19/2022] Open
Abstract
PURPOSE The automatic detection of pulmonary nodules using CT scans improves the efficiency of lung cancer diagnosis, and false-positive reduction plays a significant role in the detection. In this paper, we focus on the false-positive reduction task and propose an effective method for it. METHODS We construct a deep 3D residual CNN (convolutional neural network) to reduce false-positive nodules among candidate nodules. The proposed network is much deeper than the traditional 3D CNNs used in medical image processing. Specifically, we design a spatial pooling and cropping (SPC) layer to extract multilevel contextual information from the CT data. Moreover, we employ an online hard sample selection strategy in the training process so that the network better fits hard samples (e.g., nodules with irregular shapes). RESULTS Our method is evaluated on 888 CT scans from the dataset of the LUNA16 Challenge. The free-response receiver operating characteristic (FROC) curve shows that the proposed method achieves high detection performance. CONCLUSIONS Our experiments confirm that our method is robust and that the SPC layer helps increase prediction accuracy. Additionally, the proposed method can easily be extended to other 3D object detection tasks in medical image processing.
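Online hard sample selection, as used in training here, amounts to keeping only the highest-loss candidates in each batch. A minimal sketch; the fraction kept is an illustrative assumption:

```python
import numpy as np

def hard_sample_indices(per_sample_loss, keep_fraction=0.25):
    """Return indices of the hardest (highest-loss) samples in a batch;
    only these would contribute to the parameter update."""
    k = max(1, int(len(per_sample_loss) * keep_fraction))
    return np.argsort(per_sample_loss)[-k:]

losses = np.random.rand(64)           # one loss value per candidate nodule
idx = hard_sample_indices(losses)     # backpropagate only through these samples
```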
Collapse
Affiliation(s)
- Hongsheng Jin
- State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310027, China
| | - Zongyao Li
- The School of Aeronautics and Astronautics, Zhejiang University, Hangzhou, 310027, China
| | - Ruofeng Tong
- State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310027, China
| | - Lanfen Lin
- State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310027, China
| |
Collapse
|
335
|
Computational neuroanatomy of baby brains: A review. Neuroimage 2018; 185:906-925. [PMID: 29574033 DOI: 10.1016/j.neuroimage.2018.03.042] [Citation(s) in RCA: 116] [Impact Index Per Article: 16.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2017] [Revised: 02/23/2018] [Accepted: 03/19/2018] [Indexed: 12/12/2022] Open
Abstract
The first postnatal years are an exceptionally dynamic and critical period of structural, functional, and connectivity development of the human brain. The increasing availability of non-invasive infant brain MR images provides unprecedented opportunities for accurate and reliable charting of dynamic early brain developmental trajectories in understanding normative and aberrant growth. However, infant brain MR images typically exhibit reduced tissue contrast (especially around 6 months of age), large within-tissue intensity variations, and regionally heterogeneous, dynamic changes, in comparison with adult brain MR images. Consequently, existing computational tools, typically developed for adult brains, are not suitable for infant brain MR image processing. To address these challenges, many infant-tailored computational methods have been proposed for computational neuroanatomy of infant brains. In this paper, we provide a comprehensive review of state-of-the-art computational methods for infant brain MRI processing and analysis, which have advanced our understanding of early postnatal brain development. We also summarize publicly available infant-dedicated resources, including MRI datasets, computational tools, grand challenges, and brain atlases. Finally, we discuss the limitations of current research and suggest potential future research directions.
Collapse
|
336
|
Makropoulos A, Robinson EC, Schuh A, Wright R, Fitzgibbon S, Bozek J, Counsell SJ, Steinweg J, Vecchiato K, Passerat-Palmbach J, Lenz G, Mortari F, Tenev T, Duff EP, Bastiani M, Cordero-Grande L, Hughes E, Tusor N, Tournier JD, Hutter J, Price AN, Teixeira RPAG, Murgasova M, Victor S, Kelly C, Rutherford MA, Smith SM, Edwards AD, Hajnal JV, Jenkinson M, Rueckert D. The developing human connectome project: A minimal processing pipeline for neonatal cortical surface reconstruction. Neuroimage 2018. [PMID: 29409960 DOI: 10.1101/125526] [Citation(s) in RCA: 36] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
The Developing Human Connectome Project (dHCP) seeks to create the first 4-dimensional connectome of early life. Understanding this connectome in detail may provide insights into normal as well as abnormal patterns of brain development. Following established best practices adopted by the WU-MINN Human Connectome Project (HCP), and pioneered by FreeSurfer, the project utilises cortical surface-based processing pipelines. In this paper, we propose a fully automated processing pipeline for the structural Magnetic Resonance Imaging (MRI) of the developing neonatal brain. This proposed pipeline consists of a refined framework for cortical and sub-cortical volume segmentation, cortical surface extraction, and cortical surface inflation, which has been specifically designed to address considerable differences between adult and neonatal brains, as imaged using MRI. Using the proposed pipeline, we demonstrate that images collected from 465 subjects ranging from 28 to 45 weeks post-menstrual age (PMA) can be processed fully automatically, generating cortical surface models that are topologically correct and correspond well with manual evaluations of tissue boundaries in 85% of cases. Results improve on state-of-the-art neonatal tissue segmentation models, and significant errors were found in only 2% of cases, corresponding to subjects with high motion. Downstream, these surfaces will enhance comparisons of functional and diffusion MRI datasets, supporting the modelling of emerging patterns of brain connectivity.
Collapse
Affiliation(s)
- Antonios Makropoulos
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Emma C Robinson
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom; Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom.
| | - Andreas Schuh
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Robert Wright
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Sean Fitzgibbon
- Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
| | - Jelena Bozek
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
| | - Serena J Counsell
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Johannes Steinweg
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Katy Vecchiato
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Jonathan Passerat-Palmbach
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Gregor Lenz
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Filippo Mortari
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Tencho Tenev
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Eugene P Duff
- Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
| | - Matteo Bastiani
- Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
| | - Lucilio Cordero-Grande
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Emer Hughes
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Nora Tusor
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Jacques-Donald Tournier
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Jana Hutter
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Anthony N Price
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Rui Pedro A G Teixeira
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Maria Murgasova
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Suresh Victor
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Christopher Kelly
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Mary A Rutherford
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Stephen M Smith
- Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
| | - A David Edwards
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Joseph V Hajnal
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Mark Jenkinson
- Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
| | - Daniel Rueckert
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| |
Collapse
|
337
|
Liu F, Jang H, Kijowski R, Bradshaw T, McMillan AB. Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging. Radiology 2018; 286:676-684. [PMID: 28925823 PMCID: PMC5790303 DOI: 10.1148/radiol.2017170700] [Citation(s) in RCA: 248] [Impact Index Per Article: 35.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
Abstract
Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study was carried out for utilizing simultaneous PET/MR imaging for five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared with current MR imaging-based AC approaches. © RSNA, 2017 Online supplemental material is available for this article.
Collapse
Affiliation(s)
- Fang Liu
- From the Department of Radiology, University of Wisconsin School of Medicine and Public Health, 600 Highland Ave, Madison, WI 53705-2275
| | - Hyungseok Jang
- From the Department of Radiology, University of Wisconsin School of Medicine and Public Health, 600 Highland Ave, Madison, WI 53705-2275
| | - Richard Kijowski
- From the Department of Radiology, University of Wisconsin School of Medicine and Public Health, 600 Highland Ave, Madison, WI 53705-2275
| | - Tyler Bradshaw
- From the Department of Radiology, University of Wisconsin School of Medicine and Public Health, 600 Highland Ave, Madison, WI 53705-2275
| | - Alan B. McMillan
- From the Department of Radiology, University of Wisconsin School of Medicine and Public Health, 600 Highland Ave, Madison, WI 53705-2275
| |
Collapse
|
338
|
Dormer JD, Guo R, Shen M, Jiang R, Wagner MB, Fei B. Ultrasound Segmentation of Rat Hearts Using Convolution Neural Networks. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2018; 10580:105801A. [PMID: 30197465 PMCID: PMC6126353 DOI: 10.1117/12.2293558] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Ultrasound is widely used for diagnosing cardiovascular diseases. However, estimates such as left ventricle volume currently require manual segmentation, which can be time consuming. In addition, cardiac ultrasound is often complicated by imaging artifacts such as shadowing and mirror images, making it difficult for simple intensity-based automated segmentation methods. In this work, we use convolutional neural networks (CNNs) to segment ultrasound images of rat hearts embedded in agar phantoms into four classes: background, myocardium, left ventricle cavity, and right ventricle cavity. We also explore how the inclusion of a single diseased heart changes the results in a small dataset. We found an average overall segmentation accuracy of 70.0% ± 7.3% when combining the healthy and diseased data, compared to 72.4% ± 6.6% for just the healthy hearts. This work suggests that including diseased hearts with healthy hearts in training data could improve segmentation results, while testing a diseased heart with a model trained on healthy hearts can produce accurate segmentation results for some classes but not others. More data are needed to improve the accuracy of the CNN-based segmentation.
Collapse
Affiliation(s)
- James D. Dormer
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
| | - Rongrong Guo
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
| | - Ming Shen
- Department of Pediatrics, Emory University, Atlanta, GA
| | - Rong Jiang
- Department of Pediatrics, Emory University, Atlanta, GA
| | | | - Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA
- Winship Cancer Institute of Emory University, Atlanta, GA
| |
Collapse
|
339
|
Cao C, Liu F, Tan H, Song D, Shu W, Li W, Zhou Y, Bo X, Xie Z. Deep Learning and Its Applications in Biomedicine. GENOMICS, PROTEOMICS & BIOINFORMATICS 2018; 16:17-32. [PMID: 29522900 PMCID: PMC6000200 DOI: 10.1016/j.gpb.2017.07.003] [Citation(s) in RCA: 259] [Impact Index Per Article: 37.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/18/2017] [Revised: 06/18/2017] [Accepted: 07/05/2017] [Indexed: 12/19/2022]
Abstract
Advances in biological and medical technologies have been providing us with explosive volumes of biological and physiological data, such as medical images, electroencephalography, and genomic and protein sequences. Learning from these data facilitates the understanding of human health and disease. Developed from artificial neural networks, deep learning-based algorithms show great promise in extracting features and learning patterns from complex data. The aim of this paper is to provide an overview of deep learning techniques and some of the state-of-the-art applications in the biomedical field. We first introduce the development of artificial neural networks and deep learning. We then describe two main components of deep learning, i.e., deep learning architectures and model optimization. Subsequently, we demonstrate examples of deep learning applications, including medical image classification, genomic sequence analysis, as well as protein structure classification and prediction. Finally, we offer our perspectives on future directions in the field of deep learning.
Collapse
Affiliation(s)
- Chensi Cao
- CapitalBio Corporation, Beijing 102206, China
| | - Feng Liu
- Department of Biotechnology, Beijing Institute of Radiation Medicine, Beijing 100850, China
| | - Hai Tan
- State Key Lab of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 500040, China
| | - Deshou Song
- State Key Lab of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 500040, China
| | - Wenjie Shu
- Department of Biotechnology, Beijing Institute of Radiation Medicine, Beijing 100850, China
| | - Weizhong Li
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 500040, China
| | - Yiming Zhou
- CapitalBio Corporation, Beijing 102206, China; Department of Biomedical Engineering, Medical Systems Biology Research Center, Tsinghua University School of Medicine, Beijing 100084, China.
| | - Xiaochen Bo
- Department of Biotechnology, Beijing Institute of Radiation Medicine, Beijing 100850, China.
| | - Zhi Xie
- State Key Lab of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 500040, China.
| |
Collapse
|
340
|
Zhang L, Lu L, Summers RM, Kebebew E, Yao J. Convolutional Invasion and Expansion Networks for Tumor Growth Prediction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:638-648. [PMID: 29408791 PMCID: PMC5812268 DOI: 10.1109/tmi.2017.2774044] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
Tumor growth is associated with cell invasion and mass-effect, which are traditionally formulated by mathematical models, namely reaction-diffusion equations and biomechanics. Such models can be personalized based on clinical measurements to build predictive models of tumor growth. In this paper, we investigate the possibility of using deep convolutional neural networks to directly represent and learn the cell invasion and mass-effect, and to predict the subsequent involvement regions of a tumor. The invasion network learns cell invasion from information related to metabolic rate, cell density, and tumor boundary derived from multimodal imaging data. The expansion network models the mass-effect from the growing motion of the tumor mass. We also study different architectures that fuse the invasion and expansion networks, in order to exploit the inherent correlations between them. Our network can easily be trained on population data and personalized to a target patient, unlike most previous mathematical modeling methods that fail to incorporate population data. Quantitative experiments on a pancreatic tumor data set show that the proposed method substantially outperforms a state-of-the-art mathematical model-based approach in both accuracy and efficiency, and that the information captured by each of the two subnetworks is complementary.
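Of the fusion architectures the paper studies, the simplest to sketch is a late fusion of the two sub-networks' outputs. A minimal illustration; the blending weight is an assumption:

```python
import numpy as np

def late_fusion(p_invasion, p_expansion, w=0.5):
    """Blend the invasion and expansion networks' per-voxel tumor-involvement
    probability maps; `w` is an illustrative weight, not the paper's."""
    return w * p_invasion + (1.0 - w) * p_expansion

p_inv = np.random.rand(16, 16, 16)     # stand-in invasion-network output
p_exp = np.random.rand(16, 16, 16)     # stand-in expansion-network output
p_growth = late_fusion(p_inv, p_exp)   # thresholded later to predict regions
```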
Collapse
|
341
|
Makropoulos A, Robinson EC, Schuh A, Wright R, Fitzgibbon S, Bozek J, Counsell SJ, Steinweg J, Vecchiato K, Passerat-Palmbach J, Lenz G, Mortari F, Tenev T, Duff EP, Bastiani M, Cordero-Grande L, Hughes E, Tusor N, Tournier JD, Hutter J, Price AN, Teixeira RPAG, Murgasova M, Victor S, Kelly C, Rutherford MA, Smith SM, Edwards AD, Hajnal JV, Jenkinson M, Rueckert D. The developing human connectome project: A minimal processing pipeline for neonatal cortical surface reconstruction. Neuroimage 2018; 173:88-112. [PMID: 29409960 DOI: 10.1016/j.neuroimage.2018.01.054] [Citation(s) in RCA: 274] [Impact Index Per Article: 39.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2017] [Revised: 01/19/2018] [Accepted: 01/21/2018] [Indexed: 12/11/2022] Open
Abstract
The Developing Human Connectome Project (dHCP) seeks to create the first 4-dimensional connectome of early life. Understanding this connectome in detail may provide insights into normal as well as abnormal patterns of brain development. Following established best practices adopted by the WU-MINN Human Connectome Project (HCP), and pioneered by FreeSurfer, the project utilises cortical surface-based processing pipelines. In this paper, we propose a fully automated processing pipeline for the structural Magnetic Resonance Imaging (MRI) of the developing neonatal brain. This proposed pipeline consists of a refined framework for cortical and sub-cortical volume segmentation, cortical surface extraction, and cortical surface inflation, which has been specifically designed to address considerable differences between adult and neonatal brains, as imaged using MRI. Using the proposed pipeline, we demonstrate that images collected from 465 subjects ranging from 28 to 45 weeks post-menstrual age (PMA) can be processed fully automatically, generating cortical surface models that are topologically correct and correspond well with manual evaluations of tissue boundaries in 85% of cases. Results improve on state-of-the-art neonatal tissue segmentation models, and significant errors were found in only 2% of cases, corresponding to subjects with high motion. Downstream, these surfaces will enhance comparisons of functional and diffusion MRI datasets, supporting the modelling of emerging patterns of brain connectivity.
Collapse
Affiliation(s)
- Antonios Makropoulos
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Emma C Robinson
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom; Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom.
| | - Andreas Schuh
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Robert Wright
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Sean Fitzgibbon
- Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
| | - Jelena Bozek
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
| | - Serena J Counsell
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Johannes Steinweg
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Katy Vecchiato
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Jonathan Passerat-Palmbach
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Gregor Lenz
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Filippo Mortari
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Tencho Tenev
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Eugene P Duff
- Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
| | - Matteo Bastiani
- Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
| | - Lucilio Cordero-Grande
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Emer Hughes
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Nora Tusor
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Jacques-Donald Tournier
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Jana Hutter
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Anthony N Price
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Rui Pedro A G Teixeira
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Maria Murgasova
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Suresh Victor
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Christopher Kelly
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Mary A Rutherford
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Stephen M Smith
- Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
| | - A David Edwards
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Joseph V Hajnal
- Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Mark Jenkinson
- Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
| | - Daniel Rueckert
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| |
Collapse
|
342
|
|
343
|
Effects of early nutrition and growth on brain volumes, white matter microstructure, and neurodevelopmental outcome in preterm newborns. Pediatr Res 2018; 83:102-110. [PMID: 28915232 DOI: 10.1038/pr.2017.227] [Citation(s) in RCA: 123] [Impact Index Per Article: 17.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/02/2017] [Accepted: 09/08/2017] [Indexed: 01/31/2023]
Abstract
BACKGROUND This study aimed to investigate the effect of nutrition and growth during the first 4 weeks after birth on cerebral volumes and white matter maturation at term equivalent age (TEA) and on neurodevelopmental outcome at 2 years' corrected age (CA), in preterm infants. METHODS One hundred thirty-one infants born at a gestational age (GA) <31 weeks with magnetic resonance imaging (MRI) at TEA were studied. Cortical gray matter (CGM) volumes, basal ganglia and thalami (BGT) volumes, cerebellar volumes, and total brain volume (TBV) were computed. Fractional anisotropy (FA) in the posterior limb of the internal capsule (PLIC) was obtained. Cognitive and motor scores were assessed at 2 years' CA. RESULTS Cumulative fat and enteral intakes were positively related to larger cerebellar and BGT volumes. Weight gain was associated with larger cerebellar, BGT, and CGM volumes. Cumulative fat, caloric, and enteral intakes were positively associated with FA in the PLIC. Cumulative protein intake was positively associated with higher cognitive and motor scores (all P<0.05). CONCLUSION Our study demonstrated a positive association between nutrition, weight gain, and brain volumes. Moreover, we found a positive relationship between nutrition, white matter maturation at TEA, and neurodevelopment in infancy. These findings emphasize the importance of growth and nutrition with a balanced protein, fat, and caloric content for brain development.
Collapse
|
344
|
Zhao X, Wu Y, Song G, Li Z, Zhang Y, Fan Y. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Med Image Anal 2018; 43:98-111. [PMID: 29040911 PMCID: PMC6029627 DOI: 10.1016/j.media.2017.10.002] [Citation(s) in RCA: 292] [Impact Index Per Article: 41.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2017] [Revised: 07/09/2017] [Accepted: 10/04/2017] [Indexed: 02/07/2023]
Abstract
Accurate and reliable brain tumor segmentation is a critical component in cancer diagnosis, treatment planning, and treatment outcome evaluation. Building upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in the following steps: 1) training FCNNs using image patches; 2) training CRFs as Recurrent Neural Networks (CRF-RNN) using image slices with the parameters of the FCNNs fixed; and 3) fine-tuning the FCNNs and the CRF-RNN using image slices. In particular, we train three segmentation models using 2D image patches and slices obtained in the axial, coronal, and sagittal views, respectively, and combine them to segment brain tumors using a voting-based fusion strategy. Our method can segment brain images slice by slice, which is much faster than segmentation based on image patches. We have evaluated our method on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge (BRATS) 2013, BRATS 2015, and BRATS 2016. The experimental results demonstrate that our method can build a segmentation model with Flair, T1c, and T2 scans and achieve performance competitive with models built with Flair, T1, T1c, and T2 scans.
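The voting-based fusion of the axial, coronal, and sagittal models reduces to a per-voxel majority vote over three label maps. A minimal sketch:

```python
import numpy as np

def majority_vote(axial, coronal, sagittal, n_classes=5):
    """Per-voxel majority vote over three integer label volumes of equal
    shape; ties resolve to the lowest class index via argmax."""
    stacked = np.stack([axial, coronal, sagittal])                    # (3, D, H, W)
    counts = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)                                      # (D, H, W)

a = np.random.randint(0, 5, (4, 8, 8))   # stand-in axial-model labels
c = np.random.randint(0, 5, (4, 8, 8))   # stand-in coronal-model labels
s = np.random.randint(0, 5, (4, 8, 8))   # stand-in sagittal-model labels
fused = majority_vote(a, c, s)
```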
Collapse
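The voting-based fusion strategy mentioned in this abstract lends itself to a short sketch. The NumPy code below is a minimal, assumed implementation of per-voxel majority voting over the three view-wise label maps; the tie-breaking rule (falling back to the axial prediction) is an assumption, as the paper does not state one.

```python
import numpy as np

def fuse_by_majority_vote(axial, coronal, sagittal):
    """Fuse three integer label volumes of equal shape by per-voxel majority.

    Ties (all three views disagree) fall back to the axial prediction;
    this tie-breaking rule is an assumption, not taken from the paper.
    """
    stacked = np.stack([axial, coronal, sagittal])          # (3, D, H, W)
    n_labels = stacked.max() + 1
    # Count votes per label at every voxel, then take the argmax.
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    fused = votes.argmax(axis=0)
    # Where every view disagrees (max vote count == 1), use the axial label.
    return np.where(votes.max(axis=0) == 1, axial, fused)

# Toy usage with random 5-label volumes standing in for model outputs:
rng = np.random.default_rng(0)
vols = [rng.integers(0, 5, size=(8, 64, 64)) for _ in range(3)]
print(fuse_by_majority_vote(*vols).shape)  # (8, 64, 64)
```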
Affiliation(s)
- Xiaomei Zhao
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; University of Chinese Academy of Sciences, Beijing, China
| | - Yihong Wu
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China.
| | - Guidong Song
- Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
| | - Zhenye Li
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
| | - Yazhuo Zhang
- Beijing Neurosurgical Institute, Capital Medical University, Beijing, China; Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; Beijing Institute for Brain Disorders Brain Tumor Center, Beijing, China; China National Clinical Research Center for Neurological Diseases, Beijing, China
| | - Yong Fan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
| |
Collapse
|
345
|
Segmentation of the hippocampus by transferring algorithmic knowledge for large cohort processing. Med Image Anal 2018; 43:214-228. [DOI: 10.1016/j.media.2017.11.004] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2017] [Revised: 09/14/2017] [Accepted: 11/06/2017] [Indexed: 01/27/2023]
|
346
|
On the Fuzziness of Machine Learning, Neural Networks, and Artificial Intelligence in Radiation Oncology. Int J Radiat Oncol Biol Phys 2018; 100:1-4. [DOI: 10.1016/j.ijrobp.2017.06.011] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2017] [Accepted: 06/08/2017] [Indexed: 12/25/2022]
|
347
|
Cherukuri V, Ssenyonga P, Warf BC, Kulkarni AV, Monga V, Schiff SJ. Learning Based Segmentation of CT Brain Images: Application to Postoperative Hydrocephalic Scans. IEEE Trans Biomed Eng 2017; 65:1871-1884. [PMID: 29989926 DOI: 10.1109/tbme.2017.2783305] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE Hydrocephalus is a medical condition in which there is an abnormal accumulation of cerebrospinal fluid (CSF) in the brain. Segmentation of brain imagery into brain tissue and CSF [before and after surgery, i.e., preoperative (pre-op) versus postoperative (post-op)] plays a crucial role in evaluating surgical treatment. Segmentation of pre-op images is a relatively straightforward problem and has been well researched. However, segmenting post-op computed tomography (CT) scans is more challenging due to distorted anatomy and subdural hematoma collections pressing on the brain. Most intensity- and feature-based segmentation methods fail to separate subdurals from brain and CSF, as subdural geometry varies greatly across patients and subdural intensity varies with time. We address this problem with a learning approach that treats segmentation as supervised classification at the pixel level, i.e., a training set of CT scans with labeled pixel identities is employed. METHODS Our contributions include: 1) a dictionary learning framework that learns class (segment)-specific dictionaries that efficiently represent test samples from the same class while poorly representing corresponding samples from other classes; 2) quantification of the associated computation and memory footprint; and 3) a customized training and test procedure for segmenting post-op hydrocephalic CT images. RESULTS Experiments performed on infant CT brain images acquired from the CURE Children's Hospital of Uganda demonstrate the success of our method against state-of-the-art alternatives. We also show that the proposed algorithm is computationally less burdensome and degrades gracefully as the number of training samples decreases, enhancing its deployment potential.
Collapse
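To make the class-specific dictionary learning framework concrete, here is a minimal sketch using scikit-learn's MiniBatchDictionaryLearning in place of the authors' own implementation (an assumption): one dictionary is learned per class, and a test patch is assigned to the class whose dictionary yields the smallest sparse-reconstruction residual. Patch extraction and real CT data are omitted; the inputs are random stand-ins.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_class_dictionaries(patches_by_class, n_atoms=64):
    """Learn one sparse dictionary per class from flattened patches."""
    dicts = {}
    for label, patches in patches_by_class.items():
        dl = MiniBatchDictionaryLearning(
            n_components=n_atoms,             # assumed dictionary size
            transform_algorithm="omp",
            transform_n_nonzero_coefs=5,      # assumed sparsity level
            random_state=0,
        )
        dicts[label] = dl.fit(patches)        # patches: (n_samples, patch_dim)
    return dicts

def classify_patches(patches, dicts):
    """Assign each patch to the class with the smallest reconstruction residual."""
    labels = sorted(dicts)
    residuals = []
    for label in labels:
        dl = dicts[label]
        codes = dl.transform(patches)         # sparse codes under this class
        recon = codes @ dl.components_        # reconstruction from the dictionary
        residuals.append(np.linalg.norm(patches - recon, axis=1))
    return np.array(labels)[np.argmin(np.stack(residuals), axis=0)]

# Toy usage with random "patches" standing in for two tissue classes:
rng = np.random.default_rng(0)
train = {0: rng.normal(size=(200, 49)), 1: rng.normal(2.0, 1.0, size=(200, 49))}
dicts = train_class_dictionaries(train)
print(classify_patches(rng.normal(size=(10, 49)), dicts))
```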
|
348
|
Wolterink JM, Leiner T, Viergever MA, Isgum I. Generative Adversarial Networks for Noise Reduction in Low-Dose CT. IEEE Trans Med Imaging 2017; 36:2536-2545. [PMID: 28574346 DOI: 10.1109/tmi.2017.2708987] [Citation(s) in RCA: 457] [Impact Index Per Article: 57.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Noise is inherent to low-dose CT acquisition. We propose to train a convolutional neural network (CNN) jointly with an adversarial CNN to estimate routine-dose CT images from low-dose CT images and hence reduce noise. A generator CNN was trained to transform low-dose CT images into routine-dose CT images using voxelwise loss minimization. An adversarial discriminator CNN was simultaneously trained to distinguish the output of the generator from routine-dose CT images. The performance of this discriminator was used as an adversarial loss for the generator. Experiments were performed using CT images of an anthropomorphic phantom containing calcium inserts, as well as patient non-contrast-enhanced cardiac CT images. The phantom and patients were scanned at 20% and 100% of routine clinical dose. Three training strategies were compared: the first used only voxelwise loss, the second combined voxelwise loss and adversarial loss, and the third used only adversarial loss. The results showed that training with only voxelwise loss resulted in the highest peak signal-to-noise ratio with respect to reference routine-dose images. However, CNNs trained with adversarial loss captured the image statistics of routine-dose images better. Noise reduction improved quantification of low-density calcified inserts in phantom CT images and allowed coronary calcium scoring in low-dose patient CT images with high noise levels. Testing took less than 10 s per CT volume. CNN-based low-dose CT noise reduction in the image domain is feasible, and training with an adversarial network improves the CNN's ability to generate images with an appearance similar to that of reference routine-dose CT images.
Collapse
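The combined training strategy described above (voxelwise plus adversarial loss) can be sketched in a few lines of PyTorch. The toy networks, the stand-in batch, and the loss weight lambda_adv below are assumptions for illustration, not the authors' architecture or hyperparameters.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator; the paper's architectures are not reproduced here.
generator = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 1, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2), nn.ReLU(),
                              nn.Flatten(), nn.LazyLinear(1))
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lambda_adv = 0.01  # assumed trade-off between voxelwise and adversarial terms

low_dose = torch.randn(4, 1, 64, 64)                    # stand-in low-dose slices
routine = low_dose + 0.1 * torch.randn_like(low_dose)   # stand-in routine-dose targets

# Discriminator step: tell real routine-dose slices apart from generator output.
fake = generator(low_dose).detach()
d_loss = (bce(discriminator(routine), torch.ones(4, 1))
          + bce(discriminator(fake), torch.zeros(4, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: voxelwise L2 loss plus the adversarial loss from the critic.
fake = generator(low_dose)
g_loss = (nn.functional.mse_loss(fake, routine)
          + lambda_adv * bce(discriminator(fake), torch.ones(4, 1)))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```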
|
349
|
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42:60-88. [PMID: 28778026 DOI: 10.1016/j.media.2017.07.005] [Citation(s) in RCA: 4777] [Impact Index Per Article: 597.1] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 07/24/2017] [Accepted: 07/25/2017] [Indexed: 02/07/2023]
Affiliation(s)
- Geert Litjens
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands.
| | - Thijs Kooi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | | | | | - Francesco Ciompi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Mohsen Ghafoorian
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | | | - Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| |
Collapse
|
350
|
Zreik M, Lessmann N, van Hamersvelt RW, Wolterink JM, Voskuil M, Viergever MA, Leiner T, Išgum I. Deep learning analysis of the myocardium in coronary CT angiography for identification of patients with functionally significant coronary artery stenosis. Med Image Anal 2017; 44:72-85. [PMID: 29197253 DOI: 10.1016/j.media.2017.11.008] [Citation(s) in RCA: 116] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2017] [Revised: 11/16/2017] [Accepted: 11/20/2017] [Indexed: 12/11/2022]
Abstract
In patients with coronary artery stenoses of intermediate severity, the functional significance needs to be determined. Fractional flow reserve (FFR) measurement, performed during invasive coronary angiography (ICA), is most often used in clinical practice. To reduce the number of ICA procedures, we present a method for automatic identification of patients with functionally significant coronary artery stenoses, employing deep learning analysis of the left ventricle (LV) myocardium in rest coronary CT angiography (CCTA). The study includes consecutively acquired CCTA scans of 166 patients who underwent invasive FFR measurements. To identify patients with a functionally significant coronary artery stenosis, analysis is performed in several stages. First, the LV myocardium is segmented using a multiscale convolutional neural network (CNN). To characterize the segmented LV myocardium, it is subsequently encoded using an unsupervised convolutional autoencoder (CAE). As ischemic changes are expected to appear locally, the LV myocardium is divided into a number of spatially connected clusters, and statistics of the encodings are computed as features. Thereafter, patients are classified according to the presence of functionally significant stenosis using a support vector machine (SVM) classifier based on the extracted features. Quantitative evaluation of LV myocardium segmentation in 20 images resulted in an average Dice coefficient of 0.91 and an average mean absolute distance between the segmented and reference LV boundaries of 0.7 mm. Twenty CCTA images were used to train the LV myocardium encoder. Classification of patients was evaluated in the remaining 126 CCTA scans in 50 10-fold cross-validation experiments and resulted in an area under the receiver operating characteristic curve of 0.74 ± 0.02. At sensitivity levels of 0.60, 0.70, and 0.80, the corresponding specificity was 0.77, 0.71, and 0.59, respectively. The results demonstrate that automatic analysis of the LV myocardium in a single CCTA scan acquired at rest, without assessment of the anatomy of the coronary arteries, can be used to identify patients with functionally significant coronary artery stenosis. This might reduce the number of patients undergoing unnecessary invasive FFR measurements.
Collapse
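The final stage of this pipeline, classifying patients from encoding statistics with an SVM under cross-validation, can be sketched as follows. The feature matrix, labels, and dimensions below are simulated stand-ins, not the study's data; the CNN segmentation and convolutional autoencoder stages are omitted.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(126, 40))    # stand-in: per-patient encoding statistics
y = rng.integers(0, 2, size=126)  # stand-in: functionally significant stenosis?

# RBF-kernel SVM evaluated with 10-fold cross-validated ROC AUC, mirroring
# the paper's evaluation protocol; hyperparameters here are sklearn defaults.
clf = SVC(kernel="rbf", random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```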
Affiliation(s)
- Majd Zreik
- Image Sciences Institute, University Medical Center Utrecht and Utrecht University, Utrecht, The Netherlands.
| | - Nikolas Lessmann
- Image Sciences Institute, University Medical Center Utrecht and Utrecht University, Utrecht, The Netherlands.
| | - Robbert W van Hamersvelt
- Department of Radiology, University Medical Center Utrecht and Utrecht University, Utrecht, The Netherlands.
| | - Jelmer M Wolterink
- Image Sciences Institute, University Medical Center Utrecht and Utrecht University, Utrecht, The Netherlands.
| | - Michiel Voskuil
- Department of Cardiology, University Medical Center Utrecht and Utrecht University, Utrecht, The Netherlands.
| | - Max A Viergever
- Image Sciences Institute, University Medical Center Utrecht and Utrecht University, Utrecht, The Netherlands.
| | - Tim Leiner
- Department of Radiology, University Medical Center Utrecht and Utrecht University, Utrecht, The Netherlands.
| | - Ivana Išgum
- Image Sciences Institute, University Medical Center Utrecht and Utrecht University, Utrecht, The Netherlands.
| |
Collapse
|