201
Bi XA, Liu Y, Xie Y, Hu X, Jiang Q. Morbigenous brain region and gene detection with a genetically evolved random neural network cluster approach in late mild cognitive impairment. Bioinformatics 2020; 36:2561-2568. [PMID: 31971559] [PMCID: PMC7178433] [DOI: 10.1093/bioinformatics/btz967]
Abstract
MOTIVATION Multimodal data fusion analysis has become an important approach for brain disease detection, and a growing number of studies use neural network algorithms to solve a range of problems. However, most current neural network optimization strategies focus on internal nodes or the number of hidden layers, while ignoring the advantages of external optimization. Additionally, multimodal fusion analyses in brain science often face small sample sizes and high-dimensional data, owing to the difficulty of data collection and the specialized nature of brain science data, which may lower the generalization performance of a neural network. RESULTS We propose a genetically evolved random neural network cluster (GERNNC) model. Specifically, fusion characteristics are first constructed and taken as the input, and the best type of neural network is selected as the base classifier to form the initial random neural network cluster. Second, the cluster is adaptively and genetically evolved. Based on the GERNNC model, we further construct a multi-task framework for classifying patients with brain disease and extracting significant characteristics. In a study of genetic data and functional magnetic resonance imaging data from the Alzheimer's Disease Neuroimaging Initiative, the framework exhibits strong classification performance and morbigenous-factor detection ability. This work demonstrates how to effectively detect pathogenic components of brain disease from high-dimensional medical data with small samples. AVAILABILITY AND IMPLEMENTATION The Matlab code is available at https://github.com/lizi1234560/GERNNC.git.
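The abstract above describes an outer-loop, population-based optimization: rather than tuning one network internally, a cluster of classifiers is evolved by selection and mutation. A toy sketch of that idea (illustrative only; simple 1-D threshold classifiers stand in for the paper's neural networks, and all names and data below are hypothetical, not from the paper):

```python
import random

def accuracy(theta, data):
    """Fraction of (x, label) pairs that the threshold classifier x > theta gets right."""
    return sum((x > theta) == y for x, y in data) / len(data)

def evolve_cluster(data, pop=20, gens=10, seed=0):
    """Evolve a cluster of classifiers: keep the fitter half, add mutated copies."""
    rng = random.Random(seed)
    cluster = [rng.uniform(-1.0, 1.0) for _ in range(pop)]
    for _ in range(gens):
        cluster.sort(key=lambda t: accuracy(t, data), reverse=True)
        parents = cluster[: pop // 2]                          # selection
        children = [p + rng.gauss(0.0, 0.1) for p in parents]  # mutation
        cluster = parents + children
    return max(cluster, key=lambda t: accuracy(t, data))

# Toy data separable at x = 0.3 (a stand-in for fused multimodal features).
data = [(x / 10, x / 10 > 0.3) for x in range(-10, 11)]
best = evolve_cluster(data)
```

The same select-and-mutate loop applies unchanged if each individual is a trained network scored on a validation set instead of a scalar threshold.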
Affiliation(s)
- Xia-an Bi
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
- Yingchao Liu
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
- Yiming Xie
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
- Xi Hu
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
- Qinghua Jiang
- Center for Bioinformatics, School of Life Science and Technology, Harbin Institute of Technology, Harbin, China
202
Liu L, Chen S, Zhu X, Zhao XM, Wu FX, Wang J. Deep convolutional neural network for accurate segmentation and quantification of white matter hyperintensities. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.12.050]
203
Sun L, Ma W, Ding X, Huang Y, Liang D, Paisley J. A 3D Spatially Weighted Network for Segmentation of Brain Tissue From MRI. IEEE Transactions on Medical Imaging 2020; 39:898-909. [PMID: 31449009] [DOI: 10.1109/tmi.2019.2937271]
Abstract
The segmentation of brain tissue in MRI is valuable for extracting brain structure to aid diagnosis, treatment and tracking the progression of different neurologic diseases. Medical image data are volumetric and some neural network models for medical image segmentation have addressed this using a 3D convolutional architecture. However, this volumetric spatial information has not been fully exploited to enhance the representative ability of deep networks, and these networks have not fully addressed the practical issues facing the analysis of multimodal MRI data. In this paper, we propose a spatially-weighted 3D network (SW-3D-UNet) for brain tissue segmentation of single-modality MRI, and extend it using multimodality MRI data. We validate our model on the MRBrainS13 and MALC12 datasets. This unpublished model ranked first on the leaderboard of the MRBrainS13 Challenge.
204
Thyreau B, Taki Y. Learning a cortical parcellation of the brain robust to the MRI segmentation with convolutional neural networks. Med Image Anal 2020; 61:101639. [DOI: 10.1016/j.media.2020.101639]
205
Ding Y, Acosta R, Enguix V, Suffren S, Ortmann J, Luck D, Dolz J, Lodygensky GA. Using Deep Convolutional Neural Networks for Neonatal Brain Image Segmentation. Front Neurosci 2020; 14:207. [PMID: 32273836] [PMCID: PMC7114297] [DOI: 10.3389/fnins.2020.00207]
Abstract
INTRODUCTION Deep learning neural networks are especially potent at dealing with structured data, such as images and volumes. Both modified LiviaNET and HyperDense-Net performed well in a prior competition segmenting 6-month-old infant magnetic resonance images, but neonatal cerebral tissue type identification is challenging given its uniquely inverted tissue contrasts. The current study aims to evaluate the two architectures for segmenting neonatal brain tissue types at term equivalent age. METHODS Both networks were retrained on 24 pairs of neonatal T1 and T2 data from the Developing Human Connectome Project public data set and validated on another eight pairs against ground truth. We then reported the best-performing model from training and its performance by computing the Dice similarity coefficient (DSC) for each tissue type against eight test subjects. RESULTS During the testing phase, among the segmentation approaches tested, the dual-modality HyperDense-Net achieved the best (statistically significant) test mean DSC values, obtaining 0.94/0.95/0.92 for the tissue types, and took 80 h to train and 10 min to segment, including preprocessing. The single-modality LiviaNET was better at processing T2-weighted images than T1-weighted images across all tissue types, achieving mean DSC values of 0.90/0.90/0.88 for gray matter, white matter, and cerebrospinal fluid, respectively, while requiring 30 h to train and 8 min to segment each brain, including preprocessing. DISCUSSION Our evaluation demonstrates that both neural networks can segment neonatal brains, achieving previously reported performance. Both networks will be continuously retrained on an increasingly large repertoire of neonatal brain data and made available through the Canadian Neonatal Brain Platform to better serve the neonatal brain imaging research community.
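The Dice similarity coefficient (DSC) reported above is straightforward to compute. A minimal sketch (not the authors' code) for binary masks given as flat 0/1 sequences:

```python
def dice(pred, truth):
    """DSC = 2|P ∩ T| / (|P| + |T|) for binary masks of equal length."""
    assert len(pred) == len(truth)
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    size = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 1.0 if size == 0 else 2.0 * inter / size

# A prediction overlapping the truth on 1 voxel, with 2 + 1 foreground voxels:
example = dice([1, 1, 0, 0], [1, 0, 0, 0])  # 2*1 / (2+1)
```

In a segmentation study this is evaluated once per tissue class, with each class's mask binarized against the rest.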
Affiliation(s)
- Yang Ding
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Rolando Acosta
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Vicente Enguix
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Sabrina Suffren
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Janosch Ortmann
- Department of Management and Technology, Université du Québec à Montréal, Montreal, QC, Canada
- David Luck
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Jose Dolz
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de Technologie Supérieure, Montreal, QC, Canada
- Gregory A. Lodygensky
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de Technologie Supérieure, Montreal, QC, Canada
206
Koçak B, Durmaz EŞ, Ateş E, Kılıçkesmez Ö. Radiomics with artificial intelligence: a practical guide for beginners. Diagn Interv Radiol 2020; 25:485-495. [PMID: 31650960] [DOI: 10.5152/dir.2019.19321]
Abstract
Radiomics is a relatively new term in radiology, referring to the extraction of a large number of quantitative features from medical images. Artificial intelligence (AI) is, broadly, a set of advanced computational algorithms that learn the patterns in the data provided in order to make predictions on unseen data sets. Radiomics can be coupled with AI because AI handles massive amounts of data better than traditional statistical methods. Together, the primary purpose of these fields is to extract and analyze as much meaningful hidden quantitative data as possible for use in decision support. Both radiomics and AI have been attracting attention for their remarkable success in various radiological tasks, which many radiologists have met with anxiety over fear of replacement by intelligent machines. Considering ever-advancing computational power and the availability of large data sets, the partnership of humans and machines in future clinical practice seems inevitable. Therefore, regardless of their feelings, radiologists should be familiar with these concepts. Our goal in this paper is three-fold: first, to familiarize radiologists with radiomics and AI; second, to encourage radiologists to get involved in these ever-developing fields; and third, to provide a set of recommendations for good practice in the design and assessment of future works.
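To make "extraction of a large number of quantitative features" concrete, here is an illustrative sketch of a few common first-order radiomics features computed over a region of interest (ROI). The feature names follow common usage in the field; the function itself is a hypothetical example, not taken from the article:

```python
from collections import Counter
import math

def first_order_features(roi):
    """A few standard first-order intensity features over a flat list of ROI voxel values."""
    n = len(roi)
    mean = sum(roi) / n
    variance = sum((v - mean) ** 2 for v in roi) / n
    # Shannon entropy over the discrete intensity histogram, in bits.
    counts = Counter(roi)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {
        "mean": mean,
        "variance": variance,
        "energy": sum(v * v for v in roi),
        "entropy": entropy,
    }
```

In a real radiomics pipeline, hundreds of such features (first-order, shape, texture) are computed per ROI and then fed to the AI model as the input vector.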
Affiliation(s)
- Burak Koçak
- Department of Radiology, İstanbul Training and Research Hospital, İstanbul, Turkey
- Emine Şebnem Durmaz
- Department of Radiology, Büyükçekmece Mimar Sinan State Hospital, İstanbul, Turkey
- Ece Ateş
- Department of Radiology, İstanbul Training and Research Hospital, İstanbul, Turkey
- Özgür Kılıçkesmez
- Department of Radiology, İstanbul Training and Research Hospital, İstanbul, Turkey
207
Dijkshoorn ABC, Turk E, Hortensius LM, van der Aa NE, Hoebeek FE, Groenendaal F, Benders MJNL, Dudink J. Preterm infants with isolated cerebellar hemorrhage show bilateral cortical alterations at term equivalent age. Sci Rep 2020; 10:5283. [PMID: 32210267] [PMCID: PMC7093404] [DOI: 10.1038/s41598-020-62078-9]
Abstract
The cerebellum is connected to numerous regions of the contralateral side of the cerebrum. Motor and cognitive deficits following neonatal cerebellar hemorrhages (CbH) in extremely preterm neonates may be related to remote cortical alterations following disrupted cerebello-cerebral connectivity, as was previously shown in six CbH infants. In this retrospective case series study, we used MRI and advanced surface-based analyses to reconstruct gray matter (GM) changes in cortical thickness and cortical surface area in extremely preterm neonates (median age = 26; range: 24.9-26.7 gestational weeks) with large isolated unilateral CbH (N = 5 patients). Each CbH infant was matched with their own preterm infant cohort (range: 20-36 infants) based on sex and gestational age at birth. On a macro level, our data revealed that the contralateral cerebral hemisphere of CbH neonates did not show lower cortical thickness or cortical surface area than their ipsilateral cerebral hemisphere at term. None of the cases differed from their matched cohort groups in average cortical thickness or average cortical surface area in the ipsilateral or contralateral cerebral hemisphere. On a micro (i.e., vertex) level, we found high variability in significant local cortical GM alteration patterns across case-cohort groups: the cases showed greater thickness or volume in some regions, among which the caudal middle frontal gyrus, insula and parahippocampal gyrus, and reduced thickness or volume in other regions, among which the cuneus, precuneus and supratentorial gyrus. This study highlights that cerebellar injury during postnatal stages may have a widespread bilateral influence on the early maturation of cerebral cortical regions, implicating complex cerebello-cerebral interactions already present at term birth.
Affiliation(s)
- Aicha B C Dijkshoorn
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Elise Turk
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- UMC Utrecht Brain Center, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Lisa M Hortensius
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- UMC Utrecht Brain Center, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Niek E van der Aa
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- UMC Utrecht Brain Center, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Freek E Hoebeek
- UMC Utrecht Brain Center, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Department for Developmental Origins of Disease, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, the Netherlands
- Floris Groenendaal
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Manon J N L Benders
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- UMC Utrecht Brain Center, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- Jeroen Dudink
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- UMC Utrecht Brain Center, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
208
Biswas B, Ghosh SK, Ghosh A, Chakraborty C, Mitra P. Target Object Recognition Using Multiresolution SVD and Guided Filter with Convolutional Neural Network. Int J Pattern Recogn 2020. [DOI: 10.1142/s0218001420520084]
Abstract
Designing an efficient fusion scheme that combines multiple images into a single, highly informative fused image is still a challenging task in computer vision. This paper introduces a fast and effective image fusion scheme based on multi-resolution singular value decomposition (MR-SVD) with a guided filter (GF). The proposed scheme decomposes an image at two scales by MR-SVD into a lower approximation layer and a detail layer containing the lower and higher variations of pixel intensity. It generates lower and detail layers for left-focused (LF) and right-focused (RF) images by applying MR-SVD to each series of multi-focus images. The GF is used to create a refined, smoothly textured fusion weight map via a weighted-average approach on spatial features of the lower and detail layers of each image. The fused image of LF and RF is obtained by the inverse MR-SVD. Finally, a deep convolutional autoencoder (CAE) is applied to segment the fused results using a trained-patches mechanism. Comparing the results with state-of-the-art fusion and segmentation methods, we show that the proposed scheme provides superior fusion and segmentation results both qualitatively and quantitatively.
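A heavily simplified sketch of the two-scale decompose-fuse-recombine idea described above. This is an assumed structure, not the authors' MR-SVD/GF implementation: a 3-tap mean filter on 1-D signals stands in for the MR-SVD approximation layer, and a max-absolute rule stands in for the guided-filter weight map:

```python
def smooth(sig):
    """Edge-padded 3-tap mean filter: a crude stand-in for the approximation layer."""
    pad = [sig[0]] + list(sig) + [sig[-1]]
    return [(pad[i] + pad[i + 1] + pad[i + 2]) / 3 for i in range(len(sig))]

def fuse(a, b):
    """Fuse two 1-D signals: average the base layers, keep the stronger detail."""
    base_a, base_b = smooth(a), smooth(b)
    det_a = [x - y for x, y in zip(a, base_a)]  # detail = signal - base
    det_b = [x - y for x, y in zip(b, base_b)]
    base = [(x + y) / 2 for x, y in zip(base_a, base_b)]
    det = [x if abs(x) >= abs(y) else y for x, y in zip(det_a, det_b)]
    return [bb + dd for bb, dd in zip(base, det)]
```

The real scheme replaces the mean filter with MR-SVD layers, the max-absolute rule with a guided-filter-refined weight map, and recombines via the inverse MR-SVD, but the base/detail split-and-merge skeleton is the same.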
Affiliation(s)
- Biswajit Biswas
- Computer Science & Engineering, University of Calcutta, Kolkata, India
- Swarup Kr Ghosh
- Computer Science & Engineering, Sister Nivedita University, Kolkata, India
- Anupam Ghosh
- Computer Science & Engineering, Netaji Subhash Engineering College, Kolkata, India
- Chandan Chakraborty
- School of Medical Science & Technology, Indian Institute of Technology, Kharagpur, India
- Pabitra Mitra
- Department of Computer Science & Engineering, Indian Institute of Technology, Kharagpur, India
209
Tan C, Guan Y, Feng Z, Ni H, Zhang Z, Wang Z, Li X, Yuan J, Gong H, Luo Q, Li A. DeepBrainSeg: Automated Brain Region Segmentation for Micro-Optical Images With a Convolutional Neural Network. Front Neurosci 2020; 14:179. [PMID: 32265621] [PMCID: PMC7099146] [DOI: 10.3389/fnins.2020.00179]
Abstract
The segmentation of brain region contours in three dimensions is critical for the analysis of different brain structures, and advanced approaches are emerging continuously within the field of neurosciences. With the development of high-resolution micro-optical imaging, whole-brain images can be acquired at the cellular level. However, brain regions in microscopic images are aggregates of discrete neurons with blurry boundaries, and the complex and variable features of brain regions make it challenging to segment them accurately. Manual segmentation is a reliable method but is unrealistic to apply at large scale. Here, we propose an automated brain region segmentation framework, DeepBrainSeg, which is inspired by the principle of manual segmentation. DeepBrainSeg incorporates three feature levels to learn local and contextual features in different receptive fields through a dual-pathway convolutional neural network (CNN), and provides global localization features through image registration and domain-condition constraints. Validated on biological datasets, DeepBrainSeg can not only effectively segment brain-wide regions with high accuracy (Dice ratio > 0.9) but can also be applied to various types of datasets, including noisy ones. It has the potential to automatically locate information in the brain space at large scale.
Affiliation(s)
- Chaozhen Tan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Yue Guan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Zhao Feng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Hong Ni
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Zoutao Zhang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Zhiguang Wang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Xiangning Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China
- Jing Yuan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China
- Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China
- Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China
210
Polap D, Wlodarczyk-Sielicka M. Classification of Non-Conventional Ships Using a Neural Bag-Of-Words Mechanism. Sensors 2020; 20:1608. [PMID: 32183184] [PMCID: PMC7146570] [DOI: 10.3390/s20061608]
Abstract
Existing methods for monitoring vessels are mainly based on radar and automatic identification systems, supplemented by video cameras: such systems capture images and analyze selected video frames in software. Methods for classifying non-conventional vessels are not widely known and, being based on image samples, can be considered difficult. This paper shows an alternative way to approach image classification problems: classifying smaller parts of the input rather than the whole. The described solution splits the image of a ship into smaller parts and classifies them into feature vectors using a convolutional neural network (CNN). This is a representation of a bag-of-words mechanism, where the created feature vectors act as words, and by using them a solution can assign images to a specific class. The authors performed two experiments. In the first, two classes were analyzed, and the results show great potential for application. In the second, much larger sets of images belonging to five vessel types were used; the proposed method improved the results of classic approaches by 5%. The paper thus shows an alternative approach for the classification of non-conventional vessels that increases accuracy.
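A minimal sketch of the bag-of-words mechanism described above, with hypothetical stand-ins: a hash-based codebook replaces the CNN feature extractor, so each patch maps to a discrete "visual word" and an image is represented by its word histogram:

```python
from collections import Counter

def patches(img, k):
    """Yield non-overlapping k x k patches (as nested tuples) from a list-of-rows image."""
    for r in range(0, len(img) - k + 1, k):
        for c in range(0, len(img[0]) - k + 1, k):
            yield tuple(tuple(img[r + i][c + j] for j in range(k)) for i in range(k))

def bow_histogram(img, k=2, vocab=8):
    """Map each patch to a discrete 'visual word' and count word occurrences."""
    return Counter(hash(p) % vocab for p in patches(img, k))
```

In the paper's setting, the CNN's per-patch feature vectors play the role of the words, and a classifier is trained on the resulting histograms rather than on whole images.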
Affiliation(s)
- Dawid Polap
- Marine Technology Ltd., 81-521 Gdynia, Poland
- Marta Wlodarczyk-Sielicka
- Department of Navigation, Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin, Poland
- Correspondence: ; Tel.: +48-513-846-391
211
A Survey on Computer-Aided Diagnosis of Brain Disorders through MRI Based on Machine Learning and Data Mining Methodologies with an Emphasis on Alzheimer Disease Diagnosis and the Contribution of the Multimodal Fusion. Applied Sciences 2020. [DOI: 10.3390/app10051894]
Abstract
Computer-aided diagnostic (CAD) systems use machine learning methods to provide a synergistic effect between the neuroradiologist and the computer, enabling an efficient and rapid diagnosis of the patient's condition. As part of the early diagnosis of Alzheimer's disease (AD), a major public health problem, a CAD system provides a neuropsychological assessment that helps mitigate its effects. The use of data fusion techniques by CAD systems has proven useful: they allow information about the brain and its tissues from MRI to be merged with that from other types of modalities. This multimodal fusion refines the quality of brain images by reducing redundancy and randomness, which improves the clinical reliability of the diagnosis compared to the use of a single modality. The purpose of this article is, first, to set out the main steps of a CAD system for brain magnetic resonance imaging (MRI) and to bring together research related to the diagnosis of brain disorders, with an emphasis on AD; the methods most used in the classification and brain-region segmentation stages are described, highlighting their advantages and disadvantages. Second, on the basis of the problem raised, we propose a solution within the framework of multimodal fusion. In this context, based on quantitative measurement parameters, a performance study of multimodal CAD systems is proposed by comparing their effectiveness with systems exploiting a single MRI modality. Advances in information fusion techniques in medical imaging are accentuated, highlighting their advantages and disadvantages. The contribution of multimodal fusion and the interest of hybrid models are finally addressed, along with the main scientific assertions made in the field of brain disease diagnosis.
212
Baid U, Talbar S, Rane S, Gupta S, Thakur MH, Moiyadi A, Sable N, Akolkar M, Mahajan A. A Novel Approach for Fully Automatic Intra-Tumor Segmentation With 3D U-Net Architecture for Gliomas. Front Comput Neurosci 2020; 14:10. [PMID: 32132913] [PMCID: PMC7041417] [DOI: 10.3389/fncom.2020.00010]
Abstract
Purpose: Gliomas are the most common primary brain malignancies, with varying degrees of aggressiveness and prognosis. Understanding tumor biology and intra-tumor heterogeneity is necessary for planning personalized therapy and predicting response to therapy. Accurate tumoral and intra-tumoral segmentation on MRI is the first step toward understanding tumor biology through computational methods. The purpose of this study was to design a segmentation algorithm and evaluate its performance on pre-treatment brain MRIs obtained from patients with gliomas. Materials and Methods: We have designed a novel 3D U-Net architecture that segments various radiologically identifiable sub-regions such as edema, enhancing tumor, and necrosis. A weighted patch extraction scheme from the tumor border regions is proposed to address the class imbalance between tumorous and non-tumorous patches. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. The Deep Convolutional Neural Network (DCNN) based architecture is trained on 285 patients, validated on 66 patients, and tested on 191 patients with glioma from the Brain Tumor Segmentation (BraTS) 2018 challenge dataset. Three-dimensional patches are extracted from the multi-channel BraTS training dataset to train the 3D U-Net architecture. The efficacy of the proposed approach is also tested on an independent dataset of 40 patients with high-grade glioma from our tertiary cancer center. Segmentation results are assessed in terms of Dice score, sensitivity, specificity, and Hausdorff 95 distance. Result: Our proposed architecture achieved Dice scores of 0.88, 0.83, and 0.75 for the whole tumor, tumor core, and enhancing tumor, respectively, on the BraTS validation dataset, and 0.85, 0.77, and 0.67 on the test dataset. The results were similar on the independent patients' dataset from our hospital, achieving Dice scores of 0.92, 0.90, and 0.81 for the whole tumor, tumor core, and enhancing tumor, respectively. Conclusion: The results of this study show the potential of a patch-based 3D U-Net for accurate intra-tumor segmentation. From experiments, it is observed that the weighted patch-based segmentation approach gives performance comparable to the pixel-based approach when there is a thin boundary between tumor subparts.
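A hedged sketch of the border-weighted patch extraction idea described above (illustrative, not the authors' implementation): sample training-patch centres preferentially from tumor-border voxels so that tumorous and non-tumorous patches are less imbalanced. A 2-D toy mask replaces the 3-D MRI volume, and the sampling fraction is an assumed parameter:

```python
import random

def border_voxels(mask):
    """Positions whose 4-neighbourhood crosses a label boundary (2-D toy mask)."""
    out = []
    for r, row in enumerate(mask):
        for c, v in enumerate(row):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < len(mask) and 0 <= cc < len(row) and mask[rr][cc] != v:
                    out.append((r, c))
                    break
    return out

def sample_patch_centers(mask, n, border_frac=0.7, seed=0):
    """Draw n patch centres, border_frac of them from boundary voxels."""
    rng = random.Random(seed)
    border = border_voxels(mask)
    anywhere = [(r, c) for r in range(len(mask)) for c in range(len(mask[0]))]
    k = int(n * border_frac)
    return ([rng.choice(border) for _ in range(k)]
            + [rng.choice(anywhere) for _ in range(n - k)])
```

Patches cut around these centres then feed the 3D U-Net; the boundary-heavy sampling is what counteracts the dominance of background voxels during training.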
Affiliation(s)
- Ujjwal Baid
- Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Sanjay Talbar
- Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Swapnil Rane
- Department of Pathology, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
- Sudeep Gupta
- Department of Medical Oncology, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
- Meenakshi H. Thakur
- Department of Radiodiagnosis and Imaging, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
- Aliasgar Moiyadi
- Department of Neurosurgery Services, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
- Nilesh Sable
- Department of Radiodiagnosis and Imaging, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
- Mayuresh Akolkar
- Department of Radiodiagnosis and Imaging, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
- Abhishek Mahajan
- Department of Radiodiagnosis and Imaging, Tata Memorial Centre, Tata Memorial Hospital, Mumbai, India
213
|
Thanapandiyaraj R, Rajendran T, Mohammedgani PB. Performance Analysis of Various Nanocontrast Agents and CAD Systems for Cancer Diagnosis. Curr Med Imaging 2020; 15:831-852. [PMID: 32008531 DOI: 10.2174/1573405614666180924124736] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2018] [Revised: 07/30/2018] [Accepted: 08/19/2018] [Indexed: 11/22/2022]
Abstract
BACKGROUND Cancer is a disease involving abnormal cell growth with the potential to spread to other parts of the body. Among conventional anatomical imaging techniques for cancer diagnosis, Magnetic Resonance Imaging (MRI) provides the best spatial resolution and is noninvasive. Current efforts are directed at enhancing the capabilities of MRI in oncology by adding contrast agents. DISCUSSION Recently, the superior properties of nanomaterials (extremely small size, good biocompatibility, and ease of chemical modification) have allowed their application as contrast agents for early and specific cancer detection through MRI. Precise detection of the cancer region in any imaging modality leads to more effective treatment for cancer patients. Better localization of the radiation dose can be attained from MRI by using suitable image processing algorithms. As many methods have been proposed for automatic cancer detection, effort is also put into providing an effective survey of Computer Aided Diagnosis (CAD) systems for detecting different types of cancer with increased efficiency, based on recent research works. Although there are many surveys on MRI contrast agents, they each focus on a particular type of cancer. This study presents in depth the use of nanocontrast agents in MRI for the diagnosis of different types of cancer. CONCLUSION The main aim of this paper is to critically review the available compounds used as nanocontrast agents in the MRI modality for different types of cancer. It also reviews different methods for cancer cell detection and classification. A comparative analysis is performed to analyze the effect of different CAD systems.
Affiliation(s)
- Ruba Thanapandiyaraj
- Department of Electronics and Communication Engineering, Sethu Institute of Technology, Pullur, Tamilnadu-626115, India
- Tamilselvi Rajendran
- Department of Electronics and Communication Engineering, Sethu Institute of Technology, Pullur, Tamilnadu-626115, India
- Parisa Beham Mohammedgani
- Department of Electronics and Communication Engineering, Sethu Institute of Technology, Pullur, Tamilnadu-626115, India

214
Li J, Udupa JK, Tong Y, Wang L, Torigian DA. LinSEM: Linearizing segmentation evaluation metrics for medical images. Med Image Anal 2020; 60:101601. [PMID: 31811980 PMCID: PMC6980787 DOI: 10.1016/j.media.2019.101601] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2019] [Revised: 08/06/2019] [Accepted: 11/07/2019] [Indexed: 10/25/2022]
Abstract
Numerous algorithms are available for segmenting medical images. Empirical discrepancy metrics are commonly used in measuring the similarity or difference between segmentations by algorithms and "true" segmentations. However, one issue with the commonly used metrics is that the same metric value often represents different levels of "clinical acceptability" for different objects depending on their size, shape, and complexity of form. An ideal segmentation evaluation metric should be able to reflect degrees of acceptability directly from metric values and be able to show the same acceptability meaning by the same metric value for objects of different shape, size, and form. Intuitively, metrics which have a linear relationship with degree of acceptability will satisfy these conditions of the ideal metric. This issue has not been addressed in the medical image segmentation literature. In this paper, we propose a method called LinSEM for linearizing commonly used segmentation evaluation metrics based on corresponding degrees of acceptability evaluated by an expert in a reader study. LinSEM consists of two main parts: (a) estimating the relationship between metric values and degrees of acceptability separately for each considered metric and object, and (b) linearizing any given metric value corresponding to a given segmentation of an object based on the estimated relationship. Since algorithmic segmentations do not usually cover the full range of variability of acceptability, we create a set (SS) of simulated segmentations for each object that guarantee such coverage by using image transformations applied to a set (ST) of true segmentations of the object. We then conduct a reader study wherein the reader assigns an acceptability score (AS) for each sample in SS, expressing the acceptability of the sample on a 1 to 5 scale. Then the metric-AS relationship is constructed for the object by using an estimation method. 
With the idea that the ideal metric should be linear with respect to acceptability, we can then linearize the metric value of any segmentation sample of the object from a set (SA) of actual segmentations to its linearized value by using the constructed metric-acceptability relationship curve. Experiments are conducted involving three metrics - Dice coefficient (DC), Jaccard index (JI), and Hausdorff Distance (HD) - on five objects: skin outer boundary of the head and neck (cervico-thoracic) body region superior to the shoulders, right parotid gland, mandible, cervical esophagus, and heart. Actual segmentations (SA) of these objects are generated via our Automatic Anatomy Recognition (AAR) method. Our results indicate that, generally, JI has a more linear relationship with acceptability before linearization than other metrics. LinSEM achieves significantly improved uniformity of meaning post-linearization across all tested objects and metrics, except in a few cases where the departure from linearity was insignificant. This improvement is generally the largest for DC and HD reaching 8-25% for many tested cases. Although some objects (such as right parotid gland and esophagus for DC and JI) are close in their meaning between themselves before linearization, they are distant in this meaning from other objects but are brought close to other objects after linearization. This suggests the importance of performing linearization considering all objects in a body region and body-wide.
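LinSEM's two steps — estimating the metric-acceptability relationship for an object, then linearizing a given metric value against it — can be sketched as follows. This is an illustrative simplification, not the authors' estimation method: the sample metric values and acceptability scores are invented, and piecewise-linear interpolation stands in for whatever curve-fitting the reader-study data supports.

```python
import numpy as np

# Hypothetical reader-study data for one object: metric values (e.g. Dice)
# of simulated segmentations and the expert's acceptability scores (1-5),
# sorted by metric value.
metric_vals = np.array([0.40, 0.55, 0.70, 0.82, 0.90, 0.97])
accept = np.array([1.0, 1.5, 2.2, 3.4, 4.3, 5.0])

def linearize(m, metric_vals, accept):
    """Map a metric value to its linearized equivalent.

    The estimated metric->acceptability curve is interpolated at m, and the
    resulting acceptability is rescaled back onto the metric's own range, so
    that equal increments in the linearized metric mean equal gains in
    acceptability.
    """
    a = np.interp(m, metric_vals, accept)          # estimated acceptability
    a01 = (a - accept.min()) / (accept.max() - accept.min())
    return metric_vals.min() + a01 * (metric_vals.max() - metric_vals.min())

linearize(0.70, metric_vals, accept)  # value reflecting acceptability 2.2
```

By construction the endpoints are fixed (the lowest and highest observed metric values map to themselves), and only the interior of the scale is warped.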
Affiliation(s)
- Jieyu Li
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai 200240, China; Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA 19104, United States
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA 19104, United States
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA 19104, United States
- Lisheng Wang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai 200240, China
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA 19104, United States

215
Wu J, Zhang Y, Tang X. Simultaneous Tissue Classification and Lateral Ventricle Segmentation via a 2D U-net Driven by a 3D Fully Convolutional Neural Network. Annu Int Conf IEEE Eng Med Biol Soc 2019:5928-5931. [PMID: 31947198 DOI: 10.1109/embc.2019.8856668] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In this paper, we proposed and validated a novel and fully automatic pipeline for simultaneous tissue classification and lateral ventricle segmentation via a 2D U-net. The 2D U-net was driven by a 3D fully convolutional neural network (FCN). Multiple T1-weighted atlases which had been pre-segmented into six whole-brain regions including the gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), lateral ventricles (LVs), skull, and the background of the entire image were used. In the proposed pipeline, probability maps of the six whole-brain regions of interest (ROIs) were obtained after a pre-segmentation through a trained 3D patch-based FCN. To further capture the global context of the entire image, the to-be-segmented image and the corresponding six probability maps were input to a trained 2D U-net in a 2D slice fashion to obtain the final segmentation map. Experiments were performed on a dataset consisting of 18 T1-weighted images. Compared to the 3D patch-based FCN on segmenting five ROIs (GM, WM, CSF, LVs, skull) and another two classical methods (SPM and FSL) on segmenting GM and WM, the proposed pipeline showed a superior segmentation performance. The proposed segmentation architecture can also be extended to other medical image segmentation tasks.
216
Cao H, Liu H, Song E, Hung CC, Ma G, Xu X, Jin R, Lu J. Dual-branch residual network for lung nodule segmentation. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2019.105934] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
217
Nalepa J, Ribalta Lorenzo P, Marcinkiewicz M, Bobek-Billewicz B, Wawrzyniak P, Walczak M, Kawulok M, Dudzik W, Kotowski K, Burda I, Machura B, Mrukwa G, Ulrych P, Hayball MP. Fully-automated deep learning-powered system for DCE-MRI analysis of brain tumors. Artif Intell Med 2020; 102:101769. [DOI: 10.1016/j.artmed.2019.101769] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2018] [Revised: 10/28/2019] [Accepted: 11/20/2019] [Indexed: 02/01/2023]
218
Classification of COVID-19 patients from chest CT images using multi-objective differential evolution-based convolutional neural networks. Eur J Clin Microbiol Infect Dis 2020; 39. [PMID: 32337662 PMCID: PMC7183816 DOI: 10.1007/s10096-020-03901-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Early classification of 2019 novel coronavirus disease (COVID-19) is essential for disease cure and control. Compared with reverse-transcription polymerase chain reaction (RT-PCR), chest computed tomography (CT) imaging may be a significantly more trustworthy, useful, and rapid technique for classifying and evaluating COVID-19, especially in epidemic regions. Almost all hospitals have CT imaging machines; therefore, chest CT images can be utilized for early classification of COVID-19 patients. However, chest CT-based COVID-19 classification requires a radiology expert and considerable time, which is precious when COVID-19 infection is growing at a rapid rate. An automated analysis of chest CT images is therefore desirable to save medical professionals' valuable time. In this paper, a convolutional neural network (CNN) is used to classify COVID-19 patients as infected (+ve) or not (-ve). Additionally, the initial parameters of the CNN are tuned using multi-objective differential evolution (MODE). Extensive experiments are performed on chest CT images using the proposed and competitive machine learning techniques. The analysis shows that the proposed model can classify chest CT images with good accuracy.
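The tuning idea above — searching over a CNN's initial parameters with differential evolution — can be illustrated with a plain single-objective DE/rand/1/bin loop (the paper uses a multi-objective variant, MODE). Everything here is a hedged sketch: the objective is a toy quadratic standing in for a validation-error evaluation of a CNN, and the population size and control parameters are conventional defaults, not the paper's settings.

```python
import random

def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=100, seed=0):
    """Minimize `objective` over the box `bounds` with classic DE/rand/1/bin.

    In the paper's setting `objective` would score a CNN trained from the
    candidate parameter vector; here it can be any function of a list.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct donors, build a mutant, then crossover.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == jrand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clip to the search box
            tc = objective(trial)
            if tc <= cost[i]:                      # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Toy stand-in objective: minimum at (0.01, 32), e.g. a "learning rate" and
# a "batch size" axis (names purely illustrative).
obj = lambda x: (x[0] - 0.01) ** 2 + ((x[1] - 32) / 100) ** 2
best_x, best_c = differential_evolution(obj, [(0.0001, 0.1), (8, 128)])
```

A multi-objective version would keep a Pareto archive instead of a single greedy replacement, but the mutation/crossover machinery is the same.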
219
Wang R, He Y, Yao C, Wang S, Xue Y, Zhang Z, Wang J, Liu X. Classification and Segmentation of Hyperspectral Data of Hepatocellular Carcinoma Samples Using 1-D Convolutional Neural Network. Cytometry A 2020; 97:31-38. [PMID: 31403260 DOI: 10.1002/cyto.a.23871] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2019] [Revised: 07/16/2019] [Accepted: 07/19/2019] [Indexed: 12/24/2022]
Abstract
Pathological diagnosis plays an important role in the diagnosis and treatment of hepatocellular carcinoma (HCC). The traditional method of pathological diagnosis of most cancers requires freezing, slicing, hematoxylin and eosin staining, and manual analysis, limiting the speed of the diagnostic process. In this study, we designed a one-dimensional convolutional neural network to classify the hyperspectral data of HCC sample slices acquired by our hyperspectral imaging system. A weighted loss function was employed to improve the performance of the model. The proposed method accelerates the identification of tumor tissues during diagnosis. Our deep learning model achieved good performance on our dataset, with sensitivity, specificity, and area under the receiver operating characteristic curve of 0.871, 0.888, and 0.950, respectively. It also outperformed the other machine learning methods assessed on our dataset. The proposed method is a potential tool for label-free, real-time pathological diagnosis. © 2019 International Society for Advancement of Cytometry.
Affiliation(s)
- Rendong Wang
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Yida He
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Cuiping Yao
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Sijia Wang
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Yuan Xue
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Zhenxi Zhang
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Jing Wang
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Xiaolong Liu
- The United Innovation of Mengchao Hepatobiliary Technology Key Laboratory of Fujian Province, Mengchao Hepatobiliary Hospital of Fujian Medical University, Fuzhou, 350025, People's Republic of China

220
Salem M, Valverde S, Cabezas M, Pareto D, Oliver A, Salvi J, Rovira À, Lladó X. A fully convolutional neural network for new T2-w lesion detection in multiple sclerosis. NEUROIMAGE-CLINICAL 2019; 25:102149. [PMID: 31918065 PMCID: PMC7036701 DOI: 10.1016/j.nicl.2019.102149] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 12/23/2019] [Accepted: 12/26/2019] [Indexed: 11/17/2022]
Abstract
Highlights: A deep learning model for new T2-w lesion detection in multiple sclerosis is presented. Combining a learning-based registration network with a segmentation network increases performance: the proposed model decreases false positives while increasing true positives, and performs better than other supervised and unsupervised state-of-the-art approaches.
Introduction: Longitudinal magnetic resonance imaging (MRI) has an important role in multiple sclerosis (MS) diagnosis and follow-up. Specifically, the presence of new T2-w lesions on brain MR scans is considered a predictive biomarker for the disease. In this study, we propose a fully convolutional neural network (FCNN) to detect new T2-w lesions in longitudinal brain MR images. Methods: One year apart, multichannel brain MR scans (T1-w, T2-w, PD-w, and FLAIR) were obtained for 60 patients, 36 of them with new T2-w lesions. Modalities from both temporal points were preprocessed and linearly coregistered. Afterwards, an FCNN, whose inputs were from the baseline and follow-up images, was trained to detect new MS lesions. The first part of the network consisted of U-Net blocks that learned the deformation fields (DFs) and nonlinearly registered the baseline image to the follow-up image for each input modality. The learned DFs together with the baseline and follow-up images were then fed to the second part, another U-Net that performed the final detection and segmentation of new T2-w lesions. The model was trained end-to-end, simultaneously learning both the DFs and the new T2-w lesions, using a combined loss function. We evaluated the performance of the model following a leave-one-out cross-validation scheme. Results: In terms of the detection of new lesions, we obtained a mean Dice similarity coefficient of 0.83 with a true positive rate of 83.09% and a false positive detection rate of 9.36%. In terms of segmentation, we obtained a mean Dice similarity coefficient of 0.55. The performance of our model was significantly better compared to the state-of-the-art methods (p < 0.05). Conclusions: Our proposal shows the benefits of combining a learning-based registration network with a segmentation network. Compared to other methods, the proposed model decreases the number of false positives. 
During testing, the proposed model operates faster than the other two state-of-the-art methods based on the DF obtained by Demons.
Affiliation(s)
- Mostafa Salem
- Research Institute of Computer Vision and Robotics, University of Girona, Spain; Computer Science Department, Faculty of Computers and Information, Assiut University, Egypt
- Sergi Valverde
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
- Mariano Cabezas
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
- Deborah Pareto
- Magnetic Resonance Unit, Dept of Radiology, Vall d'Hebron University Hospital, Spain
- Arnau Oliver
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
- Joaquim Salvi
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
- Àlex Rovira
- Magnetic Resonance Unit, Dept of Radiology, Vall d'Hebron University Hospital, Spain
- Xavier Lladó
- Research Institute of Computer Vision and Robotics, University of Girona, Spain

221
DMCNN: A Deep Multiscale Convolutional Neural Network Model for Medical Image Segmentation. JOURNAL OF HEALTHCARE ENGINEERING 2019; 2019:8597606. [PMID: 31949890 PMCID: PMC6948302 DOI: 10.1155/2019/8597606] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2019] [Revised: 11/21/2019] [Accepted: 11/28/2019] [Indexed: 12/31/2022]
Abstract
Medical image segmentation is one of the hot issues in image processing. Precise segmentation of medical images is a vital guarantee for follow-up treatment. At present, however, low gray-level contrast and blurred tissue boundaries are common in medical images, and segmentation accuracy cannot be effectively improved. In particular, deep learning methods need many training samples, which leads to a time-consuming training process. Therefore, in this article we propose a novel model for medical image segmentation based on a deep multiscale convolutional neural network (CNN). First, we extract the region of interest from the raw medical images. Then, data augmentation is applied to acquire more training data. The proposed method contains three modules: encoder, U-net, and decoder. The encoder is mainly responsible for feature extraction from 2D image slices. The U-net cascades the features of each block of the encoder with those obtained by deconvolution in the decoder at different scales. The decoder is mainly responsible for upsampling the feature maps after feature extraction for each group. Simulation results show that the new method boosts segmentation accuracy and has strong robustness compared with other segmentation methods.
222
Formulation of Pruning Maps with Rhythmic Neural Firing. MATHEMATICS 2019. [DOI: 10.3390/math7121247] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Rhythmic neural firing is thought to underlie the operation of neural function. This motivates the construction of dynamical network models to investigate how the rhythms interact with each other. Recently, an approach to neural path pruning has been proposed in a dynamical network system, in which critical neuronal connections are identified and adjusted according to pruning maps, enabling neurons to produce rhythmic, oscillatory activity in simulation. Here, we construct a class of homomorphic functions based on different rhythms of neural firing in network dynamics. Armed with these homomorphic functions, the pruning maps can be expressed simply in terms of interacting rhythms of neural firing, allowing a concrete analysis of the coupling operators that control network dynamics. This formulation of pruning maps is applied to probe the consolidation of rhythmic patterns between layers of neurons in feedforward neural networks.
223
Hallac RR, Lee J, Pressler M, Seaward JR, Kane AA. Identifying Ear Abnormality from 2D Photographs Using Convolutional Neural Networks. Sci Rep 2019; 9:18198. [PMID: 31796839 PMCID: PMC6890688 DOI: 10.1038/s41598-019-54779-7] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2019] [Accepted: 11/19/2019] [Indexed: 01/22/2023] Open
Abstract
Quantifying ear deformity using linear measurements and mathematical modeling is difficult due to the ear's complex shape. Machine learning techniques, such as convolutional neural networks (CNNs), are well suited for this role. CNNs are deep learning methods capable of finding complex patterns in medical images, automatically building solution models capable of machine diagnosis. In this study, we applied CNNs to automatically identify ear deformity from 2D photographs. Institutional review board (IRB) approval was obtained for this retrospective study to train and test the CNNs. Photographs of patients with and without ear deformity were obtained as standard of care in our photography studio. Profile photographs were obtained for one or both ears. A total of 671 profile pictures were used in this study, including 457 photographs of patients with ear deformity and 214 photographs of patients with normal ears. Photographs were cropped to the ear boundary and randomly divided into training (60%), validation (20%), and testing (20%) datasets. We modified the softmax classifier in the last layer of GoogLeNet, a deep CNN, to generate an ear deformity detection model in Matlab. All images were deemed of high quality and usable for training and testing. It took about 2 hours to train the system, and the training accuracy reached almost 100%. The test accuracy was about 94.1%. We demonstrate that deep learning has great potential in identifying ear deformity. These machine learning techniques hold promise for future use in evaluating treatment outcomes.
Affiliation(s)
- Rami R Hallac
- Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd., Dallas, TX, 75390, United States; Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, 1935 Medical District Dr., Dallas, Texas, 75235, United States
- Jeon Lee
- Department of Bioinformatics, UT Southwestern, 5323 Harry Hines Blvd., Dallas, TX, 75390, United States
- Mark Pressler
- Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd., Dallas, TX, 75390, United States
- James R Seaward
- Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd., Dallas, TX, 75390, United States
- Alex A Kane
- Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd., Dallas, TX, 75390, United States; Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, 1935 Medical District Dr., Dallas, Texas, 75235, United States

224
Mostapha M, Styner M. Role of deep learning in infant brain MRI analysis. Magn Reson Imaging 2019; 64:171-189. [PMID: 31229667 PMCID: PMC6874895 DOI: 10.1016/j.mri.2019.06.009] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2019] [Revised: 06/06/2019] [Accepted: 06/08/2019] [Indexed: 12/17/2022]
Abstract
Deep learning algorithms, and in particular convolutional networks, have shown tremendous success in medical image analysis applications, though relatively few methods have been applied to infant MRI data due to numerous inherent challenges, such as inhomogeneous tissue appearance across the image, considerable image intensity variability across the first year of life, and a low signal-to-noise setting. This paper presents methods addressing these challenges in two selected applications: infant brain tissue segmentation at the isointense stage and presymptomatic disease prediction in neurodevelopmental disorders. Corresponding methods are reviewed and compared, and open issues are identified, namely small data sizes, class imbalance problems, and lack of interpretability of the resulting deep learning solutions. We discuss how existing solutions can be adapted to approach these issues, and how generative models appear to be a particularly strong contender for addressing them.
Affiliation(s)
- Mahmoud Mostapha
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States of America
- Martin Styner
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States of America; Neuro Image Research and Analysis Lab, Department of Psychiatry, University of North Carolina at Chapel Hill, NC 27599, United States of America

225
Khalili N, Lessmann N, Turk E, Claessens N, Heus RD, Kolk T, Viergever M, Benders M, Išgum I. Automatic brain tissue segmentation in fetal MRI using convolutional neural networks. Magn Reson Imaging 2019; 64:77-89. [DOI: 10.1016/j.mri.2019.05.020] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2018] [Revised: 05/04/2019] [Accepted: 05/15/2019] [Indexed: 10/26/2022]
226
Barnard R, Tan J, Roller B, Chiles C, Weaver AA, Boutin RD, Kritchevsky SB, Lenchik L. Machine Learning for Automatic Paraspinous Muscle Area and Attenuation Measures on Low-Dose Chest CT Scans. Acad Radiol 2019; 26:1686-1694. [PMID: 31326311 PMCID: PMC6878160 DOI: 10.1016/j.acra.2019.06.017] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2019] [Revised: 06/21/2019] [Accepted: 06/26/2019] [Indexed: 12/17/2022]
Abstract
RATIONALE AND OBJECTIVES To develop and evaluate an automated machine learning (ML) algorithm for segmenting the paraspinous muscles on chest computed tomography (CT) scans to evaluate for presence of sarcopenia. MATERIALS AND METHODS A convolutional neural network based on the U-Net architecture was trained to perform muscle segmentation on a dataset of 1875 single slice CT images and was tested on 209 CT images of participants in the National Lung Screening Trial. Low-dose, noncontrast CT examinations were obtained at 33 clinical sites, using scanners from four manufacturers. The study participants had a mean age of 71.6 years (range, 70-74 years). Ground truth was obtained by manually segmenting the left paraspinous muscle at the level of the T12 vertebra. Muscle cross-sectional area (CSA) and muscle attenuation (MA) were recorded. Comparison between the ML algorithm and ground truth measures of muscle CSA and MA were obtained using Dice similarity coefficients and Pearson correlations. RESULTS Compared to ground truth segmentation, the ML algorithm achieved median (standard deviation) Dice scores of 0.94 (0.04) in the test set. Mean (SD) muscle CSA was 14.3 (3.6) cm2 for ground truth and 13.7 (3.5) cm2 for ML segmentation. Mean (SD) MA was 41.6 (7.6) Hounsfield units (HU) for ground truth and 43.5 (7.9) HU for ML segmentation. There was high correlation between ML algorithm and ground truth for muscle CSA (r2 = 0.86; p < 0.0001) and MA (r2 = 0.95; p < 0.0001). CONCLUSION The ML algorithm for measurement of paraspinous muscles compared favorably to manual ground truth measurements in the NLST. The algorithm generalized well to a heterogeneous set of low-dose CT images and may be capable of automated quantification of muscle metrics to screen for sarcopenia on routine chest CT examinations.
Affiliation(s)
- Ryan Barnard
- Department of Biostatistical Sciences, Division of Public Health Sciences, Wake Forest School of Medicine, Winston-Salem, North Carolina
- Josh Tan
- Department of Radiology, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157
- Brandon Roller
- Department of Radiology, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157
- Caroline Chiles
- Department of Radiology, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157
- Ashley A Weaver
- Department of Biomedical Engineering, Wake Forest School of Medicine, Winston-Salem, North Carolina
- Robert D Boutin
- Department of Radiology, University of California Davis School of Medicine, Sacramento, California
- Stephen B Kritchevsky
- Department of Internal Medicine, Section on Gerontology and Geriatric Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina
- Leon Lenchik
- Department of Radiology, Wake Forest School of Medicine, Medical Center Blvd, Winston-Salem, NC 27157
227
Wang M, Li P. Label fusion method combining pixel greyscale probability for brain MR segmentation. Sci Rep 2019; 9:17987. [PMID: 31784630 PMCID: PMC6884484 DOI: 10.1038/s41598-019-54527-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2019] [Accepted: 11/13/2019] [Indexed: 11/08/2022] Open
Abstract
Multi-atlas-based segmentation (MAS) methods have demonstrated superior performance in the field of automatic image segmentation, and label fusion is an important part of MAS methods. In this paper, we propose a label fusion method that incorporates pixel greyscale probability information. The proposed method combines the advantages of label fusion methods based on sparse representation (SRLF) and weighted voting methods using patch similarity weights (PSWV) and introduces pixel greyscale probability information to improve the segmentation accuracy. We apply the proposed method to the segmentation of deep brain tissues in challenging 3D brain MR images from publicly available IBSR datasets, including the thalamus, hippocampus, caudate, putamen, pallidum and amygdala. The experimental results show that the proposed method has higher segmentation accuracy and robustness than the related methods. Compared with state-of-the-art methods, the proposed method obtains the best segmentation results for the putamen, pallidum and amygdala, and hippocampus and caudate results comparable to those of the competing methods.
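The patch-similarity weighted voting (PSWV) idea this method builds on can be sketched as below — Gaussian patch weights and a weighted vote over atlas labels. The kernel width `h` and the toy patches are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def pswv_label(target_patch, atlas_patches, atlas_labels, h=1.0):
    """Weighted voting: each atlas patch votes for its label with a
    weight that decays with its squared distance to the target patch."""
    w = np.exp(-np.array([np.sum((target_patch - p) ** 2)
                          for p in atlas_patches]) / h)
    scores = {int(l): w[atlas_labels == l].sum()
              for l in np.unique(atlas_labels)}
    return max(scores, key=scores.get)

# Toy example: the target patch resembles the atlas patches labelled 1.
target = np.array([0.9, 1.0, 1.1])
patches = np.array([[1.0, 1.0, 1.0],    # label 1
                    [1.1, 0.9, 1.0],    # label 1
                    [3.0, 3.1, 2.9]])   # label 2
labels = np.array([1, 1, 2])
print(pswv_label(target, patches, labels))   # → 1
```

The paper's contribution is to modulate such weights with pixel greyscale probability information; that extra term is omitted here.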
Affiliation(s)
- Monan Wang
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China
- Pengcheng Li
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China
228
Choudhary P, Hazra A. Chest disease radiography in twofold: using convolutional neural networks and transfer learning. EVOLVING SYSTEMS 2019. [DOI: 10.1007/s12530-019-09316-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
229
Deep CNN ensembles and suggestive annotations for infant brain MRI segmentation. Comput Med Imaging Graph 2019; 79:101660. [PMID: 31785402 DOI: 10.1016/j.compmedimag.2019.101660] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2019] [Revised: 08/30/2019] [Accepted: 09/24/2019] [Indexed: 01/02/2023]
Abstract
Precise 3D segmentation of infant brain tissues is an essential step towards comprehensive volumetric studies and quantitative analysis of early brain development. However, computing such segmentations is very challenging, especially for the 6-month infant brain, due to the poor image quality, among other difficulties inherent to infant brain MRI, e.g., the isointense contrast between white and gray matter and the severe partial volume effect due to small brain sizes. This study investigates the problem with an ensemble of semi-dense fully convolutional neural networks (CNNs), which employs T1-weighted and T2-weighted MR images as input. We demonstrate that the ensemble agreement is highly correlated with the segmentation errors. Therefore, our method provides measures that can guide local user corrections. To the best of our knowledge, this work is the first ensemble of 3D CNNs for suggesting annotations within images. Our semi-dense architecture allows the efficient propagation of gradients during training, while limiting the number of parameters, requiring one order of magnitude fewer parameters than popular medical image segmentation networks such as 3D U-Net (Çiçek, et al.). We also investigated the impact that early or late fusion of multiple image modalities might have on the performance of deep architectures. We report evaluations of our method on the public data of the MICCAI iSEG-2017 Challenge on 6-month infant brain MRI segmentation, and show very competitive results among 21 teams, ranking first or second in most metrics.
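The link between ensemble agreement and likely errors can be illustrated with a binary majority vote (a toy sketch, not the paper's 3D architecture; voxels where few models agree are the ones to suggest for annotation):

```python
import numpy as np

def majority_and_agreement(votes):
    """votes: (n_models, ...) array of binary label maps.
    Returns the majority-vote map and the per-voxel fraction of models
    agreeing with it; low agreement flags voxels for user correction."""
    maj = (votes.mean(axis=0) > 0.5).astype(int)
    agreement = (votes == maj).mean(axis=0)
    return maj, agreement

# Three toy model outputs over 4 voxels; the models disagree on voxel 2.
votes = np.array([[1, 0, 1, 0],
                  [1, 0, 0, 0],
                  [1, 0, 1, 0]])
maj, agr = majority_and_agreement(votes)
print(maj)    # [1 0 1 0]
print(agr)    # voxel 2 has agreement 2/3 -> suggest for annotation
```

With an odd number of models the vote is never tied, which keeps the sketch well-defined.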
230
A Convolutional Neural Network for Impact Detection and Characterization of Complex Composite Structures. SENSORS 2019; 19:s19224933. [PMID: 31726762 PMCID: PMC6891538 DOI: 10.3390/s19224933] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/16/2019] [Revised: 11/03/2019] [Accepted: 11/10/2019] [Indexed: 01/22/2023]
Abstract
This paper reports on a novel metamodel for impact detection, localization and characterization of complex composite structures based on Convolutional Neural Networks (CNN) and passive sensing. Methods to generate appropriate input datasets and network architectures for impact localization and characterization were proposed, investigated and optimized. The ultrasonic waves generated by external impact events and recorded by piezoelectric sensors are transformed into 2D images which are used for impact detection and characterization. The detection accuracy, tested on a composite fuselage panel, was shown to be over 94%. In addition, the scalability of this metamodelling technique was investigated by training the CNN metamodels with data from part of the stiffened panel and testing the performance on other sections with similar geometry. Impacts were detected with an accuracy of over 95%. Impact energy levels were also successfully categorized with models trained at coupon level and applied to sub-components of greater complexity. These results validate the applicability of the proposed CNN-based metamodel to real-life applications such as composite aircraft parts.
231
Khalili N, Turk E, Benders MJNL, Moeskops P, Claessens NHP, de Heus R, Franx A, Wagenaar N, Breur JMPJ, Viergever MA, Išgum I. Automatic extraction of the intracranial volume in fetal and neonatal MR scans using convolutional neural networks. NEUROIMAGE-CLINICAL 2019; 24:102061. [PMID: 31835284 PMCID: PMC6909142 DOI: 10.1016/j.nicl.2019.102061] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Revised: 10/24/2019] [Accepted: 10/26/2019] [Indexed: 01/21/2023]
Abstract
Highlights: automatic intracranial volume segmentation; fetal and neonatal MRI; deep learning.
MR images of infants and fetuses allow non-invasive analysis of the brain. Quantitative analysis of brain development requires automatic brain tissue segmentation that is typically preceded by segmentation of the intracranial volume (ICV). Fast changes in the size and morphology of the developing brain, motion artifacts, and large variation in the field of view make ICV segmentation a challenging task. We propose an automatic method for segmentation of the ICV in fetal and neonatal MRI scans. The method was developed and tested with a diverse set of scans regarding image acquisition parameters (i.e. field strength, image acquisition plane, image resolution), infant age (23–45 weeks post menstrual age), and pathology (posthaemorrhagic ventricular dilatation, stroke, asphyxia, and Down syndrome). The results demonstrate that the method achieves accurate segmentation with a Dice coefficient (DC) ranging from 0.98 to 0.99 in neonatal and fetal scans regardless of image acquisition parameters or patient characteristics. Hence, the algorithm provides a generic tool for segmentation of the ICV that may be used as a preprocessing step for brain tissue segmentation in fetal and neonatal brain MR scans.
Affiliation(s)
- Nadieh Khalili
- Image Sciences Institute, Utrecht University and University Medical Center Utrecht, Utrecht, the Netherlands
- E Turk
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, the Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands
- M J N L Benders
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, the Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands
- P Moeskops
- Medical Image Analysis, Department of Biomedical Engineering, Eindhoven University of Technology, the Netherlands
- N H P Claessens
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, the Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands
- R de Heus
- Department of Obstetrics, University Medical Center Utrecht, the Netherlands
- A Franx
- Department of Obstetrics, University Medical Center Utrecht, the Netherlands
- N Wagenaar
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, the Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands
- J M P J Breur
- Department of Neonatology, Wilhelmina Children's Hospital, University Medical Center Utrecht, Utrecht, the Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands
- M A Viergever
- Image Sciences Institute, Utrecht University and University Medical Center Utrecht, Utrecht, the Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands
- I Išgum
- Image Sciences Institute, Utrecht University and University Medical Center Utrecht, Utrecht, the Netherlands; Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands
232
Rashed EA, Gomez-Tames J, Hirata A. Development of accurate human head models for personalized electromagnetic dosimetry using deep learning. Neuroimage 2019; 202:116132. [DOI: 10.1016/j.neuroimage.2019.116132] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2019] [Revised: 08/22/2019] [Accepted: 08/24/2019] [Indexed: 11/30/2022] Open
233
Sun L, Fan Z, Ding X, Huang Y, Paisley J. Region-of-interest undersampled MRI reconstruction: A deep convolutional neural network approach. Magn Reson Imaging 2019; 63:185-192. [DOI: 10.1016/j.mri.2019.07.010] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2019] [Revised: 07/06/2019] [Accepted: 07/14/2019] [Indexed: 01/06/2023]
234
Computer-Aided Diagnosis System of Alzheimer's Disease Based on Multimodal Fusion: Tissue Quantification Based on the Hybrid Fuzzy-Genetic-Possibilistic Model and Discriminative Classification Based on the SVDD Model. Brain Sci 2019; 9:brainsci9100289. [PMID: 31652635 PMCID: PMC6826987 DOI: 10.3390/brainsci9100289] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2019] [Accepted: 10/17/2019] [Indexed: 11/16/2022] Open
Abstract
An improved computer-aided diagnosis (CAD) system is proposed for the early diagnosis of Alzheimer's disease (AD) based on the fusion of anatomical (magnetic resonance imaging (MRI)) and functional (18F-fluorodeoxyglucose positron emission tomography (FDG-PET)) multimodal images, which helps to address the strong ambiguity or uncertainty produced in brain images. The merit of this fusion is that it provides anatomical information for the accurate detection of pathological areas characterized in functional imaging by physiological abnormalities. First, quantification of brain tissue volumes is proposed based on a fusion scheme in three successive steps: modeling, fusion and decision. (1) Modeling, which consists of three sub-steps: initialization of the centroids of the tissue clusters by applying the bias-corrected Fuzzy C-Means (FCM) clustering algorithm; optimization of the initial partition by running genetic algorithms; and creation of white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) tissue maps by applying the possibilistic FCM clustering algorithm. (2) Fusion, using a possibilistic operator to merge the maps of the MRI and PET images, highlighting redundancies and managing ambiguities. (3) Decision, offering more representative anatomo-functional fusion images. Second, a support vector data description (SVDD) classifier is used that must reliably distinguish AD from normal aging and automatically detect outliers. The "divide and conquer" strategy is then used, which speeds up the SVDD process and reduces the computational load and cost. The robustness of the tissue quantification process is proven against noise (20% level), partial volume effects and high spatial intensity inhomogeneities. The superiority of the SVDD classifier over competing conventional systems is also demonstrated, with the adoption of the 10-fold cross-validation approach, for datasets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Open Access Series of Imaging Studies (OASIS) as well as real images. Classification performance in terms of accuracy, sensitivity, specificity and area under the ROC curve was 93.65%, 90.08%, 92.75% and 97.3%; 91.46%, 92%, 91.78% and 96.7%; and 85.09%, 86.41%, 84.92% and 94.6% for the ADNI, OASIS and real images, respectively.
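The plain FCM step underlying the modeling sub-steps can be sketched as follows. This is a minimal 1-D sketch with fuzziness m = 2 and a simple quantile initialization, both assumptions on my part; the paper's bias-corrected, genetically optimized variant is considerably more involved:

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=3, m=2.0, n_iter=50):
    """Plain fuzzy C-means on a 1-D intensity vector (no bias correction)."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))   # deterministic init
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12        # (n, c) distances
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)                  # memberships
        centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)  # center update
    return np.sort(centers), u

# Two well-separated intensity clusters -> centers converge near 0.1 and 10.1.
x = np.array([0.0, 0.1, 0.2, 10.0, 10.1, 10.2])
centers, _ = fuzzy_cmeans_1d(x, c=2)
print(centers)
```

The soft memberships `u` are what distinguish FCM from k-means: every voxel belongs to every tissue class with a degree in [0, 1], which is what makes a later possibilistic fusion of MRI and PET maps possible.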
235
Zhang F, Li Z, Zhang B, Du H, Wang B, Zhang X. Multi-modal deep learning model for auxiliary diagnosis of Alzheimer’s disease. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.04.093] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
236
Holistic decomposition convolution for effective semantic segmentation of medical volume images. Med Image Anal 2019; 57:149-164. [DOI: 10.1016/j.media.2019.07.003] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2018] [Revised: 05/22/2019] [Accepted: 07/04/2019] [Indexed: 11/24/2022]
237
Jin Z, Udupa JK, Torigian DA. How many models/atlases are needed as priors for capturing anatomic population variations? Med Image Anal 2019; 58:101550. [PMID: 31557632 DOI: 10.1016/j.media.2019.101550] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2018] [Revised: 08/24/2019] [Accepted: 08/29/2019] [Indexed: 12/24/2022]
Abstract
Many medical image processing and analysis operations can benefit a great deal from prior information encoded in the form of models/atlases to capture variations over a population in form, shape, anatomic layout, and image appearance of objects. However, two fundamental questions have not been addressed in the literature: "How many models/atlases are needed for optimally encoding prior information to address the differing body habitus factor in that population?" and "Images of how many subjects in the given population are needed to optimally harness prior information?" We propose a method to seek answers to these questions. We assume that there is a well-defined body region of interest and a subject population under consideration, and that we are given a set of representative images of the body region for the population. After images are trimmed to the exact body region, a hierarchical agglomerative clustering algorithm partitions the set of images into a specified number of groups by using pairwise image (dis)similarity as a cost function. Optionally the images may be pre-registered among themselves prior to this partitioning operation. We define a measure called Residual Dissimilarity (RD) to determine the goodness of each partition. We then ascertain how RD varies as a function of the number of elements in the partition for finding the optimum number(s) of groups. Breakpoints in this function are taken as the recommended number of groups/models/atlases. Our results from analysis of sizeable CT data sets of adult patients from two body regions - thorax (346) and head and neck (298) - can be summarized as follows. (1) A minimum of 5 to 8 groups (or models/atlases) seems essential to properly capture information about differing anatomic forms and body habitus. (2) A minimum of 150 images from different subjects in a population seems essential to cover the anatomical variations for a given body region. 
(3) In grouping, body habitus variations seem to override differences due to other factors such as gender, with/without contrast enhancement in image acquisition, and presence of moderate pathology. This method may be helpful for constructing high quality models/atlases from a sufficiently large population of images and in optimally selecting the training image sets needed in deep learning strategies.
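A toy version of the grouping pipeline — pairwise dissimilarity, average-linkage agglomerative clustering, and a within-group "residual dissimilarity". Note that my RD here (mean within-group pairwise distance) is an assumed stand-in for the paper's measure, and the feature vectors stand in for trimmed body-region images:

```python
import numpy as np

def agglomerate(X, k):
    """Average-linkage agglomerative clustering of feature vectors into k groups."""
    groups = [[i] for i in range(len(X))]
    while len(groups) > k:
        best, pair = np.inf, None
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                d = np.mean([np.linalg.norm(X[i] - X[j])
                             for i in groups[a] for j in groups[b]])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        groups[a] += groups.pop(b)    # b > a, so index a stays valid
    return groups

def residual_dissimilarity(X, groups):
    """Mean within-group pairwise distance (stand-in for the paper's RD)."""
    per_group = [np.mean([np.linalg.norm(X[i] - X[j])
                          for i in g for j in g if i < j])
                 for g in groups if len(g) > 1]
    return float(np.mean(per_group)) if per_group else 0.0

# Toy "images": two body-habitus blobs of 4-D feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (4, 4)), rng.normal(3, 0.1, (4, 4))])
groups = agglomerate(X, k=2)
print(sorted(sorted(g) for g in groups))   # → [[0, 1, 2, 3], [4, 5, 6, 7]]
```

In the paper's procedure, RD would be evaluated for a range of k and breakpoints in the RD-versus-k curve taken as the recommended number of models/atlases.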
Affiliation(s)
- Ze Jin
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, United States
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, United States
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, United States
238
Que Q, Tang Z, Wang R, Zeng Z, Wang J, Chua M, Gee TS, Yang X, Veeravalli B. CardioXNet: Automated Detection for Cardiomegaly Based on Deep Learning. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:612-615. [PMID: 30440471 DOI: 10.1109/embc.2018.8512374] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
In this paper, we present an automated procedure to determine the presence of cardiomegaly on chest X-ray images based on deep learning. The proposed algorithm, CardioXNet, uses the deep learning segmentation method U-Net and the cardiothoracic ratio for diagnosis of cardiomegaly from chest X-rays. U-Net learns the segmentation task from ground truth data. OpenCV is used to denoise the segmentations and maintain the precision of the region of interest when minor errors occur. The cardiothoracic ratio (CTR) is then calculated from the U-Net segmentations as the criterion for determining cardiomegaly. An end-to-end DenseNet neural network is used as the baseline. This study has shown the feasibility of combining deep learning segmentation with a medical criterion to automatically recognize heart disease in medical images with high accuracy and agreement with clinical results.
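The CTR criterion applied to U-Net-style binary masks can be sketched as below. The toy masks are invented; the CTR > 0.5 cut-off is the commonly used clinical threshold for cardiomegaly on a frontal radiograph:

```python
import numpy as np

def max_transverse_width(mask):
    """Widest horizontal extent (in pixels) of a binary mask, over all rows."""
    widths = [0]
    for row in mask:
        cols = np.flatnonzero(row)
        if cols.size:
            widths.append(int(cols[-1] - cols[0] + 1))
    return max(widths)

def cardiothoracic_ratio(heart_mask, thorax_mask):
    """CTR = widest heart diameter / widest internal thoracic diameter."""
    return max_transverse_width(heart_mask) / max_transverse_width(thorax_mask)

# Toy frontal-view masks: heart 6 px wide, thorax 12 px wide.
thorax = np.zeros((10, 12), dtype=bool)
thorax[1:9, :] = True
heart = np.zeros((10, 12), dtype=bool)
heart[4:8, 3:9] = True

ctr = cardiothoracic_ratio(heart, thorax)
print(ctr, "cardiomegaly" if ctr > 0.5 else "normal")
```

The appeal of this hybrid design is that the final decision rests on an interpretable clinical measurement rather than on an end-to-end classifier alone.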
239
Yanase J, Triantaphyllou E. The seven key challenges for the future of computer-aided diagnosis in medicine. Int J Med Inform 2019; 129:413-422. [DOI: 10.1016/j.ijmedinf.2019.06.017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2019] [Revised: 06/15/2019] [Accepted: 06/19/2019] [Indexed: 12/23/2022]
240
A review on brain tumor segmentation of MRI images. Magn Reson Imaging 2019; 61:247-259. [DOI: 10.1016/j.mri.2019.05.043] [Citation(s) in RCA: 119] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2019] [Revised: 05/30/2019] [Accepted: 05/30/2019] [Indexed: 01/17/2023]
241
Bui TD, Shin J, Moon T. Skip-connected 3D DenseNet for volumetric infant brain MRI segmentation. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101613] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
242
Pesapane F. How scientific mobility can help current and future radiology research: a radiology trainee's perspective. Insights Imaging 2019; 10:85. [PMID: 31456090 PMCID: PMC6712195 DOI: 10.1186/s13244-019-0773-z] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Accepted: 07/15/2019] [Indexed: 12/13/2022] Open
Abstract
One of the ways in which modern radiology is manifesting itself in higher education and research is through the increasing importance of scientific mobility. This article seeks to provide an overview, from the perspective of a radiology fellow in the last year of training, of the current trends and policy tools for promoting mobility among young radiologists, especially inside the European Union. Nowadays, the need to promote international cooperation is even greater, to ensure that the best evidence-based medical practices become the common background of the next cross-border generation of radiologists. Organisations such as the European Society of Radiology (ESR) and the Radiological Society of North America (RSNA) are called upon to act as guarantors of the training of young radiologists, building know-how and world-class excellence. Today, it is not just being a certified radiologist that matters; the place where the training was done plays an important role in enhancing chances when applying for a high-level job or fellowship. The article argues that the mobility of radiology trainees is an indispensable prerequisite for facing new challenges, including the application of artificial intelligence to medical imaging, which will require large multicentre collaboration.
Affiliation(s)
- Filippo Pesapane
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122, Milan, Italy
243
Yu C, Xie S, Niu S, Ji Z, Fan W, Yuan S, Liu Q, Chen Q. Hyper‐reflective foci segmentation in SD‐OCT retinal images with diabetic retinopathy using deep convolutional neural networks. Med Phys 2019; 46:4502-4519. [PMID: 31315159 DOI: 10.1002/mp.13728] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2019] [Revised: 07/08/2019] [Accepted: 07/11/2019] [Indexed: 11/07/2022] Open
Affiliation(s)
- Chenchen Yu
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Sha Xie
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Sijie Niu
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Zexuan Ji
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Wen Fan
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210029, China
- Songtao Yuan
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210029, China; The Affiliated Jiangsu Shengze Hospital of Nanjing Medical University, Suzhou 215228, China
- Qinghuai Liu
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210029, China
- Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
244
Verburg E, Wolterink JM, Waard SN, Išgum I, Gils CH, Veldhuis WB, Gilhuijs KGA. Knowledge‐based and deep learning‐based automated chest wall segmentation in magnetic resonance images of extremely dense breasts. Med Phys 2019; 46:4405-4416. [DOI: 10.1002/mp.13699] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2019] [Revised: 06/21/2019] [Accepted: 06/26/2019] [Indexed: 11/07/2022] Open
Affiliation(s)
- Erik Verburg
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Jelmer M. Wolterink
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Stephanie N. Waard
- Department of Radiology, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Ivana Išgum
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Carla H. Gils
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Wouter B. Veldhuis
- Department of Radiology, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
- Kenneth G. A. Gilhuijs
- Image Sciences Institute, University Medical Center Utrecht, Utrecht University, Utrecht 3584 CX, the Netherlands
245
Convolutional Neural Networks for Spectroscopic Analysis in Retinal Oximetry. Sci Rep 2019; 9:11387. [PMID: 31388136 PMCID: PMC6684811 DOI: 10.1038/s41598-019-47621-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2018] [Accepted: 06/20/2019] [Indexed: 01/06/2023] Open
Abstract
Retinal oximetry is a non-invasive technique to investigate the hemodynamics, vasculature and health of the eye. Current techniques for retinal oximetry have been plagued by quantitatively inconsistent measurements and this has greatly limited their adoption in clinical environments. To become clinically relevant, oximetry measurements must become reliable and reproducible across studies and locations. To this end, we have developed a convolutional neural network algorithm for multi-wavelength oximetry, showing a greatly improved calculation performance in comparison to previously reported techniques. The algorithm is calibration free, performs sensing of the four main hemoglobin conformations with no prior knowledge of their characteristic absorption spectra and, due to the convolution-based calculation, is invariant to spectral shifting. We show, herein, the dramatic performance improvements in using this algorithm to deduce effective oxygenation (SO2), as well as the added functionality to accurately measure fractional oxygenation (SO2^fr). Furthermore, this report compares, for the first time, the relative performance of several previously reported multi-wavelength oximetry algorithms in the face of controlled spectral variations. The improved ability of the algorithm to accurately and independently measure hemoglobin concentrations offers a high-potential tool for disease diagnosis and monitoring when applied to retinal spectroscopy.
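For contrast with the CNN approach, the classical calibration-based baseline is linear unmixing of the Beer-Lambert model, A(λ) = Σ ε_i(λ) c_i. The extinction-coefficient matrix below is made up for illustration and uses only two hemoglobin species:

```python
import numpy as np

# Hypothetical extinction coefficients at four wavelengths
# (columns: oxy-hemoglobin HbO2, deoxy-hemoglobin Hb).
E = np.array([[0.9, 0.3],
              [0.5, 0.7],
              [0.2, 1.0],
              [0.8, 0.4]])

c_true = np.array([0.8, 0.2])        # true concentrations (HbO2, Hb)
A = E @ c_true                        # noiseless Beer-Lambert absorbances

c_hat, *_ = np.linalg.lstsq(E, A, rcond=None)   # least-squares unmixing
so2 = c_hat[0] / c_hat.sum()                     # effective oxygen saturation
print(round(so2, 3))   # → 0.8
```

The weakness of this baseline is its dependence on accurate, spectrally aligned ε tables; the paper's CNN avoids that calibration step and is invariant to spectral shifts.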
246
Tobore I, Li J, Yuhang L, Al-Handarish Y, Kandwal A, Nie Z, Wang L. Deep Learning Intervention for Health Care Challenges: Some Biomedical Domain Considerations. JMIR Mhealth Uhealth 2019; 7:e11966. [PMID: 31376272 PMCID: PMC6696854 DOI: 10.2196/11966] [Citation(s) in RCA: 61] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2018] [Revised: 04/14/2019] [Accepted: 06/12/2019] [Indexed: 01/10/2023] Open
Abstract
The use of deep learning (DL) for the analysis and diagnosis of biomedical and health care problems has received unprecedented attention in the last decade. The technique has recorded a number of achievements in unearthing meaningful features and accomplishing tasks that were hitherto difficult to solve by other methods or human experts. Currently, biological and medical devices, treatments, and applications are capable of generating large volumes of data in the form of images, sounds, text, graphs, and signals, creating the concept of big data. The innovation of DL is a developing trend in the wake of big data for data representation and analysis. DL is a type of machine learning algorithm that has deeper (or more) hidden layers of similar function cascaded into the network and has the capability to make meaning from medical big data. Current transformation drivers to achieve personalized health care delivery will be possible with the use of mobile health (mHealth). DL can provide the analysis for the deluge of data generated from mHealth apps. This paper reviews the fundamentals of DL methods and presents a general view of the trends in DL by capturing literature from PubMed and the Institute of Electrical and Electronics Engineers database publications that implement different variants of DL. We highlight the implementation of DL in health care, which we categorize into biological systems, electronic health records, medical images, and physiological signals. In addition, we discuss some inherent challenges of DL affecting the biomedical and health domains, as well as prospective research directions that focus on improving health management by promoting the application of physiological signals and modern internet technology.
Affiliation(s)
- Igbe Tobore
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Graduate University, Chinese Academy of Sciences, Beijing, China
- Jingzhen Li
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Liu Yuhang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yousef Al-Handarish
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Abhishek Kandwal
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zedong Nie
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lei Wang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
247
Semi-supervised deep learning of brain tissue segmentation. Neural Netw 2019; 116:25-34. [DOI: 10.1016/j.neunet.2019.03.014] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2018] [Revised: 09/28/2018] [Accepted: 03/22/2019] [Indexed: 12/23/2022]
248
Aslani S, Dayan M, Storelli L, Filippi M, Murino V, Rocca MA, Sona D. Multi-branch convolutional neural network for multiple sclerosis lesion segmentation. Neuroimage 2019; 196:1-15. [DOI: 10.1016/j.neuroimage.2019.03.068] [Citation(s) in RCA: 72] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2018] [Revised: 03/23/2019] [Accepted: 03/28/2019] [Indexed: 11/26/2022] Open
249
Cerrolaza JJ, Picazo ML, Humbert L, Sato Y, Rueckert D, Ballester MÁG, Linguraru MG. Computational anatomy for multi-organ analysis in medical imaging: A review. Med Image Anal 2019; 56:44-67. [DOI: 10.1016/j.media.2019.04.002] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Revised: 02/05/2019] [Accepted: 04/13/2019] [Indexed: 12/19/2022]
250
Fully automated intracranial ventricle segmentation on CT with 2D regional convolutional neural network to estimate ventricular volume. Int J Comput Assist Radiol Surg 2019; 14:1923-1932. [DOI: 10.1007/s11548-019-02038-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2019] [Accepted: 07/22/2019] [Indexed: 10/26/2022]