1. Früh D, Mendl-Heinisch C, Bittner N, Weis S, Caspers S. Prediction of Verbal Abilities From Brain Connectivity Data Across the Lifespan Using a Machine Learning Approach. Hum Brain Mapp 2025;46:e70191. [PMID: 40130301] [PMCID: PMC11933761] [DOI: 10.1002/hbm.70191]
Abstract
Compared to nonverbal cognition such as executive or memory functions, language-related cognition generally appears to remain more stable until later in life. Nevertheless, different language-related processes, for example, verbal fluency versus vocabulary knowledge, appear to show different trajectories across the life span. One potential explanation for differences in verbal functions may be alterations in the functional and structural network architecture of different large-scale brain networks. For example, differences in verbal abilities have been linked to the communication within and between the frontoparietal (FPN) and default mode network (DMN). It remains open, however, whether brain connectivity within these networks may be informative for language performance at the individual level across the life span. Further information in this regard may be highly desirable, as verbal abilities allow us to participate in daily activities, are associated with quality of life, and may be considered in preventive and interventional setups to foster cognitive health across the life span. So far, mixed prediction results based on resting-state functional connectivity (FC) and structural connectivity (SC) data have been reported for language abilities across different samples, age groups, and machine-learning (ML) approaches. Therefore, the current study set out to investigate the predictability of verbal fluency and vocabulary knowledge based on brain connectivity data in the DMN, FPN, and the whole brain using an ML approach in a lifespan sample (N = 717; age range: 18-85) from the 1000BRAINS study. Prediction performance was systematically compared across (i) verbal [verbal fluency and vocabulary knowledge] and nonverbal abilities [processing speed and visual working memory], (ii) modalities [FC and SC data], (iii) feature sets [DMN, FPN, DMN-FPN, and whole brain], and (iv) samples [total, younger, and older age group]. Results showed that verbal abilities could not be reliably predicted from FC and SC data across feature sets and samples. Moreover, no predictability differences emerged between verbal fluency and vocabulary knowledge across input modalities, feature sets, and samples. In contrast to verbal functions, nonverbal abilities could be moderately predicted from connectivity data, particularly SC, in the total and younger age groups. Satisfactory prediction performance for nonverbal cognitive functions based on the chosen connectivity data was, however, not achieved in the older age group. The current results hence emphasize that verbal functions may be more difficult to predict from brain connectivity data in domain-general cognitive networks and the whole brain than nonverbal abilities, particularly executive functions, across the life span. Thus, it appears warranted to investigate more closely the differences in predictability between different cognitive functions and age groups.
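In practice, the prediction setup described above reduces to regressing a behavioural score on vectorized connectivity features under cross-validation. The sketch below illustrates that generic workflow with scikit-learn on synthetic data; the parcellation size, the choice of ridge regression, and all numbers are illustrative assumptions, not the authors' 1000BRAINS pipeline.

```python
# Minimal sketch of connectome-based prediction: cross-validated ridge regression
# on vectorized connectivity features (synthetic stand-in data).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects = 717                    # sample size reported in the abstract
n_edges = 100 * 99 // 2             # upper triangle of a hypothetical 100-node FC matrix
X = rng.standard_normal((n_subjects, n_edges))   # stand-in FC/SC features
y = rng.standard_normal(n_subjects)              # stand-in verbal fluency score

model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.3f}")  # near zero for random features
```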
Affiliation(s)
- Deborah Früh
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Institute for Anatomy I, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Camilla Mendl-Heinisch
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Institute for Anatomy I, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Nora Bittner
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Institute for Anatomy I, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Susanne Weis
- Institute of Neuroscience and Medicine, Brain and Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
- Institute of Systems Neuroscience, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Svenja Caspers
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Institute for Anatomy I, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
2. Olchanyi MD, Augustinack J, Haynes RL, Lewis LD, Cicero N, Li J, Destrieux C, Folkerth RD, Kinney HC, Fischl B, Brown EN, Iglesias JE, Edlow BL. Histology-guided MRI segmentation of brainstem nuclei critical to consciousness. medRxiv 2024:2024.09.26.24314117. [PMID: 39399006] [PMCID: PMC11469455] [DOI: 10.1101/2024.09.26.24314117]
Abstract
While substantial progress has been made in mapping the connectivity of cortical networks responsible for conscious awareness, neuroimaging analysis of the subcortical networks that modulate arousal (i.e., wakefulness) has been limited by the lack of robust segmentation procedures for brainstem arousal nuclei. Automated segmentation of brainstem arousal nuclei is an essential step toward elucidating the physiology of arousal in human consciousness and the pathophysiology of disorders of consciousness. We created a probabilistic atlas of brainstem arousal nuclei built on diffusion MRI scans of five ex vivo human brain specimens scanned at 750 μm isotropic resolution. Labels of arousal nuclei used to generate the probabilistic atlas were manually annotated with reference to nucleus-specific immunostaining in two of the five brain specimens. We then developed a Bayesian segmentation algorithm that utilizes the probabilistic atlas as a generative model and automatically identifies brainstem arousal nuclei in a resolution- and contrast-agnostic manner. The segmentation method displayed high accuracy in both healthy and lesioned in vivo T1 MRI scans and high test-retest reliability across both T1 and T2 MRI contrasts. Finally, we show that the segmentation algorithm can detect volumetric changes and differences in magnetic susceptibility within brainstem arousal nuclei in Alzheimer's disease and traumatic coma, respectively. We release the probabilistic atlas and Bayesian segmentation tool in FreeSurfer to advance the study of human consciousness and its disorders.
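At its core, such a Bayesian segmentation combines an atlas prior with an intensity likelihood and assigns each voxel the maximum a posteriori label. The toy sketch below illustrates only that principle; the label names, Gaussian parameters, and flat voxel layout are invented, and the released method additionally estimates its parameters per scan to stay resolution- and contrast-agnostic.

```python
# Toy voxel-wise MAP labelling: probabilistic-atlas prior x Gaussian intensity
# likelihood, argmax over labels (all parameters invented for illustration).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_voxels = 1000
labels = np.array(["background", "nucleus_A", "nucleus_B"])

# Probabilistic atlas: prior probability of each label at each voxel (rows sum to 1).
prior = rng.dirichlet(alpha=[5.0, 1.0, 1.0], size=n_voxels)

# Hypothetical class-conditional intensity models.
means = np.array([20.0, 60.0, 90.0])
stds = np.array([10.0, 8.0, 8.0])

intensities = rng.normal(50.0, 25.0, size=n_voxels)          # stand-in MRI intensities
log_posterior = np.log(prior) + norm.logpdf(intensities[:, None], means, stds)
map_labels = labels[np.argmax(log_posterior, axis=1)]
print(dict(zip(*np.unique(map_labels, return_counts=True))))  # voxel count per label
```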
3. Casamitjana A, Mancini M, Robinson E, Peter L, Annunziata R, Althonayan J, Crampsie S, Blackburn E, Billot B, Atzeni A, Puonti O, Balbastre Y, Schmidt P, Hughes J, Augustinack JC, Edlow BL, Zöllei L, Thomas DL, Kliemann D, Bocchetta M, Strand C, Holton JL, Jaunmuktane Z, Iglesias JE. A next-generation, histological atlas of the human brain and its application to automated brain MRI segmentation. bioRxiv 2024:2024.02.05.579016. [PMID: 39282320] [PMCID: PMC11398399] [DOI: 10.1101/2024.02.05.579016]
Abstract
Magnetic resonance imaging (MRI) is the standard tool to image the human brain in vivo. In this domain, digital brain atlases are essential for subject-specific segmentation of anatomical regions of interest (ROIs) and spatial comparison of neuroanatomy from different subjects in a common coordinate frame. High-resolution, digital atlases derived from histology (e.g., Allen atlas [7], BigBrain [13], Julich [15]), are currently the state of the art and provide exquisite 3D cytoarchitectural maps, but lack probabilistic labels throughout the whole brain. Here we present NextBrain, a next-generation probabilistic atlas of human brain anatomy built from serial 3D histology and corresponding highly granular delineations of five whole brain hemispheres. We developed AI techniques to align and reconstruct ~10,000 histological sections into coherent 3D volumes with joint geometric constraints (no overlap or gaps between sections), as well as to semi-automatically trace the boundaries of 333 distinct anatomical ROIs on all these sections. Comprehensive delineation on multiple cases enabled us to build the first probabilistic histological atlas of the whole human brain. Further, we created a companion Bayesian tool for automated segmentation of the 333 ROIs in any in vivo or ex vivo brain MRI scan using the NextBrain atlas. We showcase two applications of the atlas: automated segmentation of ultra-high-resolution ex vivo MRI and volumetric analysis of Alzheimer's disease and healthy brain ageing based on ~4,000 publicly available in vivo MRI scans. We publicly release: the raw and aligned data (including an online visualisation tool); the probabilistic atlas; the segmentation tool; and ground truth delineations for a 100 μm isotropic ex vivo hemisphere (that we use for quantitative evaluation of our segmentation method in this paper). By enabling researchers worldwide to analyse brain MRI scans at a superior level of granularity without manual effort or highly specific neuroanatomical knowledge, NextBrain holds promise to increase the specificity of MRI findings and ultimately accelerate our quest to understand the human brain in health and disease.
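One downstream use mentioned above, volumetric analysis, amounts to counting voxels per label and scaling by the voxel size. A minimal sketch with nibabel follows; the file name is a placeholder, and the code assumes an integer-valued label image such as an automated segmentation tool would produce.

```python
# Sketch: per-ROI volumes from an integer label map ("segmentation.nii.gz" is a
# hypothetical path; any integer-valued NIfTI label image works).
import numpy as np
import nibabel as nib

img = nib.load("segmentation.nii.gz")
labels = np.asarray(img.dataobj).astype(int)
voxel_volume_mm3 = float(np.prod(img.header.get_zooms()[:3]))

ids, counts = np.unique(labels[labels > 0], return_counts=True)
for roi_id, n in zip(ids, counts):
    print(f"ROI {int(roi_id)}: {n * voxel_volume_mm3:.1f} mm^3")
```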
Affiliation(s)
- Adrià Casamitjana
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Matteo Mancini
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Department of Cardiovascular, Endocrine-Metabolic Diseases and Aging, Italian National Institute of Health, Rome, Italy
- Cardiff University Brain Research Imaging Centre, Cardiff University, Cardiff, United Kingdom
- Eleanor Robinson
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Loïc Peter
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Roberto Annunziata
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Juri Althonayan
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Shauna Crampsie
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Emily Blackburn
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Benjamin Billot
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Alessia Atzeni
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Oula Puonti
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital - Amager and Hvidovre, Copenhagen, Denmark
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Yaël Balbastre
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Peter Schmidt
- Advanced Research Computing Centre, University College London, London, United Kingdom
- James Hughes
- Advanced Research Computing Centre, University College London, London, United Kingdom
- Jean C Augustinack
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Brian L Edlow
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Lilla Zöllei
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- David L Thomas
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Dorit Kliemann
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, United States
- Martina Bocchetta
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Centre for Cognitive and Clinical Neuroscience, Division of Psychology, Department of Life Sciences, College of Health, Medicine and Life Sciences, Brunel University London, United Kingdom
- Catherine Strand
- Queen Square Brain Bank for Neurological Disorders, Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Janice L Holton
- Queen Square Brain Bank for Neurological Disorders, Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Zane Jaunmuktane
- Queen Square Brain Bank for Neurological Disorders, Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Juan Eugenio Iglesias
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
4. Mostafa RR, Khedr AM, Aghbari ZA, Afyouni I, Kamel I, Ahmed N. Medical image segmentation approach based on hybrid adaptive differential evolution and crayfish optimizer. Comput Biol Med 2024;180:109011. [PMID: 39146840] [DOI: 10.1016/j.compbiomed.2024.109011]
Abstract
Image segmentation plays a pivotal role in medical image analysis, particularly for accurately isolating tumors and lesions. Effective segmentation improves diagnostic precision and facilitates quantitative analysis, which is vital for medical professionals. However, traditional segmentation methods often struggle with multilevel thresholding due to the associated computational complexity. Therefore, determining the optimal threshold set is an NP-hard problem, highlighting the pressing need for efficient optimization strategies to overcome these challenges. This paper introduces a multi-threshold image segmentation (MTIS) method that integrates a hybrid approach combining Differential Evolution (DE) and the Crayfish Optimization Algorithm (COA), known as HADECO. Utilizing two-dimensional (2D) Kapur's entropy and a 2D histogram, this method aims to enhance the efficiency and accuracy of subsequent image analysis and diagnosis. HADECO is a hybrid algorithm that combines DE and COA by exchanging information based on predefined rules, leveraging the strengths of both for superior optimization results. It employs Latin Hypercube Sampling (LHS) to generate a high-quality initial population. HADECO introduces an improved DE algorithm (IDE) with adaptive and dynamic adjustments to key DE parameters and new mutation strategies to enhance its search capability. In addition, it incorporates an adaptive COA (ACOA) with dynamic adjustments to the switching probability parameter, effectively balancing exploration and exploitation. To evaluate the effectiveness of HADECO, its performance is initially assessed using CEC'22 benchmark functions. HADECO is evaluated against several contemporary algorithms using the Wilcoxon signed rank test (WSRT) and the Friedman test (FT) to integrate the results. The findings highlight HADECO's superior optimization abilities, demonstrated by its lowest average Friedman ranking of 1.08. Furthermore, the HADECO-based MTIS method is evaluated using MRI images for knee and CT scans for brain intracranial hemorrhage (ICH). Quantitative results in brain hemorrhage image segmentation show that the proposed method achieves a superior average peak signal-to-noise ratio (PSNR) and feature similarity index (FSIM) of 1.5 and 1.7 at the 6-level threshold. In knee image segmentation, it attains an average PSNR and FSIM of 1.3 and 1.2 at the 5-level threshold, demonstrating the method's effectiveness in solving image segmentation problems.
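To make the optimization problem concrete, the sketch below maximizes plain one-dimensional Kapur's entropy over a set of thresholds using SciPy's stock differential evolution. It only illustrates the objective being optimized; the paper's method uses a two-dimensional histogram and the HADECO hybrid of adaptive DE and the crayfish optimizer, which is not reproduced here.

```python
# Simplified multilevel thresholding: maximize 1-D Kapur's entropy over threshold
# positions with SciPy's stock differential evolution (stand-in image; not HADECO).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(128, 128))                 # stand-in 8-bit image
p = np.bincount(image.ravel(), minlength=256).astype(float)
p /= p.sum()

def neg_kapur_entropy(thresholds):
    # Sum the entropies of the normalized class histograms induced by the thresholds.
    cuts = [0] + sorted(int(t) for t in thresholds) + [256]
    total = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            return 1e9                       # penalize empty classes / duplicate cuts
        q = p[lo:hi][p[lo:hi] > 0] / w
        total += -(q * np.log(q)).sum()
    return -total                            # minimize the negative total entropy

result = differential_evolution(neg_kapur_entropy, bounds=[(1, 255)] * 5, seed=2)
print("thresholds:", sorted(int(t) for t in result.x))
```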
Affiliation(s)
- Reham R Mostafa
- Big Data Mining and Multimedia Research Group, Centre for Data Analytics and Cybersecurity (CDAC), Research Institute of Sciences and Engineering (RISE), University of Sharjah, Sharjah 27272, United Arab Emirates; Information Systems Department, Faculty of Computers and Information Sciences, Mansoura University, Mansoura 35516, Egypt.
- Ahmed M Khedr
- Computer Science Department, University of Sharjah, Sharjah 27272, United Arab Emirates.
- Zaher Al Aghbari
- Computer Science Department, University of Sharjah, Sharjah 27272, United Arab Emirates.
- Imad Afyouni
- Computer Science Department, University of Sharjah, Sharjah 27272, United Arab Emirates.
- Ibrahim Kamel
- Electrical & Computer Engineering Department, University of Sharjah, Sharjah 27272, United Arab Emirates.
- Naveed Ahmed
- Computer Science Department, University of Sharjah, Sharjah 27272, United Arab Emirates.
5. Xue H, Xu X, Yan Z, Cheng J, Zhang L, Zhu W, Cui G, Zhang Q, Qiu S, Yao Z, Qin W, Liu F, Liang M, Fu J, Xu Q, Xu J, Xie Y, Zhang P, Li W, Wang C, Shen W, Zhang X, Xu K, Zuo XN, Ye Z, Yu Y, Xian J, Yu C. Genome-wide association study of hippocampal blood-oxygen-level-dependent-cerebral blood flow correlation in Chinese Han population. iScience 2023;26:108005. [PMID: 37822511] [PMCID: PMC10562876] [DOI: 10.1016/j.isci.2023.108005]
Abstract
Correlation between blood-oxygen-level-dependent (BOLD) and cerebral blood flow (CBF) signals has been used as an index of neurovascular coupling. The hippocampal BOLD-CBF correlation is associated with neurocognition, and a reduced correlation is associated with neuropsychiatric disorders. We conducted the first genome-wide association study of the hippocampal BOLD-CBF correlation in 4,832 Chinese Han subjects. The hippocampal BOLD-CBF correlation had an estimated heritability of 16.2-23.9% and showed a reliable genome-wide significant association with a locus at 3q28, in which many variants have been linked to neuroimaging and cerebrospinal fluid markers of Alzheimer's disease. Gene-based association analyses showed four significant genes (GMNC, CRTC2, DENND4B, and GATAD2B) and revealed enrichment for mast cell calcium mobilization, microglial cell proliferation, and ubiquitin-related proteolysis pathways that regulate different cellular components of the neurovascular unit. This is the first unbiased identification of the genetic associations of hippocampal BOLD-CBF correlation, providing fresh insights into the genetic architecture of hippocampal neurovascular coupling.
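As a rough illustration of the phenotype, the snippet below computes a Pearson correlation between a BOLD-derived map and a CBF map within a hippocampal mask. This is one plausible per-subject operationalization on synthetic arrays; the exact definition used in the study may differ.

```python
# Toy per-subject hippocampal BOLD-CBF correlation: spatial Pearson correlation
# between two maps inside a mask (all arrays synthetic).
import numpy as np

rng = np.random.default_rng(3)
shape = (64, 64, 40)
bold_map = rng.standard_normal(shape)                     # stand-in BOLD-derived map
cbf_map = 0.4 * bold_map + rng.standard_normal(shape)     # stand-in CBF map
hippocampus_mask = np.zeros(shape, dtype=bool)
hippocampus_mask[20:30, 25:35, 15:25] = True              # toy mask

r = np.corrcoef(bold_map[hippocampus_mask], cbf_map[hippocampus_mask])[0, 1]
print(f"hippocampal BOLD-CBF correlation: {r:.3f}")
```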
Affiliation(s)
- Hui Xue
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, China
- Xiaojun Xu
- Department of Radiology, The Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou 310009, China
- Zhihan Yan
- Department of Radiology, The Second Affiliated Hospital and Yuying Children’s Hospital of Wenzhou Medical University, Wenzhou 325027, China
- Jingliang Cheng
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Longjiang Zhang
- Department of Radiology, Jinling Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing 210002, China
- Wenzhen Zhu
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Guangbin Cui
- Functional and Molecular Imaging Key Lab of Shaanxi Province & Department of Radiology, Tangdu Hospital, Air Force Medical University, Xi’an 710038, China
- Quan Zhang
- Department of Radiology, Characteristic Medical Center of Chinese People’s Armed Police Force, Tianjin 300162, China
- Shijun Qiu
- Department of Medical Imaging, the First Affiliated Hospital of Guangzhou University of Traditional Chinese Medicine, Guangzhou 510405, China
- Zhenwei Yao
- Department of Radiology, Huashan Hospital, Fudan University, Shanghai 200040, China
- Wen Qin
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, China
- Feng Liu
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, China
- Meng Liang
- School of Medical Imaging and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University, Tianjin 300203, China
- Jilian Fu
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, China
- Qiang Xu
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, China
- Jiayuan Xu
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, China
- Yingying Xie
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, China
- Peng Zhang
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Wei Li
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Caihong Wang
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Wen Shen
- Department of Radiology, Tianjin First Center Hospital, Tianjin 300192, China
- Xiaochu Zhang
- Division of Life Science and Medicine, University of Science & Technology of China, Hefei 230027, China
- Kai Xu
- Department of Radiology, The Affiliated Hospital of Xuzhou Medical University, Xuzhou 221006, China
- Xi-Nian Zuo
- Developmental Population Neuroscience Research Center at IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Zhaoxiang Ye
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Tianjin Medical University, Ministry of Education, Key Laboratory of Cancer Prevention and Therapy, Tianjin 300060, China
- Yongqiang Yu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei 230022, China
- Junfang Xian
- Department of Radiology, Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
- Chunshui Yu
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, China
- CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
6. Tregidgo HFJ, Soskic S, Althonayan J, Maffei C, Van Leemput K, Golland P, Insausti R, Lerma-Usabiaga G, Caballero-Gaudes C, Paz-Alonso PM, Yendiki A, Alexander DC, Bocchetta M, Rohrer JD, Iglesias JE. Accurate Bayesian segmentation of thalamic nuclei using diffusion MRI and an improved histological atlas. Neuroimage 2023;274:120129. [PMID: 37088323] [PMCID: PMC10636587] [DOI: 10.1016/j.neuroimage.2023.120129]
Abstract
The human thalamus is a highly connected brain structure, which is key for the control of numerous functions and is involved in several neurological disorders. Recently, neuroimaging studies have increasingly focused on the volume and connectivity of the specific nuclei comprising this structure, rather than looking at the thalamus as a whole. However, accurate identification of cytoarchitectonically designed histological nuclei on standard in vivo structural MRI is hampered by the lack of image contrast that can be used to distinguish nuclei from each other and from surrounding white matter tracts. While diffusion MRI may offer such contrast, it has lower resolution and lacks some boundaries visible in structural imaging. In this work, we present a Bayesian segmentation algorithm for the thalamus. This algorithm combines prior information from a probabilistic atlas with likelihood models for both structural and diffusion MRI, allowing segmentation of 25 thalamic labels per hemisphere informed by both modalities. We present an improved probabilistic atlas, incorporating thalamic nuclei identified from histology and 45 white matter tracts surrounding the thalamus identified in ultra-high gradient strength diffusion imaging. We present a family of likelihood models for diffusion tensor imaging, ensuring compatibility with the vast majority of neuroimaging datasets that include diffusion MRI data. The use of these diffusion likelihood models greatly improves identification of nuclear groups versus segmentation based solely on structural MRI. Dice comparison of 5 manually identifiable groups of nuclei to ground truth segmentations show improvements of up to 10 percentage points. Additionally, our chosen model shows a high degree of reliability, with median test-retest Dice scores above 0.85 for four out of five nuclei groups, whilst also offering improved detection of differential thalamic involvement in Alzheimer's disease (AUROC 81.98%). The probabilistic atlas and segmentation tool will be made publicly available as part of the neuroimaging package FreeSurfer (https://freesurfer.net/fswiki/ThalamicNucleiDTI).
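The accuracy and test-retest comparisons above are expressed as Dice overlap; a small helper for computing Dice for a single label, applied to toy arrays, is sketched below.

```python
# Dice overlap between two label maps for a single label (toy data).
import numpy as np

def dice(seg_a, seg_b, label):
    a, b = (seg_a == label), (seg_b == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else np.nan

rng = np.random.default_rng(4)
auto_seg = rng.integers(0, 5, size=(32, 32, 32))            # stand-in automated labels
manual_seg = auto_seg.copy()
manual_seg[rng.random(manual_seg.shape) < 0.1] = 0          # perturb 10% of voxels
print(f"Dice for label 3: {dice(auto_seg, manual_seg, label=3):.3f}")
```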
Affiliation(s)
- Henry F J Tregidgo
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK.
- Sonja Soskic
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Juri Althonayan
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Chiara Maffei
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA
- Koen Van Leemput
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA; Department of Health Technology, Technical University of Denmark, Denmark
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
- Ricardo Insausti
- Human Neuroanatomy Laboratory, University of Castilla-La Mancha, Spain
- Garikoitz Lerma-Usabiaga
- BCBL. Basque Center on Cognition, Brain and Language, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Pedro M Paz-Alonso
- BCBL. Basque Center on Cognition, Brain and Language, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Anastasia Yendiki
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA
- Daniel C Alexander
- Centre for Medical Image Computing, Department of Computer Science, University College London, UK
- Martina Bocchetta
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, UK; Centre for Cognitive and Clinical Neuroscience, Department of Life Sciences, College of Health, Medicine and Life Sciences, Brunel University London, UK
- Jonathan D Rohrer
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, UK
- Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
7. Cerri S, Greve DN, Hoopes A, Lundell H, Siebner HR, Mühlau M, Van Leemput K. An open-source tool for longitudinal whole-brain and white matter lesion segmentation. Neuroimage Clin 2023;38:103354. [PMID: 36907041] [PMCID: PMC10024238] [DOI: 10.1016/j.nicl.2023.103354]
Abstract
In this paper we describe and validate a longitudinal method for whole-brain segmentation of longitudinal MRI scans. It builds upon an existing whole-brain segmentation method that can handle multi-contrast data and robustly analyze images with white matter lesions. This method is here extended with subject-specific latent variables that encourage temporal consistency between its segmentation results, enabling it to better track subtle morphological changes in dozens of neuroanatomical structures and white matter lesions. We validate the proposed method on multiple datasets of control subjects and patients suffering from Alzheimer's disease and multiple sclerosis, and compare its results against those obtained with its original cross-sectional formulation and two benchmark longitudinal methods. The results indicate that the method attains a higher test-retest reliability, while being more sensitive to longitudinal disease effect differences between patient groups. An implementation is publicly available as part of the open-source neuroimaging package FreeSurfer.
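Downstream of such longitudinal segmentations, subtle morphological change is commonly summarized as a per-subject rate of volume change across visits. The sketch below fits a least-squares slope to toy volumes; the numbers are invented and this is not output of the released tool.

```python
# Per-subject annualized volume change for one structure from longitudinal volumes
# (toy numbers; in practice these come from the per-timepoint segmentations).
import numpy as np

years = np.array([0.0, 1.0, 2.1, 3.0])              # visit times in years
volume_ml = np.array([3.52, 3.47, 3.40, 3.33])      # toy structure volumes (mL)

slope, intercept = np.polyfit(years, volume_ml, deg=1)
print(f"estimated change: {100.0 * slope / intercept:.2f}% per year")
```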
Affiliation(s)
- Stefano Cerri
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA.
- Douglas N Greve
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA; Department of Radiology, Harvard Medical School, USA
- Andrew Hoopes
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA
- Henrik Lundell
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Amager and Hvidovre, Copenhagen, Denmark
- Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Amager and Hvidovre, Copenhagen, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg and Frederiksberg, Copenhagen, Denmark; Institute for Clinical Medicine, Faculty of Medical and Health Sciences, University of Copenhagen, Denmark
- Mark Mühlau
- Department of Neurology and TUM-Neuroimaging Center, School of Medicine, Technical University of Munich, Germany
- Koen Van Leemput
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA; Department of Health Technology, Technical University of Denmark, Denmark
8. Miller M, Tward D, Trouvé A. Molecular Computational Anatomy: Unifying the Particle to Tissue Continuum via Measure Representations of the Brain. BME Front 2022;2022:9868673. [PMID: 37206893] [PMCID: PMC10193958] [DOI: 10.34133/2022/9868673]
Abstract
OBJECTIVE: The objective of this research is to unify the molecular representations of spatial transcriptomics and cellular scale histology with the tissue scales of computational anatomy for brain mapping.
IMPACT STATEMENT: We present a unified representation theory for brain mapping based on geometric varifold measures of the microscale deterministic structure and function with the statistical ensembles of the spatially aggregated tissue scales.
INTRODUCTION: Mapping across coordinate systems in computational anatomy allows us to understand structural and functional properties of the brain at the millimeter scale. New measurement technologies in digital pathology and spatial transcriptomics allow us to measure the brain molecule by molecule and cell by cell based on protein and transcriptomic functional identity. We currently have no mathematical representations for integrating consistently the tissue limits with the molecular particle descriptions. The formalism derived here demonstrates the methodology for transitioning consistently from the molecular scale of quantized particles (using mathematical structures first introduced by Dirac as the class of generalized functions) to the tissue scales, with methods originally introduced by Euler for fluids.
METHODS: We introduce two mathematical methods based on notions of generalized functions and statistical mechanics. We use geometric varifolds, a product measure on space and function, to represent functional states at the micro-scales (electrophysiology, molecular histology), integrated with a Boltzmann-like program to pass from deterministic particle descriptions to empirical probabilities on the functional states at the tissue scales.
RESULTS: Our space-function varifold representation provides a recipe for traversing from molecular to tissue scales in terms of a cascade of linear space scaling composed with nonlinear functional feature mapping. Following the cascade implies every scale is a geometric measure, so that a universal family of measure norms can be introduced which quantifies the geodesic connection between brains in the orbit independent of the probing technology, whether it be RNA identities, Tau or amyloid histology, spike trains, or dense MR imagery.
CONCLUSIONS: We demonstrate a unified brain mapping theory for molecular and tissue scales based on geometric measure representations. We call the consistent aggregation of tissue scales from particle and cellular scales molecular computational anatomy.
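In schematic notation (following the abstract's description rather than the paper's exact definitions), the particle-scale representation is a geometric varifold: a weighted sum of Dirac masses over locations and functional feature values.

```latex
% Schematic space-function (varifold) measure: particle i carries location x_i,
% functional feature f_i, and weight w_i.
\mu \;=\; \sum_{i} w_i \, \delta_{x_i} \otimes \delta_{f_i}
```

Aggregating the particles that fall within a tissue-scale neighbourhood then yields empirical probability laws on the feature values, which is the sense in which every scale remains a geometric measure.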
Affiliation(s)
- Michael Miller
- Department of Biomedical Engineering & Kavli Neuroscience Discovery Institute & Center for Imaging Science, Johns Hopkins University, Baltimore, USA
- Daniel Tward
- Departments of Computational Medicine & Neurology, University of California Los Angeles, Los Angeles, USA
- Alain Trouvé
- Centre Giovanni Borelli (UMR 9010), Ecole Normale Supérieure Paris-Saclay, Université Paris-Saclay, Gif-sur-Yvette, France
9. Casamitjana A, Iglesias JE. High-resolution atlasing and segmentation of the subcortex: Review and perspective on challenges and opportunities created by machine learning. Neuroimage 2022;263:119616. [PMID: 36084858] [PMCID: PMC11534291] [DOI: 10.1016/j.neuroimage.2022.119616]
Abstract
This paper reviews almost three decades of work on atlasing and segmentation methods for subcortical structures in human brain MRI. In writing this survey, we have three distinct aims. First, to document the evolution of digital subcortical atlases of the human brain, from the early MRI templates published in the nineties, to the complex multi-modal atlases at the subregion level that are available today. Second, to provide a detailed record of related efforts in the automated segmentation front, from earlier atlas-based methods to modern machine learning approaches. And third, to present a perspective on the future of high-resolution atlasing and segmentation of subcortical structures in in vivo human brain MRI, including open challenges and opportunities created by recent developments in machine learning.
Affiliation(s)
- Adrià Casamitjana
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK.
- Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA
10. Hoffmann M, Billot B, Greve DN, Iglesias JE, Fischl B, Dalca AV. SynthMorph: Learning Contrast-Invariant Registration Without Acquired Images. IEEE Trans Med Imaging 2022;41:543-558. [PMID: 34587005] [PMCID: PMC8891043] [DOI: 10.1109/tmi.2021.3116879]
Abstract
We introduce a strategy for learning image registration without acquired imaging data, producing powerful networks agnostic to contrast introduced by magnetic resonance imaging (MRI). While classical registration methods accurately estimate the spatial correspondence between images, they solve an optimization problem for every new image pair. Learning-based techniques are fast at test time but limited to registering images with contrasts and geometric content similar to those seen during training. We propose to remove this dependency on training data by leveraging a generative strategy for diverse synthetic label maps and images that exposes networks to a wide range of variability, forcing them to learn more invariant features. This approach results in powerful networks that accurately generalize to a broad array of MRI contrasts. We present extensive experiments with a focus on 3D neuroimaging, showing that this strategy enables robust and accurate registration of arbitrary MRI contrasts even if the target contrast is not seen by the networks during training. We demonstrate registration accuracy surpassing the state of the art both within and across contrasts, using a single model. Critically, training on arbitrary shapes synthesized from noise distributions results in competitive performance, removing the dependency on acquired data of any kind. Additionally, since anatomical label maps are often available for the anatomy of interest, we show that synthesizing images from these dramatically boosts performance, while still avoiding the need for real intensity images. Our code is available at https://w3id.org/synthmorph.
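A minimal two-dimensional sketch of the generative idea, drawing random label maps from smoothed noise and rendering them with random per-label intensities so that no acquired images are needed, is shown below; the shapes, parameters, and library choices are illustrative and are not the SynthMorph implementation.

```python
# 2-D sketch of training-data synthesis from noise: smooth random score maps,
# argmax into a label map, then assign a random mean intensity per label.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
n_labels, shape = 6, (128, 128)

# Random smooth score map per label; the voxel-wise argmax yields blobby regions.
scores = np.stack([gaussian_filter(rng.standard_normal(shape), sigma=8)
                   for _ in range(n_labels)])
label_map = scores.argmax(axis=0)

# Random contrast: each label gets a random mean intensity, plus noise and blurring.
means = rng.uniform(0, 255, size=n_labels)
image = gaussian_filter(means[label_map] + 10 * rng.standard_normal(shape), sigma=1)
print(f"label map {label_map.shape}, intensity range {image.min():.1f}-{image.max():.1f}")
```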
11. Zhang H, Gomez L, Guilleminot J. Uncertainty quantification of TMS simulations considering MRI segmentation errors. J Neural Eng 2022;19. [PMID: 35169105] [DOI: 10.1088/1741-2552/ac5586]
Abstract
OBJECTIVE: Transcranial Magnetic Stimulation (TMS) is a non-invasive brain stimulation method that is used to study brain function and conduct neuropsychiatric therapy. Computational methods that are commonly used for electric field (E-field) dosimetry of TMS are limited in accuracy and precision because of possible geometric errors introduced in the generation of head models by segmenting medical images into tissue types. This paper studies E-field prediction fidelity as a function of segmentation accuracy.
APPROACH: The errors in the segmentation of medical images into tissue types are modeled as geometric uncertainty in the shape of the boundary between tissue types. For each tissue boundary realization, we then use an in-house boundary element method to perform a forward propagation analysis and quantify the impact of tissue boundary uncertainties on the induced cortical E-field.
MAIN RESULTS: Our results indicate that predictions of the E-field induced in the brain are negligibly sensitive to segmentation errors in the scalp, skull, and white matter compartments. In contrast, E-field predictions are highly sensitive to possible CSF segmentation errors. Specifically, segmentation errors on the CSF and gray matter interface lead to higher E-field uncertainties in the gyral crowns, and segmentation errors on the CSF and white matter interface lead to higher uncertainties in the sulci. Furthermore, the average cortical E-field over a region exhibits lower uncertainty relative to point-wise estimates.
SIGNIFICANCE: The accuracy of current cortical E-field simulations is limited by CSF segmentation accuracy. Other quantities of interest, such as the average of the E-field over a cortical region, could provide a dose quantity that is robust to possible segmentation errors.
Affiliation(s)
- Hao Zhang
- Department of Civil and Environmental Engineering, Duke University, 121 Hudson Hall, Durham, 27708-0187, UNITED STATES
- Luis Gomez
- Elmore Family School of Electrical and Computer Engineering, Purdue University, 465 Northwestern Ave., West Lafayette, Indiana, 47907-2050, UNITED STATES
- Johann Guilleminot
- Duke University, 121 Hudson Hall, Durham, North Carolina, 27708-0187, UNITED STATES
12. Shukla PK, Zakariah M, Hatamleh WA, Tarazi H, Tiwari B. AI-Driven Novel Approach for Liver Cancer Screening and Prediction Using Cascaded Fully Convolutional Neural Network. J Healthc Eng 2022;2022:4277436. [PMID: 35154620] [PMCID: PMC8825667] [DOI: 10.1155/2022/4277436]
Abstract
In experimental analysis and computer-aided design support schemes, automated segmentation of the liver and hepatic lesions is a significant step for studying biomarker characteristics. The lesion type changes from patient to patient, depending on the size, the imaging equipment (such as the setting dissimilarity approach), and the timing of the lesion. With practical approaches, it is difficult to determine the stages of liver cancer based on the segmentation of lesion patterns. Based on the training accuracy rate, present algorithms confront a number of obstacles in some domains. The proposed work describes a system for automatically detecting liver tumours and lesions in magnetic resonance images of the abdomen using 3D affine-invariant and shape parameterization approaches. This point-to-point parameterization addresses the frequent issues associated with concave surfaces by establishing a standard model level for the organ's surface throughout the modelling process. Initially, the geodesic active contour analysis approach is used to separate the liver area from the rest of the body. Liver segmentation may help to minimise the error rate during the training operations, which are carried out using Cascaded Fully Convolutional Neural Networks (CFCNs) with the segmented tumour area as input. The findings are obtained and validated through a stage-wise analysis of the datasets, which comprise training and testing images. The accuracy attained by the Cascaded Fully Convolutional Neural Network (CFCN) for the liver tumour analysis is 94.21 percent, with a calculation time of less than 90 seconds per volume. The results of the trials show that the total accuracy rate of the training and testing procedure is 93.85 percent across the various volumes of the 3DIRCAD datasets tested.
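The cascade itself is simple to express: a second network receives the first network's output as an extra input channel. The toy PyTorch sketch below shows only that wiring with tiny two-dimensional stand-in networks; it is not the paper's CFCN architecture or training procedure.

```python
# Minimal cascade wiring: stage 2 takes the image plus stage 1's probability map.
import torch
import torch.nn as nn

def tiny_fcn(in_channels):
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, kernel_size=1), nn.Sigmoid())

stage1 = tiny_fcn(in_channels=1)   # e.g., organ (liver) segmentation
stage2 = tiny_fcn(in_channels=2)   # e.g., lesion segmentation inside the organ

image = torch.randn(1, 1, 64, 64)                            # stand-in slice
organ_prob = stage1(image)
lesion_prob = stage2(torch.cat([image, organ_prob], dim=1))  # reuse stage-1 output
print(lesion_prob.shape)
```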
Affiliation(s)
- Piyush Kumar Shukla
- Computer Science & Engineering Department, University Institute of Technology, Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal 462033, India
- Mohammed Zakariah
- College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Wesam Atef Hatamleh
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Hussam Tarazi
- Department of Computer Science and Informatics, School of Engineering and Computer Science, Oakland University, Rochester Hills MI USA 318 Meadow Brook rd, Rochester, MI 48309, USA
- Basant Tiwari
- Department of Information Technology, Hawassa University, Institute of Technology, Hawassa, Ethiopia
13. Yan Y, Balbastre Y, Brudfors M, Ashburner J. Factorisation-Based Image Labelling. Front Neurosci 2022;15:818604. [PMID: 35110992] [PMCID: PMC8801908] [DOI: 10.3389/fnins.2021.818604]
Abstract
Segmentation of brain magnetic resonance images (MRI) into anatomical regions is a useful task in neuroimaging. Manual annotation is time consuming and expensive, so having a fully automated and general purpose brain segmentation algorithm is highly desirable. To this end, we propose a patch-based label propagation approach based on a generative model with latent variables. Once trained, our Factorisation-based Image Labelling (FIL) model is able to label target images with a variety of image contrasts. We compare the effectiveness of our proposed model against the state-of-the-art using data from the MICCAI 2012 Grand Challenge and Workshop on Multi-Atlas Labelling. As our approach is intended to be general purpose, we also assess how well it can handle domain shift by labelling images of the same subjects acquired with different MR contrasts.
Affiliation(s)
- Yu Yan
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Yaël Balbastre
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
- Mikael Brudfors
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- John Ashburner
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
14. A Discussion of Machine Learning Approaches for Clinical Prediction Modeling. Acta Neurochir Suppl 2021;134:65-73. [PMID: 34862529] [DOI: 10.1007/978-3-030-85292-4_9]
Abstract
While machine learning has occupied a niche in clinical medicine for decades, continued method development and increased accessibility of medical data have led to broad diversification of approaches. These range from humble regression-based models to more complex artificial neural networks; yet, despite heterogeneity in foundational principles and architecture, the spectrum of machine learning approaches to clinical prediction modeling has invariably led to the development of algorithms advancing our ability to provide optimal care for our patients. In this chapter, we briefly review early machine learning approaches in medicine before delving into common approaches being applied for clinical prediction modeling today. For each, we offer a brief introduction to the theory and application with accompanying examples from the medical literature. In doing so, we present a summarized image of the current state of machine learning and some of its many forms in medical predictive modeling.
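As a concrete illustration of the chapter's theme, the snippet below compares a regression-based model with a more flexible ensemble on the same synthetic classification task under cross-validation; the data, models, and metric are arbitrary choices for demonstration only.

```python
# Comparing a regression-based model with an ensemble on a synthetic prediction task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)
models = [("logistic regression", LogisticRegression(max_iter=1000)),
          ("random forest", RandomForestClassifier(n_estimators=200, random_state=0))]
for name, model in models:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```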
15. Li Y, Cui J, Sheng Y, Liang X, Wang J, Chang EIC, Xu Y. Whole brain segmentation with full volume neural network. Comput Med Imaging Graph 2021;93:101991. [PMID: 34634548] [DOI: 10.1016/j.compmedimag.2021.101991]
Abstract
Whole brain segmentation is an important neuroimaging task that segments the whole brain volume into anatomically labeled regions-of-interest. Convolutional neural networks have demonstrated good performance in this task. Existing solutions usually segment the brain image by classifying the voxels, or by labeling the slices or the sub-volumes separately. Their representation learning is based on parts of the whole volume, whereas their labeling result is produced by aggregation of partial segmentations. Learning and inference with incomplete information could lead to a sub-optimal final segmentation result. To address these issues, we propose to adopt a full volume framework, which feeds the full volume brain image into the segmentation network and directly outputs the segmentation result for the whole brain volume. The framework makes use of complete information in each volume and can be implemented easily. An effective instance of this framework is given subsequently. We adopt the 3D high-resolution network (HRNet) for learning spatially fine-grained representations and the mixed precision training scheme for memory-efficient training. Extensive experiment results on a publicly available 3D MRI brain dataset show that our proposed model advances the state-of-the-art methods in terms of segmentation performance.
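The memory-saving ingredient mentioned above, mixed precision training, can be sketched with PyTorch's automatic mixed precision utilities as below; the tiny 3-D convolution stands in for the 3D HRNet and the loop is illustrative only, not the authors' training code.

```python
# Sketch of mixed-precision training on a full 3-D volume (toy model and data).
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"
model = nn.Conv3d(1, 4, kernel_size=3, padding=1).to(device)    # stand-in for 3D HRNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
criterion = nn.CrossEntropyLoss()

volume = torch.randn(1, 1, 32, 32, 32, device=device)           # full-volume input
target = torch.randint(0, 4, (1, 32, 32, 32), device=device)    # voxel-wise labels

for _ in range(3):                                               # a few toy iterations
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=use_amp):
        loss = criterion(model(volume), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
print(f"final loss: {loss.item():.3f}")
```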
Affiliation(s)
- Yeshu Li
- Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607, United States.
- Jonathan Cui
- Vacaville Christian Schools, Vacaville, CA 95687, United States.
- Yilun Sheng
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China; Microsoft Research, Beijing 100080, China.
- Xiao Liang
- High School Affiliated to Renmin University of China, Beijing 100080, China.
- Yan Xu
- School of Biological Science and Medical Engineering and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China; Microsoft Research, Beijing 100080, China.
16. Zhang B, Rahmatullah B, Wang SL, Zhang G, Wang H, Ebrahim NA. A bibliometric of publication trends in medical image segmentation: Quantitative and qualitative analysis. J Appl Clin Med Phys 2021;22:45-65. [PMID: 34453471] [PMCID: PMC8504607] [DOI: 10.1002/acm2.13394]
Abstract
PURPOSE: Medical images are important in diagnosing disease and treatment planning. Computer algorithms that delineate anatomical structures, highlight regions of interest, and remove unnecessary information are collectively known as medical image segmentation algorithms. The quality of these algorithms directly affects the performance of the subsequent processing steps. There are many studies on medical image segmentation algorithms and their applications, but none has involved a bibliometric analysis of medical image segmentation.
METHODS: This bibliometric work investigated the academic publication trends in medical image segmentation technology. The data were collected from the Web of Science (WoS) Core Collection and Scopus. In the quantitative analysis stage, visual maps were produced to show publication trends from five different perspectives: annual publications, countries, top authors, publication sources, and keywords. In the qualitative analysis stage, the frequently used methods and research trends in the medical image segmentation field were analyzed from the 49 publications with the top annual citation rates.
RESULTS: The analysis showed that the number of publications has increased rapidly by year. The top contributing countries include the Chinese mainland, the United States, and India. Most of these publications were conference papers, alongside several top journals. Keyword analysis indicated that the research hotspot in this field was deep learning-based medical image segmentation algorithms. The publications were divided into three categories: reviews, segmentation algorithm publications, and other relevant publications. Among these three categories, segmentation algorithm publications occupied the vast majority, and deep learning neural network-based algorithms were the research hotspot and frontier.
CONCLUSIONS: Through this bibliometric research work, the research hotspot in the medical image segmentation field is uncovered and can point to future research in the field. It can be expected that more researchers will focus their work on deep learning neural network-based medical image segmentation.
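The quantitative stage of such a bibliometric analysis is essentially grouping and counting exported records. A toy sketch with pandas follows; the column names and values are assumptions, since real WoS/Scopus exports differ in detail.

```python
# Toy bibliometric counts: publications per year and per country from a record table.
import pandas as pd

records = pd.DataFrame({
    "year":    [2017, 2018, 2018, 2019, 2019, 2019, 2020, 2020],
    "country": ["China", "USA", "India", "China", "USA", "China", "India", "China"],
})
print(records["year"].value_counts().sort_index())   # annual publication trend
print(records["country"].value_counts().head(3))     # most productive countries
```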
Affiliation(s)
- Bin Zhang
- Data Intelligence and Knowledge Management, Faculty of Arts, Computing and Creative Industry, Sultan Idris Education University (UPSI), Tanjong Malim, Perak, Malaysia
- School of Computer Science, Baoji University of Arts and Sciences, Baoji, P. R. China
- Bahbibi Rahmatullah
- Data Intelligence and Knowledge Management, Faculty of Arts, Computing and Creative Industry, Sultan Idris Education University (UPSI), Tanjong Malim, Perak, Malaysia
- Shir Li Wang
- Data Intelligence and Knowledge Management, Faculty of Arts, Computing and Creative Industry, Sultan Idris Education University (UPSI), Tanjong Malim, Perak, Malaysia
- Guangnan Zhang
- School of Computer Science, Baoji University of Arts and Sciences, Baoji, P. R. China
- Huan Wang
- School of Computer Science, Baoji University of Arts and Sciences, Baoji, P. R. China
- Nader Ale Ebrahim
- Research and Technology Department, Alzahra University, Vanak, Tehran, Iran
- Office of the Deputy Vice-Chancellor (Research & Innovation), University of Malaya, Kuala Lumpur, Malaysia
Collapse
|
17
|
Bal A, Banerjee M, Chaki R, Sharma P. An efficient brain tumor image classifier by combining multi-pathway cascaded deep neural network and handcrafted features in MR images. Med Biol Eng Comput 2021; 59:1495-1527. [PMID: 34184181 DOI: 10.1007/s11517-021-02370-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2020] [Accepted: 04/27/2021] [Indexed: 10/21/2022]
Abstract
Accurate segmentation and delineation of the sub-tumor regions are very challenging tasks due to the nature of the tumor. Convolutional neural networks (CNNs) have achieved the most promising performance for brain tumor segmentation; however, handcrafted features remain very important for accurately identifying the tumor's boundary regions. The present work proposes a robust deep learning-based model with three different CNN architectures along with pre-defined handcrafted features for brain tumor segmentation, mainly to find more prominent boundaries of the core and enhanced tumor regions. Generally, a CNN architecture does not use pre-defined handcrafted features because it extracts features automatically. In this work, several pre-defined handcrafted features are computed from four MRI modalities (T2, FLAIR, T1c, and T1) with the help of additional handcrafted masks according to user interest and are fed to the convolutional (automatic) features to improve the overall performance of the proposed CNN model for tumor segmentation. A multi-pathway CNN is explored along with a single-pathway CNN; it simultaneously extracts both local and global features to identify the accurate sub-regions of the tumor with the help of handcrafted features. The work uses a cascaded CNN architecture, where the outcome of one CNN is treated as additional input information to the subsequent CNNs. To extract the handcrafted features, a convolutional operation was applied to the four MRI modalities with several pre-defined masks to produce a predefined set of handcrafted features. The work also investigates the usefulness of intensity normalization and data augmentation in the pre-processing stage to handle the difficulties related to the imbalance of tumor labels. The proposed method was evaluated on the BraTS 2018 dataset and achieved more promising results than existing (currently published) methods with respect to metrics such as specificity, sensitivity, and Dice similarity coefficient (DSC) for complete, core, and enhanced tumor regions. Quantitatively, a notable gain is achieved around the boundaries of the sub-tumor regions using the proposed two-pathway CNN along with the handcrafted features.
Affiliation(s)
- Abhishek Bal
- A.K. Choudhury School of Information Technology, University of Calcutta, Kolkata, India.
- Rituparna Chaki
- A.K. Choudhury School of Information Technology, University of Calcutta, Kolkata, India

18
Gordon S, Kodner B, Goldfryd T, Sidorov M, Goldberger J, Raviv TR. An atlas of classifiers-a machine learning paradigm for brain MRI segmentation. Med Biol Eng Comput 2021; 59:1833-1849. [PMID: 34313921 DOI: 10.1007/s11517-021-02414-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Accepted: 04/21/2021] [Indexed: 11/25/2022]
Abstract
We present the Atlas of Classifiers (AoC), a conceptually novel framework for brain MRI segmentation. The AoC is a spatial map of voxel-wise multinomial logistic regression (LR) functions learned from the labeled data. Upon convergence, the resulting fixed LR weights, a few for each voxel, represent the training dataset. It can therefore be considered a lightweight learning machine which, despite its low capacity, does not underfit the problem. The AoC construction is independent of the actual intensities of the test images, providing the flexibility to train it on the available labeled data and use it for the segmentation of images from different datasets and modalities. In this sense, it also does not overfit the training data. The proposed method has been applied to numerous publicly available datasets for the segmentation of brain MRI tissues and is shown to be robust to noise and to outperform commonly used methods. Promising results were also obtained for multi-modal, cross-modality MRI segmentation. Finally, we show how an AoC trained on brain MRIs of healthy subjects can be exploited for lesion segmentation in multiple sclerosis patients.
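As a rough illustration of the prediction step described above, the sketch below applies a per-voxel multinomial logistic regression atlas to a test image that has already been registered to atlas space. It is a schematic reading of the abstract rather than the authors' implementation; the array shapes, the feature representation, and the toy inputs are assumptions, and the training (weight-estimation) stage is omitted.

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def aoc_predict(features, weights, bias):
        # features: (V, F) per-voxel features of a registered test image
        # weights:  (V, F, K) voxel-wise logistic-regression weights for K classes
        # bias:     (V, K) voxel-wise intercepts
        logits = np.einsum('vf,vfk->vk', features, weights) + bias
        return softmax(logits).argmax(axis=1)          # (V,) predicted labels

    # toy example: 1000 voxels, 3 features, 4 tissue classes
    rng = np.random.default_rng(0)
    labels = aoc_predict(rng.normal(size=(1000, 3)),
                         rng.normal(size=(1000, 3, 4)),
                         rng.normal(size=(1000, 4)))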
Affiliation(s)
- Shiri Gordon
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Boris Kodner
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Tal Goldfryd
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Michael Sidorov
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Jacob Goldberger
- The Faculty of Electrical Engineering, Bar-Ilan University, Ramat-Gan, Israel
- Tammy Riklin Raviv
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel.

19
Weiss DA, Saluja R, Xie L, Gee JC, Sugrue LP, Pradhan A, Nick Bryan R, Rauschecker AM, Rudie JD. Automated multiclass tissue segmentation of clinical brain MRIs with lesions. NEUROIMAGE-CLINICAL 2021; 31:102769. [PMID: 34333270 PMCID: PMC8346689 DOI: 10.1016/j.nicl.2021.102769] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 06/29/2021] [Accepted: 07/20/2021] [Indexed: 12/21/2022]
Abstract
A U-Net incorporating spatial prior information can successfully segment 6 brain tissue types. The U-Net was able to segment gray and white matter in the presence of lesions. The U-Net surpassed the performance of its source algorithm in an external dataset. Segmentations were produced in a hundredth of the time of its predecessor algorithm.
Delineation and quantification of normal and abnormal brain tissues on Magnetic Resonance Images is fundamental to the diagnosis and longitudinal assessment of neurological diseases. Here we sought to develop a convolutional neural network for automated multiclass tissue segmentation of brain MRIs that was robust at typical clinical resolutions and in the presence of a variety of lesions. We trained a 3D U-Net for full brain multiclass tissue segmentation from a prior atlas-based segmentation method on an internal dataset that consisted of 558 clinical T1-weighted brain MRIs (453/52/53; training/validation/test) of patients with one of 50 different diagnostic entities (n = 362) or with a normal brain MRI (n = 196). We then used transfer learning to refine our model on an external dataset that consisted of 7 patients with hand-labeled tissue types. We evaluated the tissue-wise and intra-lesion performance with different loss functions and spatial prior information in the validation set and applied the best performing model to the internal and external test sets. The network achieved an average overall Dice score of 0.87 and volume similarity of 0.97 in the internal test set. Further, the network achieved a median intra-lesion tissue segmentation accuracy of 0.85 inside lesions within white matter and 0.61 inside lesions within gray matter. After transfer learning, the network achieved an average overall Dice score of 0.77 and volume similarity of 0.96 in the external dataset compared to human raters. The network had equivalent or better performance than the original atlas-based method on which it was trained across all metrics and produced segmentations in a hundredth of the time. We anticipate that this pipeline will be a useful tool for clinical decision support and quantitative analysis of clinical brain MRIs in the presence of lesions.
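For reference, the two headline metrics reported here can be computed per tissue class as in the short sketch below. The volume-similarity formula used, 1 - |V_pred - V_ref| / (V_pred + V_ref), is the commonly used definition and is an assumption rather than taken from the paper.

    import numpy as np

    def dice_score(pred, ref):
        # Dice similarity coefficient between two binary masks
        pred, ref = pred.astype(bool), ref.astype(bool)
        denom = pred.sum() + ref.sum()
        return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

    def volume_similarity(pred, ref):
        # 1 - |V_pred - V_ref| / (V_pred + V_ref)
        vp, vr = int(pred.sum()), int(ref.sum())
        return 1.0 - abs(vp - vr) / (vp + vr) if (vp + vr) else 1.0

    def per_class_metrics(pred_labels, ref_labels, classes):
        # evaluate a multiclass tissue segmentation one label at a time
        return {c: (dice_score(pred_labels == c, ref_labels == c),
                    volume_similarity(pred_labels == c, ref_labels == c))
                for c in classes}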
Affiliation(s)
- David A Weiss
- University of Pennsylvania, United States; University of California, San Francisco, United States.
- Long Xie
- University of Pennsylvania, United States
- Leo P Sugrue
- University of California, San Francisco, United States

20
Learning U-Net Based Multi-Scale Features in Encoding-Decoding for MR Image Brain Tissue Segmentation. SENSORS 2021; 21:s21093232. [PMID: 34067101 PMCID: PMC8124734 DOI: 10.3390/s21093232] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 04/27/2021] [Accepted: 04/28/2021] [Indexed: 11/17/2022]
Abstract
Accurate brain tissue segmentation of MRI is vital for aiding diagnosis, treatment planning, and monitoring of neurologic conditions. As an excellent convolutional neural network (CNN), U-Net is widely used in MR image segmentation because it usually generates high-precision features. However, the performance of U-Net is considerably restricted by the variable shapes of the segmented targets in MRI and the information loss of down-sampling and up-sampling operations. Therefore, we propose a novel network that introduces spatial and channel dimension-based multi-scale feature information extractors into the encoding-decoding framework, which helps to extract rich multi-scale features while highlighting the details of higher-level features in the encoding part and recovering the corresponding localization to a higher-resolution layer in the decoding part. Concretely, we propose two information extractors: multi-branch pooling (MP) in the encoding part and multi-branch dense prediction (MDP) in the decoding part. Additionally, we designed a new multi-branch output structure with MDP in the decoding part to form more accurate edge-preserving prediction maps by integrating the dense adjacent prediction features at different scales. Finally, the proposed method is tested on the MRBrainS13, IBSR18, and iSeg-2017 datasets. We find that the proposed network achieves higher accuracy in segmenting MRI brain tissues and outperforms the leading method of 2018 in the segmentation of GM and CSF. Therefore, it can be a useful tool for diagnostic applications, such as brain MRI segmentation and diagnosis.
21
Glas HH, Kraeima J, van Ooijen PMA, Spijkervet FKL, Yu L, Witjes MJH. Augmented Reality Visualization for Image-Guided Surgery: A Validation Study Using a Three-Dimensional Printed Phantom. J Oral Maxillofac Surg 2021; 79:1943.e1-1943.e10. [PMID: 34033801 DOI: 10.1016/j.joms.2021.04.001] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Revised: 04/01/2021] [Accepted: 04/01/2021] [Indexed: 01/21/2023]
Abstract
BACKGROUND Oral and maxillofacial surgery currently relies on virtual surgery planning based on image data (CT, MRI). Three-dimensional (3D) visualizations are typically used to plan and predict the outcome of complex surgical procedures. To translate the virtual surgical plan to the operating room, it is either converted into physical 3D-printed guides or directly translated using real-time navigation systems. PURPOSE This study aims to improve the translation of the virtual surgery plan to a surgical procedure, such as oncologic or trauma surgery, in terms of accuracy and speed. Here we report an augmented reality visualization technique for image-guided surgery. It describes how surgeons can visualize and interact with the virtual surgery plan and navigation data while in the operating room. User friendliness and usability were evaluated in a formal user study that compared our augmented reality-assisted technique to the gold-standard setup of a perioperative navigation system (Brainlab). Moreover, the accuracy of typical navigation tasks, such as reaching landmarks and following trajectories, was compared. RESULTS Overall completion time of navigation tasks was 1.71 times faster using augmented reality (P = .034). Accuracy improved significantly using augmented reality (P < .001), while a weaker effect was found for reaching physical landmarks (P = .087). Although the participants were relatively unfamiliar with VR/AR (rated 2.25/5) and gesture-based interaction (rated 2/5), they reported that navigation tasks became easier to perform using augmented reality (difficulty rated 3.25/5 for Brainlab and 2.4/5 for HoloLens). CONCLUSION The proposed workflow can be used in a wide range of image-guided surgery procedures as an addition to existing verified image guidance systems. Results of this user study imply that our technique enables typical navigation tasks to be performed faster and more accurately compared to the current gold standard. In addition, qualitative feedback on our augmented reality-assisted technique was more positive compared to the standard setup.
Affiliation(s)
- H H Glas
- Technical Physician, Department of Oral & Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
- J Kraeima
- Technical Physician, Department of Oral & Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- P M A van Ooijen
- Associate Professor, Faculty of Medical Sciences, Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- F K L Spijkervet
- Professor, Oral and Maxillofacial Surgeon, Head of the Department, Department of Oral & Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- L Yu
- Lecturer in the Department of Computer Science and Software Engineering (CSSE), Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- M J H Witjes
- Oral and Maxillofacial Surgeon, Principal Investigator, Department of Oral & Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands

22
Hoffmann M, Billot B, Iglesias JE, Fischl B, Dalca AV. LEARNING MRI CONTRAST-AGNOSTIC REGISTRATION. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2021; 2023:899-903. [PMID: 38213549 PMCID: PMC10782386 DOI: 10.1109/isbi48211.2021.9434113] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/13/2024]
Abstract
We introduce a strategy for learning image registration without acquired imaging data, producing powerful networks agnostic to magnetic resonance imaging (MRI) contrast. While classical methods accurately estimate the spatial correspondence between images, they solve an optimization problem for every new image pair. Learning methods are fast at test time but limited to images with contrasts and geometric content similar to those seen during training. We propose to remove this dependency using a generative strategy that exposes networks to a wide range of images synthesized from segmentations during training, forcing them to generalize across contrasts. We show that networks trained within this framework generalize to a broad array of unseen MRI contrasts and surpass classical state-of-the-art brain registration accuracy by up to 12.4 Dice points for a variety of tested contrast combinations. Critically, training on arbitrary shapes synthesized from noise distributions results in competitive performance, removing the dependency on acquired data of any kind. Additionally, since anatomical label maps are often available for the anatomy of interest, we show that synthesizing images from these dramatically boosts performance, while still avoiding the need for real intensity images during training.
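The core of the synthesis idea, drawing a training image of arbitrary contrast from a label map, can be pictured with the simplified sketch below. The intensity ranges and blurring are illustrative assumptions; the actual generative model also randomizes deformations, bias fields, resolution, and noise in a more principled way.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def synthesize_image(label_map, rng=None, blur_sigma=1.0):
        # sample one Gaussian intensity distribution per label, then blur and add noise
        rng = np.random.default_rng() if rng is None else rng
        image = np.zeros(label_map.shape, dtype=float)
        for lab in np.unique(label_map):
            mean, std = rng.uniform(0.0, 1.0), rng.uniform(0.02, 0.1)
            mask = label_map == lab
            image[mask] = rng.normal(mean, std, size=int(mask.sum()))
        image = gaussian_filter(image, blur_sigma)     # crude partial-volume effect
        return np.clip(image + rng.normal(0.0, 0.01, image.shape), 0.0, 1.0)

Each training iteration would then draw a fresh pair of such images (plus the warp relating their label maps), so the registration network never sees the same contrast twice.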
Affiliation(s)
- Malte Hoffmann
- Athinoula A. Martinos Center, Massachusetts General Hospital, Charlestown, MA 02129, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Benjamin Billot
- Centre for Medical Image Computing, University College London, WC1E 6BT, UK
- Juan E Iglesias
- Athinoula A. Martinos Center, Massachusetts General Hospital, Charlestown, MA 02129, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Centre for Medical Image Computing, University College London, WC1E 6BT, UK
- Computer Science and Artificial Intelligence Lab, MIT, Cambridge, MA 02139, USA
- Bruce Fischl
- Athinoula A. Martinos Center, Massachusetts General Hospital, Charlestown, MA 02129, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Computer Science and Artificial Intelligence Lab, MIT, Cambridge, MA 02139, USA
- Adrian V Dalca
- Athinoula A. Martinos Center, Massachusetts General Hospital, Charlestown, MA 02129, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Computer Science and Artificial Intelligence Lab, MIT, Cambridge, MA 02139, USA

23
LR-cGAN: Latent representation based conditional generative adversarial network for multi-modality MRI synthesis. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102457] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
24
Masoudi S, Harmon SA, Mehralivand S, Walker SM, Raviprakash H, Bagci U, Choyke PL, Turkbey B. Quick guide on radiology image pre-processing for deep learning applications in prostate cancer research. J Med Imaging (Bellingham) 2021; 8:010901. [PMID: 33426151 PMCID: PMC7790158 DOI: 10.1117/1.jmi.8.1.010901] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Accepted: 12/04/2020] [Indexed: 12/25/2022] Open
Abstract
Purpose: Deep learning has achieved major breakthroughs during the past decade in almost every field. There are plenty of publicly available algorithms, each designed to address a different task of computer vision in general. However, most of these algorithms cannot be directly applied to images in the medical domain. Herein, we are focused on the required preprocessing steps that should be applied to medical images prior to deep neural networks. Approach: To be able to employ the publicly available algorithms for clinical purposes, we must make a meaningful pixel/voxel representation from medical images which facilitates the learning process. Based on the ultimate goal expected from an algorithm (classification, detection, or segmentation), one may infer the required pre-processing steps that can ideally improve the performance of that algorithm. Required pre-processing steps for computed tomography (CT) and magnetic resonance (MR) images in their correct order are discussed in detail. We further supported our discussion by relevant experiments to investigate the efficiency of the listed preprocessing steps. Results: Our experiments confirmed how using appropriate image pre-processing in the right order can improve the performance of deep neural networks in terms of better classification and segmentation. Conclusions: This work investigates the appropriate pre-processing steps for CT and MR images of prostate cancer patients, supported by several experiments that can be useful for educating those new to the field (https://github.com/NIH-MIP/Radiology_Image_Preprocessing_for_Deep_Learning).
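As a small illustration of the intensity-side steps discussed in the guide, the sketch below windows a CT volume in Hounsfield units and percentile-clips and z-scores an MR volume. The window and percentile values are illustrative defaults rather than the paper's recommendations, and spatial steps such as resampling to isotropic voxels and bias-field correction are omitted.

    import numpy as np

    def preprocess_ct(volume_hu, window=(-100, 300)):
        # window the CT volume in Hounsfield units, then rescale to [0, 1]
        lo, hi = window
        vol = np.clip(volume_hu, lo, hi)
        return (vol - lo) / float(hi - lo)

    def preprocess_mr(volume, mask=None, pct=(1, 99)):
        # MR intensities have no absolute scale: clip outliers, then z-score
        mask = np.ones(volume.shape, dtype=bool) if mask is None else mask.astype(bool)
        lo, hi = np.percentile(volume[mask], pct)
        vol = np.clip(volume, lo, hi)
        mu, sd = vol[mask].mean(), vol[mask].std()
        return (vol - mu) / (sd + 1e-8)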
Affiliation(s)
- Samira Masoudi
- National Cancer Institute, National Institutes of Health, Molecular Imaging Branch, Bethesda, Maryland, United States
- Stephanie A. Harmon
- National Cancer Institute, National Institutes of Health, Molecular Imaging Branch, Bethesda, Maryland, United States
- Sherif Mehralivand
- National Cancer Institute, National Institutes of Health, Molecular Imaging Branch, Bethesda, Maryland, United States
- Stephanie M. Walker
- National Cancer Institute, National Institutes of Health, Molecular Imaging Branch, Bethesda, Maryland, United States
- Harish Raviprakash
- National Institutes of Health, Department of Radiology and Imaging Sciences, Bethesda, Maryland, United States
- Ulas Bagci
- University of Central Florida, Orlando, Florida, United States
- Peter L. Choyke
- National Cancer Institute, National Institutes of Health, Molecular Imaging Branch, Bethesda, Maryland, United States
- Baris Turkbey
- National Cancer Institute, National Institutes of Health, Molecular Imaging Branch, Bethesda, Maryland, United States

25
Gray Matter Segmentation of Brain MRI Using Hybrid Enhanced Independent Component Analysis in Noisy and Noise Free Environment. JOURNAL OF BIOMIMETICS BIOMATERIALS AND BIOMEDICAL ENGINEERING 2020. [DOI: 10.4028/www.scientific.net/jbbbe.47.75] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
Medical image segmentation is the primary task performed to diagnose abnormalities in the human body. The brain is a complex organ, and anatomical segmentation of brain tissues is a challenging task. In this paper, we use enhanced independent component analysis to perform the segmentation of gray matter. We use modified K-means, expectation maximization, and a hidden Markov random field to provide better spatial correlation that overcomes inhomogeneity, noise, and low contrast. Our objective is achieved in two steps: (i) unwanted tissues are first clipped from the MRI image using a skull-stripping algorithm; (ii) enhanced independent component analysis is then used to perform the segmentation of gray matter. We apply the proposed method to both T1w and T2w MRI to perform segmentation of gray matter under different noise conditions. We evaluate the performance of our proposed system with the Jaccard index, Dice coefficient, and accuracy, and further compare it with existing frameworks. Our proposed method gives better segmentation of gray matter, which is useful for diagnosing neurodegenerative disorders.
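To make the expectation-maximization step concrete, the sketch below clusters skull-stripped brain intensities with a plain Gaussian mixture fitted by EM. It is a generic stand-in, not the enhanced-ICA / modified K-means / HMRF pipeline of the paper, and the ordering of classes by mean intensity (CSF, GM, WM on T1w) is an assumption.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_tissue_segmentation(brain_voxels, n_tissues=3, seed=0):
        # fit a Gaussian mixture by EM and sort classes by mean intensity
        x = np.asarray(brain_voxels, dtype=float).reshape(-1, 1)
        gmm = GaussianMixture(n_components=n_tissues, random_state=seed).fit(x)
        labels = gmm.predict(x)
        order = np.argsort(gmm.means_.ravel())
        remap = np.zeros(n_tissues, dtype=int)
        remap[order] = np.arange(n_tissues)            # 0 = darkest class, 2 = brightest
        return remap[labels]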
26
A contrast-adaptive method for simultaneous whole-brain and lesion segmentation in multiple sclerosis. Neuroimage 2020; 225:117471. [PMID: 33099007 PMCID: PMC7856304 DOI: 10.1016/j.neuroimage.2020.117471] [Citation(s) in RCA: 76] [Impact Index Per Article: 15.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Revised: 10/12/2020] [Accepted: 10/16/2020] [Indexed: 12/24/2022] Open
Abstract
Here we present a method for the simultaneous segmentation of white matter lesions and normal-appearing neuroanatomical structures from multi-contrast brain MRI scans of multiple sclerosis patients. The method integrates a novel model for white matter lesions into a previously validated generative model for whole-brain segmentation. By using separate models for the shape of anatomical structures and their appearance in MRI, the algorithm can adapt to data acquired with different scanners and imaging protocols without retraining. We validate the method using four disparate datasets, showing robust performance in white matter lesion segmentation while simultaneously segmenting dozens of other brain structures. We further demonstrate that the contrast-adaptive method can also be safely applied to MRI scans of healthy controls, and replicate previously documented atrophy patterns in deep gray matter structures in MS. The algorithm is publicly available as part of the open-source neuroimaging package FreeSurfer.
27
Ding Y, Gong L, Zhang M, Li C, Qin Z. A multi-path adaptive fusion network for multimodal brain tumor segmentation. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.06.078] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
28
Image-based state-of-the-art techniques for the identification and classification of brain diseases: a review. Med Biol Eng Comput 2020; 58:2603-2620. [PMID: 32960410 DOI: 10.1007/s11517-020-02256-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2020] [Accepted: 08/28/2020] [Indexed: 12/22/2022]
Abstract
Detection and classification methods have a vital role in identifying brain diseases. Timely detection and classification of brain diseases enable accurate identification and effective management of brain impairment. Brain disorders are among the most widespread diseases, and the diagnostic process is time-consuming and highly expensive, so there is an utmost need to develop effective methods for their detection and characterization. Magnetic resonance imaging (MRI), computed tomography (CT), and various other brain imaging scans are used to identify different brain diseases and disorders. Brain imaging scans are an efficient tool for understanding anatomical changes in the brain in a fast and accurate manner. Used together with segmentation techniques and with machine learning and deep learning methods, these scans give maximum accuracy and efficiency. This paper focuses on the conventional approaches and the machine learning and deep learning techniques used for the detection and classification of brain diseases and abnormalities. It also summarizes the research gaps and problems in the existing techniques, and compares and evaluates different machine learning and deep learning techniques in terms of efficiency and accuracy. Furthermore, different brain diseases such as leukoaraiosis, Alzheimer's, Parkinson's, and Wilson's disease are studied in the scope of machine learning and deep learning techniques.
29
Venkatesh V, Sharma N, Singh M. Intensity inhomogeneity correction of MRI images using InhomoNet. Comput Med Imaging Graph 2020; 84:101748. [DOI: 10.1016/j.compmedimag.2020.101748] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2020] [Revised: 04/28/2020] [Accepted: 06/05/2020] [Indexed: 10/24/2022]
30
Oliver CR, Westerhof TM, Castro MG, Merajver SD. Quantifying the Brain Metastatic Tumor Micro-Environment using an Organ-On-A Chip 3D Model, Machine Learning, and Confocal Tomography. J Vis Exp 2020. [PMID: 32865534 DOI: 10.3791/61654] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023] Open
Abstract
Brain metastases are the most lethal cancer lesions; 10-30% of all cancers metastasize to the brain, with a median survival of only ~5-20 months, depending on the cancer type. To reduce the brain metastatic tumor burden, gaps in basic and translational knowledge need to be addressed. Major challenges include a paucity of reproducible preclinical models and associated tools. Three-dimensional models of brain metastasis can yield the relevant molecular and phenotypic data used to address these needs when combined with dedicated analysis tools. Moreover, compared to murine models, organ-on-a-chip models of patient tumor cells traversing the blood brain barrier into the brain microenvironment generate results rapidly and are more interpretable with quantitative methods, thus amenable to high throughput testing. Here we describe and demonstrate the use of a novel 3D microfluidic blood brain niche (µmBBN) platform where multiple elements of the niche can be cultured for an extended period (several days), fluorescently imaged by confocal microscopy, and the images reconstructed using an innovative confocal tomography technique; all aimed to understand the development of micro-metastasis and changes to the tumor micro-environment (TME) in a repeatable and quantitative manner. We demonstrate how to fabricate, seed, image, and analyze the cancer cells and TME cellular and humoral components, using this platform. Moreover, we show how artificial intelligence (AI) is used to identify the intrinsic phenotypic differences of cancer cells that are capable of transit through a model µmBBN and to assign them an objective index of brain metastatic potential. The data sets generated by this method can be used to answer basic and translational questions about metastasis, the efficacy of therapeutic strategies, and the role of the TME in both.
Affiliation(s)
- C Ryan Oliver
- Department of Internal Medicine, University of Michigan Ann Arbor; Rogel Cancer Center, University of Michigan Ann Arbor
- Trisha M Westerhof
- Department of Internal Medicine, University of Michigan Ann Arbor; Rogel Cancer Center, University of Michigan Ann Arbor
- Maria G Castro
- Rogel Cancer Center, University of Michigan Ann Arbor; Department of Neurosurgery, University of Michigan Ann Arbor; Department of Cell and Developmental Biology, University of Michigan Ann Arbor
- Sofia D Merajver
- Department of Internal Medicine, University of Michigan Ann Arbor; Rogel Cancer Center, University of Michigan Ann Arbor

31
Rehman ZU, Zia MS, Bojja GR, Yaqub M, Jinchao F, Arshid K. Texture based localization of a brain tumor from MR-images by using a machine learning approach. Med Hypotheses 2020; 141:109705. [PMID: 32289646 DOI: 10.1016/j.mehy.2020.109705] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Revised: 03/13/2020] [Accepted: 04/02/2020] [Indexed: 01/10/2023]
Abstract
In this paper, a machine learning approach was used for brain tumour localization on FLAIR scans of magnetic resonance images (MRI). The multi-modal brain image dataset (BraTS 2012), which is skull-stripped and co-registered, was used. To remove noise, bilateral filtering is applied, and texton-map images are then created using a Gabor filter bank. From the texton map, the image is segmented into superpixels, and low-level features are extracted: first-order intensity statistics, together with the histogram of texton-map labels at each superpixel. A significant contribution here is that the low-level features alone are not very informative for localizing brain tumours on MR images; they become meaningful when integrated with the texton-map images at the region level. These features are then provided to a classifier for the prediction of three classes: background, tumour, and non-tumour region, and the labels are used to compute two different areas (i.e., complete tumour and non-tumour). A leave-one-out cross-validation (LOOCV) technique is applied and achieves a Dice overlap score of 88% for whole-tumour localization, which is similar to the declared score in the MICCAI BraTS challenge. This brain tumour localization approach, using texton-map images and superpixel features, shows performance equivalent to other contemporary techniques. Recently, medical hypothesis generation using autonomous computer-based systems has made a substantial contribution to disease diagnosis.
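The texton-map construction can be sketched as follows: filter the image with a small Gabor bank and cluster the per-pixel response vectors into texton labels. The frequencies, number of orientations, and number of textons below are illustrative choices, and the superpixel and feature-extraction stages of the paper are not reproduced here.

    import numpy as np
    from scipy.ndimage import convolve
    from skimage.filters import gabor_kernel
    from sklearn.cluster import KMeans

    def texton_map(image, frequencies=(0.1, 0.2), n_orient=4, n_textons=16, seed=0):
        responses = []
        for f in frequencies:
            for k in range(n_orient):
                kern = np.real(gabor_kernel(f, theta=k * np.pi / n_orient))
                responses.append(convolve(image.astype(float), kern, mode='reflect'))
        feats = np.stack(responses, axis=-1).reshape(-1, len(responses))
        labels = KMeans(n_clusters=n_textons, n_init=10, random_state=seed).fit_predict(feats)
        return labels.reshape(image.shape)             # one texton label per pixel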
Affiliation(s)
- Zaka Ur Rehman
- Department of Computer Science and IT, The University of Lahore, Gujrat Campus, Gujrat, Pakistan.
- M Sultan Zia
- Department of Computer Science and IT, The University of Lahore, Gujrat Campus, Gujrat, Pakistan.
- Giridhar Reddy Bojja
- College of Business and Information Systems, Dakota State University, Madison, USA.
- Muhammad Yaqub
- Faculty of Information Technology, Beijing University of Technology, China
- Feng Jinchao
- Faculty of Information Technology, Beijing University of Technology, China.
- Kaleem Arshid
- Faculty of Information Technology, Beijing University of Technology, China

32
Deprez M, Price A, Christiaens D, Lockwood Estrin G, Cordero-Grande L, Hutter J, Daducci A, Tournier JD, Rutherford M, Counsell SJ, Cuadra MB, Hajnal JV. Higher Order Spherical Harmonics Reconstruction of Fetal Diffusion MRI With Intensity Correction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1104-1113. [PMID: 31562073 DOI: 10.1109/tmi.2019.2943565] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
We present a novel method for higher order reconstruction of fetal diffusion MRI signal that enables detection of fiber crossings. We combine data-driven motion and intensity correction with super-resolution reconstruction and spherical harmonic parametrisation to reconstruct data scattered in both spatial and angular domains into consistent fetal dMRI signal suitable for further diffusion analysis. We show that intensity correction is essential for good performance of the method and identify anatomically plausible fiber crossings. The proposed methodology has potential to facilitate detailed investigation of developing brain connectivity and microstructure in-utero.
33
Banerjee A, Maji P. A Spatially Constrained Probabilistic Model for Robust Image Segmentation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 29:4898-4910. [PMID: 32142431 DOI: 10.1109/tip.2020.2975717] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
In general, the hidden Markov random field (HMRF) represents the class label distribution of an image in probabilistic model-based segmentation. The class label distributions provided by existing HMRF models consider either the number of neighboring pixels with similar class labels or the spatial distance of neighboring pixels with dissimilar class labels. Also, this spatial information is only considered for the estimation of class labels of the image pixels, while its contribution to parameter estimation is completely ignored. This, in turn, deteriorates the parameter estimation, resulting in sub-optimal segmentation performance. Moreover, the existing models assign equal weightage to the spatial information for class label estimation of all pixels throughout the image, which creates significant misclassification for pixels in the boundary regions of image classes. In this regard, the paper develops a new clique potential function and a new class label distribution, incorporating the information of image class parameters. Unlike existing HMRF model-based segmentation techniques, the proposed framework introduces a new scaling parameter that adaptively measures the contribution of spatial information for class label estimation of image pixels. The importance of the proposed framework is demonstrated by modifying existing HMRF-based segmentation methods. The advantage of the proposed class label distribution is also demonstrated irrespective of the underlying intensity distributions. The comparative performance of the proposed and existing class label distributions in the HMRF model is demonstrated both qualitatively and quantitatively for brain MR image segmentation, HEp-2 cell delineation, and natural image and object segmentation.
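For orientation, the sketch below shows the standard HMRF ingredient the paper builds on: an iterated-conditional-modes label update with a Potts-style neighbour-agreement prior. The paper's contribution, a new clique potential and an adaptively scaled spatial weight that also enters parameter estimation, is not reproduced here; the fixed beta is an assumption.

    import numpy as np

    def icm_update(labels, log_likelihood, beta=1.0, n_iter=5):
        # labels: (H, W) ints in [0, K); log_likelihood: (H, W, K) per-pixel class scores
        H, W, K = log_likelihood.shape
        lab = labels.copy()
        for _ in range(n_iter):
            for i in range(H):
                for j in range(W):
                    neigh = [lab[x, y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                             if 0 <= x < H and 0 <= y < W]
                    agree = np.bincount(neigh, minlength=K)   # neighbour votes per class
                    lab[i, j] = np.argmax(log_likelihood[i, j] + beta * agree)
        return lab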
34
Saygili A, Albayrak S. Knee Meniscus Segmentation and Tear Detection from MRI: A Review. Curr Med Imaging 2020; 16:2-15. [DOI: 10.2174/1573405614666181017122109] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2018] [Revised: 09/20/2018] [Accepted: 09/29/2018] [Indexed: 12/22/2022]
Abstract
Background: Automatic diagnostic systems in medical imaging provide useful information to support radiologists and other relevant experts, and the number of systems that help radiologists in their analysis and diagnosis appears to be increasing.
Discussion: Knee joints are intensively studied structures as well. In this review, studies that automatically segment meniscal structures from knee joint MR images and detect tears have been investigated. Some of the studies in the literature merely perform meniscus segmentation, while others also include classification procedures that detect anomalies on the segmented menisci. The studies performed on the meniscus were categorized according to the methods they used. The methods used and the results obtained were analyzed along with their drawbacks, and the aspects to be developed were also emphasized.
Conclusion: The work that has been done in this area can effectively support the decisions made by radiology and orthopedics specialists. Furthermore, these operations, which were performed manually on MR images, can be performed in a shorter time with the help of computer-aided systems, which enables early diagnosis and treatment.
Affiliation(s)
- Ahmet Saygili
- Computer Engineering Department, Corlu Faculty of Engineering, Namık Kemal University, Tekirdağ, Turkey
- Songül Albayrak
- Computer Engineering Department, Faculty of Electric and Electronics, Yıldız Technical University, İstanbul, Turkey

35
Halder A, Talukdar NA. Robust brain magnetic resonance image segmentation using modified rough-fuzzy C-means with spatial constraints. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.105758] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
36
Banerjee A, Maji P. Segmentation of bias field induced brain MR images using rough sets and stomped-t distribution. Inf Sci (N Y) 2019. [DOI: 10.1016/j.ins.2019.07.027] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
37
An Efficient Implementation of Deep Convolutional Neural Networks for MRI Segmentation. J Digit Imaging 2019; 31:738-747. [PMID: 29488179 DOI: 10.1007/s10278-018-0062-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
Image segmentation is one of the most common steps in digital image processing, classifying a digital image into different segments. The main goal of this paper is to segment brain tumors in magnetic resonance images (MRI) using deep learning. Tumors of different shapes, sizes, brightness, and textures can appear anywhere in the brain. These complexities are the reasons to choose a high-capacity Deep Convolutional Neural Network (DCNN) containing more than one layer. The proposed DCNN consists of two parts: the architecture and the learning algorithms, which are used to design a network model and to optimize parameters for the network training phase, respectively. The architecture contains five convolutional layers, all using 3 × 3 kernels, and one fully connected layer. Stacking small kernels yields the effect of larger kernels with a smaller number of parameters and fewer computations. Using the Dice Similarity Coefficient metric, we report accuracy results on the BraTS 2016 brain tumor segmentation challenge dataset for the complete, core, and enhancing regions as 0.90, 0.85, and 0.84, respectively. The learning algorithm includes task-level parallelism. All the pixels of an MR image are classified using a patch-based approach for segmentation. We attain good performance, and the experimental results show that the proposed DCNN increases the segmentation accuracy compared to previous techniques.
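A rough PyTorch sketch of the described architecture, five 3 × 3 convolutional layers followed by one fully connected layer classifying the centre pixel of a patch, is given below. The patch size, channel width, use of the four modalities as input channels, and number of output classes are assumptions, and the training loop and task-level parallelism are omitted.

    import torch
    import torch.nn as nn

    class PatchDCNN(nn.Module):
        # five 3x3 conv layers + one fully connected layer, classifying the
        # centre pixel of a multi-modality patch (sizes are assumptions)
        def __init__(self, in_channels=4, n_classes=4, patch=33, width=32):
            super().__init__()
            layers, c = [], in_channels
            for _ in range(5):
                layers += [nn.Conv2d(c, width, kernel_size=3, padding=1),
                           nn.ReLU(inplace=True)]
                c = width
            self.features = nn.Sequential(*layers)
            self.classifier = nn.Linear(width * patch * patch, n_classes)

        def forward(self, x):
            return self.classifier(torch.flatten(self.features(x), 1))

    # one forward pass on a batch of 8 four-modality 33 x 33 patches
    logits = PatchDCNN()(torch.randn(8, 4, 33, 33))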
38
Thalman SW, Powell DK, Ubele M, Norris CM, Head E, Lin AL. Brain-Blood Partition Coefficient and Cerebral Blood Flow in Canines Using Calibrated Short TR Recovery (CaSTRR) Correction Method. Front Neurosci 2019; 13:1189. [PMID: 31749679 PMCID: PMC6848028 DOI: 10.3389/fnins.2019.01189] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2019] [Accepted: 10/21/2019] [Indexed: 11/13/2022] Open
Abstract
The brain–blood partition coefficient (BBPC) is necessary for quantifying cerebral blood flow (CBF) when using tracer-based techniques like arterial spin labeling (ASL). A recent improvement to traditional MRI measurements of BBPC, called calibrated short TR recovery (CaSTRR), has demonstrated a significant reduction in acquisition time for BBPC maps in mice. In this study CaSTRR is applied to a cohort of healthy canines (n = 17, age = 5.0 – 8.0 years) using a protocol suited for application in humans at 3T. The imaging protocol included CaSTRR for BBPC maps, pseudo-continuous ASL for CBF maps, and high-resolution anatomical images. The standard CaSTRR method of normalizing BBPC to gadolinium-doped deuterium oxide phantoms was also compared to normalization using hematocrit (Hct) as a proxy value for blood water content. The results show that CaSTRR is able to produce high-quality BBPC maps with a 4 min acquisition time. The BBPC maps demonstrate significantly higher BBPC in gray matter (0.83 ± 0.05 mL/g) than in white matter (0.78 ± 0.04 mL/g, p = 0.006). Maps of CBF acquired with pCASL demonstrate a negative correlation between gray matter perfusion and age (p = 0.003). Voxel-wise correction for BBPC is also shown to improve the contrast-to-noise ratio between gray and white matter in CBF maps. A novel aspect of the study was to show that BBPC measurements can be calculated based on the known Hct of the blood sample placed in the scanner. We found a strong correlation (R2 = 0.81 in gray matter, R2 = 0.59 in white matter) between BBPC maps normalized to the doped phantoms and BBPC maps normalized using Hct. This obviates the need for doped water phantoms, which simplifies both the acquisition protocol and the post-processing methods. Together this suggests that CaSTRR represents a feasible, rapid method to account for BBPC variability when quantifying CBF. As canines have been used widely for aging and Alzheimer's disease studies, the CaSTRR method established in the animals may further improve CBF measurements and advance our understanding of cerebrovascular changes in aging and neurodegeneration.
Affiliation(s)
- Scott W Thalman
- F. Joseph Halcomb III, Department of Biomedical Engineering, University of Kentucky, Lexington, KY, United States; Sanders-Brown Center on Aging, University of Kentucky, Lexington, KY, United States
- David K Powell
- F. Joseph Halcomb III, Department of Biomedical Engineering, University of Kentucky, Lexington, KY, United States; Magnetic Resonance Imaging and Spectroscopy Center, University of Kentucky, Lexington, KY, United States
- Margo Ubele
- Sanders-Brown Center on Aging, University of Kentucky, Lexington, KY, United States
- Christopher M Norris
- Sanders-Brown Center on Aging, University of Kentucky, Lexington, KY, United States; Department of Pharmacology and Nutritional Sciences, University of Kentucky, Lexington, KY, United States
- Elizabeth Head
- Department of Pathology and Laboratory Medicine, University of California, Irvine, Irvine, CA, United States; University of California Irvine Institute for Memory Impairments and Neurological Disorders (UCI MIND), University of California, Irvine, Irvine, CA, United States
- Ai-Ling Lin
- F. Joseph Halcomb III, Department of Biomedical Engineering, University of Kentucky, Lexington, KY, United States; Sanders-Brown Center on Aging, University of Kentucky, Lexington, KY, United States; Department of Pharmacology and Nutritional Sciences, University of Kentucky, Lexington, KY, United States; Department of Neuroscience, University of Kentucky, Lexington, KY, United States

39
Bermudez Noguera C, Bao S, Petersen KJ, Lopez AM, Reid J, Plassard AJ, Zald DH, Claassen DO, Dawant BM, Landman BA. Using deep learning for a diffusion-based segmentation of the dentate nucleus and its benefits over atlas-based methods. J Med Imaging (Bellingham) 2019; 6:044007. [PMID: 31824980 PMCID: PMC6895566 DOI: 10.1117/1.jmi.6.4.044007] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2019] [Accepted: 11/18/2019] [Indexed: 01/17/2023] Open
Abstract
The dentate nucleus (DN) is a gray matter structure deep in the cerebellum involved in motor coordination, sensory input integration, executive planning, language, and visuospatial function. The DN is an emerging biomarker of disease, informing studies that advance pathophysiologic understanding of neurodegenerative and related disorders. The main challenge in defining the DN radiologically is that, like many deep gray matter structures, it has poor contrast in T1-weighted magnetic resonance (MR) images and therefore requires specialized MR acquisitions for visualization. Manual tracing of the DN across multiple acquisitions is resource-intensive and does not scale well to large datasets. We describe a technique that automatically segments the DN using deep learning (DL) on common imaging sequences, such as T1-weighted, T2-weighted, and diffusion MR imaging. We trained a DL algorithm that can automatically delineate the DN and provide an estimate of its volume. The automatic segmentation achieved higher agreement to the manual labels compared to template registration, which is the current common practice in DN segmentation or multiatlas segmentation of manual labels. Across all sequences, the FA maps achieved the highest mean Dice similarity coefficient (DSC) of 0.83 compared to T1 imaging ( DSC = 0.76 ), T2 imaging ( DSC = 0.79 ), or a multisequence approach ( DSC = 0.80 ). A single atlas registration approach using the spatially unbiased atlas template of the cerebellum and brainstem template achieved a DSC of 0.23, and multi-atlas segmentation achieved a DSC of 0.33. Overall, we propose a method of delineating the DN on clinical imaging that can reproduce manual labels with higher accuracy than current atlas-based tools.
Affiliation(s)
- Camilo Bermudez Noguera
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Shunxing Bao
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Kalen J. Petersen
- Vanderbilt University, Department of Neurology, Nashville, Tennessee, United States
- Alexander M. Lopez
- Vanderbilt University, Department of Neurology, Nashville, Tennessee, United States
- Jacqueline Reid
- Vanderbilt University, Department of Neurology, Nashville, Tennessee, United States
- Andrew J. Plassard
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- David H. Zald
- Vanderbilt University, Department of Psychology and Psychiatry, Nashville, Tennessee, United States
- Daniel O. Claassen
- Vanderbilt University, Department of Neurology, Nashville, Tennessee, United States
- Benoit M. Dawant
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Bennett A. Landman
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Vanderbilt University, Department of Psychology and Psychiatry, Nashville, Tennessee, United States

40
Dalca AV, Yu E, Golland P, Fischl B, Sabuncu MR, Iglesias JE. Unsupervised Deep Learning for Bayesian Brain MRI Segmentation. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2019; 11766:356-365. [PMID: 32432231 PMCID: PMC7235150 DOI: 10.1007/978-3-030-32248-9_40] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
Abstract
Probabilistic atlas priors have been commonly used to derive adaptive and robust brain MRI segmentation algorithms. Widely-used neuroimage analysis pipelines rely heavily on these techniques, which are often computationally expensive. In contrast, there has been a recent surge of approaches that leverage deep learning to implement segmentation tools that are computationally efficient at test time. However, most of these strategies rely on learning from manually annotated images. These supervised deep learning methods are therefore sensitive to the intensity profiles in the training dataset. To develop a deep learning-based segmentation model for a new image dataset (e.g., of different contrast), one usually needs to create a new labeled training dataset, which can be prohibitively expensive, or rely on suboptimal ad hoc adaptation or augmentation approaches. In this paper, we propose an alternative strategy that combines a conventional probabilistic atlas-based segmentation with deep learning, enabling one to train a segmentation model for new MRI scans without the need for any manually segmented images. Our experiments include thousands of brain MRI scans and demonstrate that the proposed method achieves good accuracy for a brain MRI segmentation task for different MRI contrasts, requiring only approximately 15 seconds at test time on a GPU.
Affiliation(s)
- Adrian V Dalca
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology
- Evan Yu
- Meinig School of Biomedical Engineering, Cornell University
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology
- Bruce Fischl
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
- Mert R Sabuncu
- Meinig School of Biomedical Engineering, Cornell University
- School of Electrical and Computer Engineering, Cornell University
- Juan Eugenio Iglesias
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology
- Centre for Medical Image Computing (CMIC), University College London

41
Kojima K, Nakajima T, Taga N, Miyauchi A, Kato M, Matsumoto A, Ikeda T, Nakamura K, Kubota T, Mizukami H, Ono S, Onuki Y, Sato T, Osaka H, Muramatsu SI, Yamagata T. Gene therapy improves motor and mental function of aromatic l-amino acid decarboxylase deficiency. Brain 2019; 142:322-333. [PMID: 30689738 PMCID: PMC6377184 DOI: 10.1093/brain/awy331] [Citation(s) in RCA: 104] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2018] [Accepted: 11/07/2018] [Indexed: 12/01/2022] Open
Abstract
In patients with aromatic l-amino acid decarboxylase (AADC) deficiency, a decrease in catecholamine and serotonin levels in the brain leads to developmental delay and movement disorders. The beneficial effects of gene therapy in patients from 1 to 8 years of age with homogeneous disease severity have been reported from Taiwan. We conducted an open-label phase 1/2 study of a population including adolescent patients with different degrees of severity. Six patients were enrolled: four males (ages 4, 10, 15 and 19 years) and one female (age 12 years) with a severe phenotype who were not capable of voluntary movement or speech, and one female (age 5 years) with a moderate phenotype who could walk with support. The patients received a total of 2 × 10^11 vector genomes of an adeno-associated virus vector harbouring DDC via bilateral intraputaminal infusions. At up to 2 years after gene therapy, motor function was remarkably improved in all patients. Three patients with the severe phenotype were able to stand with support, and one patient could walk with a walker, while the patient with the moderate phenotype could run and ride a bicycle. This moderate-phenotype patient also showed improvement in her mental function, being able to converse fluently and perform simple arithmetic. Dystonia disappeared and oculogyric crises were markedly decreased in all patients. The patients exhibited transient choreic dyskinesia for a couple of months, but no adverse events caused by the vector were observed. PET with 6-[18F]fluoro-l-m-tyrosine, a specific tracer for AADC, showed persistently increased uptake in broad areas of the putamen. In our study, older patients (>8 years of age) also showed improvement, although treatment was more effective in younger patients. The genetic background of our patients was heterogeneous, and some patients suspected of having remnant enzyme activity showed better improvement than the Taiwanese patients. In addition to the alleviation of motor symptoms, the cognitive and verbal functions improved in a patient with the moderate phenotype. The restoration of dopamine synthesis in the putamen via gene transfer provides transformative medical benefit across all patient ages, genotypes, and disease severities included in this study, with the most pronounced improvements noted in moderately affected patients.
Collapse
Affiliation(s)
- Karin Kojima: Department of Pediatrics, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Takeshi Nakajima: Department of Neurosurgery, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Naoyuki Taga: Department of Anesthesiology and Critical Care Medicine, Division of Anesthesiology, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Akihiko Miyauchi: Department of Pediatrics, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Mitsuhiro Kato: Department of Pediatrics, Showa University, Shinagawa, Tokyo, Japan; Department of Pediatrics, Yamagata University Faculty of Medicine, Yamagata, Yamagata, Japan
- Ayumi Matsumoto: Department of Pediatrics, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Takahiro Ikeda: Department of Pediatrics, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Kazuyuki Nakamura: Department of Pediatrics, Yamagata University Faculty of Medicine, Yamagata, Yamagata, Japan
- Tetsuo Kubota: Department of Pediatrics, Anjo Kosei Hospital, Anjo, Aichi, Japan
- Hiroaki Mizukami: Division of Genetic Therapeutics, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Sayaka Ono: Division of Neurology, Department of Medicine, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Yoshiyuki Onuki: Department of Neurosurgery, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Hitoshi Osaka: Department of Pediatrics, Jichi Medical University, Shimotsuke, Tochigi, Japan
- Shin-Ichi Muramatsu: Division of Genetic Therapeutics, Jichi Medical University, Shimotsuke, Tochigi, Japan; Division of Neurology, Department of Medicine, Jichi Medical University, Shimotsuke, Tochigi, Japan; Center for Gene and Cell Therapy, The Institute of Medical Science, The University of Tokyo, Minato-ku, Tokyo, Japan
- Takanori Yamagata: Department of Pediatrics, Jichi Medical University, Shimotsuke, Tochigi, Japan
|
42
|
Quantitative assessment of myelination patterns in preterm neonates using T2-weighted MRI. Sci Rep 2019; 9:12938. [PMID: 31506514 PMCID: PMC6736873 DOI: 10.1038/s41598-019-49350-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2019] [Accepted: 08/14/2019] [Indexed: 11/08/2022] Open
Abstract
Myelination is considered to be an important developmental process during human brain maturation and is closely correlated with gestational age. Quantitative assessment of myelination status requires dedicated imaging, but the conventional T2-weighted scans routinely acquired during clinical imaging of neonates carry signatures that are thought to be associated with myelination. In this work, we develop a quantitative marker of progressing myelination for assessing preterm neonatal brain maturation, based on a novel automatic segmentation method for myelin-like signals on T2-weighted magnetic resonance images. First, we define a segmentation protocol for myelin-like signals. We then develop an expectation-maximization framework to obtain automatic segmentations of myelin-like signals, with an explicit class for partial-volume voxels whose locations are configured in relation to the composing pure tissues via second-order Markov random fields. The proposed segmentation achieves a high Dice overlap of 0.83 with manual annotations. The automatic segmentations are then used to track the volumes of myelinated tissues in the regions of the central brain structures and brainstem. Finally, we construct a spatio-temporal growth model for myelin-like signals, which allows us to predict gestational age at scan in preterm infants with a root mean squared error of 1.41 weeks.
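Two quantitative steps in this abstract, Dice validation against manual annotations and gestational-age prediction from a growth model, can be illustrated with a minimal sketch. The Python below computes the Dice overlap of two toy binary masks and inverts a simple linear volume-versus-age fit to predict gestational age at scan; the volumes, ages, and linear form are made-up assumptions, not the paper's EM/MRF segmentation or spatio-temporal model.
```python
# Minimal sketch of the two evaluation steps only (not the paper's EM/MRF
# segmentation or spatio-temporal model): Dice overlap between two binary
# masks, and gestational-age prediction by inverting a simple linear
# volume-versus-age fit. All volumes and ages below are made-up assumptions.
import numpy as np

def dice(seg_a, seg_b):
    """Dice overlap of two binary masks."""
    a, b = np.asarray(seg_a, dtype=bool), np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((8, 8), dtype=bool); auto[2:6, 2:6] = True
manual = np.zeros((8, 8), dtype=bool); manual[3:7, 2:6] = True
print(f"Dice overlap: {dice(auto, manual):.2f}")          # 0.75 on this toy example

# Hypothetical training data: myelin-like volume (mm^3) vs gestational age (weeks)
ga_weeks = np.array([28.0, 30.0, 32.0, 34.0, 36.0, 38.0, 40.0])
myelin_vol = np.array([650.0, 820.0, 1010.0, 1230.0, 1480.0, 1750.0, 2050.0])

slope, intercept = np.polyfit(ga_weeks, myelin_vol, deg=1)   # volume = slope*GA + intercept
predict_ga = lambda vol: (vol - intercept) / slope           # invert the fit

rmse = np.sqrt(np.mean((predict_ga(myelin_vol) - ga_weeks) ** 2))
print(f"training RMSE: {rmse:.2f} weeks; predicted GA for 1100 mm^3: {predict_ga(1100.0):.1f} weeks")
```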
|
43
|
Neural Correlates of Music Listening and Recall in the Human Brain. J Neurosci 2019; 39:8112-8123. [PMID: 31501297 DOI: 10.1523/jneurosci.1468-18.2019] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2018] [Revised: 08/13/2019] [Accepted: 08/14/2019] [Indexed: 11/21/2022] Open
Abstract
Previous neuroimaging studies have identified various brain regions that are activated by music listening or recall. However, little is known about how these brain regions represent the time course and temporal features of music during listening and recall. Here we analyzed neural activity in different brain regions associated with music listening and recall using electrocorticography recordings obtained from 10 epilepsy patients of both genders implanted with subdural electrodes. Electrocorticography signals were recorded while subjects were listening to familiar instrumental music or recalling the same music pieces by imagery. During the onset phase (0-500 ms), music listening initiated cortical activity in the high-gamma band in the temporal lobe and supramarginal gyrus, followed by the precentral gyrus and the inferior frontal gyrus. In contrast, during music recall, high-gamma band activity first appeared in the inferior frontal gyrus and precentral gyrus and then spread to the temporal lobe, showing a reversed temporal order. During the sustained phase (after 500 ms), delta-band and high-gamma-band responses in the supramarginal gyrus and the temporal and frontal lobes dynamically tracked the intensity envelope of the music during listening or recall, with distinct temporal delays. During music listening, neural tracking by the frontal lobe lagged behind that of the temporal lobe, whereas during music recall, neural tracking by the frontal lobe preceded that of the temporal lobe. These findings demonstrate bottom-up and top-down processes in the cerebral cortex during music listening and recall and provide important insights into music processing by the human brain.
SIGNIFICANCE STATEMENT: Understanding how the brain analyzes, stores, and retrieves music remains one of the most challenging problems in neuroscience. By analyzing direct neural recordings obtained from the human brain, we observed dispersed and overlapping brain regions associated with music listening and recall. Music listening initiated cortical activity in the high-gamma band, starting in the temporal lobe and ending at the inferior frontal gyrus. A reversed temporal flow was observed in the high-gamma response during music recall. Neural responses of the frontal and temporal lobes dynamically tracked the intensity envelope of the music that was presented or imagined during listening or recall. These findings demonstrate bottom-up and top-down processes in the cerebral cortex during music listening and recall.
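The sustained-phase analysis described above hinges on measuring the delay between a band-limited neural power trace and the music's intensity envelope. The sketch below illustrates that step in isolation on synthetic signals: it slides one trace against the other and reports the lag with the highest correlation. The sampling rate, the simulated 120 ms delay, and the signal shapes are assumptions for illustration, not values or code from the study.
```python
# Illustrative sketch only (synthetic signals, not study code or data): estimate
# the lag at which a band-limited neural power trace best tracks a stimulus
# intensity envelope by scanning cross-correlation lags.
import numpy as np

fs = 100.0                              # common sampling rate (Hz) after downsampling
t = np.arange(0, 30, 1 / fs)            # 30 s of signal
rng = np.random.default_rng(0)

envelope = np.abs(np.sin(2 * np.pi * 0.5 * t)) + 0.1 * rng.standard_normal(t.size)
true_lag = int(0.12 * fs)               # pretend the neural response lags by 120 ms
high_gamma = np.roll(envelope, true_lag) + 0.2 * rng.standard_normal(t.size)

def best_lag_seconds(x, y, fs, max_lag_s=0.5):
    """Return the lag (s) of y relative to x with the highest Pearson correlation."""
    max_lag = int(max_lag_s * fs)
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [np.corrcoef(x[max(0, -l):x.size - max(0, l)],
                         y[max(0, l):y.size - max(0, -l)])[0, 1] for l in lags]
    return lags[int(np.argmax(corrs))] / fs

print(f"estimated tracking delay: {best_lag_seconds(envelope, high_gamma, fs) * 1000:.0f} ms")
```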
|
44
|
Halder A, Talukdar NA. Brain tissue segmentation using improved kernelized rough-fuzzy C-means with spatio-contextual information from MRI. Magn Reson Imaging 2019; 62:129-151. [PMID: 31247252 DOI: 10.1016/j.mri.2019.06.010] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2018] [Revised: 06/12/2019] [Accepted: 06/14/2019] [Indexed: 11/24/2022]
Abstract
Segmentation of brain tissues from MRI is often crucial for properly investigating a region of the brain in order to detect abnormalities. However, accurate segmentation of brain tissues is a challenging task, as the different tissue regions are usually imprecise, indiscernible, ambiguous, and overlapping. Additionally, different tissue regions are not linearly separable, and noise and other artifacts may be present in brain MRI. Therefore, conventional segmentation techniques often fail to achieve the desired accuracy. To address these challenges, a robust kernelized rough-fuzzy C-means clustering with spatial constraints (KRFCMSC) is proposed in this article for brain tissue segmentation. Here, brain tissue segmentation from MRI is treated as a pixel-clustering problem. The basic idea behind the proposed technique is the judicious integration of fuzzy sets, rough sets, and the kernel trick, along with spatial constraints (in the form of contextual information), to increase clustering (segmentation) performance. The use of rough and fuzzy set theory in the clustering process handles the ambiguity, indiscernibility, vagueness, and overlap of different brain tissue regions, while the kernel trick increases the chance of linear separability for complex regions that are not linearly separable in the original feature space. To deal with noisy pixels, spatio-contextual information from neighbouring pixels is introduced into the clustering process. Experiments are carried out on real and synthetic benchmark brain MRI datasets (publicly available from BrainWeb and IBSR), with and without added noise. The performance of the proposed method is compared with five other clustering-based segmentation techniques and evaluated using various supervised and unsupervised validity indices, such as overall accuracy, precision, recall, kappa, Jaccard, Dice, and the kernelized Xie-Beni index. Experimental results justify the superiority and robustness of the proposed method over other state-of-the-art methods on both real and synthetic benchmark brain MRI datasets, with and without added noise. The statistical significance of the improved segmentation accuracy is confirmed by paired t-test results in favour of the proposed method compared to the other methods.
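As a companion to the description above, the following sketch implements plain kernelized fuzzy C-means with a Gaussian kernel on a one-dimensional toy intensity distribution. It omits the rough lower/upper approximations and the spatio-contextual constraint that distinguish the proposed KRFCMSC method, and every parameter value is an illustrative assumption.
```python
# Simplified sketch of kernelized fuzzy C-means with a Gaussian kernel on raw
# pixel intensities. The rough-set machinery and spatial constraints of the
# proposed KRFCMSC method are omitted; all parameter values are assumptions.
import numpy as np

def kernel_fcm(x, n_clusters=3, m=2.0, sigma=30.0, n_iter=50, seed=0):
    """x: 1-D array of pixel intensities. Returns (memberships, cluster centers)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=n_clusters, replace=False).astype(float)
    u = None
    for _ in range(n_iter):
        k = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))
        d = np.clip(1.0 - k, 1e-12, None)            # kernel-induced distance
        u = d ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)            # fuzzy membership update
        w = (u ** m) * k
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)  # center update
    return u, centers

# Toy 1-D "image": three tissue-like intensity classes with Gaussian noise
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(mu, 10.0, 500) for mu in (60.0, 120.0, 180.0)])
memberships, centers = kernel_fcm(pixels)
print("estimated cluster centers:", np.sort(centers).round(1))
```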
Affiliation(s)
- Anindya Halder: Department of Computer Applications, School of Technology, North-Eastern Hill University, Meghalaya 794002, India
- Nur Alom Talukdar: Department of Computer Applications, School of Technology, North-Eastern Hill University, Meghalaya 794002, India
|
45
|
Agn M, Munck Af Rosenschöld P, Puonti O, Lundemann MJ, Mancini L, Papadaki A, Thust S, Ashburner J, Law I, Van Leemput K. A modality-adaptive method for segmenting brain tumors and organs-at-risk in radiation therapy planning. Med Image Anal 2019; 54:220-237. [PMID: 30952038 PMCID: PMC6554451 DOI: 10.1016/j.media.2019.03.005] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2018] [Revised: 03/14/2019] [Accepted: 03/21/2019] [Indexed: 12/25/2022]
Abstract
In this paper we present a method for simultaneously segmenting brain tumors and an extensive set of organs-at-risk for radiation therapy planning of glioblastomas. The method combines a contrast-adaptive generative model for whole-brain segmentation with a new spatial regularization model of tumor shape using convolutional restricted Boltzmann machines. We demonstrate experimentally that the method is able to adapt to image acquisitions that differ substantially from any available training data, ensuring its applicability across treatment sites; that its tumor segmentation accuracy is comparable to that of the current state of the art; and that it captures most organs-at-risk sufficiently well for radiation therapy planning purposes. The proposed method may be a valuable step towards automating the delineation of brain tumors and organs-at-risk in glioblastoma patients undergoing radiation therapy.
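To illustrate the "contrast-adaptive generative model" ingredient in isolation, the sketch below runs a small EM loop in which class intensity parameters are re-estimated from the image itself under a fixed probabilistic atlas prior, so the same prior can be reused across acquisitions with different contrast. The tumor shape model based on convolutional restricted Boltzmann machines is not reproduced, and the data, priors, and parameters are synthetic assumptions.
```python
# Minimal sketch of contrast-adaptive generative segmentation: class intensity
# parameters are refit to each scan via EM under a fixed atlas prior. This is
# a simplification of the paper's model (no tumor shape prior, no bias field);
# all data below are synthetic.
import numpy as np

def em_segment(intensity, atlas_prior, n_iter=30):
    """intensity: (N,) voxel values; atlas_prior: (N, K) prior class probabilities."""
    k = atlas_prior.shape[1]
    means = np.linspace(intensity.min(), intensity.max(), k)
    variances = np.full(k, intensity.var())
    for _ in range(n_iter):
        # E-step: posterior proportional to atlas prior times Gaussian likelihood
        lik = np.exp(-0.5 * (intensity[:, None] - means) ** 2 / variances) \
              / np.sqrt(2 * np.pi * variances)
        post = atlas_prior * lik
        post /= post.sum(axis=1, keepdims=True)
        # M-step: contrast adaptation -- refit class means/variances to this scan
        w = post.sum(axis=0)
        means = (post * intensity[:, None]).sum(axis=0) / w
        variances = (post * (intensity[:, None] - means) ** 2).sum(axis=0) / w + 1e-6
    return post.argmax(axis=1), means

rng = np.random.default_rng(0)
true_labels = rng.integers(0, 3, size=5000)
intensity = rng.normal(np.array([40.0, 100.0, 160.0])[true_labels], 12.0)
prior = np.full((5000, 3), 0.15)
prior[np.arange(5000), true_labels] = 0.70           # soft, imperfect atlas prior
labels, means = em_segment(intensity, prior)
print("estimated class means:", means.round(1),
      "| agreement with truth:", (labels == true_labels).mean().round(3))
```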
Affiliation(s)
- Mikael Agn: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
- Per Munck Af Rosenschöld: Radiation Physics, Department of Hematology, Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden
- Oula Puonti: Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Denmark
- Michael J Lundemann: Department of Oncology, Copenhagen University Hospital Rigshospitalet, Denmark
- Laura Mancini: Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- Anastasia Papadaki: Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- Steffi Thust: Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- John Ashburner: Wellcome Centre for Human Neuroimaging, UCL Institute of Neurology, University College London, UK
- Ian Law: Department of Clinical Physiology, Nuclear Medicine and PET, Copenhagen University Hospital Rigshospitalet, Denmark
- Koen Van Leemput: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA
|
46
|
George MM, Kalaivani S. Retrospective correction of intensity inhomogeneity with sparsity constraints in transform-domain: Application to brain MRI. Magn Reson Imaging 2019; 61:207-223. [PMID: 31009687 DOI: 10.1016/j.mri.2019.04.011] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Revised: 04/05/2019] [Accepted: 04/18/2019] [Indexed: 11/27/2022]
Abstract
An effective retrospective correction method is introduced in this paper for intensity inhomogeneity, an inherent artifact in MR images. The intensity inhomogeneity problem is formulated as the decomposition of the acquired image into a true image and a bias field, each of which is expected to have a sparse approximation in a suitable transform domain based on its known properties. The piecewise-constant nature of the true image lends itself to a sparse approximation in the framelet domain, while the spatially smooth bias field supports a sparse representation in the Fourier domain. The algorithm seeks the sparsest solutions for the unknown variables through L1-norm minimization. The objective function associated with the defined problem is convex and is efficiently solved by the linearized alternating direction method. Thus, the method estimates the true image and bias field simultaneously in an L1-norm minimization framework by promoting sparsity of the solutions in the respective transform domains. Furthermore, the methodology requires no preprocessing, predefined specifications, or parametric models critically controlled by user-defined parameters. Qualitative and quantitative validation on simulated and real human brain MR images demonstrates the efficacy of the proposed methodology and its superior performance compared to several established algorithms for intensity inhomogeneity correction.
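The core modelling assumption above, that the log of the acquired image splits into a piecewise-constant true image plus a spatially smooth bias field, can be illustrated with a much cruder stand-in than the paper's L1/framelet optimization: estimate the bias as the low-frequency Fourier content of the log image. The sketch below does exactly that on a synthetic phantom; the cutoff, phantom, and noise level are assumptions, and the method shown is a simplification, not the authors' algorithm.
```python
# Crude stand-in for the decomposition idea only: model log(observed) as
# log(true image) + log(bias field) and estimate the smooth bias as the
# low-frequency Fourier content of the log image. This low-pass shortcut is
# NOT the paper's L1/framelet optimization; phantom, noise, and cutoff are
# assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 128
yy, xx = np.mgrid[0:n, 0:n]

true_img = np.where((xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2, 2.0, 1.0)        # piecewise constant
bias = np.exp(0.4 * np.sin(2 * np.pi * xx / n) * np.cos(2 * np.pi * yy / n))    # smooth field
observed = true_img * bias * np.exp(0.01 * rng.standard_normal((n, n)))

# Keep only the lowest spatial frequencies of the log image as the bias estimate
spec = np.fft.fftshift(np.fft.fft2(np.log(observed)))
cutoff = 4
keep = (np.abs(np.arange(n) - n // 2)[:, None] <= cutoff) & \
       (np.abs(np.arange(n) - n // 2)[None, :] <= cutoff)
bias_est = np.exp(np.real(np.fft.ifft2(np.fft.ifftshift(spec * keep))))
corrected = observed / bias_est                       # inhomogeneity-corrected image

err = np.abs(bias_est / bias_est.mean() - bias / bias.mean()).mean()
print(f"mean absolute error of the normalised bias estimate: {err:.3f}")
```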
Affiliation(s)
- Maryjo M George: School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India
- S Kalaivani: School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India
|
47
|
Rician noise and intensity nonuniformity correction (NNC) model for MRI data. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.11.008] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
|
48
|
Iglesias JE, Insausti R, Lerma-Usabiaga G, Bocchetta M, Van Leemput K, Greve DN, van der Kouwe A, Fischl B, Caballero-Gaudes C, Paz-Alonso PM. A probabilistic atlas of the human thalamic nuclei combining ex vivo MRI and histology. Neuroimage 2018; 183:314-326. [PMID: 30121337 PMCID: PMC6215335 DOI: 10.1016/j.neuroimage.2018.08.012] [Citation(s) in RCA: 348] [Impact Index Per Article: 49.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2018] [Revised: 07/27/2018] [Accepted: 08/09/2018] [Indexed: 01/18/2023] Open
Abstract
The human thalamus is a brain structure that comprises numerous, highly specific nuclei. Since these nuclei are known to have different functions and to be connected to different areas of the cerebral cortex, it is of great interest for the neuroimaging community to study their volume, shape and connectivity in vivo with MRI. In this study, we present a probabilistic atlas of the thalamic nuclei built using ex vivo brain MRI scans and histological data, as well as the application of the atlas to in vivo MRI segmentation. The atlas was built using manual delineation of 26 thalamic nuclei on the serial histology of 12 whole thalami from six autopsy samples, combined with manual segmentations of the whole thalamus and surrounding structures (caudate, putamen, hippocampus, etc.) made on in vivo brain MR data from 39 subjects. The 3D structure of the histological data and corresponding manual segmentations was recovered using the ex vivo MRI as a reference frame and stacks of blockface photographs acquired during sectioning as an intermediate target. The atlas, which was encoded as an adaptive tetrahedral mesh, shows good agreement with previous histological studies of the thalamus in terms of the volumes of representative nuclei. When applied to the segmentation of in vivo scans using Bayesian inference, the atlas shows excellent test-retest reliability, robustness to changes in input MRI contrast, and the ability to detect differential thalamic effects in subjects with Alzheimer's disease. The probabilistic atlas and companion segmentation tool are publicly available as part of the neuroimaging package FreeSurfer.
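As a toy illustration of the segmentation step described above (Bayesian inference combining a probabilistic atlas prior with an intensity likelihood), the sketch below computes per-voxel posteriors and turns them into expected structure volumes. The structure names, prior, intensity model, and voxel size are hypothetical and are not taken from the published atlas or FreeSurfer tool.
```python
# Toy illustration of Bayesian atlas-based labelling: per-voxel posteriors are
# proportional to atlas prior probabilities times a Gaussian intensity
# likelihood, and expected structure volumes are the summed soft posteriors
# times the voxel volume. All names and values are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_vox, voxel_mm3 = 10_000, 1.0
structures = ["rest_of_thalamus", "pulvinar_like", "mediodorsal_like"]

prior = rng.dirichlet(alpha=[8.0, 1.5, 1.0], size=n_vox)      # (N, K) atlas prior
mu = np.array([85.0, 95.0, 105.0])                            # class intensity means
sigma = np.array([8.0, 8.0, 8.0])                             # class intensity std devs

# Synthetic intensities drawn from the prior-weighted mixture
labels = np.array([rng.choice(3, p=p) for p in prior])
intensity = rng.normal(mu[labels], sigma[labels])

# Posterior = prior * Gaussian likelihood, normalised per voxel
lik = np.exp(-0.5 * ((intensity[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
post = prior * lik
post /= post.sum(axis=1, keepdims=True)

for name, vol in zip(structures, post.sum(axis=0) * voxel_mm3):
    print(f"expected volume of {name}: {vol:.0f} mm^3")
```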
Affiliation(s)
- Juan Eugenio Iglesias: Centre for Medical Image Computing (CMIC), Department of Medical Physics and Biomedical Engineering, University College London, United Kingdom; BCBL, Basque Center on Cognition, Brain and Language, Spain
- Ricardo Insausti: Human Neuroanatomy Laboratory, University of Castilla-La Mancha, Spain
- Martina Bocchetta: Dementia Research Centre, Department of Neurodegenerative Disease, Institute of Neurology, University College London, United Kingdom
- Koen Van Leemput: Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA; Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
- Douglas N Greve: Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA
- Andre van der Kouwe: Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA
- Bruce Fischl: Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA; MIT Computer Science and Artificial Intelligence Laboratory, USA
|
50
|
Nielsen JD, Madsen KH, Puonti O, Siebner HR, Bauer C, Madsen CG, Saturnino GB, Thielscher A. Automatic skull segmentation from MR images for realistic volume conductor models of the head: Assessment of the state-of-the-art. Neuroimage 2018. [DOI: 10.1016/j.neuroimage.2018.03.001] [Citation(s) in RCA: 76] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022] Open
|