1
Xena-Bosch C, Kodali S, Sahi N, Chard D, Llufriu S, Toosy AT, Martinez-Heras E, Prados F. Advances in MRI optic nerve segmentation. Mult Scler Relat Disord 2025; 98:106437. [PMID: 40220726] [DOI: 10.1016/j.msard.2025.106437]
Abstract
Understanding optic nerve structure and monitoring changes within it can provide insights into neurodegenerative diseases like multiple sclerosis, in which optic nerves are often damaged by inflammatory episodes of optic neuritis. Over the past decades, interest in the optic nerve has increased, particularly with advances in magnetic resonance technology and the advent of deep learning solutions. These advances have significantly improved the visualisation and analysis of optic nerves, making it possible to detect subtle changes that aid the early diagnosis and treatment of optic nerve-related diseases and the planning of radiotherapy interventions. Effective segmentation techniques are therefore crucial for enhancing the accuracy of predictive models and for planning interventions and treatment strategies. This comprehensive review, which includes 27 peer-reviewed articles published between 2007 and 2024, examines and highlights the evolution of optic nerve magnetic resonance imaging segmentation over the past decade, tracing the development from intensity-based methods to the latest deep learning algorithms, including multi-atlas solutions using single or multiple image modalities.
Affiliation(s)
- Carla Xena-Bosch
- e-Health Center, Universitat Oberta de Catalunya, Barcelona, Spain
- Srikirti Kodali
- Queen Square MS Centre, Department of Neuroinflammation, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London, United Kingdom
- Nitin Sahi
- Queen Square MS Centre, Department of Neuroinflammation, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London, United Kingdom
- Declan Chard
- Queen Square MS Centre, Department of Neuroinflammation, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London, United Kingdom; National Institute for Health Research (NIHR) University College London Hospitals (UCLH) Biomedical Research Centre, United Kingdom
- Sara Llufriu
- Neuroimmunology and Multiple Sclerosis Unit, Hospital Clínic de Barcelona, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS) and Universitat de Barcelona, Barcelona, Spain
- Ahmed T Toosy
- Queen Square MS Centre, Department of Neuroinflammation, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London, United Kingdom
- Eloy Martinez-Heras
- Neuroimmunology and Multiple Sclerosis Unit, Hospital Clínic de Barcelona, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS) and Universitat de Barcelona, Barcelona, Spain
- Ferran Prados
- e-Health Center, Universitat Oberta de Catalunya, Barcelona, Spain; Queen Square MS Centre, Department of Neuroinflammation, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London, United Kingdom; Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
2
Wang L, Sun Y, Seidlitz J, Bethlehem RAI, Alexander-Bloch A, Dorfschmidt L, Li G, Elison JT, Lin W, Wang L. A lifespan-generalizable skull-stripping model for magnetic resonance images that leverages prior knowledge from brain atlases. Nat Biomed Eng 2025. [PMID: 39779813] [DOI: 10.1038/s41551-024-01337-w]
Abstract
In magnetic resonance imaging of the brain, an imaging-preprocessing step removes the skull and other non-brain tissue from the images. But methods for such a skull-stripping process often struggle with large data heterogeneity across medical sites and with dynamic changes in tissue contrast across lifespans. Here we report a skull-stripping model for magnetic resonance images that generalizes across lifespans by leveraging personalized priors from brain atlases. The model consists of a brain extraction module that provides an initial estimation of the brain tissue on an image, and a registration module that derives a personalized prior from an age-specific atlas. The model is substantially more accurate than state-of-the-art skull-stripping methods, as we show with a large and diverse dataset of 21,334 lifespan scans acquired from 18 sites with various imaging protocols and scanners, and it generates naturally consistent and seamless lifespan changes in brain volume, faithfully charting the underlying biological processes of brain development and ageing.
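As a rough, generic illustration of the underlying idea (combining an initial brain-tissue estimate with a registered, age-specific atlas prior), and not the authors' model, a minimal sketch might look like the following; the array names, the multiplicative fusion and the threshold are assumptions made for this example.

```python
import numpy as np
from scipy import ndimage

def fuse_with_atlas_prior(initial_prob, atlas_prior, threshold=0.5):
    """Toy fusion of an initial brain-probability map with a registered
    atlas prior (both float arrays in [0, 1] on the same voxel grid)."""
    assert initial_prob.shape == atlas_prior.shape
    # Voxel-wise product re-weights the initial estimate by the prior.
    posterior = initial_prob * atlas_prior
    mask = posterior > threshold * posterior.max()
    # Keep only the largest connected component as the brain mask.
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# Example with random arrays standing in for real probability maps.
rng = np.random.default_rng(0)
initial = rng.random((64, 64, 64))
prior = ndimage.gaussian_filter(rng.random((64, 64, 64)), sigma=8)
prior = (prior - prior.min()) / (prior.max() - prior.min())
brain_mask = fuse_with_atlas_prior(initial, prior)
print(brain_mask.shape, int(brain_mask.sum()))
```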
Affiliation(s)
- Limei Wang
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC, USA
- Yue Sun
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC, USA
- Jakob Seidlitz
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, USA
- Department of Child and Adolescent Psychiatry and Behavioral Science, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Lifespan Brain Institute, The Children's Hospital of Philadelphia and Penn Medicine, Philadelphia, PA, USA
- Aaron Alexander-Bloch
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, USA
- Department of Child and Adolescent Psychiatry and Behavioral Science, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Lifespan Brain Institute, The Children's Hospital of Philadelphia and Penn Medicine, Philadelphia, PA, USA
- Lena Dorfschmidt
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, USA
- Department of Child and Adolescent Psychiatry and Behavioral Science, The Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Lifespan Brain Institute, The Children's Hospital of Philadelphia and Penn Medicine, Philadelphia, PA, USA
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jed T Elison
- Institute of Child Development, University of Minnesota, Minneapolis, MN, USA
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Li Wang
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
3
Nishimaki K, Onda K, Ikuta K, Chotiyanonta J, Uchida Y, Mori S, Iyatomi H, Oishi K. OpenMAP-T1: A Rapid Deep-Learning Approach to Parcellate 280 Anatomical Regions to Cover the Whole Brain. Hum Brain Mapp 2024; 45:e70063. [PMID: 39523990] [PMCID: PMC11551626] [DOI: 10.1002/hbm.70063]
Abstract
This study introduces OpenMAP-T1, a deep-learning-based method for rapid and accurate whole-brain parcellation in T1-weighted brain MRI, which aims to overcome the limitations of conventional normalization-to-atlas-based approaches and multi-atlas label-fusion (MALF) techniques. Brain image parcellation is a fundamental process in neuroscientific and clinical research, enabling a detailed analysis of specific cerebral regions. Normalization-to-atlas-based methods have been employed for this task, but they face limitations due to variations in brain morphology, especially in pathological conditions. MALF techniques improved the accuracy of image parcellation and robustness to variations in brain morphology, but at the cost of a high computational demand that requires lengthy processing times. OpenMAP-T1 integrates several convolutional neural network models across six phases: preprocessing, cropping, skull-stripping, parcellation, hemisphere segmentation, and final merging. This process involves standardizing MRI images, isolating the brain tissue, and parcellating it into 280 anatomical structures that cover the whole brain, including detailed gray and white matter structures, while simplifying the parcellation process and incorporating robust training to handle various scan types and conditions. OpenMAP-T1 was validated on the Johns Hopkins University atlas library and eight available open resources, including real-world clinical images, and demonstrated robustness across datasets with variations in scanner types, magnetic field strengths, and image processing techniques, such as defacing. Compared with existing methods, OpenMAP-T1 significantly reduced the processing time per image from several hours to less than 90 s without compromising accuracy. It was particularly effective in handling images with intensity inhomogeneity and varying head positions, conditions commonly seen in clinical settings. The adaptability of OpenMAP-T1 to a wide range of MRI datasets and its robustness to various scan conditions highlight its potential as a versatile tool in neuroimaging.
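The six-phase structure lends itself to a simple staged pipeline. The sketch below is a toy illustration of chaining such phases on a NumPy volume; every function name and body is a hypothetical stand-in (real phases would wrap trained CNNs) and none of it reflects the actual OpenMAP-T1 code or API.

```python
import numpy as np

# Hypothetical stand-ins for the six phases; each takes and returns a volume.
def preprocess(vol):
    return (vol - vol.mean()) / (vol.std() + 1e-8)   # intensity standardization

def crop(vol):
    return vol[8:-8, 8:-8, 8:-8]                     # trim empty margins

def skull_strip(vol):
    return vol * (vol > vol.mean())                  # crude foreground mask

def parcellate(vol):
    # Fake "280-region" labelling by intensity quantiles, purely illustrative.
    edges = np.quantile(vol, np.linspace(0, 1, 280)[1:-1])
    return np.digitize(vol, edges).astype(np.int16)

def split_hemispheres(labels):
    hemi = np.zeros_like(labels)
    hemi[labels.shape[0] // 2:] = 1                  # crude left/right split
    return hemi

def merge(labels, hemi):
    return labels * 2 + hemi                         # hemisphere-specific label ids

def run_pipeline(volume):
    """Chain the phases: preprocess -> crop -> skull-strip -> parcellate ->
    hemisphere split -> merge."""
    v = skull_strip(crop(preprocess(volume)))
    labels = parcellate(v)
    return merge(labels, split_hemispheres(labels))

out = run_pipeline(np.random.rand(96, 96, 96).astype(np.float32))
print(out.shape, int(out.max()))
```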
Affiliation(s)
- Kei Nishimaki
- The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Department of Applied Informatics, Graduate School of Science and Engineering, Hosei University, Tokyo, Japan
- Kengo Onda
- The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Kumpei Ikuta
- Department of Applied Informatics, Graduate School of Science and Engineering, Hosei University, Tokyo, Japan
- Jill Chotiyanonta
- The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Yuto Uchida
- The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Susumu Mori
- The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Hitoshi Iyatomi
- Department of Applied Informatics, Graduate School of Science and Engineering, Hosei University, Tokyo, Japan
- Kenichi Oishi
- The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- The Richman Family Precision Medicine Center of Excellence in Alzheimer's Disease, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
4
Borges P, Shaw R, Varsavsky T, Kläser K, Thomas D, Drobnjak I, Ourselin S, Cardoso MJ. Acquisition-invariant brain MRI segmentation with informative uncertainties. Med Image Anal 2024; 92:103058. [PMID: 38104403] [PMCID: PMC7617170] [DOI: 10.1016/j.media.2023.103058]
Abstract
Combining multi-site data can strengthen and uncover trends, but is a task that is marred by the influence of site-specific covariates that can bias the data and, therefore, any downstream analyses. Post-hoc multi-site correction methods exist but have strong assumptions that often do not hold in real-world scenarios. Algorithms should be designed in a way that can account for site-specific effects, such as those that arise from sequence parameter choices, and in instances where generalisation fails, should be able to identify such a failure by means of explicit uncertainty modelling. This body of work showcases such an algorithm, which can become robust to the physics of acquisition in the context of segmentation tasks while simultaneously modelling uncertainty. We demonstrate that our method not only generalises to complete holdout datasets, preserving segmentation quality, but does so while accounting for site-specific sequence choices, which also allows it to perform as a harmonisation tool.
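One common way to expose voxel-wise segmentation uncertainty (not necessarily the scheme used in this work) is Monte Carlo dropout at test time. A minimal sketch with an assumed toy 3D network:

```python
import torch
import torch.nn as nn

# Toy 3D segmentation head with dropout; stands in for a real model.
model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout3d(p=0.2),
    nn.Conv3d(16, 4, 1),            # 4 tissue classes
)

def mc_dropout_predict(net, volume, n_samples=20):
    """Average softmax over stochastic passes; return mean map and entropy."""
    net.train()                     # keep dropout active at inference
    probs = []
    with torch.no_grad():
        for _ in range(n_samples):
            probs.append(torch.softmax(net(volume), dim=1))
    mean_p = torch.stack(probs).mean(dim=0)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum(dim=1)  # voxel-wise
    return mean_p, entropy

vol = torch.randn(1, 1, 32, 32, 32)
mean_p, unc = mc_dropout_predict(model, vol)
print(mean_p.shape, unc.shape)      # (1, 4, 32, 32, 32), (1, 32, 32, 32)
```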
Affiliation(s)
- Pedro Borges
- Department of Medical Physics and Biomedical Engineering, UCL, UK; School of Biomedical Engineering and Imaging Sciences, KCL, UK
- Richard Shaw
- Department of Medical Physics and Biomedical Engineering, UCL, UK; School of Biomedical Engineering and Imaging Sciences, KCL, UK
- Thomas Varsavsky
- Department of Medical Physics and Biomedical Engineering, UCL, UK; School of Biomedical Engineering and Imaging Sciences, KCL, UK
- Kerstin Kläser
- School of Biomedical Engineering and Imaging Sciences, KCL, UK
- Ivana Drobnjak
- Department of Medical Physics and Biomedical Engineering, UCL, UK
- M Jorge Cardoso
- School of Biomedical Engineering and Imaging Sciences, KCL, UK
5
Wang X, Liu S, Yang N, Chen F, Ma L, Ning G, Zhang H, Qiu X, Liao H. A Segmentation Framework With Unsupervised Learning-Based Label Mapper for the Ventricular Target of Intracranial Germ Cell Tumor. IEEE J Biomed Health Inform 2023; 27:5381-5392. [PMID: 37651479] [DOI: 10.1109/jbhi.2023.3310492]
Abstract
Intracranial germ cell tumors are rare tumors that mainly affect children and adolescents. Radiotherapy is the cornerstone of interdisciplinary treatment methods. Radiation of the whole ventricle system and the local tumor can reduce the complications in the late stage of radiotherapy while ensuring the curative effect. However, manually delineating the ventricular system is labor-intensive and time-consuming for physicians. The diverse ventricle shape and the hydrocephalus-induced ventricle dilation increase the difficulty of automatic segmentation algorithms. Therefore, this study proposed a fully automatic segmentation framework. Firstly, we designed a novel unsupervised learning-based label mapper, which is used to handle the ventricle shape variations and obtain the preliminary segmentation result. Then, to boost the segmentation performance of the framework, we improved the region growth algorithm and combined it with a fully connected conditional random field to optimize the preliminary results at both the regional and voxel scales. With only one set of annotated data required, the average time cost is 153.01 s, and the average target segmentation accuracy reaches 84.69%. Furthermore, we verified the algorithm in practical clinical applications. The results demonstrate that our proposed method helps physicians delineate radiotherapy targets, is feasible and clinically practical, and may fill the gap in automatic delineation methods for the ventricular target of intracranial germ cell tumors.
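As a generic illustration of the region-growth component (a basic intensity-tolerance grow from a seed, not the authors' improved algorithm or the conditional-random-field refinement), one could write:

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, tol=0.1):
    """Grow a region from `seed` (z, y, x), adding 6-connected neighbours
    whose intensity stays within `tol` of the seed intensity."""
    grown = np.zeros(volume.shape, dtype=bool)
    seed_val = volume[seed]
    grown[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]
                    and abs(volume[nz, ny, nx] - seed_val) <= tol):
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return grown

vol = np.random.rand(40, 40, 40)
vol[10:20, 10:20, 10:20] = 0.5          # a homogeneous block to segment
mask = region_grow_3d(vol, seed=(15, 15, 15), tol=0.05)
print(int(mask.sum()))
```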
6
Gao L, Yusufaly TI, Williamson CW, Mell LK. Optimized Atlas-Based Auto-Segmentation of Bony Structures from Whole-Body Computed Tomography. Pract Radiat Oncol 2023; 13:e442-e450. [PMID: 37030539] [DOI: 10.1016/j.prro.2023.03.013]
Abstract
PURPOSE To develop and test a method for fully automated segmentation of bony structures from whole-body computed tomography (CT) and evaluate its performance compared with manual segmentation.
METHODS AND MATERIALS We developed a workflow for automatic whole-body bone segmentation using an atlas-based segmentation (ABS) method with a postprocessing module (ABSPP) in MIM MAESTRO software. Fifty-two CT scans comprised the training set to build the atlas library, and 29 CT scans comprised the test set. To validate the workflow, we compared the Dice similarity coefficient (DSC), mean distance to agreement, and relative volume errors between ABSPP and ABS with no postprocessing (ABSNPP), with manual segmentation as the reference (gold standard).
RESULTS The ABSPP method resulted in significantly improved segmentation accuracy (DSC range, 0.85-0.98) compared with the ABSNPP method (DSC range, 0.55-0.87; P < .001). Mean distance to agreement results also indicated high agreement between ABSPP and manual reference delineations (range, 0.11-1.56 mm), which was significantly improved compared with ABSNPP (range, 1.00-2.34 mm) for the majority of tested bony structures. Relative volume errors were also significantly lower for ABSPP compared with ABSNPP for most bony structures.
CONCLUSIONS We developed a fully automated MIM workflow for bony structure segmentation from whole-body CT, which exhibited high accuracy compared with manual delineation. The integrated postprocessing module significantly improved workflow performance.
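For reference, two of the reported metrics can be computed directly from binary masks. A minimal sketch with NumPy arrays, assuming boolean masks on a common grid (this is not the MIM MAESTRO implementation):

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    """DSC = 2|A ∩ M| / (|A| + |M|) for two boolean masks."""
    auto_mask, manual_mask = auto_mask.astype(bool), manual_mask.astype(bool)
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    denom = auto_mask.sum() + manual_mask.sum()
    return 2.0 * intersection / denom if denom else 1.0

def relative_volume_error(auto_mask, manual_mask):
    """Signed volume error of the automatic mask relative to the manual one."""
    v_auto, v_manual = auto_mask.sum(), manual_mask.sum()
    return (v_auto - v_manual) / v_manual

manual = np.zeros((50, 50, 50), dtype=bool)
manual[10:30, 10:30, 10:30] = True
auto = np.zeros_like(manual)
auto[12:30, 10:30, 10:30] = True
print(round(dice_coefficient(auto, manual), 3),
      round(relative_volume_error(auto, manual), 3))
```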
Affiliation(s)
- Lei Gao
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California
- Tahir I Yusufaly
- Russell H. Morgan Department of Radiology and Radiologic Sciences, Johns Hopkins University, School of Medicine, Baltimore, Maryland
- Casey W Williamson
- Department of Radiation Medicine, Oregon Health Sciences University, Portland, Oregon
- Loren K Mell
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California
7
Chen L, Wu Z, Zhao F, Wang Y, Lin W, Wang L, Li G. An attention-based context-informed deep framework for infant brain subcortical segmentation. Neuroimage 2023; 269:119931. [PMID: 36746299] [PMCID: PMC10241225] [DOI: 10.1016/j.neuroimage.2023.119931]
Abstract
Precise segmentation of subcortical structures from infant brain magnetic resonance (MR) images plays an essential role in studying early subcortical structural and functional developmental patterns and diagnosis of related brain disorders. However, due to the dynamic appearance changes, low tissue contrast, and tiny subcortical size in infant brain MR images, infant subcortical segmentation is a challenging task. In this paper, we propose a context-guided, attention-based, coarse-to-fine deep framework to precisely segment the infant subcortical structures. At the coarse stage, we aim to directly predict the signed distance maps (SDMs) from multi-modal intensity images, including T1w, T2w, and the ratio of T1w and T2w images, with an SDM-Unet, which can leverage the spatial context information, including the structural position information and the shape information of the target structure, to generate high-quality SDMs. At the fine stage, the predicted SDMs, which encode spatial-context information of each subcortical structure, are integrated with the multi-modal intensity images as the input to a multi-source and multi-path attention Unet (M2A-Unet) for achieving refined segmentation. Both the 3D spatial and channel attention blocks are added to guide the M2A-Unet to focus more on the important subregions and channels. We additionally incorporate the inner and outer subcortical boundaries as extra labels to help precisely estimate the ambiguous boundaries. We validate our method on an infant MR image dataset and on an unrelated neonatal MR image dataset. Compared to eleven state-of-the-art methods, the proposed framework consistently achieves higher segmentation accuracy in both qualitative and quantitative evaluations of infant MR images and also exhibits good generalizability in the neonatal dataset.
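The signed distance maps used as the coarse-stage regression target can be derived from a binary label with standard Euclidean distance transforms. A minimal sketch, assuming the common convention of negative values inside the structure and positive values outside:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Negative Euclidean distance inside the structure, positive outside."""
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask)   # distance to the structure (0 inside)
    inside = distance_transform_edt(mask)     # distance to the background (0 outside)
    return outside - inside

label = np.zeros((64, 64, 64), dtype=bool)
label[20:40, 20:40, 20:40] = True
sdm = signed_distance_map(label)
print(sdm.min(), sdm.max())                   # negative at the centre, positive far away
```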
Affiliation(s)
- Liangjun Chen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Zhengwang Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Fenqiang Zhao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Ya Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Weili Lin
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Gang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
8
Magalhães TNC, Casseb RF, Gerbelli CLB, Pimentel-Siva LR, Nogueira MH, Teixeira CVL, Carletti AFMK, de Rezende TJR, Joaquim HPG, Talib LL, Forlenza OV, Cendes F, Balthazar MLF. Whole-brain DTI parameters associated with tau protein and hippocampal volume in Alzheimer's disease. Brain Behav 2023; 13:e2863. [PMID: 36601694] [PMCID: PMC9927845] [DOI: 10.1002/brb3.2863]
Abstract
The causes of the neurodegenerative processes in Alzheimer's disease (AD) are not completely known. Recent studies have shown that white matter (WM) damage could be more severe and widespread than whole-brain cortical atrophy and that such damage may appear even before the damage to the gray matter (GM). In AD, amyloid-beta (Aβ42) and tau proteins could directly affect WM, spreading across brain networks. Since hippocampal atrophy is common in the early phase of the disease, it is reasonable to expect that hippocampal volume (HV) might also be related to WM integrity. Our study aimed to evaluate the integrity of the whole-brain WM, through diffusion tensor imaging (DTI) parameters, in mild AD and amnestic mild cognitive impairment (aMCI) due to AD (with Aβ42 alteration in cerebrospinal fluid [CSF]) in relation to controls, and possible correlations between those measures and the CSF levels of Aβ42, phosphorylated tau protein (p-Tau) and total tau (t-Tau). We found widespread WM alteration in the patient groups, and we also observed correlations between p-Tau and t-Tau and tracts directly linked to mesial temporal lobe (MTL) structures (fornix and hippocampal cingulum). However, linear regressions showed that HV explained the variation found in the DTI measures better than the CSF proteins did (with weak to moderate effect sizes, explaining from 9% to 31%). In conclusion, we found widespread alterations in WM integrity, particularly in regions commonly affected by the disease, in our groups of patients with aMCI due to AD and mild AD. Nonetheless, in the statistical models, HV predicted the integrity of the MTL tracts better than the biomarkers in CSF.
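As a schematic of the kind of model referred to (a simple linear regression of a DTI metric on hippocampal volume, with R² as the proportion of variance explained), using entirely synthetic stand-in values rather than study data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 60
hv = rng.normal(3.0, 0.4, n)                    # hypothetical hippocampal volumes (mL)
fa_fornix = 0.05 * hv + rng.normal(0, 0.03, n)  # hypothetical fornix FA values

model = LinearRegression().fit(hv.reshape(-1, 1), fa_fornix)
r2 = model.score(hv.reshape(-1, 1), fa_fornix)  # proportion of variance explained
print(f"slope={model.coef_[0]:.3f}, R^2={r2:.2f}")
```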
Affiliation(s)
- Thamires Naela Cardoso Magalhães
- Department of Neurology and Neuroimaging Laboratory, School of Medical Sciences, University of Campinas (UNICAMP), Campinas, Brazil; Brazilian Institute of Neuroscience and Neurotechnology, São Paulo, Brazil
- Raphael Fernandes Casseb
- Department of Neurology and Neuroimaging Laboratory, School of Medical Sciences, University of Campinas (UNICAMP), Campinas, Brazil; Seaman Family MR Research Center, University of Calgary, Calgary, Canada
- Christian Luiz Baptista Gerbelli
- Department of Neurology and Neuroimaging Laboratory, School of Medical Sciences, University of Campinas (UNICAMP), Campinas, Brazil
- Luciana Ramalho Pimentel-Siva
- Department of Neurology and Neuroimaging Laboratory, School of Medical Sciences, University of Campinas (UNICAMP), Campinas, Brazil; Brazilian Institute of Neuroscience and Neurotechnology, São Paulo, Brazil
- Mateus Henrique Nogueira
- Department of Neurology and Neuroimaging Laboratory, School of Medical Sciences, University of Campinas (UNICAMP), Campinas, Brazil; Brazilian Institute of Neuroscience and Neurotechnology, São Paulo, Brazil
- Camila Vieira Ligo Teixeira
- Brazilian Institute of Neuroscience and Neurotechnology, São Paulo, Brazil; National Institute on Aging, National Institute of Health, Baltimore, Maryland, USA
- Ana Flávia Mac Knight Carletti
- Department of Neurology and Neuroimaging Laboratory, School of Medical Sciences, University of Campinas (UNICAMP), Campinas, Brazil
- Thiago Junqueira Ribeiro de Rezende
- Department of Neurology and Neuroimaging Laboratory, School of Medical Sciences, University of Campinas (UNICAMP), Campinas, Brazil; Brazilian Institute of Neuroscience and Neurotechnology, São Paulo, Brazil
- Leda Leme Talib
- Laboratory of Neuroscience (LIM-27), Department and Institute of Psychiatry, University of Sao Paulo (USP), São Paulo, Brazil
- Orestes Vicente Forlenza
- Laboratory of Neuroscience (LIM-27), Department and Institute of Psychiatry, University of Sao Paulo (USP), São Paulo, Brazil
- Fernando Cendes
- Department of Neurology and Neuroimaging Laboratory, School of Medical Sciences, University of Campinas (UNICAMP), Campinas, Brazil; Brazilian Institute of Neuroscience and Neurotechnology, São Paulo, Brazil
- Marcio Luiz Figueredo Balthazar
- Department of Neurology and Neuroimaging Laboratory, School of Medical Sciences, University of Campinas (UNICAMP), Campinas, Brazil; Brazilian Institute of Neuroscience and Neurotechnology, São Paulo, Brazil
9
Casamitjana A, Iglesias JE. High-resolution atlasing and segmentation of the subcortex: Review and perspective on challenges and opportunities created by machine learning. Neuroimage 2022; 263:119616. [PMID: 36084858] [PMCID: PMC11534291] [DOI: 10.1016/j.neuroimage.2022.119616]
Abstract
This paper reviews almost three decades of work on atlasing and segmentation methods for subcortical structures in human brain MRI. In writing this survey, we have three distinct aims. First, to document the evolution of digital subcortical atlases of the human brain, from the early MRI templates published in the nineties, to the complex multi-modal atlases at the subregion level that are available today. Second, to provide a detailed record of related efforts in the automated segmentation front, from earlier atlas-based methods to modern machine learning approaches. And third, to present a perspective on the future of high-resolution atlasing and segmentation of subcortical structures in in vivo human brain MRI, including open challenges and opportunities created by recent developments in machine learning.
Affiliation(s)
- Adrià Casamitjana
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA
10
Li Y, Qiu Z, Fan X, Liu X, Chang EIC, Xu Y. Integrated 3d flow-based multi-atlas brain structure segmentation. PLoS One 2022; 17:e0270339. [PMID: 35969596] [PMCID: PMC9377636] [DOI: 10.1371/journal.pone.0270339]
Abstract
MRI brain structure segmentation plays an important role in neuroimaging studies. Existing methods either spend much CPU time, require considerable annotated data, or fail in segmenting volumes with large deformation. In this paper, we develop a novel multi-atlas-based algorithm for 3D MRI brain structure segmentation. It consists of three modules: registration, atlas selection and label fusion. Both registration and label fusion leverage an integrated flow based on grayscale and SIFT features. We introduce an effective and efficient strategy for atlas selection by employing the accompanying energy generated in the registration step. A 3D sequential belief propagation method and a 3D coarse-to-fine flow matching approach are developed in both registration and label fusion modules. The proposed method is evaluated on five public datasets. The results show that it has the best performance in almost all the settings compared to competitive methods such as ANTs, Elastix, Learning to Rank and Joint Label Fusion. Moreover, our registration method is more than 7 times as efficient as that of ANTs SyN, while our label transfer method is 18 times faster than Joint Label Fusion in CPU time. The results on the ADNI dataset demonstrate that our method is applicable to image pairs that require a significant transformation in registration. The performance on a composite dataset suggests that our method succeeds in a cross-modality manner. The results of this study show that the integrated 3D flow-based method is effective and efficient for brain structure segmentation. It also demonstrates the power of SIFT features, multi-atlas segmentation and classical machine learning algorithms for a medical image analysis task. The experimental results on public datasets show the proposed method's potential for general applicability in various brain structures and settings.
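The label-fusion step, shown here in its simplest majority-vote form rather than the weighted, flow-based fusion described in the paper, can be sketched as:

```python
import numpy as np

def majority_vote_fusion(warped_labels):
    """Fuse a list of co-registered atlas label volumes by per-voxel majority vote."""
    stacked = np.stack(warped_labels, axis=0)            # (n_atlases, Z, Y, X)
    n_labels = int(stacked.max()) + 1
    votes = np.zeros((n_labels,) + stacked.shape[1:], dtype=np.int32)
    for lab in range(n_labels):
        votes[lab] = (stacked == lab).sum(axis=0)        # count votes per label
    return votes.argmax(axis=0).astype(stacked.dtype)

atlases = [np.random.randint(0, 4, size=(32, 32, 32)) for _ in range(5)]
consensus = majority_vote_fusion(atlases)
print(consensus.shape, np.unique(consensus))
```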
Affiliation(s)
- Yeshu Li
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Ziming Qiu
- Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, United States of America
- Xingyu Fan
- Bioengineering College, Chongqing University, Chongqing, China
- Xianglong Liu
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Yan Xu
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China
- Microsoft Research, Beijing, China
11
Lin H, Dong L, Jimenez RB. Emerging Technologies in Mitigating the Risks of Cardiac Toxicity From Breast Radiotherapy. Semin Radiat Oncol 2022; 32:270-281. [DOI: 10.1016/j.semradonc.2022.01.002]
12
Shengli C, Yingli Z, Zheng G, Shiwei L, Ziyun X, Han F, Yingwei Q, Gangqiang H. An aberrant hippocampal subregional network, rather than structure, characterizes major depressive disorder. J Affect Disord 2022; 302:123-130. [PMID: 35085667] [DOI: 10.1016/j.jad.2022.01.087]
Abstract
BACKGROUND Behavioral and neuroimaging studies have implicated the hippocampus as a cardinal neural structure in major depressive disorder (MDD) pathogenesis. The hippocampal subregion-specific structural and functional abnormalities in MDD remain unknown.
METHODS Multimodal magnetic resonance imaging (MRI) was acquired in 140 patients with MDD and 44 age- and sex-matched healthy controls (HCs). We quantified hippocampal subregional volumes and fractional anisotropy (FA) following a structural and diffusion MRI data analysis processing stream. Hippocampal subregional networks were established using seed-based functional connectivity (FC) analysis. Univariate analysis was used to investigate the differences between the two groups. Significant subfield metrics were correlated with depression severity.
RESULTS Compared with HCs, we did not find significant differences in subregional volumes or FA metrics in the MDD group. The MDD group exhibited significantly weaker connectivity of the right hippocampal subregional networks with the temporal cortex (extending to the insula) and basal ganglia, but showed increased connectivity of the right subiculum to the bilateral lingual gyrus. The FC between the right cornu ammonis 1 and the right fusiform gyrus, and between the right hippocampal-amygdala transition area and the bilateral basal ganglia, was negatively correlated with depression severity (r = -0.224, p = 0.010; r = -0.196, p = 0.025, respectively) in the MDD group.
LIMITATIONS This study did not consider longitudinal changes in the structure and functional connectivity of the hippocampal subregions.
CONCLUSION These findings advance our understanding of the neurobiological basis of depression by identifying hippocampal subregional structural and functional abnormalities.
Affiliation(s)
- Chen Shengli
- Department of Radiology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou Medical University, Duobao AVE 56, Liwan district, Guangzhou, China; Department of Radiology, Huazhong University of Science and Technology Union Shenzhen Hospital, Taoyuan AVE 89, Nanshan district, Shenzhen 518000, China
- Zhang Yingli
- Shenzhen Mental Health Center, Shenzhen Kangning Hospital, Cuizhu AVE 1080, Luohu district, Shenzhen 518020, China
- Guo Zheng
- Department of Hematology and Oncology, International Cancer Center, Shenzhen Key Laboratory of Precision Medicine for Hematological Malignancies, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University Health Science Center, Xueyuan AVE 1098, Nanshan district, Shenzhen, Guangdong 518000, China
- Lin Shiwei
- Department of Radiology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou Medical University, Duobao AVE 56, Liwan district, Guangzhou, China
- Xu Ziyun
- Shenzhen Mental Health Center, Shenzhen Kangning Hospital, Cuizhu AVE 1080, Luohu district, Shenzhen 518020, China
- Fang Han
- Shenzhen Mental Health Center, Shenzhen Kangning Hospital, Cuizhu AVE 1080, Luohu district, Shenzhen 518020, China
- Qiu Yingwei
- Department of Radiology, Huazhong University of Science and Technology Union Shenzhen Hospital, Taoyuan AVE 89, Nanshan district, Shenzhen 518000, China
- Hou Gangqiang
- Shenzhen Mental Health Center, Shenzhen Kangning Hospital, Cuizhu AVE 1080, Luohu district, Shenzhen 518020, China
13
Dahl MJ, Mather M, Werkle-Bergner M, Kennedy BL, Guzman S, Hurth K, Miller CA, Qiao Y, Shi Y, Chui HC, Ringman JM. Locus coeruleus integrity is related to tau burden and memory loss in autosomal-dominant Alzheimer's disease. Neurobiol Aging 2022; 112:39-54. [PMID: 35045380] [PMCID: PMC8976827] [DOI: 10.1016/j.neurobiolaging.2021.11.006]
Abstract
Abnormally phosphorylated tau, an indicator of Alzheimer's disease, accumulates in the first decades of life in the locus coeruleus (LC), the brain's main noradrenaline supply. However, technical challenges in in-vivo assessments have impeded research into the role of the LC in Alzheimer's disease. We studied participants with or known to be at-risk for mutations in genes causing autosomal-dominant Alzheimer's disease (ADAD) with early onset, providing a unique window into the pathogenesis of Alzheimer's largely disentangled from age-related factors. Using high-resolution MRI and tau PET, we found lower rostral LC integrity in symptomatic participants. LC integrity was associated with individual differences in tau burden and memory decline. Post-mortem analyses in a separate set of carriers of the same mutation confirmed substantial neuronal loss in the LC. Our findings link LC degeneration to tau burden and memory in Alzheimer's, and highlight a role of the noradrenergic system in this neurodegenerative disease.
Affiliation(s)
- Martin J Dahl
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany; Davis School of Gerontology, University of Southern California, Los Angeles, CA, USA
- Mara Mather
- Davis School of Gerontology, University of Southern California, Los Angeles, CA, USA
- Markus Werkle-Bergner
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Briana L Kennedy
- Davis School of Gerontology, University of Southern California, Los Angeles, CA, USA; School of Psychological Science, University of Western Australia, Perth, Australia
- Samuel Guzman
- Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Kyle Hurth
- Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Carol A Miller
- Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Yuchuan Qiao
- Laboratory of Neuro Imaging (LONI), USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Yonggang Shi
- Laboratory of Neuro Imaging (LONI), USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Helena C Chui
- Department of Neurology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- John M Ringman
- Department of Neurology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
14
Robust Bayesian fusion of continuous segmentation maps. Med Image Anal 2022; 78:102398. [PMID: 35349837] [DOI: 10.1016/j.media.2022.102398]
Abstract
The fusion of probability maps is required when trying to analyse a collection of image labels or probability maps produced by several segmentation algorithms or human raters. The challenge is to weight the combination of maps correctly, in order to reflect the agreement among raters, the presence of outliers and the spatial uncertainty in the consensus. In this paper, we address several shortcomings of prior work in continuous label fusion. We introduce a novel approach to jointly estimate a reliable consensus map and to assess the presence of outliers and the confidence in each rater. Our robust approach is based on heavy-tailed distributions allowing local estimates of raters' performances. In particular, we investigate the Laplace, the Student's t and the generalized double Pareto distributions, and compare them with respect to the classical Gaussian likelihood used in prior works. We unify these distributions into a common tractable inference scheme based on variational calculus and scale mixture representations. Moreover, the introduction of bias and spatial priors leads to proper rater bias estimates and control over the smoothness of the consensus map. Finally, we propose an approach that clusters raters based on variational boosting, and thus may produce several alternative consensus maps. Our approach was successfully tested on MR prostate delineations and on lung nodule segmentations from the LIDC-IDRI dataset.
15
Yan Y, Balbastre Y, Brudfors M, Ashburner J. Factorisation-Based Image Labelling. Front Neurosci 2022; 15:818604. [PMID: 35110992] [PMCID: PMC8801908] [DOI: 10.3389/fnins.2021.818604]
Abstract
Segmentation of brain magnetic resonance images (MRI) into anatomical regions is a useful task in neuroimaging. Manual annotation is time consuming and expensive, so having a fully automated and general purpose brain segmentation algorithm is highly desirable. To this end, we propose a patch-based label propagation approach based on a generative model with latent variables. Once trained, our Factorisation-based Image Labelling (FIL) model is able to label target images with a variety of image contrasts. We compare the effectiveness of our proposed model against the state-of-the-art using data from the MICCAI 2012 Grand Challenge and Workshop on Multi-Atlas Labelling. As our approach is intended to be general purpose, we also assess how well it can handle domain shift by labelling images of the same subjects acquired with different MR contrasts.
Affiliation(s)
- Yu Yan
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Yaël Balbastre
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
- Mikael Brudfors
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- John Ashburner
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
16
Improved Segmentation of Cardiac MRI Using Efficient Pre-Processing Technique. Journal of Information Technology Research 2022. [DOI: 10.4018/jitr.299932]
Abstract
Cardiac magnetic resonance imaging is a popular non-invasive technique used for assessing cardiac performance. Automating the segmentation helps increase diagnostic accuracy with considerably less time and effort. In this paper, a novel approach is proposed to improve the automated segmentation process by increasing the accuracy of segmentation, with a focus on efficient pre-processing of the cardiac magnetic resonance (MR) image. The pre-processing module in the proposed method includes noise estimation and efficient denoising of images using a discrete total variation-based non-local means method. Segmentation accuracy is evaluated using measures such as average perpendicular distance and Dice similarity coefficient. The performance of all the segmentation techniques is improved. A further segmentation comparison has also been performed using other state-of-the-art noise removal techniques for pre-processing, and it was observed that the proposed pre-processing technique outperformed the other noise removal techniques in improving segmentation accuracy.
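A generic non-local means pre-processing step, using the standard scikit-image implementation rather than the paper's discrete-total-variation-based variant, would look roughly like this (a synthetic image stands in for a cardiac MR slice):

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Synthetic noisy image standing in for a short-axis cardiac MR slice.
rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[40:90, 40:90] = 1.0
noisy = clean + rng.normal(0, 0.15, clean.shape)

sigma = float(estimate_sigma(noisy))             # estimate the noise level first
denoised = denoise_nl_means(noisy, patch_size=5, patch_distance=7,
                            h=0.8 * sigma, sigma=sigma, fast_mode=True)
print(f"estimated sigma={sigma:.3f}, residual std={np.std(denoised - clean):.3f}")
```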
17
Brown DA, McMahan CS, Shinohara RT, Linn KA. Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging. J Am Stat Assoc 2022; 117:547-560. [PMID: 36338275] [PMCID: PMC9632253] [DOI: 10.1080/01621459.2021.2014854]
Abstract
Alzheimer's disease is a neurodegenerative condition that accelerates cognitive decline relative to normal aging. It is of critical scientific importance to gain a better understanding of early disease mechanisms in the brain to facilitate effective, targeted therapies. The volume of the hippocampus is often used in diagnosis and monitoring of the disease. Measuring this volume via neuroimaging is difficult since each hippocampus must either be manually identified or automatically delineated, a task referred to as segmentation. Automatic hippocampal segmentation often involves mapping a previously manually segmented image to a new brain image and propagating the labels to obtain an estimate of where each hippocampus is located in the new image. A more recent approach to this problem is to propagate labels from multiple manually segmented atlases and combine the results using a process known as label fusion. To date, most label fusion algorithms employ voting procedures with voting weights assigned directly or estimated via optimization. We propose using a fully Bayesian spatial regression model for label fusion that facilitates direct incorporation of covariate information while making accessible the entire posterior distribution. Our results suggest that incorporating tissue classification (e.g., gray matter) into the label fusion procedure can greatly improve segmentation when relatively homogeneous, healthy brains are used as atlases for diseased brains. The fully Bayesian approach also produces meaningful uncertainty measures about hippocampal volumes, information which can be leveraged to detect significant, scientifically meaningful differences between healthy and diseased populations, improving the potential for early detection and tracking of the disease.
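As a rough, non-spatial analogue of the idea (a binary regression that fuses atlas votes together with a tissue-class covariate, rather than the fully Bayesian spatial model proposed here), one could fit something like the following on synthetic voxel-wise features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels, n_atlases = 5000, 6

# Hypothetical per-voxel features: atlas label votes plus a gray-matter probability.
true_label = rng.integers(0, 2, n_voxels)
atlas_votes = (true_label[:, None] ^ (rng.random((n_voxels, n_atlases)) < 0.15)).astype(float)
gm_prob = np.clip(0.7 * true_label + rng.normal(0, 0.2, n_voxels), 0, 1)
X = np.column_stack([atlas_votes, gm_prob])

clf = LogisticRegression(max_iter=1000).fit(X, true_label)
fused = clf.predict_proba(X)[:, 1]              # per-voxel posterior probability
print("accuracy:", ((fused > 0.5) == true_label).mean())
```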
Affiliation(s)
- D. Andrew Brown
- School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC 29634, USA
- Christopher S. McMahan
- School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC 29634, USA
- Russell T. Shinohara
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, and Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Kristin A. Linn
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, and Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
18
Pemberton HG, Zaki LAM, Goodkin O, Das RK, Steketee RME, Barkhof F, Vernooij MW. Technical and clinical validation of commercial automated volumetric MRI tools for dementia diagnosis-a systematic review. Neuroradiology 2021; 63:1773-1789. [PMID: 34476511] [PMCID: PMC8528755] [DOI: 10.1007/s00234-021-02746-3]
Abstract
Developments in neuroradiological MRI analysis offer promise in enhancing objectivity and consistency in dementia diagnosis through the use of quantitative volumetric reporting tools (QReports). Translation into clinical settings should follow a structured framework of development, including technical and clinical validation steps. However, published technical and clinical validation of the available commercial/proprietary tools is not always easy to find and pathways for successful integration into the clinical workflow are varied. The quantitative neuroradiology initiative (QNI) framework highlights six necessary steps for the development, validation and integration of quantitative tools in the clinic. In this paper, we reviewed the published evidence regarding regulatory-approved QReports for use in the memory clinic and to what extent this evidence fulfils the steps of the QNI framework. We summarize unbiased technical details of available products in order to increase the transparency of evidence and present the range of reporting tools on the market. Our intention is to assist neuroradiologists in making informed decisions regarding the adoption of these methods in the clinic. For the 17 products identified, 11 companies have published some form of technical validation on their methods, but only 4 have published clinical validation of their QReports in a dementia population. Upon systematically reviewing the published evidence for regulatory-approved QReports in dementia, we concluded that there is a significant evidence gap in the literature regarding clinical validation, workflow integration and in-use evaluation of these tools in dementia MRI diagnosis.
Affiliation(s)
- Hugh G Pemberton
- Centre for Medical Image Computing (CMIC), Department of Medical Physics and Bioengineering, University College London, London, UK
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, UK
- Lara A M Zaki
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, The Netherlands
- Olivia Goodkin
- Centre for Medical Image Computing (CMIC), Department of Medical Physics and Bioengineering, University College London, London, UK
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Ravi K Das
- Clinical, Educational and Health Psychology, University College London, London, UK
- Rebecca M E Steketee
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, The Netherlands
- Frederik Barkhof
- Centre for Medical Image Computing (CMIC), Department of Medical Physics and Bioengineering, University College London, London, UK
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Radiology & Nuclear Medicine, VU University Medical Center, Amsterdam, The Netherlands
- Meike W Vernooij
- Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, The Netherlands
- Department of Epidemiology, Erasmus MC University Medical Center, Rotterdam, The Netherlands
19
Wang Z, Demarcy T, Vandersteen C, Gnansia D, Raffaelli C, Guevara N, Delingette H. Bayesian logistic shape model inference: Application to cochlear image segmentation. Med Image Anal 2021; 75:102268. [PMID: 34710654] [DOI: 10.1016/j.media.2021.102268]
Abstract
Incorporating shape information is essential for the delineation of many organs and anatomical structures in medical images. While previous work has mainly focused on parametric spatial transformations applied to reference template shapes, in this paper, we address the Bayesian inference of parametric shape models for segmenting medical images with the objective of providing interpretable results. The proposed framework defines a likelihood appearance probability and a prior label probability based on a generic shape function through a logistic function. A reference length parameter defined in the sigmoid controls the trade-off between shape and appearance information. The inference of shape parameters is performed within an Expectation-Maximisation approach in which a Gauss-Newton optimization stage provides an approximation of the posterior probability of the shape parameters. This framework is applied to the segmentation of cochlear structures from clinical CT images constrained by a 10-parameter shape model. It is evaluated on three different datasets, one of which includes more than 200 patient images. The results show performances comparable to supervised methods and better than previously proposed unsupervised ones. It also enables an analysis of parameter distributions and the quantification of segmentation uncertainty, including the effect of the shape model.
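The core construction (mapping a shape function to a label prior through a logistic of its signed distance, with a reference length controlling the shape/appearance trade-off) can be sketched with a toy spherical shape in place of the 10-parameter cochlear model:

```python
import numpy as np

def sphere_sdf(shape, center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    grid = np.indices(shape).astype(float)
    dist = np.sqrt(((grid - np.array(center).reshape(3, 1, 1, 1)) ** 2).sum(axis=0))
    return dist - radius

def logistic_label_prior(sdf, ref_length=2.0):
    """Map a signed distance to a label probability via a sigmoid;
    `ref_length` plays the role of the reference length parameter."""
    return 1.0 / (1.0 + np.exp(sdf / ref_length))

sdf = sphere_sdf((48, 48, 48), center=(24, 24, 24), radius=10)
prior = logistic_label_prior(sdf, ref_length=2.0)
print(prior[24, 24, 24], prior[0, 0, 0])   # ~1 inside the shape, ~0 far outside
```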
Affiliation(s)
- Zihao Wang
- Inria, Epione Team, Université Côte d'Azur, Sophia Antipolis, France
- Thomas Demarcy
- Oticon Medical, 14 Chemin de Saint-Bernard Porte, Vallauris 06220, France
- Clair Vandersteen
- Inria, Epione Team, Université Côte d'Azur, Sophia Antipolis, France; Head and Neck University Institute, Nice University Hospital, 31 Avenue de Valombrose, Nice 06100, France
- Dan Gnansia
- Oticon Medical, 14 Chemin de Saint-Bernard Porte, Vallauris 06220, France
- Charles Raffaelli
- Department of Radiology, Centre Hospitalier Universitaire de Nice, 31 Avenue de Valombrose, Nice 06100, France
- Nicolas Guevara
- Head and Neck University Institute, Nice University Hospital, 31 Avenue de Valombrose, Nice 06100, France
- Hervé Delingette
- Inria, Epione Team, Université Côte d'Azur, Sophia Antipolis, France
20
Li Y, Cui J, Sheng Y, Liang X, Wang J, Chang EIC, Xu Y. Whole brain segmentation with full volume neural network. Comput Med Imaging Graph 2021; 93:101991. [PMID: 34634548] [DOI: 10.1016/j.compmedimag.2021.101991]
Abstract
Whole brain segmentation is an important neuroimaging task that segments the whole brain volume into anatomically labeled regions of interest. Convolutional neural networks have demonstrated good performance in this task. Existing solutions usually segment the brain image by classifying the voxels, or by labeling the slices or sub-volumes separately. Their representation learning is based on parts of the whole volume, whereas their labeling result is produced by aggregation of partial segmentations. Learning and inference with incomplete information could lead to a sub-optimal final segmentation result. To address these issues, we propose to adopt a full volume framework, which feeds the full-volume brain image into the segmentation network and directly outputs the segmentation result for the whole brain volume. The framework makes use of complete information in each volume and can be implemented easily. An effective instance of this framework is given subsequently. We adopt the 3D high-resolution network (HRNet) for learning spatially fine-grained representations and the mixed precision training scheme for memory-efficient training. Extensive experimental results on a publicly available 3D MRI brain dataset show that our proposed model advances the state-of-the-art methods in terms of segmentation performance.
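The full-volume idea reduces to a single forward pass: the whole volume goes in and a whole-volume label map comes out, with no patch or slice aggregation. A minimal sketch with a tiny fully convolutional 3D network (not the 3D HRNet used in the paper):

```python
import torch
import torch.nn as nn

class TinyFullVolumeNet(nn.Module):
    """Minimal fully convolutional 3D network: full volume in, label map out."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, n_classes, 1),
        )

    def forward(self, x):                    # x: (B, 1, D, H, W)
        return self.body(x)                  # logits: (B, n_classes, D, H, W)

net = TinyFullVolumeNet()
volume = torch.randn(1, 1, 64, 64, 64)       # the whole (downsampled) brain volume
with torch.no_grad():
    labels = net(volume).argmax(dim=1)       # per-voxel labels for the full volume
print(labels.shape)                          # torch.Size([1, 64, 64, 64])
```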
Collapse
Affiliation(s)
- Yeshu Li
- Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607, United States.
| | - Jonathan Cui
- Vacaville Christian Schools, Vacaville, CA 95687, United States.
| | - Yilun Sheng
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China; Microsoft Research, Beijing 100080, China.
| | - Xiao Liang
- High School Affiliated to Renmin University of China, Beijing 100080, China.
| | | | | | - Yan Xu
- School of Biological Science and Medical Engineering and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China; Microsoft Research, Beijing 100080, China.
| |
Collapse
|
21
|
Assessing the differential sensitivities of wave-CAIPI ViSTa myelin water fraction and magnetization transfer saturation for efficiently quantifying tissue damage in MS. Mult Scler Relat Disord 2021; 56:103309. [PMID: 34688179 DOI: 10.1016/j.msard.2021.103309] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 09/21/2021] [Accepted: 10/02/2021] [Indexed: 11/23/2022]
Abstract
BACKGROUND Wave-CAIPI Visualization of Short Transverse relaxation time component (ViSTa) is a recently developed, short-T1-sensitized MRI method for fast quantification of myelin water fraction (MWF) in the human brain. It represents a promising technique for the evaluation of subtle, early signals of demyelination in the cerebral white matter of multiple sclerosis (MS) patients. Currently, however, few studies exist that robustly assess the utility of ViSTa MWF measures of myelin compared to more conventional MRI measures of myelin in the brain of MS patients. Moreover, there are no previous studies evaluating the sensitivity of ViSTa MWF for the non-invasive detection of subtle tissue damage in both normal-appearing white matter (NAWM) and white matter lesions of MS patients. A central purpose of this study was therefore to systematically evaluate the relationship between the myelin sensitivity of T1-based ViSTa MWF mapping and a more generally recognized metric, Magnetization Transfer Saturation (MTsat), in healthy control and MS brain white matter. METHODS ViSTa MWF and MTsat values were evaluated in automatically classified normal-appearing white matter (NAWM), white matter (WM) lesion tissue, cortical gray matter, and deep gray matter of 29 MS patients and 10 healthy controls using 3T MRI. MWF and MTsat were also assessed in a tract-specific manner using the Johns Hopkins University WM atlas. MRI-derived measures of cerebral myelin content were compared using distribution measures suited to non-normal data: median, interquartile range and skewness. Separate analyses of variance were applied to test tissue-specific differences in MTsat and ViSTa MWF distribution metrics. Non-parametric tests were utilized when appropriate. All tests were corrected for multiple comparisons using the False Discovery Rate method at α = 0.05. RESULTS Differences in whole-NAWM MS tissue damage were detected with a higher effect size when using ViSTa MWF (q = 0.0008; η² = 0.34) compared to MTsat (q = 0.02; η² = 0.24). We also observed that, as a possible measure of WM pathology, ViSTa-derived NAWM MWF voxel distributions of MS subjects were consistently skewed towards lower MWF values, while MTsat voxel distributions showed reduced skewness values. We further identified tract-specific reductions in the mean ViSTa MWF of MS patients compared to controls that were not observed with MTsat. However, MTsat (q = 1.4 × 10⁻²¹; η² = 0.88) displayed higher effect sizes when differentiating NAWM and MS lesion tissue. Using regression analysis at the group level, we identified a linear relationship between MTsat and ViSTa MWF in NAWM (R² = 0.46; p = 7.8 × 10⁻⁴), in lesions (R² = 0.30; p = 0.004), and with all tissue types combined (R² = 0.71; p = 8.4 × 10⁻⁴⁵). The linear relationship was also observed in most of the WM tracts we investigated. ViSTa MWF in the NAWM of MS patients correlated with both disease duration (p = 0.02; R² = 0.27) and WM lesion volume (p = 0.002; R² = 0.34). CONCLUSION Because ViSTa MWF and MTsat metrics exhibit differential sensitivities to tissue damage in MS white matter, they can be collected in combination to provide an efficient, comprehensive measure of myelin water and macromolecular pool proton signals. These complementary measures may offer a more sensitive, non-invasive biopsy of early precursor signals in NAWM that occur prior to lesion formation. They may also aid in monitoring the efficacy of remyelination therapies.
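For illustration only (not the study's analysis code), a short sketch of the kind of statistics described above: distribution skewness per group and a Benjamini-Hochberg false discovery rate correction of several p-values; the toy data are assumptions.

```python
import numpy as np
from scipy.stats import skew

# Hedged sketch: per-group skewness of voxel-wise MWF distributions and a
# Benjamini-Hochberg FDR correction applied to a set of comparison p-values.

def benjamini_hochberg(p_values):
    """Return BH-adjusted q-values for an array of p-values."""
    p = np.asarray(p_values, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    q = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
    out = np.empty_like(q)
    out[order] = np.clip(q, 0, 1)
    return out

rng = np.random.default_rng(0)
# toy voxel-wise MWF distributions for two groups in one tissue class
mwf_controls = rng.normal(0.10, 0.02, 5000)
mwf_patients = rng.normal(0.09, 0.02, 5000) - 0.01 * rng.exponential(1.0, 5000)

print("skewness (controls):", round(skew(mwf_controls), 3))
print("skewness (patients):", round(skew(mwf_patients), 3))

# suppose several tissue/tract comparisons produced these p-values
p_vals = [0.0008, 0.02, 0.004, 0.30, 0.047]
print("BH q-values:", np.round(benjamini_hochberg(p_vals), 4))
```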
Collapse
|
22
|
Kong R, Yang Q, Gordon E, Xue A, Yan X, Orban C, Zuo XN, Spreng N, Ge T, Holmes A, Eickhoff S, Yeo BTT. Individual-Specific Areal-Level Parcellations Improve Functional Connectivity Prediction of Behavior. Cereb Cortex 2021; 31:4477-4500. [PMID: 33942058 PMCID: PMC8757323 DOI: 10.1093/cercor/bhab101] [Citation(s) in RCA: 119] [Impact Index Per Article: 29.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2020] [Revised: 03/03/2021] [Accepted: 03/12/2021] [Indexed: 11/13/2022] Open
Abstract
Resting-state functional magnetic resonance imaging (rs-fMRI) allows estimation of individual-specific cortical parcellations. We have previously developed a multi-session hierarchical Bayesian model (MS-HBM) for estimating high-quality individual-specific network-level parcellations. Here, we extend the model to estimate individual-specific areal-level parcellations. While network-level parcellations comprise spatially distributed networks spanning the cortex, the consensus is that areal-level parcels should be spatially localized, that is, should not span multiple lobes. There is disagreement about whether areal-level parcels should be strictly contiguous or comprise multiple noncontiguous components; therefore, we considered three areal-level MS-HBM variants spanning this range of possibilities. Individual-specific MS-HBM parcellations estimated using 10 min of data generalized better than other approaches using 150 min of data to out-of-sample rs-fMRI and task-fMRI from the same individuals. Resting-state functional connectivity derived from MS-HBM parcellations also achieved the best behavioral prediction performance. Among the three MS-HBM variants, the strictly contiguous MS-HBM exhibited the best resting-state homogeneity and the most uniform within-parcel task activation. In terms of behavioral prediction, the gradient-infused MS-HBM was numerically the best, but differences among MS-HBM variants were not statistically significant. Overall, these results suggest that areal-level MS-HBMs can capture behaviorally meaningful individual-specific parcellation features beyond group-level parcellations. Multi-resolution trained models and parcellations are publicly available (https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/brain_parcellation/Kong2022_ArealMSHBM).
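As a hedged illustration of one evaluation criterion mentioned above, resting-state homogeneity, here is a small NumPy sketch that computes the mean within-parcel pairwise correlation of vertex time series; the synthetic data and function name are illustrative, not part of the MS-HBM code.

```python
import numpy as np

# Sketch: resting-state homogeneity of a parcellation, computed as the mean
# pairwise Pearson correlation of vertex time series within each parcel,
# averaged over parcels.

def parcel_homogeneity(timeseries, labels):
    """timeseries: (n_vertices, n_timepoints); labels: (n_vertices,) parcel ids."""
    homogeneities = []
    for parcel in np.unique(labels):
        ts = timeseries[labels == parcel]
        if ts.shape[0] < 2:
            continue
        corr = np.corrcoef(ts)                          # (n_in_parcel, n_in_parcel)
        upper = corr[np.triu_indices_from(corr, k=1)]   # off-diagonal correlations
        homogeneities.append(upper.mean())
    return float(np.mean(homogeneities))

rng = np.random.default_rng(1)
n_vertices, n_timepoints, n_parcels = 300, 200, 10
labels = rng.integers(0, n_parcels, n_vertices)
shared = rng.standard_normal((n_parcels, n_timepoints))      # parcel-level signal
data = shared[labels] + 0.8 * rng.standard_normal((n_vertices, n_timepoints))
print("homogeneity:", round(parcel_homogeneity(data, labels), 3))
```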
Collapse
Affiliation(s)
- Ru Kong
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117583, Singapore
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), National University of Singapore, Singapore 117549, Singapore
- N.1 Institute for Health and Institute for Digital Medicine (WisDM), National University of Singapore, Singapore 117456, Singapore
| | - Qing Yang
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117583, Singapore
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), National University of Singapore, Singapore 117549, Singapore
- N.1 Institute for Health and Institute for Digital Medicine (WisDM), National University of Singapore, Singapore 117456, Singapore
| | - Evan Gordon
- Department of Radiology, Washington University School of Medicine, St. Louis, MO 63130, USA
| | - Aihuiping Xue
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117583, Singapore
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), National University of Singapore, Singapore 117549, Singapore
- N.1 Institute for Health and Institute for Digital Medicine (WisDM), National University of Singapore, Singapore 117456, Singapore
| | - Xiaoxuan Yan
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117583, Singapore
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), National University of Singapore, Singapore 117549, Singapore
- N.1 Institute for Health and Institute for Digital Medicine (WisDM), National University of Singapore, Singapore 117456, Singapore
- Integrative Sciences and Engineering Programme (ISEP), National University of Singapore, Singapore 119077, Singapore
| | - Csaba Orban
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117583, Singapore
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), National University of Singapore, Singapore 117549, Singapore
- N.1 Institute for Health and Institute for Digital Medicine (WisDM), National University of Singapore, Singapore 117456, Singapore
| | - Xi-Nian Zuo
- State Key Laboratory of Cognitive Neuroscience and Learning/IDG McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- National Basic Public Science Data Center, Chinese Academy of Sciences, Beijing 100101, China
| | - Nathan Spreng
- Laboratory of Brain and Cognition, Department of Neurology and Neurosurgery, McGill University, Montreal QC H3A 2B4, Canada
- Departments of Psychiatry and Psychology, Neurological Institute, McGill University, Montreal QC H3A 2B4, Canada
- McConnell Brain Imaging Centre, Montreal Neurological Institute (MNI), McGill University, Montreal QC H3A 2B4, Canada
| | - Tian Ge
- Psychiatric & Neurodevelopmental Genetics Unit, Center for Genomic Medicine, Massachusetts General Hospital, Boston, MA 02114, USA
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA
| | - Avram Holmes
- Department of Psychology, Yale University, New Haven, CT 06520, USA
| | - Simon Eickhoff
- Medical Faculty, Institute for Systems Neuroscience, Heinrich-Heine University Düsseldorf, Düsseldorf 40225, Germany
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Center Jülich, Jülich 52425, Germany
| | - B T Thomas Yeo
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117583, Singapore
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), National University of Singapore, Singapore 117549, Singapore
- N.1 Institute for Health and Institute for Digital Medicine (WisDM), National University of Singapore, Singapore 117456, Singapore
- Integrative Sciences and Engineering Programme (ISEP), National University of Singapore, Singapore 119077, Singapore
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA
| |
Collapse
|
23
|
Chen J, Sun Y, Fang Z, Lin W, Li G, Wang L. Harmonized neonatal brain MR image segmentation model for cross-site datasets. Biomed Signal Process Control 2021; 69. [DOI: 10.1016/j.bspc.2021.102810] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
|
24
|
Aganj I, Fischl B. Multi-Atlas Image Soft Segmentation via Computation of the Expected Label Value. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1702-1710. [PMID: 33687840 PMCID: PMC8202781 DOI: 10.1109/tmi.2021.3064661] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The use of multiple atlases is common in medical image segmentation. This typically requires deformable registration of the atlases (or the average atlas) to the new image, which is computationally expensive and susceptible to entrapment in local optima. We propose to instead consider the probability of all possible atlas-to-image transformations and compute the expected label value (ELV), thereby not relying merely on the transformation deemed "optimal" by the registration method. Moreover, we do so without actually performing deformable registration, thus avoiding the associated computational costs. We evaluate our ELV computation approach by applying it to brain, liver, and pancreas segmentation on datasets of magnetic resonance and computed tomography images.
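A toy, translation-only analogue of the expected-label-value idea (a sketch under strong simplifying assumptions, not the paper's algorithm, which handles richer transformations without explicit registration): every candidate shift of the atlas contributes to the soft label, weighted by how plausible that shift is given the image.

```python
import numpy as np

# Toy ELV analogue: instead of picking one "optimal" shift of the atlas, every
# candidate shift contributes to the soft label, weighted by how well the
# shifted atlas intensities match the target image.

def expected_label_value(target, atlas_img, atlas_lab, max_shift=3, temperature=500.0):
    weights, shifted_labels = [], []
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            moved_img = np.roll(atlas_img, (dy, dx), axis=(0, 1))
            moved_lab = np.roll(atlas_lab, (dy, dx), axis=(0, 1))
            ssd = np.mean((target - moved_img) ** 2)
            weights.append(np.exp(-ssd / temperature))   # plausibility of this shift
            shifted_labels.append(moved_lab.astype(float))
    weights = np.array(weights)
    weights /= weights.sum()
    # expected label value = probability-weighted average over all candidate shifts
    return np.tensordot(weights, np.stack(shifted_labels), axes=1)

rng = np.random.default_rng(2)
target = np.zeros((40, 40)); target[12:28, 10:26] = 100
target += 5 * rng.standard_normal(target.shape)
atlas_img = np.zeros((40, 40)); atlas_img[14:30, 12:28] = 100   # slightly offset atlas
atlas_lab = (atlas_img > 50).astype(int)
elv = expected_label_value(target, atlas_img, atlas_lab)
print("soft label range:", round(elv.min(), 2), round(elv.max(), 2))
```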
Collapse
|
25
|
Conze PH, Kavur AE, Cornec-Le Gall E, Gezer NS, Le Meur Y, Selver MA, Rousseau F. Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks. Artif Intell Med 2021; 117:102109. [PMID: 34127239 DOI: 10.1016/j.artmed.2021.102109] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 01/24/2021] [Accepted: 05/06/2021] [Indexed: 02/05/2023]
Abstract
Abdominal anatomy segmentation is crucial for numerous applications, from computer-assisted diagnosis to image-guided surgery. In this context, we address fully automated multi-organ segmentation from abdominal CT and MR images using deep learning. The proposed model extends standard conditional generative adversarial networks. In addition to the discriminator, which pushes the model to create realistic organ delineations, it embeds cascaded, partially pre-trained convolutional encoder-decoders as the generator. Fine-tuning encoders pre-trained on a large amount of non-medical images alleviates data-scarcity limitations. The network is trained end-to-end to benefit from simultaneous multi-level segmentation refinements using auto-context. Employed for healthy liver, kidneys and spleen segmentation, our pipeline provides promising results, outperforming state-of-the-art encoder-decoder schemes. Applied to the Combined Healthy Abdominal Organ Segmentation (CHAOS) challenge, organized in conjunction with the IEEE International Symposium on Biomedical Imaging 2019, it ranked first in three competition categories: liver CT, liver MR and multi-organ MR segmentation. Combining cascaded convolutional and adversarial networks strengthens the ability of deep learning pipelines to automatically delineate multiple abdominal organs, with good generalization capability. The comprehensive evaluation suggests that better guidance could be achieved to help clinicians in abdominal image interpretation and clinical decision making.
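A minimal sketch of the adversarial ingredient described above, using assumed toy networks rather than the paper's cascaded, partially pre-trained encoder-decoders: a discriminator judges (image, mask) pairs and its feedback is added to the ordinary segmentation loss.

```python
import torch
import torch.nn as nn

# Sketch only: adversarial training of a segmentation "generator" against a
# discriminator that sees image + mask pairs.

class TinyGenerator(nn.Module):           # stand-in for the segmentation network
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x))

class TinyDiscriminator(nn.Module):       # judges (image, mask) pairs
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

gen, disc = TinyGenerator(), TinyDiscriminator()
bce = nn.BCEWithLogitsLoss()
image = torch.randn(2, 1, 64, 64)
gt_mask = (torch.rand(2, 1, 64, 64) > 0.7).float()

# generator step: segmentation loss + adversarial term
pred = gen(image)
seg_loss = nn.functional.binary_cross_entropy(pred, gt_mask)
adv_logits = disc(image, pred)
g_loss = seg_loss + 0.1 * bce(adv_logits, torch.ones_like(adv_logits))

# discriminator step: real pairs vs. generated pairs
real_logits = disc(image, gt_mask)
fake_logits = disc(image, pred.detach())
d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
         bce(fake_logits, torch.zeros_like(fake_logits))
print(float(g_loss), float(d_loss))
```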
Collapse
Affiliation(s)
- Pierre-Henri Conze
- IMT Atlantique, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France.
| | - Ali Emre Kavur
- Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
| | - Emilie Cornec-Le Gall
- Department of Nephrology, University Hospital, 2 avenue Foch, 29609 Brest, France; UMR 1078, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
| | - Naciye Sinem Gezer
- Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey; Department of Radiology, Faculty of Medicine, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
| | - Yannick Le Meur
- Department of Nephrology, University Hospital, 2 avenue Foch, 29609 Brest, France; LBAI UMR 1227, Inserm, 5 avenue Foch, 29609 Brest, France
| | - M Alper Selver
- Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
| | - François Rousseau
- IMT Atlantique, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
| |
Collapse
|
26
|
Lazaridis G, Lorenzi M, Mohamed-Noriega J, Aguilar-Munoa S, Suzuki K, Nomoto H, Ourselin S, Garway-Heath DF, Crabb DP, Bunce C, Amalfitano F, Anand N, Azuara-Blanco A, Bourne RR, Broadway DC, Cunliffe IA, Diamond JP, Fraser SG, Ho TA, Martin KR, McNaught AI, Negi A, Shah A, Spry PG, White ET, Wormald RP, Xing W, Zeyen TG. OCT Signal Enhancement with Deep Learning. Ophthalmol Glaucoma 2021; 4:295-304. [DOI: 10.1016/j.ogla.2020.10.008] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2020] [Revised: 10/04/2020] [Accepted: 10/06/2020] [Indexed: 01/29/2023]
|
27
|
Test-time adaptable neural networks for robust medical image segmentation. Med Image Anal 2021; 68:101907. [DOI: 10.1016/j.media.2020.101907] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Revised: 11/11/2020] [Accepted: 11/12/2020] [Indexed: 11/20/2022]
|
28
|
Wallstén E, Axelsson J, Jonsson J, Karlsson CT, Nyholm T, Larsson A. Improved PET/MRI attenuation correction in the pelvic region using a statistical decomposition method on T2-weighted images. EJNMMI Phys 2020; 7:68. [PMID: 33226495 PMCID: PMC7683750 DOI: 10.1186/s40658-020-00336-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 11/04/2020] [Indexed: 11/29/2022] Open
Abstract
Background Attenuation correction remains a problem for whole-body PET/MRI. The statistical decomposition algorithm (SDA) is a probabilistic atlas-based method that calculates synthetic CTs from T2-weighted MRI scans. In this study, we evaluated the application of SDA for attenuation correction of PET images in the pelvic region. Materials and methods Twelve patients were retrospectively selected from an ongoing prostate cancer research study. The patients had same-day scans of [11C]acetate PET/MRI and CT. The CT images were non-rigidly registered to the PET/MRI geometry, and PET images were reconstructed with attenuation correction employing CT, SDA-generated CT, and the built-in Dixon sequence-based method of the scanner. The PET images reconstructed using CT-based attenuation correction were used as ground truth. Results The mean whole-image PET uptake error was reduced from −5.4% for Dixon-PET to −0.9% for SDA-PET. The prostate standardized uptake value (SUV) quantification error was significantly reduced from −5.6% for Dixon-PET to −2.3% for SDA-PET. Conclusion Attenuation correction with SDA improves quantification of PET/MR images in the pelvic region compared to the Dixon-based method.
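As a simple illustration of the evaluation described above (not the study's code), the relative uptake error of a test reconstruction against the CT-based reference can be computed over the whole image or within an organ mask; the toy data below are assumptions.

```python
import numpy as np

# Sketch: relative PET uptake error of a test reconstruction against the
# CT-based reference reconstruction, whole-image and within an organ ROI.

def relative_uptake_error(test_pet, reference_pet, mask=None):
    if mask is None:
        mask = np.ones_like(reference_pet, dtype=bool)
    ref = reference_pet[mask].mean()
    return 100.0 * (test_pet[mask].mean() - ref) / ref

rng = np.random.default_rng(3)
reference = rng.gamma(shape=2.0, scale=1.0, size=(64, 64, 32))
dixon_based = 0.95 * reference            # toy: systematic 5% underestimation
sda_based = 0.99 * reference              # toy: 1% underestimation
organ = np.zeros_like(reference, dtype=bool); organ[20:40, 20:40, 10:20] = True

print("whole image, Dixon: %.1f%%" % relative_uptake_error(dixon_based, reference))
print("whole image, SDA:   %.1f%%" % relative_uptake_error(sda_based, reference))
print("organ ROI,  Dixon:  %.1f%%" % relative_uptake_error(dixon_based, reference, organ))
```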
Collapse
Affiliation(s)
- Elin Wallstén
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden.
| | - Jan Axelsson
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden
| | - Joakim Jonsson
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden
| | | | - Tufve Nyholm
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden
| | - Anne Larsson
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden
| |
Collapse
|
29
|
Lazaridis G, Lorenzi M, Ourselin S, Garway-Heath D. Improving statistical power of glaucoma clinical trials using an ensemble of cyclical generative adversarial networks. Med Image Anal 2020; 68:101906. [PMID: 33260117 DOI: 10.1016/j.media.2020.101906] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 11/11/2020] [Accepted: 11/12/2020] [Indexed: 11/16/2022]
Abstract
Although spectral-domain OCT (SDOCT) is now in clinical use for glaucoma management, published clinical trials relied on time-domain OCT (TDOCT), which is characterized by a low signal-to-noise ratio, leading to low statistical power. For this reason, such trials require large numbers of patients observed over long intervals and become more costly. We propose a probabilistic ensemble model and a cycle-consistent perceptual loss for improving the statistical power of trials utilizing TDOCT. TDOCT images are converted to synthesized SDOCT images and segmented via Bayesian fusion of an ensemble of GANs. The final retinal nerve fibre layer segmentation is obtained automatically on an averaged synthesized image using label fusion. We benchmark different networks using (i) GAN, (ii) Wasserstein GAN (WGAN), (iii) GAN + perceptual loss and (iv) WGAN + perceptual loss. For training and validation, an independent dataset is used, while testing is performed on the UK Glaucoma Treatment Study (UKGTS), i.e. a TDOCT-based trial. We quantify the statistical power of the measurements obtained with our method, as compared with those derived from the original TDOCT. The results provide new insights into the UKGTS, showing a significantly better separation between treatment arms, while bringing the statistical power of TDOCT on par with visual field measurements.
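One ingredient mentioned above is a perceptual loss; below is a hedged sketch of such a loss built on VGG-16 feature maps (randomly initialised here to keep the sketch self-contained), not the authors' full cycle-consistent ensemble.

```python
import torch
import torch.nn as nn
import torchvision

# Sketch of a perceptual loss: L1 distance between intermediate VGG-16 feature
# maps of a synthesized image and its target.

class PerceptualLoss(nn.Module):
    def __init__(self, n_layers=16):
        super().__init__()
        vgg = torchvision.models.vgg16(weights=None)   # use pretrained weights in practice
        self.features = nn.Sequential(*list(vgg.features.children())[:n_layers]).eval()
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.l1 = nn.L1Loss()

    def forward(self, synthesized, target):
        # VGG expects 3-channel input; replicate single-channel OCT B-scans
        if synthesized.shape[1] == 1:
            synthesized = synthesized.repeat(1, 3, 1, 1)
            target = target.repeat(1, 3, 1, 1)
        return self.l1(self.features(synthesized), self.features(target))

loss_fn = PerceptualLoss()
fake_sdoct = torch.rand(2, 1, 128, 128)
real_sdoct = torch.rand(2, 1, 128, 128)
print(float(loss_fn(fake_sdoct, real_sdoct)))
```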
Collapse
Affiliation(s)
- Georgios Lazaridis
- Centre for Medical Image Computing, University College London, London, United Kingdom; School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and the Institute of Ophthalmology, University College London, London, United Kingdom.
| | - Marco Lorenzi
- Université Côte d'Azur, Inria, Epione Team, 06902 Sophia Antipolis, France
| | - Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - David Garway-Heath
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and the Institute of Ophthalmology, University College London, London, United Kingdom
| |
Collapse
|
30
|
Abstract
Segmentation of medical images using multiple atlases has recently gained immense attention due to its increased robustness against variability across different subjects. These atlas-based methods typically comprise three steps: atlas selection, image registration, and finally label fusion. Image registration is one of the core steps in this process, the accuracy of which directly affects the final labeling performance. However, due to inter-subject anatomical variations, registration errors are inevitable. The aim of this paper is to develop a deep learning-based confidence estimation method to alleviate the potential effects of registration errors. We first propose a fully convolutional network (FCN) with residual connections to learn the relationship between an image patch pair (i.e., patches from the target subject and the atlas) and the related label confidence patch. With the obtained label confidence patch, we can identify potential errors in the warped atlas labels and correct them. Then, we use two label fusion methods to fuse the corrected atlas labels. The proposed methods are validated on a publicly available dataset for hippocampus segmentation. Experimental results demonstrate that our proposed methods outperform state-of-the-art segmentation methods.
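A toy sketch of the correction-then-fusion idea, with the confidence map simply assumed rather than predicted by the paper's FCN: low-confidence warped binary labels are flipped and the corrected maps are fused by majority voting.

```python
import numpy as np

# Sketch: given warped atlas label maps and a per-atlas confidence map
# (probability that each warped label is correct), flip low-confidence binary
# labels and fuse by majority voting.

def correct_labels(warped_labels, confidence, threshold=0.5):
    """Flip binary labels wherever the estimated confidence falls below threshold."""
    corrected = warped_labels.copy()
    flip = confidence < threshold
    corrected[flip] = 1 - corrected[flip]
    return corrected

def majority_vote(label_stack):
    """label_stack: (n_atlases, ...) binary maps -> fused binary map."""
    return (label_stack.mean(axis=0) >= 0.5).astype(int)

rng = np.random.default_rng(4)
truth = np.zeros((32, 32), dtype=int); truth[8:24, 8:24] = 1
atlases, confidences = [], []
for _ in range(5):
    errors = rng.random(truth.shape) < 0.1          # 10% registration-induced errors
    noisy = np.where(errors, 1 - truth, truth)
    atlases.append(noisy)
    # toy confidence: low where we injected an error (a real method must predict this)
    confidences.append(np.where(errors, 0.2, 0.9))

corrected = np.stack([correct_labels(a, c) for a, c in zip(atlases, confidences)])
fused = majority_vote(corrected)
print("voxel agreement with truth:", round((fused == truth).mean(), 3))
```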
Collapse
Affiliation(s)
- Hancan Zhu
- School of Mathematics Physics and Information, Shaoxing University, Shaoxing, 312000, China
| | - Ehsan Adeli
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, 94305, CA, USA
| | - Feng Shi
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, 27599, North Carolina, USA.
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea.
| |
Collapse
|
31
|
Mu G, Yang Y, Gao Y, Feng Q. [Multi-scale 3D convolutional neural network-based segmentation of head and neck organs at risk]. NAN FANG YI KE DA XUE XUE BAO = JOURNAL OF SOUTHERN MEDICAL UNIVERSITY 2020; 40:491-498. [PMID: 32895133 DOI: 10.12122/j.issn.1673-4254.2020.04.07] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
OBJECTIVE To establish an algorithm based on a 3D convolutional neural network to segment the organs at risk (OARs) in the head and neck on CT images. METHODS We propose an automatic segmentation algorithm for head and neck OARs based on V-Net. To enhance the feature expression ability of the 3D neural network, we combined the squeeze-and-excitation (SE) module with the residual convolution module in V-Net to increase the weight of the features that have greater contributions to the segmentation task. Using a multi-scale strategy, we completed organ segmentation with two cascaded models for localization and fine segmentation, and the input image was resampled to different resolutions during preprocessing to allow the two models to focus on the extraction of global location information and local detail features, respectively. RESULTS Our experiments on segmentation of 22 OARs in the head and neck indicated that, compared with existing methods, the proposed method achieved better segmentation accuracy and efficiency; the average segmentation accuracy was improved by 9%, while the average test time was reduced from 33.82 s to 2.79 s. CONCLUSIONS The 3D convolutional neural network based on a multi-scale strategy can effectively and efficiently improve the accuracy of organ segmentation and can potentially be used in clinical settings for segmentation of other organs to improve the efficiency of clinical treatment.
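A minimal 3D squeeze-and-excitation block, shown as a sketch of how channel re-weighting can be inserted into a V-Net-style residual block; layer sizes are illustrative and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

# Sketch: a 3D squeeze-and-excitation (SE) block inside a residual conv block.

class SEBlock3D(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)            # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),   # excitation weights
        )

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                    # re-weight feature channels

class ResidualSEConv3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
        )
        self.se = SEBlock3D(channels)

    def forward(self, x):
        return torch.relu(x + self.se(self.conv(x)))

block = ResidualSEConv3D(16)
print(block(torch.randn(1, 16, 24, 24, 24)).shape)      # torch.Size([1, 16, 24, 24, 24])
```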
Collapse
Affiliation(s)
- Guangrui Mu
- School of Biomedical Engineering, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
| | - Yanping Yang
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200030, China
| | - Yaozong Gao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200030, China
| | - Qianjin Feng
- School of Biomedical Engineering, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
| |
Collapse
|
32
|
Zöllei L, Iglesias JE, Ou Y, Grant PE, Fischl B. Infant FreeSurfer: An automated segmentation and surface extraction pipeline for T1-weighted neuroimaging data of infants 0-2 years. Neuroimage 2020; 218:116946. [PMID: 32442637 PMCID: PMC7415702 DOI: 10.1016/j.neuroimage.2020.116946] [Citation(s) in RCA: 101] [Impact Index Per Article: 20.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2019] [Revised: 03/03/2020] [Accepted: 05/12/2020] [Indexed: 01/23/2023] Open
Abstract
The development of automated tools for brain morphometric analysis in infants has lagged significantly behind analogous tools for adults. This gap reflects the greater challenges in this domain due to: 1) a smaller-scaled region of interest, 2) increased motion corruption, 3) regional changes in geometry due to heterochronous growth, and 4) regional variations in contrast properties corresponding to ongoing myelination and other maturation processes. Nevertheless, there is a great need for automated image-processing tools to quantify differences between infant groups and other individuals, because aberrant cortical morphologic measurements (including volume, thickness, surface area, and curvature) have been associated with neuropsychiatric, neurologic, and developmental disorders in children. In this paper we present an automated segmentation and surface extraction pipeline designed to accommodate clinical MRI studies of infant brains in a population of 0-2-year-olds. The algorithm relies on a single channel of T1-weighted MR images to achieve automated segmentation of cortical and subcortical brain areas, producing volumes of subcortical structures and surface models of the cerebral cortex. We evaluated the algorithm both qualitatively and quantitatively using manually labeled datasets, relevant comparator software solutions cited in the literature, and expert evaluations. The computational tools and atlases described in this paper will be distributed to the research community as part of the FreeSurfer image analysis package.
Collapse
Affiliation(s)
- Lilla Zöllei
- Laboratory for Computational Neuroimaging, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA.
| | - Juan Eugenio Iglesias
- Laboratory for Computational Neuroimaging, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA; Center for Medical Image Computing, University College London, United Kingdom; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
| | - Yangming Ou
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, USA
| | - P Ellen Grant
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, USA
| | - Bruce Fischl
- Laboratory for Computational Neuroimaging, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
| |
Collapse
|
33
|
Automated segmentation of the hypothalamus and associated subunits in brain MRI. Neuroimage 2020; 223:117287. [PMID: 32853816 PMCID: PMC8417769 DOI: 10.1016/j.neuroimage.2020.117287] [Citation(s) in RCA: 104] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Revised: 07/17/2020] [Accepted: 08/09/2020] [Indexed: 01/19/2023] Open
Abstract
Highlights: a publicly available deep learning tool to segment the hypothalamus and its subunits; the tool outperforms inter-rater accuracy and approaches intra-rater precision; it generalises robustly to unseen heterogeneous datasets; it yields a rejection rate of less than 1% in a QC analysis performed on 675 scans; and it detects subtle subunit-specific hypothalamic atrophy in Alzheimer's disease.
Despite the crucial role of the hypothalamus in the regulation of the human body, neuroimaging studies of this structure and its nuclei are scarce. Such scarcity partially stems from the lack of automated segmentation tools, since manual delineation suffers from scalability and reproducibility issues. Due to the small size of the hypothalamus and the lack of image contrast in its vicinity, automated segmentation is difficult and has been long neglected by widespread neuroimaging packages like FreeSurfer or FSL. Nonetheless, recent advances in deep machine learning are enabling us to tackle difficult segmentation problems with high accuracy. In this paper we present a fully automated tool based on a deep convolutional neural network, for the segmentation of the whole hypothalamus and its subregions from T1-weighted MRI scans. We use aggressive data augmentation in order to make the model robust to T1-weighted MR scans from a wide array of different sources, without any need for preprocessing. We rigorously assess the performance of the presented tool through extensive analyses, including: inter- and intra-rater variability experiments between human observers; comparison of our tool with manual segmentation; comparison with an automated method based on multi-atlas segmentation; assessment of robustness by quality control analysis of a larger, heterogeneous dataset (ADNI); and indirect evaluation with a volumetric study performed on ADNI. The presented model outperforms multi-atlas segmentation scores as well as inter-rater accuracy level, and approaches intra-rater precision. Our method does not require any preprocessing and runs in less than a second on a GPU, and approximately 10 seconds on a CPU. The source code as well as the trained model are publicly available at https://github.com/BBillot/hypothalamus_seg, and will also be distributed with FreeSurfer.
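As a hedged illustration of the aggressive intensity augmentation mentioned above (not the authors' pipeline), a random gamma change and a smooth multiplicative bias field can be applied to a T1-weighted volume as follows.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch: random gamma/contrast change plus a smooth multiplicative bias field
# applied to a 3D T1w volume, as a simple form of intensity augmentation.

def augment_volume(volume, rng, gamma_range=(0.6, 1.6), bias_strength=0.3, bias_sigma=8):
    v = volume.astype(float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)      # normalise to [0, 1]
    gamma = rng.uniform(*gamma_range)                   # non-linear contrast change
    v = v ** gamma
    field = gaussian_filter(rng.standard_normal(v.shape), sigma=bias_sigma)
    field = 1.0 + bias_strength * field / (np.abs(field).max() + 1e-8)
    return v * field                                    # smooth multiplicative bias

rng = np.random.default_rng(5)
t1 = rng.random((64, 64, 64))
augmented = augment_volume(t1, rng)
print(augmented.shape, round(float(augmented.mean()), 3))
```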
Collapse
|
34
|
|
35
|
Janiri D, Simonetti A, Piras F, Ciullo V, Spalletta G, Sani G. Predominant polarity and hippocampal subfield volumes in Bipolar disorders. Bipolar Disord 2020; 22:490-497. [PMID: 31630469 DOI: 10.1111/bdi.12857] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
OBJECTIVES Predominant polarity (PP) is a proposed course specifier for bipolar disorders (BD) based on patient lifetime mood episodes. Hippocampal subfield volumetric changes have been proposed as a neurobiological marker for BD and could be influenced by mood episodes. Our study aimed to test the hypothesis that patients with BD differ in hippocampal subfield volumes according to their PP. METHODS We assessed 172 outpatients, diagnosed with BD according to DSM-IV-TR criteria, and 150 healthy control (HC) participants. High-resolution magnetic resonance imaging was performed on all subjects and volumes of all hippocampal subfields were measured using FreeSurfer. RESULTS Patients with depressive PP (BD-DP) and with uncertain PP (BD-UP) but not with manic/hypomanic PP (BD-MP) showed a global reduction on all hippocampal subfield volumes with respect to HCs. When directly compared, BD-DP presented with smaller bilateral presubiculum/subiculum volumes than BD-MP. CONCLUSIONS Results support the potential utility of PP not only as a clinical but also as a neurobiological specifier of BD.
Collapse
Affiliation(s)
- Delfina Janiri
- Department of Neurology and Psychiatry, Sapienza University of Rome, Rome, Italy; Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Lucio Bini Center, Rome, Italy
| | - Alessio Simonetti
- Department of Neurology and Psychiatry, Sapienza University of Rome, Rome, Italy; Lucio Bini Center, Rome, Italy; Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
| | - Fabrizio Piras
- IRCCS Santa Lucia Foundation, Laboratory of Neuropsychiatry, Rome, Italy
| | - Valentina Ciullo
- IRCCS Santa Lucia Foundation, Laboratory of Neuropsychiatry, Rome, Italy
| | - Gianfranco Spalletta
- Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA; IRCCS Santa Lucia Foundation, Laboratory of Neuropsychiatry, Rome, Italy
| | - Gabriele Sani
- Lucio Bini Center, Rome, Italy; NESMOS Department (Neurosciences, Mental Health, and Sensory Organs), Sapienza University of Rome, School of Medicine and Psychology, Sant'Andrea Hospital, Rome, Italy; Tufts Medical Center, Tufts University School of Medicine, Boston, MA, USA
| |
Collapse
|
36
|
Agrawal P, Whitaker RT, Elhabian SY. An Optimal, Generative Model for Estimating Multi-Label Probabilistic Maps. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2316-2326. [PMID: 31985415 PMCID: PMC7395849 DOI: 10.1109/tmi.2020.2968917] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Multi-label probabilistic maps, a.k.a. probabilistic segmentations, parameterize a population of intimately co-existing anatomical shapes and are useful for various medical imaging applications, such as segmentation, anatomical atlases, shape analysis, and consensus generation. Existing methods to estimate probabilistic segmentations rely on ad hoc intermediate representations (e.g., the average of Gaussian-smoothed label maps or smoothed signed distance maps) that do not necessarily conform to the underlying generative process. Generative modeling of such maps could help discover sub-groups in a population, as well as aid in their statistical analysis, via clustering and mixture modeling techniques. In this paper, we propose an estimation of multi-label probabilistic maps and showcase their favorable performance for modeling anatomical shapes such as the left atrium of the human heart and brain structures. The proposed formulation relies on a constrained optimization in the natural parameter space of the exponential family form of categorical distributions. A smoothness prior provides generalizability in the model and helps achieve greater performance in modeling tasks for unseen samples. We demonstrate and compare the effectiveness of the proposed method for Bayesian image segmentation, multi-atlas segmentation, and shape-based clustering.
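For orientation, here is a sketch of the simple baseline the abstract contrasts against, an average of Gaussian-smoothed one-hot label maps, which makes explicit what a multi-label probabilistic map is; it is not the proposed natural-parameter model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch of the ad hoc baseline mentioned above: a multi-label probabilistic
# map built by averaging Gaussian-smoothed one-hot label maps over a population.

def probabilistic_map(label_maps, n_labels, sigma=1.0):
    """label_maps: (n_subjects, H, W) integer maps -> (n_labels, H, W) probabilities."""
    probs = np.zeros((n_labels,) + label_maps.shape[1:])
    for labels in label_maps:
        for k in range(n_labels):
            probs[k] += gaussian_filter((labels == k).astype(float), sigma=sigma)
    probs /= probs.sum(axis=0, keepdims=True) + 1e-12   # renormalise per voxel
    return probs

rng = np.random.default_rng(6)
subjects = []
for _ in range(10):
    m = np.zeros((32, 32), dtype=int)
    cy, cx = 16 + rng.integers(-2, 3), 16 + rng.integers(-2, 3)
    m[cy - 6:cy + 6, cx - 6:cx + 6] = 1                 # shifted square "structure"
    subjects.append(m)
pmap = probabilistic_map(np.stack(subjects), n_labels=2, sigma=1.0)
print(pmap.shape, "probabilities sum to", float(pmap.sum(axis=0).mean()))
```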
Collapse
|
37
|
Sun L, Ma W, Ding X, Huang Y, Liang D, Paisley J. A 3D Spatially Weighted Network for Segmentation of Brain Tissue From MRI. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:898-909. [PMID: 31449009 DOI: 10.1109/tmi.2019.2937271] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The segmentation of brain tissue in MRI is valuable for extracting brain structure to aid diagnosis, treatment and tracking the progression of different neurologic diseases. Medical image data are volumetric and some neural network models for medical image segmentation have addressed this using a 3D convolutional architecture. However, this volumetric spatial information has not been fully exploited to enhance the representative ability of deep networks, and these networks have not fully addressed the practical issues facing the analysis of multimodal MRI data. In this paper, we propose a spatially-weighted 3D network (SW-3D-UNet) for brain tissue segmentation of single-modality MRI, and extend it using multimodality MRI data. We validate our model on the MRBrainS13 and MALC12 datasets. This unpublished model ranked first on the leaderboard of the MRBrainS13 Challenge.
Collapse
|
38
|
Integration of a knowledge-based constraint into generative models with applications in semi-automatic segmentation of liver tumors. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101725] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
|
39
|
Wachinger C, Toews M, Langs G, Wells W, Golland P. Keypoint Transfer for Fast Whole-Body Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:273-282. [PMID: 29994670 PMCID: PMC6310119 DOI: 10.1109/tmi.2018.2851194] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
We introduce an approach for image segmentation based on sparse correspondences between keypoints in testing and training images. Keypoints represent automatically identified distinctive image locations, where each keypoint correspondence suggests a transformation between images. We use these correspondences to transfer the label maps of entire organs from the training images to the test image. The keypoint transfer algorithm includes three steps: 1) keypoint matching; 2) voting-based keypoint labeling; and 3) keypoint-based probabilistic transfer of organ segmentations. We report segmentation results for abdominal organs in whole-body CT and MRI, as well as in contrast-enhanced CT and MRI. Our method offers a speed-up of about three orders of magnitude in comparison with common multi-atlas segmentation while achieving an accuracy that compares favorably. Moreover, keypoint transfer does not require the registration to an atlas or a training phase. Finally, the method allows for the segmentation of scans with a highly variable field-of-view.
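A toy sketch of the keypoint-transfer idea under assumed descriptors and labels (the paper uses 3D keypoints and probabilistic transfer of whole organ label maps): match test keypoints to training keypoints by nearest-neighbour descriptor search and let the matches vote.

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch: nearest-neighbour descriptor matching followed by voting-based
# keypoint labeling.

rng = np.random.default_rng(7)
n_train, n_test, d = 500, 40, 32
train_desc = rng.standard_normal((n_train, d))
train_organ = rng.integers(0, 4, n_train)               # organ label of each training keypoint

# test descriptors are noisy copies of random training descriptors
picked = rng.integers(0, n_train, n_test)
test_desc = train_desc[picked] + 0.1 * rng.standard_normal((n_test, d))

tree = cKDTree(train_desc)
_, nearest = tree.query(test_desc, k=3)                 # 3 nearest training keypoints each

votes = train_organ[nearest]                            # (n_test, 3) candidate labels
predicted = np.array([np.bincount(v, minlength=4).argmax() for v in votes])
print("voting accuracy vs. true source organ:",
      round(float((predicted == train_organ[picked]).mean()), 3))
```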
Collapse
|
40
|
Longitudinal brain tumor segmentation prediction in MRI using feature and label fusion. Biomed Signal Process Control 2020; 55:101648. [PMID: 34354762 PMCID: PMC8336640 DOI: 10.1016/j.bspc.2019.101648] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
This work proposes a novel framework for brain tumor segmentation prediction in longitudinal multi-modal MRI scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with a tumor cell density feature to obtain tumor segmentation predictions at follow-up timepoints using data from the baseline pre-operative timepoint. The cell density feature is obtained by solving the 3D reaction-diffusion equation for biophysical tumor growth modelling using the Lattice-Boltzmann method. The second method utilizes JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method, known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). We quantitatively evaluate both proposed methods using the Dice Similarity Coefficient (DSC) in longitudinal scans of 9 patients from the public BraTS 2015 multi-institutional dataset. The evaluation results for the feature-based fusion method show improved tumor segmentation prediction for the whole tumor (DSC WT = 0.314, p = 0.1502), tumor core (DSC TC = 0.332, p = 0.0002), and enhancing tumor (DSC ET = 0.448, p = 0.0002) regions. The feature-based fusion shows some improvement in predicting tumor segmentations for longitudinal tumor tracking, whereas the JLF offers a statistically significant improvement on the actual segmentation of WT and ET (DSC WT = 0.85 ± 0.055, DSC ET = 0.837 ± 0.074), and also improves the results of GB. The novelty of this work is two-fold: (a) exploiting tumor cell density as a feature to predict brain tumor segmentation, using a stochastic multi-resolution RF-based method, and (b) improving the performance of another successful tumor segmentation method, GB, by fusing it with the RF-based segmentation labels.
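A hedged sketch of the biophysical growth model mentioned above, using an explicit finite-difference scheme rather than the paper's Lattice-Boltzmann solver, for the Fisher-Kolmogorov reaction-diffusion equation dc/dt = D∇²c + ρc(1−c).

```python
import numpy as np

# Sketch: explicit finite-difference integration of the reaction-diffusion
# tumor growth model dc/dt = D * laplacian(c) + rho * c * (1 - c).

def grow_tumor(c, D=0.1, rho=0.05, dt=0.1, steps=500):
    c = c.copy()
    for _ in range(steps):
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c)
        c = c + dt * (D * lap + rho * c * (1.0 - c))
        c = np.clip(c, 0.0, 1.0)
    return c

density = np.zeros((64, 64))
density[30:34, 30:34] = 0.8                      # small seed of tumor cell density
grown = grow_tumor(density)
print("initial volume:", float((density > 0.5).sum()),
      "-> grown volume:", float((grown > 0.5).sum()))
```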
Collapse
|
41
|
Kavur AE, Gezer NS, Barış M, Şahin Y, Özkan S, Baydar B, Yüksel U, Kılıkçıer Ç, Olut Ş, Akar GB, Ünal G, Dicle O, Selver MA. Comparison of semi-automatic and deep learning-based automatic methods for liver segmentation in living liver transplant donors. Diagn Interv Radiol 2020; 26:11-21. [PMID: 31904568 PMCID: PMC7075579 DOI: 10.5152/dir.2019.19025] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2019] [Revised: 03/05/2019] [Accepted: 06/10/2019] [Indexed: 11/22/2022]
Abstract
PURPOSE To compare the accuracy and repeatability of emerging machine learning based (i.e. deep) automatic segmentation algorithms with those of well-established semi-automatic (interactive) methods for determining liver volume in living liver transplant donors at computerized tomography (CT) imaging. METHODS A total of 12 (6 semi-, 6 full-automatic) methods are evaluated. The semi-automatic segmentation algorithms are based on both traditional iterative models including watershed, fast marching, region growing, active contours and modern techniques including robust statistical segmenter and super-pixels. These methods entail some sort of interaction mechanism such as placing initialization seeds on images or determining a parameter range. The automatic methods are based on deep learning and they include three framework templates (DeepMedic, NiftyNet and U-Net) the first two of which are applied with default parameter sets and the last two involve adapted novel model designs. For 20 living donors (6 training and 12 test datasets), a group of imaging scientists and radiologists created ground truths by performing manual segmentations on contrast material-enhanced CT images. Each segmentation is evaluated using five metrics (i.e. volume overlap and relative volume errors, average/RMS/maximum symmetrical surface distances). The results are mapped to a scoring system and a final grade is calculated by taking their average. Accuracy and repeatability were evaluated using slice by slice comparisons and volumetric analysis. Diversity and complementarity are observed through heatmaps. Majority voting and Simultaneous Truth and Performance Level Estimation (STAPLE) algorithms are utilized to obtain the fusion of the individual results. RESULTS The top four methods are determined to be automatic deep models having 79.63, 79.46 and 77.15 and 74.50 scores. Intra-user score is determined as 95.14. Overall, deep automatic segmentation outperformed interactive techniques on all metrics. The mean volume of liver of ground truth is found to be 1409.93 mL ± 271.28 mL, while it is calculated as 1342.21 mL ± 231.24 mL using automatic and 1201.26 mL ± 258.13 mL using interactive methods, showing higher accuracy and less variation on behalf of automatic methods. The qualitative analysis of segmentation results showed significant diversity and complementarity enabling the idea of using ensembles to obtain superior results. The fusion of automatic methods reached 83.87 with majority voting and 86.20 using STAPLE that are only slightly less than fusion of all methods that achieved 86.70 (majority voting) and 88.74 (STAPLE). CONCLUSION Use of the new deep learning based automatic segmentation algorithms substantially increases the accuracy and repeatability for segmentation and volumetric measurements of liver. Fusion of automatic methods based on ensemble approaches exhibits best results almost without any additional time cost due to potential parallel execution of multiple models.
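A minimal sketch of the simpler of the two fusion strategies mentioned above, majority voting over binary masks; STAPLE, which additionally estimates per-method performance, is available in ITK/SimpleITK and is not re-implemented here.

```python
import numpy as np

# Sketch: majority-voting fusion of binary liver masks produced by several
# segmentation methods, evaluated against a reference with the Dice score.

def majority_vote(masks):
    """masks: iterable of binary arrays -> fused binary mask (strict majority)."""
    votes = np.stack(masks).astype(int)
    return (votes.sum(axis=0) * 2 > votes.shape[0]).astype(int)

rng = np.random.default_rng(8)
truth = np.zeros((48, 48), dtype=int); truth[10:38, 12:40] = 1
methods = [np.where(rng.random(truth.shape) < 0.08, 1 - truth, truth) for _ in range(5)]

fused = majority_vote(methods)
dice = 2 * (fused & truth).sum() / (fused.sum() + truth.sum())
print("Dice of fused mask vs. reference:", round(float(dice), 3))
```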
Collapse
Affiliation(s)
- A. Emre Kavur
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| | - Naciye Sinem Gezer
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| | - Mustafa Barış
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| | - Yusuf Şahin
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| | - Savaş Özkan
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| | - Bora Baydar
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| | - Ulaş Yüksel
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| | - Çağlar Kılıkçıer
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| | - Şahin Olut
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| | - Gözde Bozdağı Akar
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| | - Gözde Ünal
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| | - Oğuz Dicle
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| | - M. Alper Selver
- From the Graduate School of Natural and Applied Sciences (A.E.K., U.Y.), Dokuz Eylül University, İzmir, Turkey; Departments of Radiology (N.S.G., M.B., O.D.) and Electrical and Electronics Engineering (M.A.S. ), Dokuz Eylül University School of Medicine, İzmir, Turkey; Department of Computer Engineering (Y.Ş., Ş.O., G.Ü.), İstanbul Technical University, İstanbul, Turkey; Department of Electrical and Electronics Engineering (S.Ö., B.B., G.B.A.), Middle East Technical University, Ankara, Turkey; Department of Computer Engineering (Ç.K.), Uludağ University, Bursa, Turkey
| |
Collapse
|
42
|
Wang M, Li P. Label fusion method combining pixel greyscale probability for brain MR segmentation. Sci Rep 2019; 9:17987. [PMID: 31784630 PMCID: PMC6884484 DOI: 10.1038/s41598-019-54527-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2019] [Accepted: 11/13/2019] [Indexed: 11/08/2022] Open
Abstract
Multi-atlas-based segmentation (MAS) methods have demonstrated superior performance in the field of automatic image segmentation, and label fusion is an important part of MAS methods. In this paper, we propose a label fusion method that incorporates pixel greyscale probability information. The proposed method combines the advantages of label fusion methods based on sparse representation (SRLF) and weighted voting methods using patch similarity weights (PSWV) and introduces pixel greyscale probability information to improve the segmentation accuracy. We apply the proposed method to the segmentation of deep brain tissues in challenging 3D brain MR images from publicly available IBSR datasets, including images of the thalamus, hippocampus, caudate, putamen, pallidum and amygdala. The experimental results show that the proposed method has higher segmentation accuracy and robustness than the related methods. Compared with the state-of-the-art methods, the proposed method obtains the best putamen, pallidum and amygdala segmentation results and hippocampus and caudate segmentation results that are similar to those of the comparison methods.
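A toy sketch of patch-similarity weighted voting (the PSWV ingredient discussed above; parameter values and data are assumptions): each atlas votes for the centre voxel's label with a weight that decays with the intensity difference between its patch and the target patch.

```python
import numpy as np

# Sketch: patch-similarity weighted voting for a single target voxel.

def pswv_label(target_patch, atlas_patches, atlas_labels, h=100.0):
    """atlas_patches: (n_atlases, P); atlas_labels: (n_atlases,) centre-voxel labels."""
    ssd = ((atlas_patches - target_patch[None, :]) ** 2).sum(axis=1)
    w = np.exp(-ssd / h)                        # similarity weight per atlas
    w /= w.sum()
    scores = np.bincount(atlas_labels, weights=w)
    return int(scores.argmax())

rng = np.random.default_rng(9)
target_patch = np.full(27, 120.0) + 5 * rng.standard_normal(27)   # 3x3x3 patch, flattened
atlas_patches = np.concatenate([
    np.full((6, 27), 120.0),          # six atlases that resemble the target
    np.full((4, 27), 60.0),           # four that do not
]) + 5 * rng.standard_normal((10, 27))
atlas_labels = np.array([1] * 6 + [0] * 4)
print("fused label:", pswv_label(target_patch, atlas_patches, atlas_labels))
```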
Collapse
Affiliation(s)
- Monan Wang
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China.
| | - Pengcheng Li
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China
| |
Collapse
|
43
|
Schipaanboord B, Boukerroui D, Peressutti D, van Soest J, Lustberg T, Dekker A, Elmpt WV, Gooding MJ. An Evaluation of Atlas Selection Methods for Atlas-Based Automatic Segmentation in Radiotherapy Treatment Planning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2654-2664. [PMID: 30969918 DOI: 10.1109/tmi.2019.2907072] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Atlas-based automatic segmentation is used in radiotherapy planning to accelerate the delineation of organs at risk (OARs). Atlas selection has been proposed as a way to improve the accuracy and execution time of segmentation, assuming that the more similar the atlas is to the patient, the better the results will be. This paper presents an analysis of atlas selection methods in the context of radiotherapy treatment planning. For a range of commonly contoured OARs, a thorough comparison of a large class of typical atlas selection methods was performed. For this evaluation, clinically contoured CT images of the head and neck (N = 316) and thorax (N = 280) were used. The state-of-the-art intensity- and deformation-similarity-based atlas selection methods were found to compare poorly to perfect atlas selection. Counter-intuitively, atlas selection methods based on a fixed set of representative atlases outperformed atlas selection methods based on the patient image. This study suggests that atlas-based segmentation with currently available selection methods compares poorly to the potential best performance, hampering the clinical utility of atlas-based segmentation. Effective atlas selection remains an open challenge in atlas-based segmentation for radiotherapy planning.
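The sketch below shows one common form of patient-image-based atlas selection of the kind evaluated in studies like this: atlases are ranked by an intensity similarity measure against the target (here normalised mutual information) and the top-k are retained. The histogram binning, the choice of k and the toy data are illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch of intensity-similarity atlas ranking: score each atlas
# against the target with normalised mutual information, keep the top-k.
import numpy as np

def nmi(a, b, bins=32):
    """Normalised mutual information between two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

def select_atlases(target, atlases, k=5):
    """Return indices of the k atlases most similar to the target."""
    scores = [nmi(target, atlas) for atlas in atlases]
    return np.argsort(scores)[::-1][:k]

# Toy usage: atlas 0 is a noisy copy of the target, so it ranks first.
rng = np.random.default_rng(1)
target = rng.normal(size=(16, 16, 16))
atlases = [target + 0.05 * rng.normal(size=target.shape),
           rng.normal(size=target.shape),
           rng.normal(size=target.shape)]
print(select_atlases(target, atlases, k=2))
```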
Collapse
|
44
|
Haq R, Berry SL, Deasy JO, Hunt M, Veeraraghavan H. Dynamic multiatlas selection-based consensus segmentation of head and neck structures from CT images. Med Phys 2019; 46:5612-5622. [PMID: 31587300 DOI: 10.1002/mp.13854] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2018] [Revised: 09/11/2019] [Accepted: 09/16/2019] [Indexed: 12/11/2022] Open
Abstract
PURPOSE Manual delineation of head and neck (H&N) organ-at-risk (OAR) structures for radiation therapy planning is time consuming and highly variable. Therefore, we developed a dynamic multiatlas selection-based approach for fast and reproducible segmentation. METHODS Our approach dynamically selects and weights the appropriate number of atlases for weighted label fusion and generates segmentations and consensus maps indicating voxel-wise agreement between different atlases. Atlases were selected for a target as those whose alignment weight exceeded a threshold termed the dynamic atlas attention index. Alignment weights were computed either at the image level (global weighted voting, GWV) or at the structure level (structure weighted voting, SWV), using a normalized metric based on the sum of squared distances of computed tomography (CT) radiodensity and of modality-independent neighborhood descriptors (which extract edge information). Performance was compared using 77 H&N CT images from an internal Memorial Sloan-Kettering Cancer Center dataset (N = 45) and an external dataset (N = 32), with the Dice similarity coefficient (DSC), Hausdorff distance (HD), 95th percentile of HD, median of the maximum surface distance, and volume ratio error evaluated against expert delineation. Pairwise DSC comparisons of the proposed methods (GWV, SWV) versus the single best atlas (BA) and majority voting (MV) methods were performed using Wilcoxon rank-sum tests. RESULTS Both SWV and GWV methods produced significantly better segmentation accuracy than BA (P < 0.001) and MV (P < 0.001) for all OARs within both datasets. SWV generated the most accurate segmentations, with DSC of 0.88 for oral cavity, 0.85 for mandible, 0.84 for cord, 0.76 for brainstem and parotids, 0.71 for larynx, and 0.60 for submandibular glands. SWV's accuracy exceeded GWV's for submandibular glands (DSC = 0.60 vs 0.52, P = 0.019). CONCLUSIONS The contributed SWV and GWV methods generated more accurate automated segmentations than the other two multiatlas-based segmentation techniques. The consensus maps could be combined with segmentations to visualize voxel-wise consensus between atlases within OARs during manual review.
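A minimal sketch of the dynamic selection idea summarised above: per-atlas alignment weights are computed against the target, atlases whose weight exceeds a threshold (standing in for the dynamic atlas attention index) are kept, and the survivors contribute to a weighted vote plus a voxel-wise consensus map. The plain intensity-based distance, the threshold value and the toy data are assumptions; the published method additionally uses modality-independent neighborhood descriptors and a structure-level (SWV) variant.

```python
# Minimal sketch of dynamic atlas selection followed by weighted voting
# and a voxel-wise consensus map (global, GWV-like variant). Assumed
# shapes: target (X,Y,Z); atlas_images (A,X,Y,Z); atlas_labels binary.
import numpy as np

def dynamic_weighted_fusion(target, atlas_images, atlas_labels, tau=0.2):
    a = atlas_images.shape[0]
    # Global alignment weight per atlas: inverse mean squared intensity
    # distance to the target, normalised to sum to one.
    d = np.array([np.mean((atlas_images[i] - target) ** 2) for i in range(a)])
    w = 1.0 / (d + 1e-8)
    w /= w.sum()

    keep = w >= tau                      # atlases exceeding the threshold
    if not keep.any():                   # fall back to the single best atlas
        keep = np.array([w.argmax() == i for i in range(a)])
    w_sel = w[keep] / w[keep].sum()

    votes = np.tensordot(w_sel, atlas_labels[keep].astype(float), axes=1)
    segmentation = votes >= 0.5
    consensus = atlas_labels[keep].mean(axis=0)   # agreement between atlases
    return segmentation, consensus

# Toy usage with three noisy atlases of a small synthetic volume.
rng = np.random.default_rng(6)
tgt = rng.normal(size=(8, 8, 8))
imgs = np.stack([tgt + 0.1 * rng.normal(size=tgt.shape) for _ in range(3)])
labs = np.stack([(tgt > 0).astype(int) for _ in range(3)])
seg, cons = dynamic_weighted_fusion(tgt, imgs, labs)
print(seg.shape, cons.max())
```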
Collapse
Affiliation(s)
- Rabia Haq
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
| | - Sean L Berry
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
| | - Joseph O Deasy
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
| | - Margie Hunt
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
| | - Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
| |
Collapse
|
45
|
Jog A, Hoopes A, Greve DN, Van Leemput K, Fischl B. PSACNN: Pulse sequence adaptive fast whole brain segmentation. Neuroimage 2019; 199:553-569. [PMID: 31129303 PMCID: PMC6688920 DOI: 10.1016/j.neuroimage.2019.05.033] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2019] [Revised: 05/09/2019] [Accepted: 05/12/2019] [Indexed: 01/07/2023] Open
Abstract
With the advent of convolutional neural networks (CNNs), supervised learning methods are increasingly being used for whole brain segmentation. However, the large, manually annotated training dataset of labeled brain images required to train such supervised methods is frequently difficult to obtain or create. In addition, existing training datasets are generally acquired with a homogeneous magnetic resonance imaging (MRI) acquisition protocol, and CNNs trained on such datasets are unable to generalize to test data with different acquisition protocols. Modern neuroimaging studies and clinical trials are necessarily multi-center initiatives with a wide variety of acquisition protocols. Despite stringent protocol harmonization practices, it is very difficult to standardize the gamut of MRI parameters that affect image contrast across scanners, field strengths, receive coils and so on. In this paper we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input acquisition. Our approach relies on building approximate forward models of pulse sequences that produce a typical test image. For a given pulse sequence, we use its forward model to generate plausible, synthetic training examples that appear as if they were acquired in a scanner with that pulse sequence. Sampling over a wide variety of pulse sequences results in a wide variety of augmented training examples that help build an image-contrast-invariant model. Our method trains a single CNN that can segment input MRI images with acquisition parameters as disparate as T1-weighted and T2-weighted contrasts using only T1-weighted training data. The segmentations generated are highly accurate, with state-of-the-art results (overall Dice overlap = 0.94), a fast run time (≈45 s), and consistency across a wide range of acquisition protocols.
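To make the augmentation idea concrete, the sketch below uses a spoiled gradient-echo (FLASH) signal equation as an approximate forward model: given proton density, T1 and T2* maps, images with widely varying contrast are synthesised by sampling sequence parameters. The specific signal equation, parameter ranges and toy maps are illustrative assumptions and differ from the paper's own forward models.

```python
# Minimal sketch of contrast augmentation via an approximate MR forward
# model (spoiled gradient-echo / FLASH), used purely as an illustration.
import numpy as np

def flash_signal(pd, t1, t2s, tr, te, flip_deg):
    """Approximate spoiled gradient-echo signal from tissue parameter maps."""
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr / t1)
    return pd * np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1) * np.exp(-te / t2s)

def augment_contrasts(pd, t1, t2s, rng, n=4):
    """Generate n synthetic images with randomly sampled sequence parameters."""
    images = []
    for _ in range(n):
        tr = rng.uniform(10, 3000)     # repetition time, ms (assumed range)
        te = rng.uniform(2, 100)       # echo time, ms (assumed range)
        flip = rng.uniform(5, 90)      # flip angle, degrees (assumed range)
        images.append(flash_signal(pd, t1, t2s, tr, te, flip))
    return images

# Toy parameter maps (ms) with grey-matter-like values.
rng = np.random.default_rng(2)
shape = (8, 8, 8)
pd, t1, t2s = np.full(shape, 0.8), np.full(shape, 1200.0), np.full(shape, 60.0)
print(len(augment_contrasts(pd, t1, t2s, rng)))
```

Training labels stay fixed while the synthetic contrasts vary, which is what pushes the network towards a contrast-invariant representation.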
Collapse
Affiliation(s)
- Amod Jog
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, United States; Department of Radiology, Harvard Medical School, United States.
| | - Andrew Hoopes
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, United States
| | - Douglas N Greve
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, United States; Department of Radiology, Harvard Medical School, United States
| | - Koen Van Leemput
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, United States; Department of Health Technology, Technical University of Denmark, Denmark
| | - Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, United States; Department of Radiology, Harvard Medical School, United States; Division of Health Sciences and Technology and Engineering and Computer Science MIT, Cambridge, MA, United States
| |
Collapse
|
46
|
Muschelli J. Recommendations for Processing Head CT Data. Front Neuroinform 2019; 13:61. [PMID: 31551745 PMCID: PMC6738271 DOI: 10.3389/fninf.2019.00061] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Accepted: 08/22/2019] [Indexed: 11/13/2022] Open
Abstract
Many research applications of neuroimaging use magnetic resonance imaging (MRI), and recommendations for image analysis and standardized imaging pipelines exist for it. Clinical imaging, however, relies heavily on X-ray computed tomography (CT) scans for diagnosis and prognosis. Currently, there is only one image processing pipeline for head CT, and it focuses mainly on head CT data with lesions. We present open-source tools and a complete pipeline for processing CT data that focus on head CT but are applicable to most CT analyses. We describe going from raw DICOM data to a spatially normalized brain image, presenting a full example with code. Overall, we recommend anonymizing data with the Clinical Trials Processor, converting DICOM data to NIfTI using dcm2niix, using BET for brain extraction, and registering to a publicly available CT template for analysis.
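A minimal sketch of the recommended conversion, brain-extraction and registration steps driven from Python, assuming dcm2niix and FSL's bet and flirt are installed and on the PATH and that a CT template file is available locally. The paths, output names and option values are illustrative defaults, not a full reproduction of the published pipeline (which also covers anonymisation and CT-specific preprocessing before brain extraction).

```python
# Sketch of a head-CT processing chain: DICOM -> NIfTI -> brain extraction
# -> affine registration to a CT template. External tools are assumed to
# be installed; options shown are common basic ones, not tuned settings.
import subprocess
from pathlib import Path

def process_head_ct(dicom_dir, out_dir, ct_template):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    # 1. Convert DICOM to compressed NIfTI with dcm2niix.
    subprocess.run(["dcm2niix", "-z", "y", "-f", "ct", "-o", str(out),
                    str(dicom_dir)], check=True)
    ct = out / "ct.nii.gz"

    # 2. Skull-strip with FSL BET. CT typically needs a low fractional
    #    intensity threshold, and the published pipeline additionally
    #    windows Hounsfield units before extraction (omitted here).
    brain = out / "ct_brain.nii.gz"
    subprocess.run(["bet", str(ct), str(brain), "-f", "0.01"], check=True)

    # 3. Affine registration to a publicly available CT template.
    norm = out / "ct_brain_template_space.nii.gz"
    subprocess.run(["flirt", "-in", str(brain), "-ref", str(ct_template),
                    "-out", str(norm), "-omat", str(out / "ct_to_template.mat"),
                    "-dof", "12"], check=True)
    return norm
```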
Collapse
Affiliation(s)
- John Muschelli
- Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States
| |
Collapse
|
47
|
Hou B, Kang G, Zhang N, Liu K. Multi-target Interactive Neural Network for Automated Segmentation of the Hippocampus in Magnetic Resonance Imaging. Cognit Comput 2019. [DOI: 10.1007/s12559-019-09645-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
48
|
Pagnozzi AM, Fripp J, Rose SE. Quantifying deep grey matter atrophy using automated segmentation approaches: A systematic review of structural MRI studies. Neuroimage 2019; 201:116018. [PMID: 31319182 DOI: 10.1016/j.neuroimage.2019.116018] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2019] [Revised: 07/01/2019] [Accepted: 07/12/2019] [Indexed: 12/13/2022] Open
Abstract
The deep grey matter (DGM) nuclei of the brain play a crucial role in learning, behaviour, cognition, movement and memory. Although automated segmentation strategies can provide insight into the impact of multiple neurological conditions affecting these structures, such as Multiple Sclerosis (MS), Huntington's disease (HD), Alzheimer's disease (AD), Parkinson's disease (PD) and Cerebral Palsy (CP), there are a number of technical challenges limiting an accurate automated segmentation of the DGM: namely, the insufficient contrast of T1 sequences to completely identify the boundaries of these structures, as well as the presence of iso-intense white matter lesions or extensive tissue loss caused by brain injury. Therefore, in this systematic review, 269 eligible studies were analysed and compared to determine the optimal approaches for addressing these technical challenges. The automated approaches used among the reviewed studies fall into three broad categories: atlas-based approaches focusing on the accurate alignment of atlas priors, algorithmic approaches that utilise intensity information to a greater extent, and learning-based approaches that require an annotated training set. Studies that utilise freely available software packages such as FIRST, FreeSurfer and LesionTOADS were also eligible, and their performance was compared. Overall, deep learning approaches achieved the best performance; however, these strategies are currently hampered by the lack of large-scale annotated data. Improving model generalisability to new datasets could be achieved in future studies with data augmentation and transfer learning. Multi-atlas approaches provided the second-best performance overall and may be utilised to construct a "silver standard" annotated training set for deep learning. To address the technical challenges, robustness to injury can be improved by using multiple channels, by using highly elastic diffeomorphic transformations such as LDDMM, and by following atlas-based approaches with an intensity-driven refinement of the segmentation, as has been done with Expectation-Maximisation (EM) and level-set methods. Accounting for potential lesions should be achieved with a separate lesion segmentation approach, as in LesionTOADS. Finally, to address the issue of limited contrast, R2*, T2* and QSM sequences could be used to better highlight the DGM due to its higher iron content. Future studies could additionally acquire these sequences by retaining the phase information from standard structural scans, or alternatively acquire them for only a training set, allowing models to learn the "improved" segmentation from T1 sequences alone.
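One of the strategies mentioned above, following an atlas-based segmentation with an intensity-driven refinement via Expectation-Maximisation, can be sketched as below: spatial atlas priors are combined with a per-class Gaussian intensity model whose parameters are re-estimated iteratively. The shapes, initialisation, convergence criterion and toy data are simplifying assumptions; real pipelines often add bias-field correction and spatial regularisation.

```python
# Minimal sketch of EM-based intensity refinement of an atlas prior.
import numpy as np

def em_refine(intensities, atlas_prior, n_iter=20):
    """intensities: (N,) voxel values; atlas_prior: (N, K) prior probabilities."""
    x, prior = intensities, atlas_prior
    # Initialise class means/variances from the prior-weighted intensities.
    mu = (prior * x[:, None]).sum(0) / prior.sum(0)
    var = (prior * (x[:, None] - mu) ** 2).sum(0) / prior.sum(0) + 1e-6

    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each voxel.
        lik = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = prior * lik
        resp /= resp.sum(1, keepdims=True) + 1e-12
        # M-step: update the Gaussian intensity model per class.
        w = resp.sum(0)
        mu = (resp * x[:, None]).sum(0) / w
        var = (resp * (x[:, None] - mu) ** 2).sum(0) / w + 1e-6
    return resp.argmax(1)   # refined hard segmentation

# Toy usage: two classes with overlapping intensities and a noisy prior.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
prior = np.vstack([np.tile([0.7, 0.3], (500, 1)), np.tile([0.3, 0.7], (500, 1))])
labels = em_refine(x, prior)
print(labels[:5], labels[-5:])
```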
Collapse
Affiliation(s)
- Alex M Pagnozzi
- CSIRO Health and Biosecurity, The Australian e-Health Research Centre, Brisbane, Australia.
| | - Jurgen Fripp
- CSIRO Health and Biosecurity, The Australian e-Health Research Centre, Brisbane, Australia
| | - Stephen E Rose
- CSIRO Health and Biosecurity, The Australian e-Health Research Centre, Brisbane, Australia
| |
Collapse
|
49
|
Huo Y, Xu Z, Xiong Y, Aboud K, Parvathaneni P, Bao S, Bermudez C, Resnick SM, Cutting LE, Landman BA. 3D whole brain segmentation using spatially localized atlas network tiles. Neuroimage 2019; 194:105-119. [PMID: 30910724 PMCID: PMC6536356 DOI: 10.1016/j.neuroimage.2019.03.041] [Citation(s) in RCA: 168] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2018] [Revised: 02/23/2019] [Accepted: 03/19/2019] [Indexed: 01/18/2023] Open
Abstract
Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, providing a non-invasive way of measuring brain regions from clinically acquired structural magnetic resonance imaging (MRI). Recently, deep convolutional neural networks (CNNs) have been applied to whole brain segmentation. However, restricted by current GPU memory, 2D methods, downsampling-based 3D CNN methods, and patch-based high-resolution 3D CNN methods have been the de facto standard solutions. Among CNN approaches, 3D patch-based high-resolution methods typically yield superior performance on detailed whole brain segmentation (>100 labels); however, their performance is still commonly inferior to state-of-the-art multi-atlas segmentation (MAS) methods, owing to the following challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches, and (2) limited manually traced whole brain volumes are available (typically fewer than 50) for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks (FCNs) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks were used in the SLANT method, in which each network learned contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrates multiple traditional medical image processing methods with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. The proposed method achieved superior performance compared with multi-atlas segmentation methods, while reducing the computational time from >30 h to 15 min. The method has been made available as open source (https://github.com/MASILab/SLANTbrainSeg).
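The core tiling idea can be sketched as follows: the (affinely aligned) volume is covered by a fixed grid of overlapping 3D tiles, each tile is passed to its own network, and overlapping predictions are averaged back in whole-volume space before taking the argmax. The `networks` argument below is a placeholder for the per-tile models, and the tile size, grid and class count are illustrative assumptions rather than the SLANT configuration.

```python
# Minimal sketch of spatially localized tiling: one model per tile
# location, overlapping predictions averaged back into volume space.
import numpy as np
from itertools import product

def tile_starts(dim, tile, n_tiles):
    """Evenly spaced (overlapping) start indices covering one axis."""
    if n_tiles == 1:
        return [0]
    return [round(i * (dim - tile) / (n_tiles - 1)) for i in range(n_tiles)]

def tiled_segment(volume, networks, tile=(48, 48, 48), grid=(3, 3, 3), n_classes=4):
    probs = np.zeros((n_classes,) + volume.shape)
    counts = np.zeros(volume.shape)
    starts = [tile_starts(volume.shape[a], tile[a], grid[a]) for a in range(3)]
    for idx, (x, y, z) in enumerate(product(*starts)):
        patch = volume[x:x + tile[0], y:y + tile[1], z:z + tile[2]]
        p = networks[idx](patch)          # one network per fixed location,
        probs[:, x:x + tile[0],           # returning (n_classes, *tile)
              y:y + tile[1], z:z + tile[2]] += p
        counts[x:x + tile[0], y:y + tile[1], z:z + tile[2]] += 1
    probs /= np.maximum(counts, 1)        # average overlapping predictions
    return probs.argmax(0)

# Toy usage with dummy "networks" that return uniform class probabilities.
vol = np.random.default_rng(4).normal(size=(96, 96, 96))
dummy = lambda patch: np.ones((4,) + patch.shape) / 4
seg = tiled_segment(vol, [dummy] * 27)
print(seg.shape)
```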
Collapse
Affiliation(s)
- Yuankai Huo
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA.
| | - Zhoubing Xu
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Yunxi Xiong
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Katherine Aboud
- Department of Special Education, Vanderbilt University, Nashville, TN, USA
| | - Prasanna Parvathaneni
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Shunxing Bao
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Camilo Bermudez
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
| | - Susan M Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, MD, USA
| | - Laurie E Cutting
- Department of Special Education, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Pediatrics, Vanderbilt University, Nashville, TN, USA; Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN, USA
| | - Bennett A Landman
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA; Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN, USA; Institute of Imaging Science, Vanderbilt University, Nashville, TN, USA
| |
Collapse
|
50
|
Automatic Labeling of MR Brain Images Through the Hashing Retrieval Based Atlas Forest. J Med Syst 2019; 43:241. [PMID: 31227923 DOI: 10.1007/s10916-019-1385-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2019] [Accepted: 06/10/2019] [Indexed: 10/26/2022]
Abstract
Multi-atlas methods are among the most efficient and widely used approaches for automatic labeling; they use the prior information provided by expert-labeled images to guide the labeling of the target. However, most multi-atlas-based methods depend on registration, which may not provide correct information during label propagation. To address this issue, we designed a new automatic labeling method based on a hashing-retrieval atlas forest. The proposed method propagates labels without registration to reduce errors and constructs a target-oriented learning model to integrate information among the atlases. The method introduces a coarse classification strategy to preprocess the dataset, which retains the integrity of the dataset and reduces computing time. Furthermore, the method treats each voxel in the atlas as a sample and encodes these samples with hashing for fast sample retrieval. In the labeling stage, the method selects suitable samples through hashing learning and trains atlas forests by integrating information from the dataset. The trained model is then used to predict the labels of the target. Experimental results on two datasets illustrate that the proposed method is promising for the automatic labeling of MR brain images.
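A loose sketch of the retrieval idea summarised above: atlas voxels are represented by patch features, hashed with random projections so that similar samples share a bucket, and at prediction time only the samples retrieved from the target's bucket are used to train a small forest (scikit-learn's RandomForestClassifier here). The feature definition, number of hash bits, bucket fallback and forest settings are all assumptions for illustration, not the published algorithm.

```python
# Sketch of hashing-based sample retrieval feeding a forest classifier.
import numpy as np
from collections import defaultdict
from sklearn.ensemble import RandomForestClassifier

class HashedAtlasForest:
    def __init__(self, n_bits=8, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_bits = n_bits
        self.buckets = defaultdict(list)
        self.planes = None

    def _hash(self, feats):
        # Random-projection (LSH-style) binary code packed into an integer.
        bits = (feats @ self.planes.T) > 0
        return bits.astype(int) @ (1 << np.arange(self.n_bits))

    def fit(self, atlas_feats, atlas_labels):
        """atlas_feats: (N, D) patch features; atlas_labels: (N,)."""
        self.planes = self.rng.normal(size=(self.n_bits, atlas_feats.shape[1]))
        for key, feat, lab in zip(self._hash(atlas_feats), atlas_feats, atlas_labels):
            self.buckets[int(key)].append((feat, lab))
        return self

    def predict_one(self, feat):
        samples = self.buckets.get(int(self._hash(feat[None])[0]), [])
        if len(samples) < 2 or len({lab for _, lab in samples}) < 2:
            # Bucket too small or single-label: return the retrieved label (or 0).
            return samples[0][1] if samples else 0
        x = np.array([f for f, _ in samples])
        y = np.array([lab for _, lab in samples])
        forest = RandomForestClassifier(n_estimators=10).fit(x, y)
        return forest.predict(feat[None])[0]

# Toy usage with two well separated feature clusters.
rng = np.random.default_rng(5)
feats = np.vstack([rng.normal(0, 0.1, (200, 6)), rng.normal(5, 0.1, (200, 6))])
labels = np.array([0] * 200 + [1] * 200)
model = HashedAtlasForest().fit(feats, labels)
print(model.predict_one(feats[0]), model.predict_one(feats[-1]))
```

In a full method the per-bucket forests would be trained once after hashing rather than per query; the per-query training here only keeps the sketch short.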
Collapse
|