1
Lyu X, Duong MT, Xie L, de Flores R, Richardson H, Hwang G, Wisse LEM, DiCalogero M, McMillan CT, Robinson JL, Xie SX, Grossman M, Lee EB, Irwin DJ, Dickerson BC, Davatzikos C, Nasrallah IM, Yushkevich PA, Wolk DA, Das SR. Tau-Neurodegeneration mismatch reveals vulnerability and resilience to comorbidities in Alzheimer's continuum. medRxiv 2023:2023.02.12.23285594. [PMID: 36824762] [PMCID: PMC9949174] [DOI: 10.1101/2023.02.12.23285594]
Abstract
Variability in the relationship between tau-based neurofibrillary tangles (T) and degree of neurodegeneration (N) in Alzheimer's Disease (AD) is likely attributable to the non-specific nature of N, which is also modulated by factors such as other co-pathologies, age-related changes, and developmental differences. We studied this variability by partitioning patients within the Alzheimer's continuum into data-driven groups based on their regional T-N dissociation, which reflects the residuals after the effect of tau pathology is "removed". We found six groups displaying distinct spatial T-N mismatch and thickness patterns despite similar tau burden. Their T-N patterns resembled the neurodegeneration patterns of non-AD groups partitioned on the basis of z-scores of cortical thickness alone and were similarly associated with surrogates of non-AD factors. In an additional sample of individuals with antemortem imaging and autopsy, T-N mismatch was associated with TDP-43 co-pathology. Finally, the T-N mismatch approach was applied to a separate cohort to determine its ability to classify individual patients into these groups. These findings suggest that T-N mismatch may provide a personalized approach for determining non-AD factors associated with resilience/vulnerability to Alzheimer's disease.
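The core of the T-N mismatch idea lends itself to a simple illustration: regress a regional neurodegeneration measure on tau burden and treat the residuals as the mismatch signature used for data-driven grouping. The sketch below, with synthetic data and arbitrary region and cluster counts, is only a schematic of that idea, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): compute a regional T-N
# "mismatch" as the residual of thickness regressed on tau uptake, then
# cluster subjects by their residual patterns. All data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_subjects, n_regions = 200, 10
tau = rng.gamma(shape=2.0, scale=0.6, size=(n_subjects, n_regions))          # synthetic tau PET uptake
thickness = 3.0 - 0.4 * tau + rng.normal(0, 0.15, (n_subjects, n_regions))   # synthetic cortical thickness

# Region-wise regression of thickness on tau; residuals capture neurodegeneration
# not explained by local tau burden (negative residual = more atrophy than expected).
mismatch = np.empty_like(thickness)
for r in range(n_regions):
    model = LinearRegression().fit(tau[:, [r]], thickness[:, r])
    mismatch[:, r] = thickness[:, r] - model.predict(tau[:, [r]])

# Data-driven grouping of subjects by their spatial mismatch pattern.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(mismatch)
print(np.bincount(groups))
```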
2
de Flores R, Das SR, Xie L, Wisse LEM, Lyu X, Shah P, Yushkevich PA, Wolk DA. Medial Temporal Lobe Networks in Alzheimer's Disease: Structural and Molecular Vulnerabilities. J Neurosci 2022; 42:2131-2141. [PMID: 35086906] [PMCID: PMC8916768] [DOI: 10.1523/jneurosci.0949-21.2021]
Abstract
The medial temporal lobe (MTL) is connected to the rest of the brain through two main networks: the anterior-temporal (AT) and the posterior-medial (PM) systems. Given the crucial role of the MTL and these networks in the pathophysiology of Alzheimer's disease (AD), the present study aimed at (1) investigating whether MTL atrophy propagates specifically within the AT and PM networks, and (2) evaluating the vulnerability of these networks to AD proteinopathies. To do so, we used neuroimaging data acquired in human males and females in three distinct cohorts: (1) resting-state functional MRI (rs-fMRI) from the aging brain cohort (ABC) to define the AT and PM networks (n = 68); (2) longitudinal structural MRI from the Alzheimer's disease neuroimaging initiative (ADNI)GO/2 to highlight structural covariance patterns (n = 349); and (3) positron emission tomography (PET) data from ADNI3 to evaluate the networks' vulnerability to amyloid and tau (n = 186). Our results suggest that the atrophy of distinct MTL subregions propagates within the AT and PM networks in a dissociable manner. Brodmann area (BA)35 structurally covaried within the AT network while the parahippocampal cortex (PHC) covaried within the PM network. In addition, these networks are differentially associated with relative tau and amyloid burden, with higher tau levels in AT than in PM and higher amyloid levels in PM than in AT. Our results also suggest differences in the relative burden of tau species. The current results provide further support for the notion that two distinct MTL networks display differential alterations in the context of AD. These findings have important implications for disease spread and the cognitive manifestations of AD. SIGNIFICANCE STATEMENT The current study provides further support for the notion that two distinct medial temporal lobe (MTL) networks, i.e., the anterior-temporal (AT) and the posterior-medial (PM), display differential alterations in the context of Alzheimer's disease (AD). Importantly, neurodegeneration appears to occur within these networks in a dissociable manner marked by their covariance patterns. In addition, the AT and PM networks are also differentially associated with relative tau and amyloid burden, and perhaps differences in the relative burden of tau species [e.g., neurofibrillary tangles (NFTs) vs tau in neuritic plaques]. These findings, in the context of a growing literature consistent with the present results, have important implications for disease spread and the cognitive manifestations of AD in light of the differential cognitive processes ascribed to them.
Affiliation(s)
- Robin de Flores
- Department of Neurology, University of Pennsylvania, Philadelphia 19104, Pennsylvania
- Université de Caen Normandie, Institut National de la Santé et de la Recherche Médicale Unité Mixte de Recherche Scientifique (UMRS) Unité 1237, Caen 14000, France
- Sandhitsu R Das
- Penn Image Computing and Science Laboratory (PICSL), University of Pennsylvania, Philadelphia 19104, Pennsylvania
- Long Xie
- Penn Image Computing and Science Laboratory (PICSL), University of Pennsylvania, Philadelphia 19104, Pennsylvania
- Department of Radiology, University of Pennsylvania, Philadelphia 19104, Pennsylvania
- Laura E M Wisse
- Penn Image Computing and Science Laboratory (PICSL), University of Pennsylvania, Philadelphia 19104, Pennsylvania
- Department of Diagnostic Radiology, Lund University, Lund 22185, Sweden
- Xueying Lyu
- Department of Bioengineering, University of Pennsylvania, Philadelphia 19104, Pennsylvania
- Preya Shah
- Department of Bioengineering, University of Pennsylvania, Philadelphia 19104, Pennsylvania
- Paul A Yushkevich
- Penn Image Computing and Science Laboratory (PICSL), University of Pennsylvania, Philadelphia 19104, Pennsylvania
- David A Wolk
- Department of Neurology, University of Pennsylvania, Philadelphia 19104, Pennsylvania
3
Abstract
Segmentation of medical images using multiple atlases has recently gained immense attention due to its augmented robustness against variability across different subjects. These atlas-based methods typically comprise three steps: atlas selection, image registration, and finally label fusion. Image registration is one of the core steps in this process, the accuracy of which directly affects the final labeling performance. However, due to inter-subject anatomical variations, registration errors are inevitable. The aim of this paper is to develop a deep learning-based confidence estimation method to alleviate the potential effects of registration errors. We first propose a fully convolutional network (FCN) with residual connections to learn the relationship between the image patch pair (i.e., patches from the target subject and the atlas) and the related label confidence patch. With the obtained label confidence patch, we can identify the potential errors in the warped atlas labels and correct them. Then, we use two label fusion methods to fuse the corrected atlas labels. The proposed methods are validated on a publicly available dataset for hippocampus segmentation. Experimental results demonstrate that our proposed methods outperform the state-of-the-art segmentation methods.
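To make the correction-then-fusion step concrete, here is a minimal sketch for binary labels in which per-voxel confidences (random stand-ins for the output of a trained confidence network) are used to flip unreliable warped atlas labels before a confidence-weighted vote. It illustrates the idea only; it is not the paper's FCN.

```python
# Illustrative sketch of the correction + fusion step, assuming label confidences
# have already been produced by some trained model (random stand-ins here).
import numpy as np

rng = np.random.default_rng(1)
n_atlases, shape = 5, (32, 32, 32)

warped_labels = rng.integers(0, 2, size=(n_atlases,) + shape)   # warped atlas segmentations (binary)
confidence = rng.uniform(0, 1, size=(n_atlases,) + shape)       # stand-in for network output

# Step 1: correct likely registration errors; where the propagated label is judged
# unreliable, flip it (binary case).
corrected = np.where(confidence < 0.5, 1 - warped_labels, warped_labels)
corrected_conf = np.maximum(confidence, 1 - confidence)          # confidence of the corrected label

# Step 2: fuse the corrected labels with a confidence-weighted vote.
votes_fg = np.sum(corrected_conf * (corrected == 1), axis=0)
votes_bg = np.sum(corrected_conf * (corrected == 0), axis=0)
fused = (votes_fg > votes_bg).astype(np.uint8)
print(fused.shape, fused.mean())
```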
Affiliation(s)
- Hancan Zhu
- School of Mathematics Physics and Information, Shaoxing University, Shaoxing, 312000, China
- Ehsan Adeli
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, 94305, CA, USA
- Feng Shi
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, 27599, North Carolina, USA.
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea.
4
Das SR, Xie L, Wisse LEM, Vergnet N, Ittyerah R, Cui S, Yushkevich PA, Wolk DA. In vivo measures of tau burden are associated with atrophy in early Braak stage medial temporal lobe regions in amyloid-negative individuals. Alzheimers Dement 2019; 15:1286-1295. [PMID: 31495603] [PMCID: PMC6941656] [DOI: 10.1016/j.jalz.2019.05.009]
Abstract
INTRODUCTION It is unclear to what degree tau pathology in the medial temporal lobe (MTL), measured by 18F-flortaucipir positron emission tomography, relates to MTL subregional atrophy, and whether this relationship differs between amyloid-β-positive and amyloid-β-negative individuals. METHODS We analyzed the correlation of MTL 18F-flortaucipir uptake with MTL subregional atrophy measured with high-resolution magnetic resonance imaging in region-of-interest and regional thickness analyses, and determined the relationship between memory performance and positron emission tomography and magnetic resonance imaging measures. RESULTS Both groups showed strong correlations between 18F-flortaucipir uptake and atrophy, with similar spatial patterns. Effects in the rhinal cortex recapitulated Braak staging. Correlations of memory recall with atrophy and tracer uptake were observed. DISCUSSION Correlation patterns between tau burden and atrophy in the amyloid-β-negative group mimicking early Braak stages suggest that 18F-flortaucipir is sensitive to tau pathology in primary age-related tauopathy. Correlations of imaging measures with memory performance indicate that this pathology is associated with poorer cognition.
Affiliation(s)
- Sandhitsu R Das
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA; Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA; Penn Memory Center, University of Pennsylvania, Philadelphia, PA, USA; Penn Alzheimer's Disease Core Center, University of Pennsylvania, Philadelphia, PA, USA.
- Long Xie
- Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Laura E M Wisse
- Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA; Penn Memory Center, University of Pennsylvania, Philadelphia, PA, USA
- Nicolas Vergnet
- Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA
- Ranjit Ittyerah
- Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Salena Cui
- Jefferson University, Philadelphia, PA, USA
- Paul A Yushkevich
- Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA; Penn Alzheimer's Disease Core Center, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- David A Wolk
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA; Penn Memory Center, University of Pennsylvania, Philadelphia, PA, USA; Penn Alzheimer's Disease Core Center, University of Pennsylvania, Philadelphia, PA, USA
5
Chang C, Huang C, Zhou N, Li SX, Ver Hoef L, Gao Y. The bumps under the hippocampus. Hum Brain Mapp 2017; 39:472-490. [PMID: 29058349] [DOI: 10.1002/hbm.23856]
Abstract
A key morphological feature shown in every neuroanatomy textbook is the bumpy ridges on the inferior aspect of the hippocampus, which we refer to as hippocampal dentation. Like the folding of the cerebral cortex, hippocampal dentation allows for greater surface area in a confined space. However, across numerous published approaches to hippocampal segmentation and morphology analysis, virtually all 3D renderings of the hippocampus show the inferior surface as quite smooth or mildly irregular; we have rarely seen the characteristic bumpy structure on reconstructed 3D surfaces. The only exception is a 9.4T postmortem study (Yushkevich et al. [2009]: NeuroImage 44:385-398). An apparent question is, does this indicate that this specific morphological signature can only be captured using ultra high-resolution techniques? Or is such information buried in the data we commonly acquire, awaiting a computational technique that can extract and render it clearly? In this study, we propose an automatic and robust super-resolution technique that captures the fine-scale morphometric features of the hippocampus based on common 3T MR images. The method is validated on 9.4T ultra-high field images and then applied to 3T data sets. This method opens possibilities for future research on the hippocampus and other sub-cortical structural morphometry, correlating the degree of dentation with a range of diseases including epilepsy, Alzheimer's disease, and schizophrenia.
Affiliation(s)
- Cheng Chang
- Department of Electrical and Computer Engineering, Stony Brook University, Stony Brook, New York, 11794
- Chuan Huang
- Department of Radiology, Stony Brook University, Stony Brook, New York, 11794; Department of Psychiatry, Stony Brook University, Stony Brook, New York, 11794
- Naiyun Zhou
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, New York, 11794
- Shawn Xiang Li
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Lawrence Ver Hoef
- Department of Neurology, The University of Alabama at Birmingham, CIRC 312, Birmingham, Alabama, 35294; Epilepsy Center, The University of Alabama at Birmingham, CIRC 312, Birmingham, Alabama, 35294
- Yi Gao
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen, 518060, China; Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York, 11794
6
Zu C, Wang Z, Zhang D, Liang P, Shi Y, Shen D, Wu G. Robust multi-atlas label propagation by deep sparse representation. Pattern Recognition 2017; 63:511-517. [PMID: 27942077] [PMCID: PMC5144541] [DOI: 10.1016/j.patcog.2016.09.028]
Abstract
Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing labels of atlas image patches with similar anatomical structures. However, such an assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifact; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced such that the majority patterns can dominate the label fusion result over other minority patterns. The violation of the above basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first consider forming a label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patchwise residual information in different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. However, the representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns and also using all residual patterns across groups collaboratively to overcome the issue that some groups might lack certain variation patterns present in the target image patch. Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as basal ganglia and brainstem structures, compared to other counterpart label fusion methods.
Affiliation(s)
- Chen Zu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Zhengxia Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Peipeng Liang
- Department of Radiology, Xuanwu Hospital, Capital Medical University, Beijing 100053, China
- Yonghong Shi
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai 200032, China
- Shanghai Key Laboratory of Medical Image Computing and Computer-Assisted Intervention, Shanghai 200032, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
7
Song Y, Wu G, Bahrami K, Sun Q, Shen D. Progressive multi-atlas label fusion by dictionary evolution. Med Image Anal 2017; 36:162-171. [PMID: 27914302] [PMCID: PMC5239730] [DOI: 10.1016/j.media.2016.11.005]
Abstract
Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework to seek suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to the existing labeling methods for improving their label fusion performance, i.e., by extending their single-layer static dictionary to the multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results for the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary.
Affiliation(s)
- Yantao Song
- School of Computer Science & Technology, Nanjing University of Science and Technology, Nanjing, Jiangsu 210094, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA.
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
- Khosro Bahrami
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
- Quansen Sun
- School of Computer Science & Technology, Nanjing University of Science and Technology, Nanjing, Jiangsu 210094, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea.
8
Xing F, Prince JL, Landman BA. Investigation of Bias in Continuous Medical Image Label Fusion. PLoS One 2016; 11:e0155862. [PMID: 27258158] [PMCID: PMC4892597] [DOI: 10.1371/journal.pone.0155862]
Abstract
Image labeling is essential for analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms, both of which suffer from errors. The Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm for both discrete-valued and continuous-valued labels has been proposed to find the consensus fusion while simultaneously estimating rater performance. In this paper, we first show that the previously reported continuous STAPLE, in which bias and variance are used to represent rater performance, yields a maximum likelihood solution in which bias is indeterminate. We then analyze the major cause of the deficiency and evaluate two classes of auxiliary bias estimation processes, one that estimates the bias as part of the algorithm initialization and the other that uses a maximum a posteriori criterion with a priori probabilities on the rater bias. We compare the efficacy of six methods, three variants from each class, in simulations and through empirical human rater experiments. We comment on their properties, identify deficient methods, and propose effective methods as solutions.
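The bias indeterminacy can be seen with a few lines of simulation: if the hidden truth is shifted by a constant and every rater's bias is shifted by the opposite constant, the observations (and hence the likelihood) are unchanged. The snippet below is an illustrative toy model with synthetic raters, not the STAPLE implementation itself.

```python
# Toy illustration (synthetic data): shifting the hidden truth by c and all rater
# biases by -c leaves the observations, and therefore the likelihood, unchanged,
# so bias cannot be identified without extra constraints (e.g., an informed bias
# initialization or a prior on bias, as discussed above).
import numpy as np

rng = np.random.default_rng(2)
n_voxels, n_raters = 1000, 4
truth = rng.normal(0, 1, n_voxels)
bias = np.array([0.3, -0.1, 0.0, 0.5])
sigma = np.array([0.2, 0.3, 0.25, 0.4])

obs = truth[None, :] + bias[:, None] + rng.normal(0, 1, (n_raters, n_voxels)) * sigma[:, None]

def neg_log_lik(truth_hat, bias_hat):
    resid = obs - truth_hat[None, :] - bias_hat[:, None]
    return np.sum(0.5 * (resid / sigma[:, None]) ** 2 + np.log(sigma[:, None]))

c = 0.7  # any constant shift
print(neg_log_lik(truth, bias))
print(neg_log_lik(truth + c, bias - c))   # identical value: bias is not identifiable
```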
Affiliation(s)
- Fangxu Xing
- Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Boston, Massachusetts, United States of America
- Jerry L. Prince
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
- Bennett A. Landman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
- Department of Electrical Engineering, Vanderbilt University, Nashville, Tennessee, United States of America
9
Chen S, Quan H, Qin A, Yee S, Yan D. MR image-based synthetic CT for IMRT prostate treatment planning and CBCT image-guided localization. J Appl Clin Med Phys 2016; 17:236-245. [PMID: 27167281] [PMCID: PMC5690904] [DOI: 10.1120/jacmp.v17i3.6065]
Abstract
The purpose of this study was to propose and evaluate a method of creating a synthetic CT (S-CT) from MRI simulation for dose calculation and daily CBCT localization. A pair of MR and CT images was obtained on the same day from each of 10 prostate patients. The pair of MR and CT images was preregistered using deformable image registration (DIR). Using the corresponding displacement vector field (atlas-DVF), the CT image was deformed to the MR image to create an atlas MR-CT pair. Regions of interest (ROI) on the atlas MR-CT pair were delineated and used to create atlas-ROI masks. A 'leave-one-out' test (one pair of MR and CT was used as subject-MR and subject-CT for evaluation, and the remaining 9 pairs were in the atlas library) was performed. For a subject-MR, autosegmentation and DVFs were generated using DIR between the subject-MR and the 9 atlas-MRs. An S-CT was then generated using the corresponding 9 paired atlas-CTs, the 9 atlas-DVFs and the corresponding atlas-ROI masks. The total 10 S-CTs were evaluated using the Hounsfield unit (HU), the calculated dose distribution, and the auto bony registration to daily CBCT images with respect to the 10 subject-CTs. HU differences (mean ± STD) were (2.4 ± 25.23), (1.18 ± 39.49), (32.46 ± 81.9), (0.23 ± 40.13), and (3.74 ± 144.76) for prostate, bladder, rectal wall, soft tissue outside all ROIs, and bone, respectively. The discrepancy of dose-volume parameters calculated using the S-CT for treatment planning was small (≤ 0.22% with 95% confidence). The gamma pass rate (2% & 2 mm) was higher than 99.86% inside the PTV and 98.45% inside normal structures. Using the 10 S-CTs as the reference CT for daily CBCT localization achieved results similar to using the subject-CT. The translational vector differences were within 1.08 mm (0.37 ± 0.23 mm), and the rotational differences were within 1.1° in all three directions. An S-CT created from a simulation MR image using the proposed approach with the preconstructed atlas library can replace the planning CT for dose calculation and daily CBCT image guidance.
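The fusion step at the heart of atlas-based synthetic CT can be sketched in a few lines: given atlas MR/CT pairs already deformed into the subject-MR space, each atlas CT is weighted voxel-wise by local MR similarity and the weighted average yields the S-CT. The patch size and kernel width below are illustrative choices, all data are synthetic, and registration and ROI masking are not shown.

```python
# Hedged sketch of the fusion step only (registration not shown, data synthetic):
# weight each deformed atlas CT voxel-wise by local MR similarity and average.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)
n_atlases, shape = 9, (48, 48, 48)
subject_mr = rng.normal(0, 1, shape)
atlas_mr = subject_mr[None] + rng.normal(0, 0.3, (n_atlases,) + shape)   # deformed atlas MRs
atlas_ct = rng.normal(40, 200, (n_atlases,) + shape)                     # deformed atlas CTs (HU)

patch = 5          # local window for the similarity measure (illustrative)
h = 0.5            # kernel width controlling weight decay (illustrative)

weights = np.empty((n_atlases,) + shape)
for a in range(n_atlases):
    ssd = uniform_filter((subject_mr - atlas_mr[a]) ** 2, size=patch)    # local mean squared difference
    weights[a] = np.exp(-ssd / (2 * h ** 2))

weights /= weights.sum(axis=0, keepdims=True)                            # normalise across atlases
synthetic_ct = np.sum(weights * atlas_ct, axis=0)                        # voxel-wise weighted fusion
print(synthetic_ct.shape)
```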
10
Wachinger C, Fritscher K, Sharp G, Golland P. Contour-Driven Atlas-Based Segmentation. IEEE Trans Med Imaging 2015; 34:2492-2505. [PMID: 26068202] [PMCID: PMC4756595] [DOI: 10.1109/tmi.2015.2442753]
Abstract
We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images.
11
Sanroma G, Wu G, Gao Y, Thung KH, Guo Y, Shen D. A transversal approach for patch-based label fusion via matrix completion. Med Image Anal 2015; 24:135-148. [PMID: 26160394] [PMCID: PMC4701198] [DOI: 10.1016/j.media.2015.06.002]
Abstract
Recently, multi-atlas patch-based label fusion has received an increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge.
Affiliation(s)
- Gerard Sanroma
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, USA
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, USA
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, USA
- Kim-Han Thung
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, USA
- Yanrong Guo
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea.
12
Iglesias JE, Sabuncu MR. Multi-atlas segmentation of biomedical images: A survey. Med Image Anal 2015; 24:205-219. [PMID: 26201875] [PMCID: PMC4532640] [DOI: 10.1016/j.media.2015.06.012]
Abstract
Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing, et al. (2004), Klein, et al. (2005), and Heckemann, et al. (2006), is becoming one of the most widely-used and successful image segmentation techniques in biomedical applications. By manipulating and utilizing the entire dataset of "atlases" (training images that have been previously labeled, e.g., manually by an expert), rather than some model-based average representation, MAS has the flexibility to better capture anatomical variation, thus offering superior segmentation accuracy. This benefit, however, typically comes at a high computational cost. Recent advancements in computer hardware and image processing software have been instrumental in addressing this challenge and facilitated the wide adoption of MAS. Today, MAS has come a long way and the approach includes a wide array of sophisticated algorithms that employ ideas from machine learning, probabilistic modeling, optimization, and computer vision, among other fields. This paper presents a survey of published MAS algorithms and studies that have applied these methods to various biomedical problems. In writing this survey, we have three distinct aims. Our primary goal is to document how MAS was originally conceived, later evolved, and now relates to alternative methods. Second, this paper is intended to be a detailed reference of past research activity in MAS, which now spans over a decade (2003-2014) and entails novel methodological developments and application-specific solutions. Finally, our goal is to also present a perspective on the future of MAS, which, we believe, will be one of the dominant approaches in biomedical image segmentation.
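As a point of reference for the methods surveyed here, the fusion stage of the basic multi-atlas pipeline reduces to a per-voxel majority vote over the warped atlas label maps, as in this small synthetic-data sketch (registration and atlas selection omitted).

```python
# Baseline multi-atlas label fusion: per-voxel majority voting over warped atlas
# labels. Synthetic label maps stand in for real, registered atlases.
import numpy as np

rng = np.random.default_rng(4)
n_atlases, n_labels, shape = 7, 4, (32, 32, 32)
warped_labels = rng.integers(0, n_labels, size=(n_atlases,) + shape)

# Count votes per label at every voxel and pick the most frequent one.
votes = np.zeros((n_labels,) + shape, dtype=int)
for lab in range(n_labels):
    votes[lab] = np.sum(warped_labels == lab, axis=0)
segmentation = np.argmax(votes, axis=0)
print(np.bincount(segmentation.ravel(), minlength=n_labels))
```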
Affiliation(s)
- Mert R Sabuncu
- A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA.
13
Gao Y, Zhu L, Cates J, MacLeod RS, Bouix S, Tannenbaum A. A Kalman Filtering Perspective for Multiatlas Segmentation. SIAM J Imaging Sci 2015; 8:1007-1029. [PMID: 26807162] [PMCID: PMC4722821] [DOI: 10.1137/130933423]
Abstract
In multiatlas segmentation, one typically registers several atlases to the novel image, and their respective segmented label images are transformed and fused to form the final segmentation. In this work, we provide a new dynamical system perspective for multiatlas segmentation, inspired by the following fact: the transformation that aligns the current atlas to the novel image can not only be computed by direct registration but also be inferred from the transformation that aligns the previous atlas to the image together with the transformation between the two atlases. This process is similar to the global positioning system on a vehicle, which obtains its position both by querying the satellite and by using the previous location and velocity; neither answer in isolation is perfect. To combine the two pieces of information, a dynamical system scheme is crucial; for example, a Kalman filtering scheme is used. Accordingly, in this work, a Kalman multiatlas segmentation is proposed to stabilize the global/affine registration step. The contributions of this work are twofold. First, it provides a new dynamical-system perspective on standard independent multiatlas registrations, which is solved by Kalman filtering. Second, with very little extra computation, it can be combined with most existing multiatlas segmentation schemes for better registration/segmentation accuracy.
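The fusion of the two transform estimates can be illustrated with a standard Kalman update, simplifying transforms to additive parameter vectors: one estimate is "predicted" by chaining the previous atlas-to-target transform with the known inter-atlas transform, the other is "measured" by direct registration, and the update blends them by their uncertainties. This is a conceptual sketch with synthetic numbers, not the paper's implementation.

```python
# Conceptual sketch: fuse a chained (predicted) and a directly registered
# (measured) estimate of an atlas-to-target transform with a Kalman-style update.
import numpy as np

rng = np.random.default_rng(5)
true_params = np.array([2.0, -1.0, 0.5])          # e.g., translation parameters of atlas k

# Prediction from the dynamical model: previous estimate chained with the
# inter-atlas transform (simplified here to an additive noisy estimate).
predicted = true_params + rng.normal(0, 0.4, 3)
P = np.eye(3) * 0.4 ** 2                          # prediction covariance

# Measurement from direct registration of atlas k to the novel image.
measured = true_params + rng.normal(0, 0.3, 3)
R = np.eye(3) * 0.3 ** 2                          # measurement covariance

# Kalman update (identity observation model): blend by relative uncertainty.
K = P @ np.linalg.inv(P + R)
fused = predicted + K @ (measured - predicted)
P_post = (np.eye(3) - K) @ P

print("predicted:", predicted)
print("measured: ", measured)
print("fused:    ", fused)
```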
Affiliation(s)
- Yi Gao
- Department of Biomedical Informatics and Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, NY 11794
- Liangjia Zhu
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11790
- Joshua Cates
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT 84112
- Rob S. MacLeod
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT 84112
- Sylvain Bouix
- Department of Psychiatry, Harvard Medical School, Boston, MA 02215
- Allen Tannenbaum
- Department of Computer Science and Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, NY 11794
14
Akhondi-Asl A, Hoyte L, Lockhart ME, Warfield SK. A logarithmic opinion pool based STAPLE algorithm for the fusion of segmentations with associated reliability weights. IEEE Trans Med Imaging 2014; 33:1997-2009. [PMID: 24951681] [PMCID: PMC4264575] [DOI: 10.1109/tmi.2014.2329603]
Abstract
Pelvic floor dysfunction is common in women after childbirth and precise segmentation of magnetic resonance images (MRI) of the pelvic floor may facilitate diagnosis and treatment of patients. However, because of the complexity of its structures, manual segmentation of the pelvic floor is challenging and suffers from high inter and intra-rater variability of expert raters. Multiple template fusion algorithms are promising segmentation techniques for these types of applications, but they have been limited by imperfections in the alignment of templates to the target, and by template segmentation errors. A number of algorithms sought to improve segmentation performance by combining image intensities and template labels as two independent sources of information, carrying out fusion through local intensity weighted voting schemes. This class of approach is a form of linear opinion pooling, and achieves unsatisfactory performance for this application. We hypothesized that better decision fusion could be achieved by assessing the contribution of each template in comparison to a reference standard segmentation of the target image and developed a novel segmentation algorithm to enable automatic segmentation of MRI of the female pelvic floor. The algorithm achieves high performance by estimating and compensating for both imperfect registration of the templates to the target image and template segmentation inaccuracies. A local image similarity measure is used to infer a local reliability weight, which contributes to the fusion through a novel logarithmic opinion pooling. We evaluated our new algorithm in comparison to nine state-of-the-art segmentation methods and demonstrated our algorithm achieves the highest performance.
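The difference between the linear pooling criticized above and the logarithmic pooling adopted here is easy to see at a single voxel: the linear pool is a weighted arithmetic mean of the templates' label probabilities, while the logarithmic pool is a normalized weighted geometric mean. The reliability weights in this toy example are arbitrary stand-ins for the locally estimated template quality.

```python
# Toy comparison of linear vs. logarithmic opinion pooling at one voxel.
import numpy as np

# Per-template foreground probabilities and reliability weights (weights sum to 1).
p = np.array([0.9, 0.85, 0.2])          # third template disagrees (e.g., misregistered)
w = np.array([0.5, 0.4, 0.1])

linear_pool = np.sum(w * p)

# Logarithmic pool: normalized weighted geometric mean over both classes.
log_fg = np.sum(w * np.log(p))
log_bg = np.sum(w * np.log(1.0 - p))
log_pool = np.exp(log_fg) / (np.exp(log_fg) + np.exp(log_bg))

print(f"linear pool:      {linear_pool:.3f}")
print(f"logarithmic pool: {log_pool:.3f}")   # the two pools treat the dissenting template differently
```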
Affiliation(s)
- Alireza Akhondi-Asl
- Computational Radiology Laboratory, Department of Radiology, Children's Hospital, 300 Longwood Avenue, Boston, MA, 02115, USA
- Lennox Hoyte
- Department of Obstetrics and Gynecology, University of South Florida, 2 Tampa General Circle, 6th floor, Tampa, FL 33606, USA
- Mark E. Lockhart
- Department of Radiology, University of Alabama at Birmingham, 1802 6th Avenue South, Birmingham, AL 35233, USA
- Simon K. Warfield
- Computational Radiology Laboratory, Department of Radiology, Children's Hospital, 300 Longwood Avenue, Boston, MA, 02115, USA
15
Bai W, Shi W, Ledig C, Rueckert D. Multi-atlas segmentation with augmented features for cardiac MR images. Med Image Anal 2014; 19:98-109. [PMID: 25299433] [DOI: 10.1016/j.media.2014.09.005]
Abstract
Multi-atlas segmentation infers the target image segmentation by combining prior anatomical knowledge encoded in multiple atlases. It has been quite successfully applied to medical image segmentation in recent years, resulting in highly accurate and robust segmentation for many anatomical structures. However, to guide the label fusion process, most existing multi-atlas segmentation methods only utilise the intensity information within a small patch and may neglect other useful information such as gradient and contextual information (the appearance of surrounding regions). This paper proposes to combine the intensity, gradient and contextual information into an augmented feature vector and incorporate it into multi-atlas segmentation. It also explores an alternative to the K nearest neighbour (KNN) classifier for multi-atlas label fusion, by using the support vector machine (SVM) instead. Experimental results on a short-axis cardiac MR data set of 83 subjects have demonstrated that the accuracy of multi-atlas segmentation can be significantly improved by using the augmented feature vector. The mean Dice metric of the proposed segmentation framework is 0.81 for the left ventricular myocardium on this data set, compared to 0.79 given by the conventional multi-atlas patch-based segmentation (Coupé et al., 2011; Rousseau et al., 2011). A major contribution of this paper is that it demonstrates that the performance of non-local patch-based segmentation can be improved by using augmented features.
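The following sketch illustrates label fusion as patch classification with an augmented feature vector, in the spirit described above: intensity, a crude gradient feature, and a coarse context feature are concatenated and fed to an SVM. The data and feature definitions are synthetic illustrations, not those of the paper.

```python
# Illustrative sketch: augmented features (intensity + gradient + context) fed to
# an SVM acting as the label fusion classifier. All data are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)

def make_samples(n):
    intensity = rng.normal(0, 1, (n, 27))              # 3x3x3 intensity patch (flattened)
    gradient = np.abs(np.diff(intensity, axis=1))      # crude gradient feature
    context = intensity.mean(axis=1, keepdims=True)    # mean of a surrounding region (context)
    labels = (context[:, 0] + rng.normal(0, 0.3, n) > 0).astype(int)
    return np.hstack([intensity, gradient, context]), labels

X_train, y_train = make_samples(500)    # patches sampled from the warped atlases
X_test, y_test = make_samples(100)      # patches from the target image

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```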
Affiliation(s)
- Wenjia Bai
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, United Kingdom.
- Wenzhe Shi
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, United Kingdom
- Christian Ledig
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, United Kingdom
- Daniel Rueckert
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, United Kingdom
16
Wu G, Wang Q, Zhang D, Nie F, Huang H, Shen D. A generative probability model of joint label fusion for multi-atlas based brain segmentation. Med Image Anal 2014; 18:881-890. [PMID: 24315359] [PMCID: PMC4024092] [DOI: 10.1016/j.media.2013.10.013]
Abstract
Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus cannot prevent ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on simple patch similarity, thus not necessarily providing an optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, with the goal of labeling each point in the target image by the most representative atlas patches that also have the largest unanimity in labeling the underlying point correctly. Specifically, a sparsity constraint is imposed upon the label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risk of including misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies are further updated recursively based on the latest labeling results to correct possible labeling errors, which falls within the Expectation-Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on whole-brain parcellation and hippocampus segmentation. Promising labeling results have been achieved in comparison to the conventional patch-based labeling method, indicating the potential application of the proposed method in future clinical studies.
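Only the sparsity-constrained part of the weighting scheme described above is sketched below: the target patch is reconstructed from the atlas-patch dictionary with a non-negative L1-penalised regression, and the sparse coefficients serve as fusion weights. The pairwise dependency modeling and EM refinement are not reproduced, and data and parameters are synthetic illustrations.

```python
# Sketch of sparsity-constrained fusion weights: non-negative L1-penalised
# reconstruction of the target patch from atlas patches, then a weighted vote.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n_atlases, patch_dim = 20, 125                     # 5x5x5 patches from 20 atlases
atlas_patches = rng.normal(0, 1, (n_atlases, patch_dim))
atlas_labels = rng.integers(0, 2, n_atlases)       # centre-voxel label of each atlas patch
target_patch = 0.6 * atlas_patches[3] + 0.4 * atlas_patches[11] + rng.normal(0, 0.05, patch_dim)

# Non-negative sparse coding of the target patch in the atlas-patch dictionary.
lasso = Lasso(alpha=0.01, positive=True, fit_intercept=False, max_iter=10000)
lasso.fit(atlas_patches.T, target_patch)
w = lasso.coef_
w = w / w.sum() if w.sum() > 0 else np.ones(n_atlases) / n_atlases

# Weighted vote with the sparse weights.
p_foreground = np.sum(w * (atlas_labels == 1))
print("selected atlases:", np.flatnonzero(w > 1e-6), " p(label=1) =", round(p_foreground, 3))
```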
Affiliation(s)
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA
- Qian Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; Department of Computer Science, University of North Carolina at Chapel Hill, USA
- Daoqiang Zhang
- Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Feiping Nie
- Department of Computer Science and Engineering, University of Texas, Arlington, USA
- Heng Huang
- Department of Computer Science and Engineering, University of Texas, Arlington, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA.
17
Yan Z, Zhang S, Liu X, Metaxas DN, Montillo A. Accurate Whole-Brain Segmentation for Alzheimer's Disease Combining an Adaptive Statistical Atlas and Multi-atlas. Medical Computer Vision: Large Data in Medical Imaging. Third International MICCAI Workshop (MCV 2013), Revised Selected Papers 2014; 8331:65-73. [PMID: 31723945] [PMCID: PMC6853627] [DOI: 10.1007/978-3-319-05530-5_7]
Abstract
Accurate segmentation of whole brain MR images including the cortex, white matter and subcortical structures is challenging due to inter-subject variability and the complex geometry of brain anatomy. However, a precise solution would enable accurate, objective measurement of structure volumes for disease quantification. Our contribution is three-fold. First, we construct an adaptive statistical atlas that combines structure-specific relaxation and spatially varying adaptivity. Second, we integrate an isotropic pairwise class-specific MRF model of label connectivity. Together these permit precise control over adaptivity, allowing many structures to be segmented simultaneously with superior accuracy. Third, we develop a framework combining the improved adaptive statistical atlas with a multi-atlas method, which achieves simultaneous accurate segmentation of the cortex, ventricles, and sub-cortical structures in severely diseased brains, a feat not attained in [18]. We test the proposed method on 46 brains, including 28 diseased brains with Alzheimer's and 18 healthy brains. Our proposed method yields higher accuracy than state-of-the-art approaches on both healthy and diseased brains.
Affiliation(s)
- Zhennan Yan
- CBIM, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
- Shaoting Zhang
- CBIM, Rutgers, The State University of New Jersey, Piscataway, NJ, USA
18
19
Hao Y, Wang T, Zhang X, Duan Y, Yu C, Jiang T, Fan Y. Local label learning (LLL) for subcortical structure segmentation: application to hippocampus segmentation. Hum Brain Mapp 2013; 35:2674-2697. [PMID: 24151008] [DOI: 10.1002/hbm.22359]
Abstract
Automatic and reliable segmentation of subcortical structures is an important but difficult task in quantitative brain image analysis. Multi-atlas based segmentation methods have attracted great interest due to their promising performance. Under the multi-atlas based segmentation framework, using deformation fields generated for registering atlas images onto a target image to be segmented, labels of the atlases are first propagated to the target image space and then fused to get the target image segmentation based on a label fusion strategy. While many label fusion strategies have been developed, most of these methods adopt predefined weighting models that are not necessarily optimal. In this study, we propose a novel local label learning strategy to estimate the target image's segmentation label using statistical machine learning techniques. In particular, we use an L1-regularized support vector machine (SVM) with a k nearest neighbor (kNN) based training sample selection strategy to learn a classifier for each target image voxel from its neighboring voxels in the atlases, based on both image intensity and texture features. Our method has produced segmentation results consistently better than state-of-the-art label fusion methods in validation experiments on hippocampal segmentation of over 100 MR images obtained from publicly available and in-house datasets. Volumetric analysis has also demonstrated the capability of our method in detecting hippocampal volume changes due to Alzheimer's disease.
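A minimal version of the local label learning step for a single target voxel might look like the sketch below: the k most similar atlas patches are selected by nearest-neighbour search and an L1-regularised linear SVM is trained on them to predict the voxel's label. The features and data are synthetic stand-ins for the intensity and texture features described above.

```python
# Sketch of local label learning for one target voxel: kNN training sample
# selection followed by an L1-regularised linear SVM. Synthetic features.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import LinearSVC

rng = np.random.default_rng(8)
n_candidates, n_features, k = 2000, 60, 200
atlas_feats = rng.normal(0, 1, (n_candidates, n_features))     # patches from neighbouring atlas voxels
atlas_labels = (atlas_feats[:, 0] + 0.5 * atlas_feats[:, 1] > 0).astype(int)
target_feat = rng.normal(0, 1, (1, n_features))                # feature vector of the target voxel

# kNN-based training sample selection.
nn = NearestNeighbors(n_neighbors=k).fit(atlas_feats)
_, idx = nn.kneighbors(target_feat)
X_local, y_local = atlas_feats[idx[0]], atlas_labels[idx[0]]

# L1-regularised SVM trained locally for this voxel.
clf = LinearSVC(penalty="l1", dual=False, C=0.5, max_iter=5000).fit(X_local, y_local)
print("predicted label:", int(clf.predict(target_feat)[0]))
print("non-zero features:", int(np.sum(clf.coef_ != 0)))
```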
Affiliation(s)
- Yongfu Hao
- Brainnetome Center, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
20
Akhondi-Asl A, Warfield SK. Simultaneous truth and performance level estimation through fusion of probabilistic segmentations. IEEE Trans Med Imaging 2013; 32:1840-1852. [PMID: 23744673] [PMCID: PMC3788853] [DOI: 10.1109/tmi.2013.2266258]
Abstract
Recent research has demonstrated that improved image segmentation can be achieved by multiple template fusion utilizing both label and intensity information. However, intensity weighted fusion approaches use local intensity similarity as a surrogate measure of local template quality for predicting target segmentation and do not seek to characterize template performance. This limits both the usefulness and accuracy of these techniques. Our work here was motivated by the observation that the local intensity similarity is a poor surrogate measure for direct comparison of the template image with the true image target segmentation. Although the true image target segmentation is not available, a high quality estimate can be inferred, and this in turn allows a principled estimate to be made of the local quality of each template at contributing to the target segmentation. We developed a fusion algorithm that uses probabilistic segmentations of the target image to simultaneously infer a reference standard segmentation of the target image and the local quality of each probabilistic segmentation. The concept of comparing templates to a hidden reference standard segmentation enables accurate assessments of the contribution of each template to inferring the target image segmentation to be made, and in practice leads to excellent target image segmentation. We have used the new algorithm for the multiple-template-based segmentation and parcellation of magnetic resonance images of the brain. Intensity and label map images of each one of the aligned templates are used to train a local Gaussian mixture model based classifier. Then, each classifier is used to compute the probabilistic segmentations of the target image. Finally, the generated probabilistic segmentations are fused together using the new fusion algorithm to obtain the segmentation of the target image. We evaluated our method in comparison to other state-of-the-art segmentation methods. We demonstrated that our new fusion algorithm has higher segmentation performance than these methods.
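For context, the classic discrete STAPLE iteration that this line of work builds on alternates between estimating the hidden consensus segmentation and each input segmentation's sensitivity and specificity; a binary-label sketch is given below. The probabilistic-segmentation inputs and locally trained Gaussian mixture classifiers described above are not reproduced here.

```python
# Sketch of the classic binary STAPLE EM loop on simulated rater decisions.
import numpy as np

rng = np.random.default_rng(9)
n_raters, n_voxels = 5, 5000
truth = (rng.uniform(size=n_voxels) < 0.3).astype(int)                    # hidden ground truth
sens_true = rng.uniform(0.7, 0.95, n_raters)                              # simulated rater qualities
spec_true = rng.uniform(0.8, 0.99, n_raters)
D = np.array([np.where(truth == 1,
                       rng.uniform(size=n_voxels) < s,
                       rng.uniform(size=n_voxels) > q).astype(int)
              for s, q in zip(sens_true, spec_true)])

prior = D.mean()                      # prior probability of foreground
p = np.full(n_raters, 0.9)            # sensitivity estimates
q = np.full(n_raters, 0.9)            # specificity estimates

for _ in range(30):
    # E-step: posterior probability that each voxel is foreground.
    a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
    b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
    W = a / (a + b)
    # M-step: update each rater's performance parameters.
    p = (D * W).sum(axis=1) / W.sum()
    q = ((1 - D) * (1 - W)).sum(axis=1) / (1 - W).sum()

consensus = (W >= 0.5).astype(int)
print("estimated sensitivities:", np.round(p, 3))
print("agreement with truth:   ", np.mean(consensus == truth))
```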
Affiliation(s)
- Alireza Akhondi-Asl
- Computational Radiology Laboratory, Department of Radiology, Children’s Hospital, 300 Longwood Avenue, Boston, MA, 02115, USA
- Simon K. Warfield
- Computational Radiology Laboratory, Department of Radiology, Children’s Hospital, 300 Longwood Avenue, Boston, MA, 02115, USA
21
Evaluation of group-specific, whole-brain atlas generation using Volume-based Template Estimation (VTE): application to normal and Alzheimer's populations. Neuroimage 2013; 84:406-419. [PMID: 24051356] [DOI: 10.1016/j.neuroimage.2013.09.011]
Abstract
MRI-based human brain atlases, which serve as a common coordinate system for image analysis, play an increasingly important role in our understanding of brain anatomy, image registration, and segmentation. Study-specific brain atlases are often obtained from one of the subjects in a study or by averaging the images of all participants after linear or non-linear registration. The latter approach has the advantage of providing an unbiased anatomical representation of the study population. But, the image contrast is influenced by both inherent MR contrasts and residual anatomical variability after the registration; in addition, the topology of the brain structures cannot reliably be preserved. In this study, we demonstrated a population-based template-creation approach, which is based on Bayesian template estimation on a diffeomorphic random orbit model. This approach attempts to define a population-representative template without the cross-subject intensity averaging; thus, the topology of the brain structures is preserved. It has been tested for segmented brain structures, such as the hippocampus, but its validity on whole-brain MR images has not been examined. This paper validates and evaluates this atlas generation approach, i.e., Volume-based Template Estimation (VTE). Using datasets from normal subjects and Alzheimer's patients, quantitative measurements of sub-cortical structural volumes, metric distance, displacement vector, and Jacobian were examined to validate the group-averaged shape features of the VTE. In addition to the volume-based quantitative analysis, the preserved brain topology of the VTE allows surface-based analysis within the same atlas framework. This property was demonstrated by analyzing the registration accuracy of the pre- and post-central gyri. The proposed method achieved registration accuracy within 1mm for these population-preserved cortical structures in an elderly population.
22
Bai W, Shi W, O'Regan DP, Tong T, Wang H, Jamil-Copley S, Peters NS, Rueckert D. A probabilistic patch-based label fusion model for multi-atlas segmentation with registration refinement: application to cardiac MR images. IEEE Trans Med Imaging 2013; 32:1302-1315. [PMID: 23568495] [DOI: 10.1109/tmi.2013.2256922]
Abstract
The evaluation of ventricular function is important for the diagnosis of cardiovascular diseases. It typically involves measurement of the left ventricular (LV) mass and LV cavity volume. Manual delineation of the myocardial contours is time-consuming and dependent on the subjective experience of the expert observer. In this paper, a multi-atlas method is proposed for cardiac magnetic resonance (MR) image segmentation. The proposed method is novel in two aspects. First, it formulates a patch-based label fusion model in a Bayesian framework. Second, it improves image registration accuracy by utilizing label information, which leads to improvement of segmentation accuracy. The proposed method was evaluated on a cardiac MR image set of 28 subjects. The average Dice overlap metric of our segmentation is 0.92 for the LV cavity, 0.89 for the right ventricular cavity and 0.82 for the myocardium. The results show that the proposed method is able to provide accurate information for clinical diagnosis.
Collapse
Affiliation(s)
- Wenjia Bai
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, SW7 2RH London, UK
Collapse
|
23
|
Wang H, Suh JW, Das SR, Pluta JB, Craige C, Yushkevich PA. Multi-Atlas Segmentation with Joint Label Fusion. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2013; 35:611-23. [PMID: 22732662 PMCID: PMC3864549 DOI: 10.1109/tpami.2012.143] [Citation(s) in RCA: 510] [Impact Index Per Article: 42.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Multi-atlas segmentation is an effective approach for automatically labeling objects of interest in biomedical images. In this approach, multiple expert-segmented example images, called atlases, are registered to a target image, and the deformed atlas segmentations are combined using label fusion. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity has been particularly successful. However, one limitation of these strategies is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this limitation, we propose a new solution to the label fusion problem in which weighted voting is formulated in terms of minimizing the total expectation of labeling error, and in which the pairwise dependency between atlases is explicitly modeled as the joint probability of two atlases making a segmentation error at a voxel. This probability is approximated using the intensity similarity between a pair of atlases and the target image in the neighborhood of each voxel. We validate our method on two medical image segmentation problems: hippocampus segmentation and hippocampus subfield segmentation in magnetic resonance (MR) images. For both problems, we show consistent and significant improvement over label fusion strategies that assign atlas weights independently.
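A minimal per-voxel sketch of the weighted-voting scheme described here is given below: the pairwise error matrix is approximated from products of atlas-target intensity differences over a patch, and the weights minimizing the expected labeling error are obtained from its inverse. The patch handling, exponent, and regularization term are illustrative choices rather than the authors' exact settings.

```python
import numpy as np

def joint_fusion_weights(atlas_patches, target_patch, beta=2.0, eps=1e-6):
    """Voting weights for one voxel, given n registered atlases.

    atlas_patches: (n, p) intensity patches around the voxel, one row per atlas
    target_patch:  (p,) target intensity patch at the same location
    """
    # Pairwise dependency: products of absolute atlas-target differences
    diff = np.abs(atlas_patches - target_patch) ** beta        # (n, p)
    M = diff @ diff.T                                          # (n, n)
    M += eps * np.trace(M) * np.eye(len(M))                    # condition the inverse
    ones = np.ones(len(M))
    w = np.linalg.solve(M, ones)                               # minimize expected error
    return w / w.sum()

def fuse_labels(weights, atlas_labels, n_labels):
    """Weighted vote over the candidate labels proposed by each atlas at the voxel."""
    votes = np.zeros(n_labels)
    for w, lab in zip(weights, atlas_labels):
        votes[lab] += w
    return votes.argmax()
```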
Collapse
|
24
|
Abstract
We present a new fusion algorithm for the segmentation and parcellation of magnetic resonance (MR) images of the brain. Our algorithm is a parametric empirical Bayesian extension of the STAPLE algorithm that uses the observations to accurately estimate the prior distribution of the hidden ground truth with an expectation-maximization (EM) algorithm. We use the IBSR dataset to evaluate the fusion algorithm, segmenting 128 principal gray and white matter structures of the brain with our method and with eight other state-of-the-art algorithms from the literature. Our prior-distribution estimation strategy improves the accuracy of the fusion, and the new algorithm shows superior performance compared with the other state-of-the-art fusion methods.
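For context, the sketch below shows the classical binary STAPLE EM loop that the described method builds on; the parametric empirical Bayesian estimation of the prior, which is the contribution here, is not reproduced, and the fixed scalar prior is a simplification.

```python
import numpy as np

def staple_binary(D, prior=0.5, n_iter=50):
    """Classical binary STAPLE: D is (K, N) with K raters' 0/1 labels for N voxels.

    Returns the per-voxel posterior probability of the foreground label and the
    estimated sensitivity/specificity of each rater.
    """
    K, N = D.shape
    p = np.full(K, 0.9)   # sensitivities
    q = np.full(K, 0.9)   # specificities
    for _ in range(n_iter):
        # E-step: posterior probability that the true label is 1 at each voxel
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's performance parameters
        p = (D * W).sum(axis=1) / (W.sum() + 1e-12)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + 1e-12)
    return W, p, q
```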
Collapse
|
25
|
Wachinger C, Sharp GC, Golland P. Contour-driven regression for label inference in atlas-based segmentation. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2013; 16:211-8. [PMID: 24505763 PMCID: PMC3935362 DOI: 10.1007/978-3-642-40760-4_27] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/19/2024]
Abstract
We present a novel method for inferring tissue labels in atlas-based image segmentation using Gaussian process regression. Atlas-based segmentation results in probabilistic label maps that serve as input to our method. We introduce a contour-driven prior distribution over label maps to incorporate image features of the input scan into the label inference problem. The mean function of the Gaussian process posterior distribution yields the MAP estimate of the label map and is used in the subsequent voting. We demonstrate improved segmentation accuracy when our approach is combined with two different patch-based segmentation techniques. We focus on the segmentation of parotid glands in CT scans of patients with head and neck cancer, which is important for radiation therapy planning.
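A toy one-dimensional sketch of the label-inference step, using scikit-learn's Gaussian process regression on an atlas-derived probability profile, is shown below; the contour-driven prior that is central to the method is not modeled here, and the data are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Treat the atlas-derived label probabilities as noisy observations of a latent
# label function and take the GP posterior mean as the MAP estimate used for voting.
x = np.linspace(0, 1, 40)[:, None]                   # voxel coordinates along a line
p_atlas = 1 / (1 + np.exp(-20 * (x[:, 0] - 0.5)))    # atlas-based probability profile
p_noisy = np.clip(p_atlas + 0.1 * np.random.randn(40), 0, 1)

gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(0.01), normalize_y=True)
gp.fit(x, p_noisy)
p_map = gp.predict(x)                                # posterior mean -> MAP label map
labels = (p_map > 0.5).astype(int)
```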
Collapse
Affiliation(s)
- Polina Golland
- Computer Science and Artificial Intelligence Lab, MIT, USA
Collapse
|
26
|
Hu S, Coupé P, Pruessner JC, Collins DL. Nonlocal regularization for active appearance model: Application to medial temporal lobe segmentation. Hum Brain Mapp 2012; 35:377-95. [PMID: 22987811 DOI: 10.1002/hbm.22183] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2012] [Revised: 07/24/2012] [Accepted: 07/25/2012] [Indexed: 01/18/2023] Open
Abstract
The human medial temporal lobe (MTL) is an important part of the limbic system, and its substructures play key roles in learning, memory, and neurodegeneration. The MTL includes the hippocampus (HC), amygdala (AG), parahippocampal cortex (PHC), entorhinal cortex, and perirhinal cortex, structures that are complex in shape and have low between-structure intensity contrast, making them difficult to segment manually in magnetic resonance images. This article presents a new segmentation method that combines active appearance modeling and patch-based local refinement to automatically segment specific substructures of the MTL, including the HC, AG, PHC, and entorhinal/perirhinal cortex, from MRI data. Appearance modeling, which relies on eigen-decomposition to analyze statistical variations in image intensity and shape across the study population, is used to capture the global shape characteristics of each structure of interest with a generative model. Patch-based local refinement, which uses nonlocal means to compare local image intensity properties, is then applied along the structure borders to improve structure delimitation. In this manner, nonlocal regularization and global shape constraints together allow more accurate segmentation of these structures. Validation experiments against manually defined labels demonstrate that the new segmentation method is computationally efficient, robust, and accurate. In a leave-one-out validation on 54 normal young adults, the method yielded a mean Dice κ of 0.87 for the HC, 0.81 for the AG, 0.73 for the anterior parts of the parahippocampal gyrus (entorhinal and perirhinal cortex), and 0.73 for the posterior parahippocampal gyrus.
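The eigen-decomposition underlying the appearance model can be sketched as a plain PCA over stacked shape (or intensity) vectors, as below; the AAM fitting and the patch-based nonlocal refinement stages are not shown, and the function names are illustrative.

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.95):
    """PCA shape/appearance model.

    shapes: (n_subjects, d) array of stacked landmark coordinates (or intensity
    samples). Returns the mean, the retained principal modes, and their variances.
    """
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var = S ** 2 / (len(shapes) - 1)
    # keep enough modes to explain the requested fraction of variance
    k = np.searchsorted(np.cumsum(var) / var.sum(), var_kept) + 1
    return mean, Vt[:k], var[:k]

def synthesize(mean, modes, coeffs):
    """Generate a new instance from mode coefficients b: x = mean + P b."""
    return mean + coeffs @ modes
```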
Collapse
Affiliation(s)
- Shiyan Hu
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
Collapse
|
27
|
Wang H, Yushkevich PA. Spatial Bias in Multi-Atlas Based Segmentation. CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS. IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION. WORKSHOPS 2012; 2012:909-916. [PMID: 23476901 PMCID: PMC3589983 DOI: 10.1109/cvpr.2012.6247765] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Multi-atlas segmentation has been widely applied in medical image analysis. With deformable registration, this technique transfers labels from pre-labeled atlases to unseen images. When deformable registration produces errors, label fusion that combines the results produced by multiple atlases is an effective way to reduce segmentation error. Among the existing label fusion strategies, similarity-weighted voting strategies with spatially varying weight distributions have been particularly successful. We show that weighted-voting-based label fusion produces a spatial bias that under-segments structures with convex shapes. The bias can be approximated as applying a spatial convolution to the ground-truth spatial label probability maps, where the convolution kernel combines the distribution of residual registration errors and the function producing the similarity-based voting weights. To reduce this bias, we apply a standard spatial deconvolution to the spatial probability maps obtained from weighted voting. In a brain image segmentation experiment, we demonstrate the spatial bias and show that our technique substantially reduces it.
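The bias-and-correction idea can be illustrated with a toy 2-D example: the true probability map of a convex structure is blurred by a stand-in voting kernel and then sharpened with a simple Wiener deconvolution. This is only a schematic sketch; the actual bias kernel combines registration-error and weight-function terms as described above, and the deconvolution method here is one generic choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wiener_deconvolve(blurred, psf, reg=1e-2):
    """Deconvolve a spatial probability map with a known blur kernel (Wiener filter)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + reg)
    return np.clip(np.real(np.fft.ifft2(F)), 0, 1)

# Toy demonstration: weighted voting acts like blurring the true probability map,
# which under-segments a convex (disk-shaped) structure at the 0.5 threshold.
yy, xx = np.mgrid[:64, :64]
truth = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)
psf = np.zeros((64, 64))
psf[32, 32] = 1.0
psf = gaussian_filter(psf, 2.0)                 # stand-in for the voting kernel
voted = gaussian_filter(truth, 2.0)             # biased probability map
restored = wiener_deconvolve(voted, psf)        # bias-corrected map
print((truth > 0.5).sum(), (voted > 0.5).sum(), (restored > 0.5).sum())
```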
Collapse
|
28
|
Wang H, Yushkevich PA. DEPENDENCY PRIOR FOR MULTI-ATLAS LABEL FUSION. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2012; 2012:892-895. [PMID: 24443676 DOI: 10.1109/isbi.2012.6235692] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Multi-atlas label fusion has been widely applied in medical image analysis. To reduce bias in label fusion, we previously proposed a joint label fusion technique that reduces the correlated errors produced by different atlases by considering the pairwise dependencies between them. Using image similarities computed from image patches to estimate these pairwise dependencies, we showed promising performance. To address the unreliability of relying purely on local image similarity for dependency estimation, we propose to improve the accuracy of the estimated dependencies by including empirical knowledge learned from the atlases in a leave-one-out strategy. We apply the new technique to segment the hippocampus from MRI and show a significant improvement over our initial results.
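One way to realize the leave-one-out learning of atlas dependencies is sketched below: each atlas in turn serves as the target, the co-occurrence of labeling errors among the remaining atlases is tallied, and the resulting empirical matrix is blended with the image-similarity estimate. The data layout and the linear blending are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

def empirical_dependency(propagated, truth):
    """Leave-one-out estimate of how often pairs of atlases err together.

    propagated: (n, n, V) array; propagated[i, j] is atlas i's segmentation
    warped onto atlas image j (the held-out 'target'), over V voxels.
    truth: (n, V) manual labels of each atlas image.
    """
    n, V = truth.shape
    M_emp = np.zeros((n, n))
    for j in range(n):                          # atlas j plays the target once
        err = (propagated[:, j] != truth[j]).astype(float)   # (n, V) error indicators
        err[j] = 0.0                            # the held-out atlas does not vote for itself
        M_emp += (err @ err.T) / V              # joint error frequency of each atlas pair
    return M_emp / n

def blended_dependency(M_image, M_emp, alpha=0.5):
    """Blend the patch-similarity estimate with the learned empirical prior."""
    return alpha * M_image + (1.0 - alpha) * M_emp
```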
Collapse
Affiliation(s)
- Hongzhi Wang
- Penn Image Computing and Science Lab, University of Pennsylvania
Collapse
|
29
|
Awate SP, Zhu P, Whitaker RT. How Many Templates Does It Take for a Good Segmentation?: Error Analysis in Multiatlas Segmentation as a Function of Database Size. MULTIMODAL BRAIN IMAGE ANALYSIS 2012; 7509:103-114. [PMID: 24501720 DOI: 10.1007/978-3-642-33530-3_9] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/18/2023]
Abstract
This paper proposes a novel formulation to model and analyze the statistical characteristics of some types of segmentation problems that are based on combining label maps / templates / atlases. Such segmentation-by-example approaches are quite powerful on their own for several clinical applications and they provide prior information, through spatial context, when combined with intensity-based segmentation methods. The proposed formulation models a class of multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of images. The paper presents a systematic analysis of the nonparametric estimation's convergence behavior (i.e. characterizing segmentation error as a function of the size of the multiatlas database) and shows that it has a specific analytic form involving several parameters that are fundamental to the specific segmentation problem (i.e. chosen anatomical structure, imaging modality, registration method, label-fusion algorithm, etc.). We describe how to estimate these parameters and show that several brain anatomical structures exhibit the trends determined analytically. The proposed framework also provides per-voxel confidence measures for the segmentation. We show that the segmentation error for large database sizes can be predicted using small-sized databases. Thus, small databases can be exploited to predict the database sizes required ("how many templates") to achieve "good" segmentations having errors lower than a specified tolerance. Such cost-benefit analysis is crucial for designing and deploying multiatlas segmentation systems.
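The cost-benefit use of the framework can be sketched as fitting an error-versus-database-size curve on a small database and extrapolating it, as below; the generic shifted power law and the error values are stand-ins, not the specific analytic form derived in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mean segmentation errors measured with small atlas databases.
n_atlases = np.array([2, 4, 6, 8, 10, 12])
error = np.array([0.150, 0.116, 0.101, 0.093, 0.088, 0.084])

def error_model(n, a, b, c):
    """Segmentation error vs. database size, modeled as a shifted power law."""
    return a * n ** (-b) + c

params, _ = curve_fit(error_model, n_atlases, error, p0=(0.2, 0.5, 0.05))
a, b, c = params

# Predict how many templates are needed to reach a target error tolerance.
target = 0.08
if target > c:
    n_needed = (a / (target - c)) ** (1.0 / b)
    print(f"~{np.ceil(n_needed):.0f} atlases predicted for error <= {target}")
else:
    print("target error below the fitted asymptote; unreachable under this model")
```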
Collapse
|
30
|
Sparse Patch-Based Label Fusion for Multi-Atlas Segmentation. MULTIMODAL BRAIN IMAGE ANALYSIS 2012. [DOI: 10.1007/978-3-642-33530-3_8] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
|