251. Commowick O, Akhondi-Asl A, Warfield SK. Estimating a reference standard segmentation with spatially varying performance parameters: local MAP STAPLE. IEEE Trans Med Imaging 2012; 31:1593-606. [PMID: 22562727] [PMCID: PMC3496174] [DOI: 10.1109/tmi.2012.2197406]
Abstract
We present a new algorithm, called local MAP STAPLE, to estimate from a set of multi-label segmentations both a reference standard segmentation and spatially varying performance parameters. It is based on a sliding window technique to estimate the segmentation and the segmentation performance parameters for each input segmentation. In order to allow for optimal fusion from the small amount of data in each local region, and to account for the possibility of labels not being observed in a local region of some (or all) input segmentations, we introduce prior probabilities for the local performance parameters through a new maximum a posteriori formulation of STAPLE. Further, we propose an expression to compute confidence intervals in the estimated local performance parameters. We carried out several experiments with local MAP STAPLE to characterize its performance and value for local segmentation evaluation. First, with simulated segmentations with known reference standard segmentation and spatially varying performance, we show that local MAP STAPLE performs better than both STAPLE and majority voting. Then we present evaluations with data sets from clinical applications. These experiments demonstrate that spatial adaptivity in segmentation performance is an important property to capture. We compared the local MAP STAPLE segmentations to STAPLE, and to previously published fusion techniques and demonstrate the superiority of local MAP STAPLE over other state-of-the-art algorithms.
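Since many of the methods in this list build on STAPLE, a minimal sketch of the classic global EM iteration for binary segmentations may help; local MAP STAPLE extends it with a sliding window and MAP priors on the performance parameters. The array shapes, initial values, and function names below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the global STAPLE EM iteration for binary segmentations.
import numpy as np

def staple_binary(D, prior=0.5, n_iter=50):
    """D: (R, n_voxels) boolean rater masks. Returns (W, p, q):
    posterior P(truth = 1) per voxel, rater sensitivities, rater specificities."""
    D = np.asarray(D, dtype=bool)
    R, n = D.shape
    p = np.full(R, 0.9)   # initial sensitivity of each rater (assumed value)
    q = np.full(R, 0.9)   # initial specificity of each rater (assumed value)
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is foreground.
        a = prior * np.prod(np.where(D, p[:, None], 1.0 - p[:, None]), axis=0)
        b = (1.0 - prior) * np.prod(np.where(D, 1.0 - q[:, None], q[:, None]), axis=0)
        W = a / np.maximum(a + b, 1e-12)
        # M-step: re-estimate performance parameters from the soft truth W.
        p = (D * W).sum(axis=1) / np.maximum(W.sum(), 1e-12)
        q = ((~D) * (1.0 - W)).sum(axis=1) / np.maximum((1.0 - W).sum(), 1e-12)
    return W, p, q
```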
252. Reuter M, Schmansky NJ, Rosas HD, Fischl B. Within-subject template estimation for unbiased longitudinal image analysis. Neuroimage 2012; 61:1402-18. [PMID: 22430496] [PMCID: PMC3389460] [DOI: 10.1016/j.neuroimage.2012.02.084]
Abstract
Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same, as removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing at each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects.
Affiliation(s)
- Martin Reuter
- Massachusetts General Hospital/Harvard Medical School, Boston, MA, USA.
253. Wang H, Yushkevich PA. Spatial Bias in Multi-Atlas Based Segmentation. Proc IEEE Conf Comput Vis Pattern Recognit Workshops 2012; 2012:909-916. [PMID: 23476901] [PMCID: PMC3589983] [DOI: 10.1109/cvpr.2012.6247765]
Abstract
Multi-atlas segmentation has been widely applied in medical image analysis. With deformable registration, this technique realizes label transfer from pre-labeled atlases to unknown images. When deformable registration produces errors, label fusion, which combines the results produced by multiple atlases, is an effective way to reduce segmentation errors. Among the existing label fusion strategies, similarity-weighted voting strategies with spatially varying weight distributions have been particularly successful. We show that weighted-voting-based label fusion produces a spatial bias that under-segments structures with convex shapes. The bias can be approximated as applying spatial convolution to the ground truth spatial label probability maps, where the convolution kernel combines the distribution of residual registration errors and the function producing similarity-based voting weights. To reduce this bias, we apply a standard spatial deconvolution to the spatial probability maps obtained from weighted voting. In a brain image segmentation experiment, we demonstrate the spatial bias and show that our technique substantially reduces it.
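To make the de-biasing step concrete, the sketch below treats a 2D weighted-voting probability map as the true map blurred by a Gaussian kernel and sharpens it with frequency-domain Wiener deconvolution. The kernel shape, bandwidth, and regularization constant are assumptions for illustration; the paper's actual kernel estimate and deconvolution may differ.

```python
# Illustrative Wiener deconvolution of a voted label-probability map.
import numpy as np

def gaussian_kernel_2d(shape, sigma):
    """Centered 2-D Gaussian kernel with the same shape as the image."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    k = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def wiener_deconvolve(prob_map, sigma=2.0, eps=1e-2):
    """Sharpen a weighted-voting probability map by undoing an assumed blur."""
    k = np.fft.ifftshift(gaussian_kernel_2d(prob_map.shape, sigma))
    K = np.fft.fft2(k)
    P = np.fft.fft2(prob_map)
    # Wiener filter: conj(K) / (|K|^2 + eps) regularizes small frequencies.
    sharpened = np.real(np.fft.ifft2(P * np.conj(K) / (np.abs(K) ** 2 + eps)))
    return np.clip(sharpened, 0.0, 1.0)
```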
254. Asman AJ, Landman BA. Formulating spatially varying performance in the statistical fusion framework. IEEE Trans Med Imaging 2012; 31:1326-36. [PMID: 22438513] [PMCID: PMC3368083] [DOI: 10.1109/tmi.2012.2190992]
Abstract
To date, label fusion methods have primarily relied either on global [e.g., simultaneous truth and performance level estimation (STAPLE), globally weighted vote] or voxelwise (e.g., locally weighted vote) performance models. Optimality of the statistical fusion framework hinges upon the validity of the stochastic model of how a rater errs (i.e., the labeling process model). Hitherto, approaches have tended to focus on the extremes of potential models. Herein, we propose an extension to the STAPLE approach that seamlessly accounts for spatially varying performance by extending the performance level parameters to a smooth, voxelwise performance level field that is unique to each rater. This approach, Spatial STAPLE, provides significant improvements over state-of-the-art label fusion algorithms in both simulated and empirical data sets.
Affiliation(s)
- Andrew J. Asman
- Department of Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Bennett A. Landman
- Department of Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA
255. Yushkevich PA, Wang H, Pluta J, Avants BB. From label fusion to correspondence fusion: a new approach to unbiased groupwise registration. Proc IEEE Conf Comput Vis Pattern Recognit 2012:956-963. [PMID: 24457950] [DOI: 10.1109/cvpr.2012.6247771]
Abstract
Label fusion strategies are used in multi-atlas image segmentation approaches to compute a consensus segmentation of an image, given a set of candidate segmentations produced by registering the image to a set of atlases [19, 11, 8]. Effective label fusion strategies, such as local similarity-weighted voting [1, 13] substantially reduce segmentation errors compared to single-atlas segmentation. This paper extends the label fusion idea to the problem of finding correspondences across a set of images. Instead of computing a consensus segmentation, weighted voting is used to estimate a consensus coordinate map between a target image and a reference space. Two variants of the problem are considered: (1) where correspondences between a set of atlases are known and are propagated to the target image; (2) where correspondences are estimated across a set of images without prior knowledge. Evaluation in synthetic data shows that correspondences recovered by fusion methods are more accurate than those based on registration to a population template. In a 2D example in real MRI data, fusion methods result in more consistent mappings between manual segmentations of the hippocampus.
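A minimal sketch of the fused coordinate map for variant (1): each atlas contributes a coordinate map from the target image to the reference space and a local similarity weight map, and the consensus map is their per-voxel weighted average. The array shapes and names are assumptions for illustration.

```python
# Weighted vote over candidate correspondences instead of candidate labels.
import numpy as np

def fuse_coordinate_maps(phis, weights):
    """phis: (J, H, W, 2) per-atlas coordinate maps; weights: (J, H, W) local weights."""
    phis = np.asarray(phis, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / np.maximum(w.sum(axis=0, keepdims=True), 1e-12)   # normalize per voxel
    return (w[..., None] * phis).sum(axis=0)                  # consensus map (H, W, 2)
```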
Affiliation(s)
- Paul A Yushkevich
- Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Hongzhi Wang
- Department of Radiology, University of Pennsylvania, Philadelphia, USA
- John Pluta
- Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Brian B Avants
- Department of Radiology, University of Pennsylvania, Philadelphia, USA
256. Iglesias JE, Sabuncu MR, Van Leemput K. A generative model for multi-atlas segmentation across modalities. Proc IEEE Int Symp Biomed Imaging 2012:888-891. [PMID: 23568278] [DOI: 10.1109/isbi.2012.6235691]
Abstract
Current label fusion methods enhance multi-atlas segmentation by locally weighting the contribution of the atlases according to their similarity to the target volume after registration. However, these methods cannot handle voxel intensity inconsistencies between the atlases and the target image, which limits their application across modalities or even across MRI datasets due to differences in image contrast. Here we present a generative model for multi-atlas image segmentation, which does not rely on the intensity of the training images. Instead, we exploit the consistency of voxel intensities within regions in the target volume and their relation to the propagated labels. This is formulated in a probabilistic framework, where the most likely segmentation is obtained with variational expectation maximization (EM). The approach is demonstrated in an experiment where T1-weighted MRI atlases are used to segment proton-density (PD) weighted brain MRI scans, a scenario in which traditional weighting schemes cannot be used. Our method significantly improves the results provided by majority voting and STAPLE.
257. Wang H, Yushkevich PA. Dependency prior for multi-atlas label fusion. Proc IEEE Int Symp Biomed Imaging 2012; 2012:892-895. [PMID: 24443676] [DOI: 10.1109/isbi.2012.6235692]
Abstract
Multi-atlas label fusion has been widely applied in medical image analysis. To reduce the bias in label fusion, we proposed a joint label fusion technique that reduces correlated errors produced by different atlases by considering the pairwise dependencies between them. Using image similarities from image patches to estimate the pairwise dependencies, we showed promising performance. To address the unreliability of relying purely on local image similarity for dependency estimation, we propose to improve the accuracy of the estimated dependencies by including empirical knowledge, which is learned from the atlases in a leave-one-out strategy. We apply the new technique to segment the hippocampus from MRI and show significant improvement over our initial results.
Affiliation(s)
- Hongzhi Wang
- Penn Image Computing and Science Lab, University of Pennsylvania
258. Gholipour A, Akhondi-Asl A, Estroff JA, Warfield SK. Multi-atlas multi-shape segmentation of fetal brain MRI for volumetric and morphometric analysis of ventriculomegaly. Neuroimage 2012; 60:1819-31. [PMID: 22500924] [PMCID: PMC3329183] [DOI: 10.1016/j.neuroimage.2012.01.128]
Abstract
The recent development of motion robust super-resolution fetal brain MRI holds out the potential for dramatic new advances in volumetric and morphometric analysis. Volumetric and morphometric analysis of the developing fetal brain requires accurate segmentation. Automatic segmentation of fetal brain MRI is challenging, however, due to the highly variable size and shape of the developing brain; possible structural abnormalities; and the relatively poor resolution of fetal MRI scans. To overcome these limitations, we present a novel, constrained, multi-atlas, multi-shape automatic segmentation method that specifically addresses the challenge of segmenting multiple structures with similar intensity values in subjects with strong anatomic variability. Accordingly, we have applied this method to shape segmentation of normal, dilated, or fused lateral ventricles for quantitative analysis of ventriculomegaly (VM), which is a pivotal finding in the earliest stages of fetal brain development and warrants further investigation. Utilizing these techniques, we introduce novel volumetric and morphometric biomarkers of VM and compare these values to those generated by standard methods of VM analysis, i.e., by measuring the ventricular atrial diameter (AD) on manually selected sections of 2D ultrasound or 2D MRI. To this end, we studied 25 normal and abnormal fetuses in the gestational age (GA) range of 19 to 39 weeks (mean=28.26, stdev=6.56). This heterogeneous dataset was used to 1) validate our segmentation method for normal and abnormal ventricles; and 2) show that the proposed biomarkers may provide improved detection of VM as compared to the AD measurement.
Affiliation(s)
- Ali Gholipour
- Computational Radiology Laboratory, Department of Radiology, Children’s Hospital Boston, and Harvard Medical School, Boston, MA, 02115 USA
- Alireza Akhondi-Asl
- Computational Radiology Laboratory, Department of Radiology, Children’s Hospital Boston, and Harvard Medical School, Boston, MA, 02115 USA
- Judy A. Estroff
- Advanced Fetal Care Center, Department of Radiology, Children’s Hospital Boston, and Harvard Medical School, Boston, MA, 02115 USA
- Simon K. Warfield
- Computational Radiology Laboratory, Department of Radiology, Children’s Hospital Boston, and Harvard Medical School, Boston, MA, 02115 USA
259. Rehman A, Saba T. RETRACTED ARTICLE: Analysis of advanced image processing to clinical and preclinical decision making with prospectus of quantitative imaging biomarkers. Artif Intell Rev 2012. [DOI: 10.1007/s10462-012-9335-1]
260. Xu Z, Asman AJ, Landman BA. Generalized Statistical Label Fusion using Multiple Consensus Levels. Proc SPIE 2012; 8314. [PMID: 22977295] [DOI: 10.1117/12.910918]
Abstract
Segmentation plays a critical role in exposing connections between biological structure and function. The process of label fusion collects and combines multiple observations into a single estimate. Statistically driven techniques provide mechanisms to optimally combine segmentations; yet, optimality hinges upon accurate modeling of rater behavior. Traditional approaches, e.g., Majority Vote and Simultaneous Truth and Performance Level Estimation (STAPLE), have been shown to yield excellent performance in some cases, but do not account for spatial dependences of rater performance (i.e., regional task difficulty). Recently, the COnsensus Level, Labeler Accuracy and Truth Estimation (COLLATE) label fusion technique augmented the seminal STAPLE approach to simultaneously estimate regions of relative consensus versus confusion along with rater performance. Herein, we extend the COLLATE framework to account for multiple consensus levels. Toward this end, we posit a generalized model of rater behavior of which Majority Vote, STAPLE, STAPLE Ignoring Consensus Voxels, and COLLATE are special cases. The new algorithm is evaluated with simulations and shown to yield improved performance in cases with complex region difficulties. Multi-COLLATE achieves these results by capturing different consensus levels. The potential impacts and applications of this generative model to label fusion problems are discussed.
Affiliation(s)
- Zhoubing Xu
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
261. Asman AJ, Landman BA. Simultaneous Segmentation and Statistical Label Fusion. Proc SPIE 2012; 8314. [PMID: 24357909] [DOI: 10.1117/12.910794]
Abstract
Labeling or segmentation of structures of interest in medical imaging plays an essential role in both clinical and scientific understanding. Two of the common techniques to obtain these labels are through either fully automated segmentation or through multi-atlas based segmentation and label fusion. Fully automated techniques often result in highly accurate segmentations but lack the robustness to be viable in many cases. On the other hand, label fusion techniques are often extremely robust, but lack the accuracy of automated algorithms for specific classes of problems. Herein, we propose to perform simultaneous automated segmentation and statistical label fusion through the reformulation of a generative model to include a linkage structure that explicitly estimates the complex global relationships between labels and intensities. These relationships are inferred from the atlas labels and intensities and applied to the target using a non-parametric approach. The novelty of this approach lies in the combination of previously exclusive techniques and attempts to combine the accuracy benefits of automated segmentation with the robustness of a multi-atlas based approach. The accuracy benefits of this simultaneous approach are assessed using a multi-label multi-atlas whole-brain segmentation experiment and the segmentation of the highly variable thyroid on computed tomography images. The results demonstrate that this technique has major benefits for certain types of problems and has the potential to provide a paradigm shift in which the lines between statistical label fusion and automated segmentation are dramatically blurred.
Affiliation(s)
- Andrew J Asman
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
- Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235; Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218
262. Label fusion strategy selection. Int J Biomed Imaging 2012; 2012:431095. [PMID: 22518113] [PMCID: PMC3296312] [DOI: 10.1155/2012/431095]
Abstract
Label fusion is used in medical image segmentation to combine several different labels of the same entity into a single discrete label, potentially more accurate, with respect to the exact, sought segmentation, than the best input element. Using simulated data, we compared three existing label fusion techniques—STAPLE, Voting, and Shape-Based Averaging (SBA)—and observed that none could be considered superior depending on the dissimilarity between the input elements. We thus developed an empirical, hybrid technique called SVS, which selects the most appropriate technique to apply based on this dissimilarity. We evaluated the label fusion strategies on two- and three-dimensional simulated data and showed that SVS is superior to any of the three existing methods examined. On real data, we used SVS to perform fusions of 10 segmentations of the hippocampus and amygdala in 78 subjects from the ICBM dataset. SVS selected SBA in almost all cases, which was the most appropriate method overall.
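The selection idea can be sketched as follows: measure how dissimilar the input segmentations are (here via mean pairwise Dice) and dispatch to a fusion strategy accordingly. The thresholds and the dissimilarity measure below are hypothetical placeholders, not the criteria learned in the paper.

```python
# Sketch of dissimilarity-driven selection of a fusion strategy.
import numpy as np
from itertools import combinations

def mean_pairwise_dice(segs):
    """segs: (R, n_voxels) binary masks from the R input segmentations."""
    segs = np.asarray(segs, dtype=bool)
    dices = []
    for a, b in combinations(range(segs.shape[0]), 2):
        inter = np.logical_and(segs[a], segs[b]).sum()
        denom = segs[a].sum() + segs[b].sum()
        dices.append(2.0 * inter / max(denom, 1))
    return float(np.mean(dices))

def select_fusion_strategy(segs, low=0.6, high=0.85):
    """Return the name of the fusion method to apply (thresholds are assumptions)."""
    d = mean_pairwise_dice(segs)
    if d >= high:
        return "Voting"                 # inputs already agree closely
    if d >= low:
        return "STAPLE"                 # moderate disagreement
    return "Shape-Based Averaging"      # strong disagreement
```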
263. Landman BA, Asman AJ, Scoggins AG, Bogovic JA, Xing F, Prince JL. Robust statistical fusion of image labels. IEEE Trans Med Imaging 2012; 31:512-22. [PMID: 22010145] [PMCID: PMC3262958] [DOI: 10.1109/tmi.2011.2172215]
Abstract
Image labeling and parcellation (i.e., assigning structure to a collection of voxels) are critical tasks for the assessment of volumetric and morphometric features in medical imaging data. The process of image labeling is inherently error prone as images are corrupted by noise and artifacts. Even expert interpretations are subject to subjectivity and the precision of the individual raters. Hence, all labels must be considered imperfect with some degree of inherent variability. One may seek multiple independent assessments to both reduce this variability and quantify the degree of uncertainty. Existing techniques have exploited maximum a posteriori statistics to combine data from multiple raters and simultaneously estimate rater reliabilities. Although quite successful, wide-scale application has been hampered by unstable estimation with practical datasets, for example, with label sets with small or thin objects to be labeled or with partial or limited datasets. As well, these approaches have required each rater to generate a complete dataset, which is often impossible given both human foibles and the typical turnover rate of raters in a research or clinical environment. Herein, we propose a robust approach to improve estimation performance with small anatomical structures, allow for missing data, account for repeated label sets, and utilize training/catch trial data. With this approach, numerous raters can label small, overlapping portions of a large dataset, and rater heterogeneity can be robustly controlled while simultaneously estimating a single, reliable label set and characterizing uncertainty. The proposed approach enables many individuals to collaborate in the construction of large datasets for labeling tasks (e.g., human parallel processing) and reduces the otherwise detrimental impact of rater unavailability.
Affiliation(s)
- Bennett A. Landman
- Department of Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Andrew J. Asman
- Department of Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Andrew G. Scoggins
- Department of Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- John A. Bogovic
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Fangxu Xing
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Jerry L. Prince
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
264. Fischl B. FreeSurfer. Neuroimage 2012; 62:774-81. [PMID: 22248573] [DOI: 10.1016/j.neuroimage.2012.01.021]
Abstract
FreeSurfer is a suite of tools for the analysis of neuroimaging data that provides an array of algorithms to quantify the functional, connectional and structural properties of the human brain. It has evolved from a package primarily aimed at generating surface representations of the cerebral cortex into one that automatically creates models of most macroscopically visible structures in the human brain given any reasonable T1-weighted input image. It is freely available, runs on a wide variety of hardware and software platforms, and is open source.
Affiliation(s)
- Bruce Fischl
- Athinoula A Martinos Center, Dept. of Radiology, MGH, Harvard Medical School, MA, USA.
265. Awate SP, Zhu P, Whitaker RT. How Many Templates Does It Take for a Good Segmentation? Error Analysis in Multiatlas Segmentation as a Function of Database Size. Multimodal Brain Image Analysis (MBIA) 2012; 7509:103-114. [PMID: 24501720] [DOI: 10.1007/978-3-642-33530-3_9]
Abstract
This paper proposes a novel formulation to model and analyze the statistical characteristics of some types of segmentation problems that are based on combining label maps / templates / atlases. Such segmentation-by-example approaches are quite powerful on their own for several clinical applications and they provide prior information, through spatial context, when combined with intensity-based segmentation methods. The proposed formulation models a class of multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of images. The paper presents a systematic analysis of the nonparametric estimation's convergence behavior (i.e. characterizing segmentation error as a function of the size of the multiatlas database) and shows that it has a specific analytic form involving several parameters that are fundamental to the specific segmentation problem (i.e. chosen anatomical structure, imaging modality, registration method, label-fusion algorithm, etc.). We describe how to estimate these parameters and show that several brain anatomical structures exhibit the trends determined analytically. The proposed framework also provides per-voxel confidence measures for the segmentation. We show that the segmentation error for large database sizes can be predicted using small-sized databases. Thus, small databases can be exploited to predict the database sizes required ("how many templates") to achieve "good" segmentations having errors lower than a specified tolerance. Such cost-benefit analysis is crucial for designing and deploying multiatlas segmentation systems.
266. Liao S, Zhang D, Yap PT, Wu G, Shen D. Group Sparsity Constrained Automatic Brain Label Propagation. Machine Learning in Medical Imaging (MLMI) 2012; 7588:45-53. [PMID: 25328918] [PMCID: PMC4197995] [DOI: 10.1007/978-3-642-35428-1_6]
Abstract
In this paper, we present a group sparsity constrained patch based label propagation method for multi-atlas automatic brain labeling. The proposed method formulates the label propagation process as a graph-based theoretical framework, where each voxel in the input image is linked to each candidate voxel in each atlas image by an edge in the graph. The weight of the edge is estimated based on a sparse representation framework to identify a limited number of candidate voxels whose local image patches can best represent the local image patch of each voxel in the input image. A group sparsity constraint that captures the dependency among candidate voxels with the same anatomical label is also enforced. It is shown that based on the edge weight estimated by the proposed method, the anatomical label for each voxel in the input image can be estimated more accurately by the label propagation process. Moreover, we extend our group sparsity constrained patch based label propagation framework to the reproducing kernel Hilbert space (RKHS) to capture the nonlinear similarity of patches among different voxels and construct the sparse representation in high dimensional feature space. The proposed method was evaluated on the NA0-NIREP database for automatic human brain anatomical labeling. It was also compared with several state-of-the-art multi-atlas based brain labeling algorithms. Experimental results demonstrate that our method consistently achieves the highest segmentation accuracy among all methods used for comparison.
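The sketch below illustrates the sparse-coding core of such patch-based label propagation, using plain L1 regression (no group-sparsity term or RKHS extension): the target patch is coded over candidate atlas patches and the nonzero weights vote for the candidates' labels. Patch extraction and parameter values are assumptions for illustration.

```python
# Simplified sparse patch coding followed by label voting.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_label_weights(target_patch, atlas_patches, alpha=0.01):
    """target_patch: (p,) vector; atlas_patches: (n_candidates, p) matrix."""
    X = np.asarray(atlas_patches, dtype=float).T        # columns = candidate patches
    y = np.asarray(target_patch, dtype=float)
    model = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
    model.fit(X, y)                                      # sparse, non-negative coding
    w = model.coef_
    return w / max(w.sum(), 1e-12)

def propagate_label(weights, candidate_labels, labels):
    """Pick the label whose candidates received the most sparse-coding weight."""
    candidate_labels = np.asarray(candidate_labels)
    scores = [weights[candidate_labels == l].sum() for l in labels]
    return labels[int(np.argmax(scores))]
```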
267. Jin Y, Shi Y, Zhan L, Li J, de Zubicaray GI, McMahon KL, Martin NG, Wright MJ, Thompson PM. Automatic Population HARDI White Matter Tract Clustering by Label Fusion of Multiple Tract Atlases. Multimodal Brain Image Analysis (MBIA) 2012; 7509:147-156. [PMID: 26207263] [PMCID: PMC4508862] [DOI: 10.1007/978-3-642-33530-3_12]
Abstract
Automatic labeling of white matter fibres in diffusion-weighted brain MRI is vital for comparing brain integrity and connectivity across populations, but is challenging. Whole brain tractography generates a vast set of fibres throughout the brain, but it is hard to cluster them into anatomically meaningful tracts, due to wide individual variations in the trajectory and shape of white matter pathways. We propose a novel automatic tract labeling algorithm that fuses information from tractography and multiple hand-labeled fibre tract atlases. As streamline tractography can generate a large number of false positive fibres, we developed a top-down approach to extract tracts consistent with known anatomy, based on a distance metric to multiple hand-labeled atlases. Clustering results from different atlases were fused, using a multi-stage fusion scheme. Our "label fusion" method reliably extracted the major tracts from 105-gradient HARDI scans of 100 young normal adults.
Affiliation(s)
- Yan Jin
- Laboratory of Neuro Imaging, Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Yonggang Shi
- Laboratory of Neuro Imaging, Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Liang Zhan
- Laboratory of Neuro Imaging, Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Junning Li
- Laboratory of Neuro Imaging, Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Katie L. McMahon
- University of Queensland, Brisbane St. Lucia, QLD 4072, Australia
- Paul M. Thompson
- Laboratory of Neuro Imaging, Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, USA
268. Sparse Patch-Based Label Fusion for Multi-Atlas Segmentation. Multimodal Brain Image Analysis (MBIA) 2012. [DOI: 10.1007/978-3-642-33530-3_8]
269. Asman AJ, Landman BA. Non-local STAPLE: an intensity-driven multi-atlas rater model. Med Image Comput Comput Assist Interv 2012; 15:426-34. [PMID: 23286159] [PMCID: PMC3539246] [DOI: 10.1007/978-3-642-33454-2_53]
Abstract
Multi-atlas segmentation provides a general purpose, fully automated class of techniques for transferring spatial information from an existing dataset ("atlases") to a previously unseen context ("target") through image registration. The method used to combine information after registration ("label fusion") has a substantial impact on the overall accuracy and robustness. In practice, weighted voting techniques have dramatically outperformed algorithms based on statistical fusion, i.e., algorithms that incorporate rater performance into the estimation process, such as STAPLE. We posit that a critical limitation of statistical techniques (as generally proposed) is that they fail to incorporate intensity seamlessly into the estimation process and models of observation error. Herein, we propose a novel statistical fusion algorithm, non-local STAPLE, which merges the STAPLE framework with a non-local means perspective. Non-local STAPLE (1) seamlessly integrates intensity into the estimation process, (2) provides a theoretically consistent model of multi-atlas observation error, and (3) largely bypasses the need for group-wise unbiased registrations. We demonstrate significant improvements in two empirical multi-atlas experiments.
Affiliation(s)
- Andrew J. Asman
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
270. Wachinger C, Golland P. Spectral label fusion. Med Image Comput Comput Assist Interv 2012; 15:410-7. [PMID: 23286157] [PMCID: PMC3539206] [DOI: 10.1007/978-3-642-33454-2_51]
Abstract
We present a new segmentation approach that combines the strengths of label fusion and spectral clustering. The result is an atlas-based segmentation method guided by contour and texture cues in the test image. This offers advantages for datasets with high variability, making the segmentation less prone to registration errors. We achieve the integration by letting the weights of the graph Laplacian depend on image data, as well as atlas-based label priors. The extracted contours are converted to regions, arranged in a hierarchy depending on the strength of the separating boundary. Finally, we construct the segmentation by a region-wise, instead of voxel-wise, voting, increasing the robustness. Our experiments on cardiac MRI show a clear improvement over majority voting and intensity-weighted label fusion.
271. Iglesias JE, Sabuncu MR, Van Leemput K. A Generative Model for Probabilistic Label Fusion of Multimodal Data. Multimodal Brain Image Analysis (MBIA) 2012; 7509:115-133. [PMID: 25685856] [DOI: 10.1007/978-3-642-33530-3_10]
Abstract
The maturity of registration methods, in combination with the increasing processing power of computers, has made multi-atlas segmentation methods practical. The problem of merging the deformed label maps from the atlases is known as label fusion. Even though label fusion has been well studied for intramodality scenarios, it remains relatively unexplored when the nature of the target data is multimodal or when its modality is different from that of the atlases. In this paper, we review the literature on label fusion methods and also present an extension of our previously published algorithm to the general case in which the target data are multimodal. The method is based on a generative model that exploits the consistency of voxel intensities within the target scan based on the current estimate of the segmentation. Using brain MRI scans acquired with a multiecho FLASH sequence, we compare the method with majority voting, statistical-atlas-based segmentation, the popular package FreeSurfer and an adaptive local multi-atlas segmentation method. The results show that our approach produces highly accurate segmentations (Dice 86.3% across 22 brain structures of interest), outperforming the competing methods.
Affiliation(s)
- Koen Van Leemput
- Departments of Information and Computer Science and of Biomedical Engineering and Computational Science, Aalto University, Finland
272. Cabezas M, Oliver A, Lladó X, Freixenet J, Cuadra MB. A review of atlas-based segmentation for magnetic resonance brain images. Comput Methods Programs Biomed 2011; 104:e158-e177. [PMID: 21871688] [DOI: 10.1016/j.cmpb.2011.07.015]
Abstract
Normal and abnormal brains can be segmented by registering the target image with an atlas. Here, an atlas is defined as the combination of an intensity image (template) and its segmented image (the atlas labels). After registering the atlas template and the target image, the atlas labels are propagated to the target image. We define this process as atlas-based segmentation. In recent years, researchers have investigated registration algorithms to match atlases to query subjects and also strategies for atlas construction. In this paper we present a review of the automated approaches for atlas-based segmentation of magnetic resonance brain images. We aim to point out the strengths and weaknesses of atlas-based methods and suggest new research directions. We use two different criteria to present the methods. First, we refer to the algorithms according to their atlas-based strategy: label propagation, multi-atlas methods, and probabilistic techniques. Subsequently, we classify the methods according to their medical target: the brain and its internal structures, tissue segmentation in healthy subjects, tissue segmentation in fetuses, neonates and elderly subjects, and segmentation of damaged brains. A quantitative comparison of the results reported in the literature is also presented.
Affiliation(s)
- Mariano Cabezas
- Institute of Informatics and Applications, Ed. P-IV, Campus Montilivi, University of Girona, 17071 Girona, Spain
273. Chen A, Niermann KJ, Deeley MA, Dawant BM. Evaluation of multiple-atlas-based strategies for segmentation of the thyroid gland in head and neck CT images for IMRT. Phys Med Biol 2011; 57:93-111. [PMID: 22126838] [DOI: 10.1088/0031-9155/57/1/93]
Abstract
Segmenting the thyroid gland in head and neck CT images is of vital clinical significance in designing intensity-modulated radiation therapy (IMRT) treatment plans. In this work, we evaluate and compare several multiple-atlas-based methods to segment this structure. Using the most robust method, we generate automatic segmentations for the thyroid gland and study their clinical applicability. The various methods we evaluate range from selecting a single atlas based on one of three similarity measures, to combining the segmentation results obtained with several atlases and weighting their contribution using techniques including a simple majority vote rule, a technique called STAPLE that is widely used in the medical imaging literature, and the similarity between the atlas and the volume to be segmented. We show that the best results are obtained when several atlases are combined and their contributions are weighted with a measure of similarity between each atlas and the volume to be segmented. We also show that with our data set, STAPLE does not always lead to the best results. Automatic segmentations generated by the combination method using the correlation coefficient (CC) between the deformed atlas and the patient volume, which is the most accurate and robust method we evaluated, are presented to a physician as 2D contours and modified to meet clinical requirements. It is shown that about 40% of the contours of the left thyroid and about 42% of the right thyroid can be used directly. An additional 21% on the left and 24% on the right require only minimal modification. The amount and the location of the modifications are qualitatively and quantitatively assessed. We demonstrate that, although challenged by large inter-subject anatomical discrepancy, atlas-based segmentation of the thyroid gland in IMRT CT images is feasible by involving multiple atlases. The results show that a weighted combination of segmentations by atlases using the CC as the similarity measure slightly outperforms standard combination methods, e.g. the majority vote rule and STAPLE, as well as methods selecting a single most similar atlas. The results we have obtained suggest that using our contours as initial contours to be edited has clinical value.
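The best-performing strategy described above can be sketched as a global similarity-weighted vote in which each atlas's contribution is weighted by the correlation coefficient between its deformed intensity image and the target volume. Array shapes and names are illustrative assumptions.

```python
# Correlation-weighted combination of propagated atlas labels.
import numpy as np

def cc_weighted_fusion(target, deformed_atlases, propagated_labels, n_labels):
    """deformed_atlases: (J, n_voxels) intensities; propagated_labels: (J, n_voxels) ints."""
    t = np.asarray(target, dtype=float).ravel()
    weights = np.array([np.corrcoef(t, np.asarray(a, float).ravel())[0, 1]
                        for a in deformed_atlases])
    weights = np.clip(weights, 0.0, None)          # ignore anti-correlated atlases (assumption)
    votes = np.zeros((n_labels, t.size))
    for w, lab in zip(weights, propagated_labels):
        lab_idx = np.asarray(lab).ravel().astype(int)
        votes[lab_idx, np.arange(t.size)] += w     # each atlas votes with its CC weight
    return votes.argmax(axis=0)
```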
Affiliation(s)
- A Chen
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA.
274. Weisenfeld NI, Warfield SK. Learning likelihoods for labeling (L3): a general multi-classifier segmentation algorithm. Med Image Comput Comput Assist Interv 2011; 14:322-9. [PMID: 22003715] [DOI: 10.1007/978-3-642-23626-6_40]
Abstract
PURPOSE: To develop an MRI segmentation method for brain tissues, regions, and substructures that yields improved classification accuracy. Current brain segmentation approaches fall into two complementary strategies. Multi-spectral classification techniques generate excellent segmentations for tissues with clear intensity contrast, but fail to identify structures defined largely by location, such as lobar parcellations and certain subcortical structures. Conversely, multi-template label fusion methods are excellent for structures defined largely by location, but perform poorly when segmenting structures that cannot be accurately identified through a consensus of registered templates. METHODS: We propose here a novel multi-classifier fusion algorithm with the advantages of both types of segmentation strategy. We illustrate and validate this algorithm using a group of 14 expertly hand-labeled images. RESULTS: Our method generated segmentations of cortical and subcortical structures that were more similar to hand-drawn segmentations than majority vote label fusion or a recently published intensity/label fusion method. CONCLUSIONS: We have presented a novel, general segmentation algorithm with the advantages of both statistical classifiers and label fusion techniques.
Affiliation(s)
- Neil I Weisenfeld
- Computational Radiology Laboratory, Children's Hospital Boston, Harvard Medical School, Boston, MA, USA
275. Zhang D, Wu G, Jia H, Shen D. Confidence-guided sequential label fusion for multi-atlas based segmentation. Med Image Comput Comput Assist Interv 2011; 14:643-50. [PMID: 22003754] [DOI: 10.1007/978-3-642-23626-6_79]
Abstract
Label fusion is a key step in multi-atlas based segmentation, which combines labels from multiple atlases to make the final decision. However, most current label fusion methods treat each voxel equally and independently during label fusion. In our view, different voxels play different roles, in that some voxels might have much higher confidence in label determination than others, e.g., because of their better alignment across all registered atlases. In light of this, we propose a sequential label fusion framework for multi-atlas based image segmentation that hierarchically uses the voxels with high confidence to guide the labeling procedure of other, more challenging voxels (whose registration results among deformed atlases are not good enough) to afford more accurate label fusion. Specifically, we first measure the labeling confidence for each voxel based on the k-nearest-neighbor rule, and then perform label fusion sequentially according to the estimated labeling confidence of each voxel. In particular, for each label fusion process, we use not only the propagated labels from atlases, but also the estimated labels from the neighboring voxels with higher labeling confidence. We demonstrate the advantage of our method by applying it to two popular label fusion algorithms, i.e., majority voting and locally weighted voting. Experimental results show that our sequential label fusion method can consistently improve the performance of both algorithms in terms of segmentation/labeling accuracy.
Affiliation(s)
- Daoqiang Zhang
- Dept. of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599.
276. Rousseau F, Habas PA, Studholme C. A supervised patch-based approach for human brain labeling. IEEE Trans Med Imaging 2011; 30:1852-62. [PMID: 21606021] [PMCID: PMC3318921] [DOI: 10.1109/tmi.2011.2156806]
Abstract
We propose in this work a patch-based image labeling method relying on a label propagation framework. Based on image intensity similarities between the input image and an anatomy textbook, an original strategy which does not require any nonrigid registration is presented. Following recent developments in nonlocal image denoising, the similarity between images is represented by a weighted graph computed from an intensity-based distance between patches. Experiments on simulated and in vivo magnetic resonance images show that the proposed method is very successful in providing automated human brain labeling.
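A minimal sketch of the non-local, registration-free weighting: candidate voxels from the anatomy textbook vote for the target voxel's label with weights that decay exponentially with the intensity distance between their patches. The bandwidth h and the patch handling are assumptions, not the paper's settings.

```python
# Non-local-means style patch weighting and label voting.
import numpy as np

def patch_weights(target_patch, candidate_patches, h=10.0):
    """Weights from squared intensity distances between patches."""
    diff = np.asarray(candidate_patches, float) - np.asarray(target_patch, float)
    w = np.exp(-(diff ** 2).sum(axis=1) / (h ** 2))
    return w / max(w.sum(), 1e-12)

def patch_vote(target_patch, candidate_patches, candidate_labels, labels, h=10.0):
    """Return the label with the largest accumulated patch-similarity weight."""
    w = patch_weights(target_patch, candidate_patches, h)
    candidate_labels = np.asarray(candidate_labels)
    scores = [w[candidate_labels == l].sum() for l in labels]
    return labels[int(np.argmax(scores))]
```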
Affiliation(s)
- François Rousseau
- Laboratoire des Sciences de l’Image, de l’Informatique et de la Télédétection (LSIIT), UMR 7005 CNRS-University of Strasbourg, 67412 Illkirch, France.
277.
Abstract
Multi-atlas based segmentation has been applied widely in medical image analysis. For label fusion, previous studies show that image similarity-based local weighting techniques produce the most accurate results. However, these methods ignore the correlations between results produced by different atlases. Furthermore, they rely on pre-selected weighting models and ad hoc methods to choose model parameters. We propose a novel label fusion method to address these limitations. Our formulation directly aims at reducing the expectation of the combined error and can be efficiently solved in a closed form. In our hippocampus segmentation experiment, our method significantly outperforms similarity-based local weighting. Using 20 atlases, we produce results with 0.898 +/- 0.019 Dice overlap to manual labelings for controls.
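The closed-form solution commonly associated with this formulation minimizes the expected combined error w^T M w subject to the weights summing to one, where M is an estimated matrix of pairwise expected label errors; the minimizer is w = M^{-1} 1 / (1^T M^{-1} 1). The sketch below assumes M has already been estimated (e.g., from patch similarities) and only illustrates the weight computation; it is not the paper's full method.

```python
# Closed-form fusion weights for a constrained quadratic error minimization.
import numpy as np

def joint_fusion_weights(M, ridge=1e-6):
    """M: (J, J) expected pairwise error matrix for J atlases at one voxel."""
    J = M.shape[0]
    Mr = np.asarray(M, dtype=float) + ridge * np.eye(J)   # regularize for stability
    ones = np.ones(J)
    x = np.linalg.solve(Mr, ones)                         # x = M^{-1} 1
    return x / x.sum()                                    # normalize so weights sum to 1
```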
278. Hanseeuw BJ, Van Leemput K, Kavec M, Grandin C, Seron X, Ivanoiu A. Mild cognitive impairment: differential atrophy in the hippocampal subfields. AJNR Am J Neuroradiol 2011; 32:1658-61. [PMID: 21835940] [DOI: 10.3174/ajnr.a2589]
Abstract
BACKGROUND AND PURPOSE: Hippocampus volumetry is a useful surrogate marker for the diagnosis of Alzheimer disease, but it seems insufficiently sensitive for the aMCI stage. We postulated that some hippocampus subfields are specifically atrophic in aMCI and that measuring hippocampus subfield volumes will improve sensitivity of MR imaging to detect aMCI. MATERIALS AND METHODS: We evaluated episodic memory and hippocampus subfield volume in 15 patients with aMCI and 15 matched controls. After segmentation of the whole hippocampus from clinical MR imaging, we applied a new computational method allowing fully automated segmentation of the hippocampus subfields. This method used a Bayesian modeling approach to infer segmentations from the imaging data. RESULTS: In comparison with controls, subiculum and CA2-3 were significantly atrophic in patients with aMCI, whereas total hippocampus volume and other subfields were not. Total hippocampus volume in controls was age-related, whereas episodic memory was the main explanatory variable for both the total hippocampus volume and the subfields that were atrophic in patients with aMCI. Segmenting subfields increases sensitivity to diagnose aMCI from 40% to 73%. CONCLUSIONS: Measuring CA2-3 and subiculum volumes allows a better detection of aMCI.
Affiliation(s)
- B J Hanseeuw
- Department of Neurology, Saint-Luc University Hospital, Brussels, Belgium.
279. Jia H, Yap PT, Shen D. Iterative multi-atlas-based multi-image segmentation with tree-based registration. Neuroimage 2011; 59:422-30. [PMID: 21807102] [DOI: 10.1016/j.neuroimage.2011.07.036]
Abstract
In this paper, we present a multi-atlas-based framework for accurate, consistent and simultaneous segmentation of a group of target images. Multi-atlas-based segmentation algorithms concurrently consider complementary information from multiple atlases to produce optimal segmentation outcomes. However, the accuracy of these algorithms relies heavily on the precise alignment of the atlases with the target image. In particular, the commonly used pairwise registration may result in inaccurate alignment especially between images with large shape differences. Additionally, when segmenting a group of target images, most current methods consider these images independently, disregarding their correlation, thus resulting in inconsistent segmentations of the same structures across different target images. We propose two novel strategies to address these limitations: 1) a novel tree-based groupwise registration method for concurrent alignment of both the atlases and the target images, and 2) an iterative groupwise segmentation method for simultaneous consideration of segmentation information propagated from all available images, including the atlases and other newly segmented target images. Evaluation based on various datasets indicates that the proposed multi-atlas-based multi-image segmentation (MABMIS) framework yields substantial improvements in terms of consistency and accuracy over methods that do not consider the group of target images holistically.
Affiliation(s)
- Hongjun Jia
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA.
280. Wang H, Suh JW, Das S, Pluta J, Altinay M, Yushkevich P. Regression-Based Label Fusion for Multi-Atlas Segmentation. Proc IEEE Conf Comput Vis Pattern Recognit Workshops 2011:1113-1120. [PMID: 22562785] [PMCID: PMC3343877] [DOI: 10.1109/cvpr.2011.5995382]
Abstract
Automatic segmentation using multi-atlas label fusion has been widely applied in medical image analysis. To simplify the label fusion problem, most methods implicitly make a strong assumption that the segmentation errors produced by different atlases are uncorrelated. We show that violating this assumption significantly reduces the efficiency of multi-atlas segmentation. To address this problem, we propose a regression-based approach for label fusion. Our experiments on segmenting the hippocampus in magnetic resonance images (MRI) show significant improvement over previous label fusion techniques.
281. Automatic 3D segmentation of individual facial muscles using unlabeled prior information. Int J Comput Assist Radiol Surg 2011; 7:35-41. [DOI: 10.1007/s11548-011-0567-3]
282. Khan AR, Cherbuin N, Wen W, Anstey KJ, Sachdev P, Beg MF. Optimal weights for local multi-atlas fusion using supervised learning and dynamic information (SuperDyn): Validation on hippocampus segmentation. Neuroimage 2011; 56:126-39. [DOI: 10.1016/j.neuroimage.2011.01.078]
283. Kirişli HA, Schaap M, Klein S, Papadopoulou SL, Bonardi M, Chen CH, Weustink AC, Mollet NR, Vonken EJ, van der Geest RJ, van Walsum T, Niessen WJ. Evaluation of a multi-atlas based method for segmentation of cardiac CTA data: a large-scale, multicenter, and multivendor study. Med Phys 2011; 37:6279-91. [PMID: 21302784] [DOI: 10.1118/1.3512795]
Abstract
PURPOSE: Computed tomography angiography (CTA) is increasingly used for the diagnosis of coronary artery disease (CAD). However, CTA is not commonly used for the assessment of ventricular and atrial function, although functional information extracted from CTA data is expected to improve the diagnostic value of the examination. In clinical practice, the extraction of ventricular and atrial functional information, such as stroke volume and ejection fraction, requires accurate delineation of cardiac chambers. In this paper, we investigated the accuracy and robustness of cardiac chamber delineation using a multiatlas based segmentation method on multicenter and multivendor CTA data. METHODS: A fully automatic multiatlas based method for segmenting the whole heart (i.e., the outer surface of the pericardium) and cardiac chambers from CTA data is presented and evaluated. In the segmentation approach, eight atlas images are registered to a new patient's CTA scan. The eight corresponding manually labeled images are then propagated and combined using a per voxel majority voting procedure, to obtain a cardiac segmentation. RESULTS: The method was evaluated on a multicenter/multivendor database, consisting of (1) a set of 1380 Siemens scans from 795 patients and (2) a set of 60 multivendor scans (Siemens, Philips, and GE) from different patients, acquired in six different institutions worldwide. A leave-one-out 3D quantitative validation was carried out on the eight atlas images; we obtained a mean surface-to-surface error of 0.94 +/- 1.12 mm and an average Dice coefficient of 0.93. A 2D quantitative evaluation was performed on the 60 multivendor data sets; here, we observed a mean surface-to-surface error of 1.26 +/- 1.25 mm and an average Dice coefficient of 0.91. In addition to this quantitative evaluation, a large-scale 2D and 3D qualitative evaluation was performed on 1380 and 140 images, respectively. Experts evaluated that 49% of the 1380 images were very accurately segmented (below 1 mm error) and that 29% were accurately segmented (error between 1 and 3 mm), which demonstrates the robustness of the presented method. CONCLUSIONS: A fully automatic method for whole heart and cardiac chamber segmentation was presented and evaluated using multicenter/multivendor CTA data. The accuracy and robustness of the method were demonstrated by successfully applying the method to 1420 multicenter/multivendor data sets.
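The per-voxel majority-vote fusion step described above amounts to counting, at every voxel, how often each label was proposed by the propagated atlas label maps and keeping the most frequent one. A minimal sketch with assumed array shapes follows.

```python
# Per-voxel majority voting over propagated atlas label maps.
import numpy as np

def majority_vote(propagated_labels, n_labels):
    """propagated_labels: (J, ...) integer label maps from J registered atlases."""
    labs = np.asarray(propagated_labels)
    # Count, per voxel, how many atlases proposed each label.
    counts = np.stack([(labs == l).sum(axis=0) for l in range(n_labels)])
    return counts.argmax(axis=0)      # most frequent label at every voxel
```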
Collapse
Affiliation(s)
- H A Kirişli
- Biomedical Imaging Group Rotterdam, Department of Radiology and Department of Medical Informatics, Erasmus MC, 3000 CA Rotterdam, The Netherlands.
Collapse
|
284
|
Wang H, Das SR, Suh JW, Altinay M, Pluta J, Craige C, Avants B, Yushkevich PA. A learning-based wrapper method to correct systematic errors in automatic image segmentation: consistently improved performance in hippocampus, cortex and brain segmentation. Neuroimage 2011; 55:968-85. [PMID: 21237273 DOI: 10.1016/j.neuroimage.2011.01.006] [Citation(s) in RCA: 130] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2010] [Revised: 12/30/2010] [Accepted: 01/05/2011] [Indexed: 11/15/2022] Open
Abstract
We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject. It serves as a wrapper around a given host segmentation method: on training data for which manual segmentations are available, the wrapper attempts to learn the intensity, spatial and contextual patterns associated with the systematic errors produced by the host method; it then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that differ from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively.
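The wrapper idea, learning to predict the manual label from intensity, spatial and host-segmentation context and then relabeling new host segmentations, can be illustrated with the following Python sketch. It is a simplified stand-in (a random forest on hand-picked per-voxel features), not the AdaBoost-based implementation released with the paper; all function names and the feature set are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def voxel_features(intensity, host_seg):
    """Per-voxel features: intensity, host label, and voxel coordinates."""
    coords = np.indices(intensity.shape)                     # crude spatial context
    feats = [intensity, host_seg.astype(float)] + [c.astype(float) for c in coords]
    return np.stack([f.ravel() for f in feats], axis=1)      # (n_voxels, n_features)

def train_corrector(train_intensities, train_host_segs, train_manual_segs):
    """Learn to predict the manual label from host-segmentation context."""
    X = np.vstack([voxel_features(i, h)
                   for i, h in zip(train_intensities, train_host_segs)])
    y = np.concatenate([m.ravel() for m in train_manual_segs])
    clf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
    clf.fit(X, y)
    return clf

def correct(clf, intensity, host_seg):
    """Relabel a new host segmentation with the learned corrector."""
    pred = clf.predict(voxel_features(intensity, host_seg))
    return pred.reshape(host_seg.shape)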
Collapse
Affiliation(s)
- Hongzhi Wang
- Penn Image Computing and Science Laboratory, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA.
Collapse
|
285
|
Depa M, Holmvang G, Schmidt EJ, Golland P, Sabuncu MR. Towards Efficient Label Fusion by Pre-Alignment of Training Data. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2011; 14:38-46. [PMID: 24660167 PMCID: PMC3958940] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Label fusion is a multi-atlas segmentation approach that explicitly maintains and exploits the entire training dataset, rather than a parametric summary of it. Recent empirical evidence suggests that label fusion can achieve significantly better segmentation accuracy over classical parametric atlas methods that utilize a single coordinate frame. However, this performance gain typically comes at an increased computational cost due to the many pairwise registrations between the novel image and training images. In this work, we present a modified label fusion method that approximates these pairwise warps by first pre-registering the training images via a diffeomorphic groupwise registration algorithm. The novel image is then only registered once, to the template image that represents the average training subject. The pairwise spatial correspondences between the novel image and training images are then computed via concatenation of appropriate transformations. Our experiments on cardiac MR data suggest that this strategy for nonparametric segmentation dramatically improves computational efficiency, while producing segmentation results that are statistically indistinguishable from those obtained with regular label fusion. These results suggest that the key benefit of label fusion approaches is the underlying nonparametric inference algorithm, and not the multiple pairwise registrations.
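The transform-concatenation step, registering the novel image once to the groupwise template and approximating each pairwise warp by composing that single warp with a precomputed template-to-atlas warp, can be sketched as below. Warps are represented here as dense coordinate maps of shape (3, X, Y, Z) in voxel units; this representation, the linear/nearest interpolation choices, and all names are illustrative assumptions rather than the authors' implementation.

import numpy as np
from scipy.ndimage import map_coordinates

def compose_warps(template_to_atlas, novel_to_template):
    """Coordinate map of (template_to_atlas o novel_to_template)."""
    composed = np.empty_like(novel_to_template)
    for d in range(3):
        # Sample the d-th coordinate of the template->atlas map at the points
        # where each novel-image voxel lands in template space.
        composed[d] = map_coordinates(template_to_atlas[d], novel_to_template,
                                      order=1, mode='nearest')
    return composed

def propagate_labels(atlas_labels, novel_to_atlas):
    """Pull atlas labels onto the novel image's grid with nearest-neighbour sampling."""
    warped = map_coordinates(atlas_labels.astype(float), novel_to_atlas,
                             order=0, mode='nearest')
    return warped.astype(atlas_labels.dtype)

# Usage (hypothetical): labels_i = propagate_labels(atlas_i_labels,
#     compose_warps(template_to_atlas_i, novel_to_template))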
Collapse
Affiliation(s)
- Michal Depa
- Computer Science and Artificial Intelligence Lab, MIT, Cambridge, MA, USA
- Ehud J Schmidt
- Department of Radiology, Brigham & Women's Hospital, Boston, MA, USA
- Polina Golland
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
Collapse
|
286
|
Asman AJ, Landman BA. Characterizing spatially varying performance to improve multi-atlas multi-label segmentation. INFORMATION PROCESSING IN MEDICAL IMAGING : PROCEEDINGS OF THE ... CONFERENCE 2011; 22:85-96. [PMID: 21761648 DOI: 10.1007/978-3-642-22092-0_8] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
Abstract
Segmentation of medical images has become critical to understanding biological structure-function relationships. Atlas registration and label transfer provide a fully automated approach for deriving segmentations given atlas training data. When multiple atlases are used, statistical label fusion techniques have been shown to dramatically improve segmentation accuracy. However, these techniques have had limited success with complex structures and with atlases of varying similarity to the target data. Previous approaches have parameterized raters by a single confusion matrix, so that spatially varying performance of a single rater is neglected. Herein, we reformulate the statistical fusion model to describe raters by regional confusion matrices, so that co-registered atlas labels can be fused in an optimal, spatially varying manner, leading to improved label fusion with heterogeneous atlases. The advantages of this approach are characterized in a simulation and in an empirical whole-brain labeling task.
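The bookkeeping behind regional (spatially varying) rater performance can be illustrated with a deliberately simplified, non-iterative Python sketch: estimate one confusion matrix per atlas per block against a provisional majority-vote consensus, then fuse with confusion-weighted voting. The paper embeds regional confusion matrices in an EM framework; this single pass, and all names and the block size, are illustrative assumptions only.

import numpy as np

def regional_confusion(rater, consensus, n_labels, block, eps=1e-6):
    """Row-normalised theta[s, t] ~ P(rater reports t | consensus label s) within one block."""
    theta = np.full((n_labels, n_labels), eps)
    np.add.at(theta, (consensus[block].ravel(), rater[block].ravel()), 1.0)
    return theta / theta.sum(axis=1, keepdims=True)

def fuse_with_regional_performance(raters, block_shape=(32, 32, 32)):
    raters = [r.astype(int) for r in raters]
    n_labels = int(max(r.max() for r in raters)) + 1
    # Provisional consensus by per-voxel majority vote.
    stack = np.stack(raters)
    votes = np.stack([(stack == s).sum(axis=0) for s in range(n_labels)])
    consensus = np.argmax(votes, axis=0)

    fused = np.empty_like(consensus)
    for x0 in range(0, consensus.shape[0], block_shape[0]):
        for y0 in range(0, consensus.shape[1], block_shape[1]):
            for z0 in range(0, consensus.shape[2], block_shape[2]):
                block = (slice(x0, x0 + block_shape[0]),
                         slice(y0, y0 + block_shape[1]),
                         slice(z0, z0 + block_shape[2]))
                # Log-likelihood of each candidate label, summed over raters,
                # using each rater's block-local confusion matrix.
                loglik = np.zeros((n_labels,) + consensus[block].shape)
                for r in raters:
                    theta = regional_confusion(r, consensus, n_labels, block)
                    loglik += np.log(theta[:, r[block]])
                fused[block] = np.argmax(loglik, axis=0)
    return fused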
Collapse
Affiliation(s)
- Andrew J Asman
- Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA.
Collapse
|