1. Segmentation of the aorta in systolic phase from 4D flow MRI: multi-atlas vs. deep learning. MAGMA 2023;36:687-700. PMID: 36800143. DOI: 10.1007/s10334-023-01066-2.
Abstract
OBJECTIVE In the management of aortic aneurysms, 4D flow magnetic resonance imaging provides valuable information for the computation of new biomarkers using computational fluid dynamics (CFD). However, accurate segmentation of the aorta is required. Our objective was therefore to evaluate the performance of two automatic segmentation methods on the calculation of aortic wall pressure. METHODS Automatic segmentation of the aorta was performed with methods based on deep learning and multi-atlas registration, using the systolic phase of the 4D flow MRI magnitude image in 36 patients. Isotopological meshes were generated by mesh morphing, and CFD was performed to calculate the aortic wall pressure. Node-to-node comparisons of the pressure results were made to identify the automatic method most robust with respect to the pressures obtained from a manually segmented model. RESULTS The deep learning approach showed the best segmentation performance, with a mean Dice similarity coefficient of 0.92 ± 0.02 and a mean Hausdorff distance (HD) of 21.02 ± 24.20 mm. At the global level, the HD is dominated by the performance in the abdominal aorta; locally, this distance decreases to 9.41 ± 3.45 mm and 5.82 ± 6.23 mm for the ascending and descending thoracic aorta, respectively. Moreover, with respect to the pressures from the manual segmentations, the differences in the pressures computed from the deep learning segmentations were lower than those computed from the multi-atlas method. CONCLUSION To reduce biases in the calculation of aortic wall pressure, accurate segmentation is needed, particularly in regions with high blood flow velocities. The deep learning segmentation method should therefore be preferred.
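The Dice and Hausdorff metrics reported above are standard mask-comparison measures. As a minimal illustrative sketch (not the authors' code), both can be computed from binary masks with NumPy/SciPy, using distance transforms for the Hausdorff distance:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between two boolean masks.
    spacing gives the voxel size so the result is in physical units (mm)."""
    # Distance of every voxel to the nearest foreground voxel of each mask
    da = distance_transform_edt(~a, sampling=spacing)
    db = distance_transform_edt(~b, sampling=spacing)
    # Worst-case surface-to-surface distance in either direction
    return max(db[a].max(), da[b].max())
```

With isotropic 1 mm voxels, two 4x4 squares shifted by one column give a Dice of 0.75 and a Hausdorff distance of 1 mm.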
2. HGM-cNet: Integrating hippocampal gray matter probability map into a cascaded deep learning framework improves hippocampus segmentation. Eur J Radiol 2023;162:110771. PMID: 36948058. DOI: 10.1016/j.ejrad.2023.110771.
Abstract
A robust cascaded deep learning framework that integrates a hippocampal gray matter (HGM) probability map, called HGM-cNet, was developed to improve hippocampus segmentation, given the hippocampus's significance in various neuropsychiatric disorders such as Alzheimer's disease (AD). HGM-cNet cascades two identical convolutional neural networks (CNNs), each devised by incorporating an Attention Block, a Residual Block, and DropBlock into a typical encoder-decoder architecture. The two CNNs are skip-connected between encoder components at each scale. The cascaded design makes it convenient to combine the HGM probability map with the feature map generated by the first CNN. Experiments on 135 T1-weighted MRI scans and manual hippocampal labels from the publicly available ADNI-HarP dataset demonstrated that the proposed HGM-cNet outperformed seven multi-atlas-based hippocampus segmentation methods and six deep learning methods on most evaluation metrics. The Dice score (average > 0.89 for both left and right hippocampus) was increased by around 1% or more over the other methods. HGM-cNet also achieved superior hippocampus segmentation in each group of cognitively normal, mild cognitive impairment, and AD subjects. The stability, convenience, and generalizability of the cascaded framework with the integrated HGM probability map were validated by replacing the proposed CNN with 3D-UNet, Atten-UNet, HippoDeep, QuickNet, DeepHarp, and TransBTS models. The integration of the HGM probability map in the cascaded framework was also shown to capture hippocampal atrophy more accurately than alternative methods in AD analysis. The code is publicly available at https://github.com/Liu1436510768/HGM-cNet.git.
3. Deep label fusion: A generalizable hybrid multi-atlas and deep convolutional neural network for medical image segmentation. Med Image Anal 2023;83:102683. PMID: 36379194. PMCID: PMC10009820. DOI: 10.1016/j.media.2022.102683.
Abstract
Deep convolutional neural networks (DCNN) achieve very high accuracy in segmenting various anatomical structures in medical images but often suffer from relatively poor generalizability. Multi-atlas segmentation (MAS), while less accurate than DCNN in many applications, tends to generalize well to unseen datasets with characteristics different from the training dataset. Several groups have attempted to combine the power of DCNN to learn complex data representations with the robustness of MAS to changes in image characteristics. However, these studies primarily focused on replacing individual components of MAS with DCNN models and reported marginal improvements in accuracy. In this study, we describe and evaluate a 3D end-to-end hybrid MAS and DCNN segmentation pipeline, called Deep Label Fusion (DLF). The DLF pipeline consists of two main components with learnable weights: a weighted voting subnet that mimics the MAS algorithm, and a fine-tuning subnet that corrects residual segmentation errors to improve final segmentation accuracy. We evaluate DLF on five datasets that represent a diversity of anatomical structures (medial temporal lobe subregions and lumbar vertebrae) and imaging modalities (multi-modality, multi-field-strength MRI and computed tomography). These experiments show that DLF achieves segmentation accuracy comparable to nnU-Net (Isensee et al., 2020), the state-of-the-art DCNN pipeline, when evaluated on a dataset with characteristics similar to the training datasets, while outperforming nnU-Net on tasks that involve generalization to datasets with different characteristics (different MRI field strength or different patient population). DLF also consistently improves upon conventional MAS methods.
In addition, a modality augmentation strategy tailored for multimodal imaging is proposed and shown to improve the segmentation accuracy of learning-based methods, including DLF and DCNN, in missing-data scenarios at test time, as well as to increase the interpretability of the contribution of each individual modality.
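The weighted-voting subnet in DLF learns its weights end-to-end; the classical fusion rule it mimics is weighted voting over propagated atlas labels. A minimal sketch with fixed (non-learned) scalar weights, for illustration only:

```python
import numpy as np

def weighted_vote(atlas_labels, weights, n_classes):
    """Fuse propagated atlas label maps by weighted voting.
    atlas_labels: (n_atlases, *shape) integer label maps, already
                  warped into the target image space.
    weights:      (n_atlases,) non-negative per-atlas weights.
    Returns the consensus label map of shape `shape`."""
    votes = np.zeros((n_classes,) + atlas_labels.shape[1:])
    for lab, w in zip(atlas_labels, weights):
        for c in range(n_classes):
            votes[c] += w * (lab == c)   # each atlas votes with its weight
    return votes.argmax(axis=0)          # label with the largest total vote
```

In practice the weights are usually spatially varying (e.g. based on local image similarity); DLF goes further and learns them, but the voting mechanics are the same.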
4. An Efficient Optimization Approach for Glioma Tumor Segmentation in Brain MRI. J Digit Imaging 2022;35:1634-1647. PMID: 35995900. PMCID: PMC9712883. DOI: 10.1007/s10278-022-00655-2.
Abstract
Glioma is an aggressive type of cancer that develops in the brain or spinal cord. Because of large variations in its shape and appearance, accurate segmentation of glioma, identifying all parts of the tumor and its surrounding cancerous tissues, is a challenging task. In recent research, the combination of multi-atlas segmentation and machine learning methods provides robust and accurate results by learning from annotated atlas datasets. To overcome the limited information available to atlas-based segmentation and the long training phase of learning methods, we propose a semi-supervised unified framework for multi-label segmentation that formulates the problem as a Markov random field (MRF) energy optimization on a parametric graph. To evaluate the proposed framework, we apply it to the publicly available BRATS datasets, including low- and high-grade glioma tumors. Experimental results indicate competitive performance compared to state-of-the-art methods. Compared with the top-ranked methods, the proposed framework obtains the best Dice score for segmenting the "whole tumor" (WT), "tumor core" (TC) and "enhancing active tumor" (ET) regions. The achieved accuracy is 94%, characterized by the mean Dice score. The motivation for using an MRF graph is to map the segmentation problem to an optimization model in a graphical environment. By defining an appropriate graph structure and optimal constraints and flows in the continuous max-flow model, the segmentation is performed precisely.
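An MRF formulation assigns each voxel a per-label data (unary) cost plus a smoothness penalty on neighboring voxels with different labels. A minimal Potts-model energy on a 2D grid, shown only to illustrate the objective being optimized (the paper solves it with a continuous max-flow model, not shown here):

```python
import numpy as np

def mrf_energy(labels, unary, beta):
    """Potts-model MRF energy on a 2D grid.
    labels: (H, W) integer label map.
    unary:  (H, W, L) per-pixel data cost for each of L labels.
    beta:   smoothness weight, i.e. cost per disagreeing neighbor pair."""
    h, w = labels.shape
    # Data term: cost of the chosen label at every pixel
    data = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # Pairwise term: count 4-connected neighbor pairs with different labels
    pair = (labels[:, :-1] != labels[:, 1:]).sum() \
         + (labels[:-1, :] != labels[1:, :]).sum()
    return data + beta * pair
```

A segmentation is then sought that minimizes this energy; max-flow/min-cut style solvers find (near-)global optima of exactly this kind of objective.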
5. Morphological analysis of subcortical structures for assessment of cognitive dysfunction in Parkinson's disease using multi-atlas based segmentation. Cogn Neurodyn 2021;15:835-845. PMID: 34603545. PMCID: PMC8448821. DOI: 10.1007/s11571-021-09671-4.
Abstract
Cognitive impairment in Parkinson's Disease (PD) is the most prevalent non-motor symptom, and it calls for analysis of the anatomical correlates of cognitive decline in PD. The objective of this study is to analyse the morphological variations of the subcortical structures to assess cognitive dysfunction in PD. T1 MR images of 58 Healthy Control (HC) and 135 PD subjects, categorised on the basis of cognitive scores as 91 cognitively normal PD (NC-PD), 25 PD with Mild Cognitive Impairment (PD-MCI) and 19 PD with Dementia (PD-D) subjects, are utilised. A total of 132 anatomical regions are segmented using a spatially localized multi-atlas model, and volumetric analysis is carried out. Morphological alterations are captured through textural features to differentiate among the HC and PD subjects across cognitive domains. The volumetric differences in the segmented subcortical structures of the accumbens, amygdala, caudate, putamen and thalamus are able to predict cognitive impairment in PD. The volumetric distribution of the subcortical structures in PD-MCI subjects overlaps with that of the HC group, owing to the lack of spatial specificity in their atrophy levels. The 3D GLCM features extracted from the significant subcortical structures discriminate HC, NC-PD, PD-MCI and PD-D subjects with better classification accuracies. The disease-related atrophy levels of the subcortical structures, captured through morphological analysis, provide a sensitive evaluation of cognitive impairment in PD.
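GLCM texture features summarize how often pairs of gray levels co-occur at a given spatial offset. A minimal 2D sketch of the idea (the study uses 3D GLCMs over segmented structures; function names and the tiny example are illustrative):

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Normalized gray-level co-occurrence matrix for offset (dy, dx),
    assuming dx, dy >= 0 and integer gray levels in [0, levels)."""
    h, w = img.shape
    src = img[:h - dy, :w - dx]   # reference pixels
    dst = img[dy:, dx:]           # neighbors at the chosen offset
    P = np.zeros((levels, levels))
    np.add.at(P, (src.ravel(), dst.ravel()), 1)  # count co-occurrences
    return P / P.sum()

def contrast(P):
    """GLCM contrast: co-occurrences weighted by squared level difference."""
    i, j = np.indices(P.shape)
    return (P * (i - j) ** 2).sum()

def energy(P):
    """GLCM energy (angular second moment)."""
    return (P ** 2).sum()
```

Feature vectors built from such statistics (contrast, energy, homogeneity, ...) are what a classifier would consume to discriminate the subject groups.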
6. Automated, open-source segmentation of the hippocampus and amygdala with the Open Vanderbilt Archive of the Temporal Lobe. Magn Reson Imaging 2021;81:17-23. PMID: 33901584. PMCID: PMC8715642. DOI: 10.1016/j.mri.2021.04.011.
Abstract
Examining volumetric differences of the amygdala and of anterior-posterior regions of the hippocampus is important for understanding cognition and clinical disorders. However, gold-standard manual segmentation of these structures is time- and labor-intensive, so automated, accurate, and reproducible techniques to segment the hippocampus and amygdala are desirable. Here, we present a hierarchical approach to multi-atlas segmentation of the hippocampus head, body and tail and of the amygdala, based on atlases from 195 individuals. The Open Vanderbilt Archive of the temporal Lobe (OVAL) segmentation technique outperforms the commonly used FreeSurfer, FSL FIRST, and whole-brain multi-atlas segmentation approaches for the full hippocampus and amygdala, and nears or exceeds inter-rater reproducibility for segmentation of the hippocampus head, body and tail. OVAL has been released as open source and is freely available.
7. Multi-slice low-rank tensor decomposition based multi-atlas segmentation: Application to automatic pathological liver CT segmentation. Med Image Anal 2021;73:102152. PMID: 34280669. DOI: 10.1016/j.media.2021.102152.
Abstract
Liver segmentation from abdominal CT images is an essential step for liver cancer computer-aided diagnosis and surgical planning. However, both the accuracy and robustness of existing liver segmentation methods cannot meet the requirements of clinical applications. In particular, for the common clinical cases where the liver tissue contains major pathology, current segmentation methods show poor performance. In this paper, we propose a novel low-rank tensor decomposition (LRTD) based multi-atlas segmentation (MAS) framework that achieves accurate and robust pathological liver segmentation of CT images. Firstly, we propose a multi-slice LRTD scheme to recover the underlying low-rank structure embedded in 3D medical images. It performs the LRTD on small image segments consisting of multiple consecutive image slices. Then, we present an LRTD-based atlas construction method to generate tumor-free liver atlases that mitigates the performance degradation of liver segmentation due to the presence of tumors. Finally, we introduce an LRTD-based MAS algorithm to derive patient-specific liver atlases for each test image, and to achieve accurate pairwise image registration and label propagation. Extensive experiments on three public databases of pathological liver cases validate the effectiveness of the proposed method. Both qualitative and quantitative results demonstrate that, in the presence of major pathology, the proposed method is more accurate and robust than state-of-the-art methods.
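The core low-rank operation can be illustrated with a truncated SVD on an unfolded slab of consecutive slices. This is a generic sketch of the recovery idea under that simplification, not the authors' actual tensor decomposition:

```python
import numpy as np

def lowrank_slab(slab, rank):
    """Rank-r approximation of a slab of consecutive slices.
    slab: (n_slices, H, W). Each slice is flattened to one row, the
    stacked matrix is truncated to its top `rank` singular components,
    and the result is folded back to slice form. Redundancy across
    neighboring slices is what makes the slab approximately low-rank."""
    n, h, w = slab.shape
    M = slab.reshape(n, h * w)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    Mr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return Mr.reshape(n, h, w)
```

Sparse deviations from this low-rank structure (e.g. tumors) are what an LRTD-based atlas construction would separate out to build tumor-free atlases.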
8. Improving multi-atlas cardiac structure segmentation of computed tomography angiography: A performance evaluation based on a heterogeneous dataset. Comput Biol Med 2020;125:104019. PMID: 33038614. PMCID: PMC7655721. DOI: 10.1016/j.compbiomed.2020.104019.
Abstract
Multi-atlas based segmentation is an effective technique that transforms a representative set of atlas images and labels into a target image for structural segmentation. However, a significant limitation of this approach is that the atlas and target images need to be similar in volume orientation, coverage, and acquisition protocol to prevent image misregistration and segmentation failure. In this study, we aim to evaluate the impact of using a heterogeneous Computed Tomography Angiography (CTA) dataset on the performance of a multi-atlas cardiac structure segmentation framework. We propose a generalized technique based on the Simple Linear Iterative Clustering (SLIC) supervoxel method to detect a bounding-box region enclosing the heart before subsequent cardiac structure segmentation. This technique enables our framework to process CTA datasets acquired with distinct imaging protocols and improves its segmentation accuracy and speed. In a four-way cross comparison based on 60 CTA studies from our institution and 60 CTA datasets from the Multi-Modality Whole Heart Segmentation MICCAI challenge, we show that the proposed framework performs well in segmenting seven different cardiac structures with interchangeable atlas and target datasets acquired under different imaging settings. Overall, our automated segmentation framework attains a median Dice coefficient, mean distance, and Hausdorff distance of 0.88, 1.5 mm, and 9.69 mm across the two datasets, with an average processing time of 1.55 min for both. Furthermore, this study shows that it is feasible to exploit heterogeneous datasets from different imaging protocols and institutions for accurate multi-atlas cardiac structure segmentation.
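The bounding-box step localizes the heart before any atlas registration, which is what makes mixed acquisition protocols tractable. The paper uses SLIC supervoxels for this; as a much-simplified stand-in, the same idea can be sketched as the bounding box of the largest bright connected component (the threshold and function name are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def detect_roi(volume, thresh):
    """Bounding box (tuple of slices) of the largest connected component
    above `thresh`; a crude stand-in for SLIC-based heart detection."""
    mask = volume > thresh
    lab, n = ndimage.label(mask)          # connected-component labeling
    if n == 0:
        return None
    sizes = ndimage.sum(mask, lab, index=range(1, n + 1))
    largest = 1 + int(np.argmax(sizes))   # component id with most voxels
    return ndimage.find_objects((lab == largest).astype(int))[0]
```

Cropping both atlas and target volumes to such a region before registration removes most of the orientation/coverage mismatch and shrinks the registration problem.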
9. Anthropometer3D: Automatic Multi-Slice Segmentation Software for the Measurement of Anthropometric Parameters from CT of PET/CT. J Digit Imaging 2020;32:241-250. PMID: 30756268. DOI: 10.1007/s10278-019-00178-3.
Abstract
Anthropometric parameters such as muscle body mass (MBM), fat body mass (FBM), lean body mass (LBM), visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT) are used in oncology. Our aim was to develop and evaluate Anthropometer3D, a software tool measuring these anthropometric parameters on the CT of PET/CT. The software performs a multi-atlas segmentation of the CT of PET/CT, with extrapolation coefficients for the body parts beyond the usual acquisition range (from the ischia to the eyes). The multi-atlas database is composed of 30 truncated CTs manually segmented to isolate three types of voxels (muscle, fat, and visceral fat). To evaluate Anthropometer3D, a leave-one-out cross-validation was performed to measure MBM, FBM, LBM, VAT, and SAT. The reference standard was based on manual segmentation of the corresponding whole-body CT. A manual segmentation of one CT slice at level L3 was also used. Agreement was analyzed using the Dice coefficient, the intra-class correlation coefficient (ICC), and Bland-Altman plots. The population was heterogeneous (sex ratio 1:1; mean age 57 years [min 23; max 74]; mean BMI 27 kg/m2 [min 18; max 40]). Dice coefficients between the reference standard and Anthropometer3D were excellent (mean ± SD): muscle 0.95 ± 0.02, fat 1.00 ± 0.01, and visceral fat 0.97 ± 0.02. The ICC was almost perfect (minimal value of the 95% CI: 0.97). All Bland-Altman values (mean difference, 95% CI, and slopes) were better for Anthropometer3D than for L3-level segmentation. Anthropometer3D allows multiple anthropometric measurements based on an automatic multi-slice segmentation and is more precise than estimates based on L3-level segmentation.
10. Automated measurement of fat infiltration in the hip abductors from Dixon magnetic resonance imaging. Magn Reson Imaging 2020;72:61-70. PMID: 32615150. DOI: 10.1016/j.mri.2020.06.019.
Abstract
PURPOSE Intramuscular fat infiltration is a dynamic process, responding to exercise and muscle health, that can be quantified by estimating the fat fraction (FF) from Dixon MRI. Healthy hip abductor muscles are a good indicator of a healthy hip and an active lifestyle, as they play a fundamental role in walking. Automated measurement of the abductors' FF requires the challenging task of segmenting them. We aimed to design, develop and evaluate a multi-atlas based method for the automated measurement of the fat fraction in the main hip abductor muscles: gluteus maximus (GMAX), gluteus medius (GMED), gluteus minimus (GMIN) and tensor fasciae latae (TFL). METHOD We collected and manually segmented Dixon MR images of 10 healthy individuals and 7 patients who underwent MRI for hip problems. Twelve of these were selected to build the atlas library used by the automated multi-atlas segmentation method. We compared the FF in the hip abductor muscles between the automated and manual segmentations for both the healthy and patient groups. Measures of average and spread were reported for FF for both methods. We used the root mean square error (RMSE) to quantify the method's accuracy, and a linear regression model to describe the relationship between FF from the automated and manual segmentations. RESULTS The automated median (IQR) FF was 20.0 (16.0-26.4)%, 14.3 (10.9-16.5)%, 15.5 (13.9-18.6)% and 16.2 (13.5-25.6)% for GMAX, GMED, GMIN and TFL respectively, with FF RMSEs of 1.6%, 0.8%, 2.1% and 2.7%. A strong linear correlation (R2 = 0.93, p < .001, m = 0.99) was found between the FF from the automated and manual segmentations. The mean FF was higher in patients than in healthy subjects. CONCLUSION The automated measurement of the FF of the hip abductor muscles from Dixon MRI showed good agreement with FF measurements from manually segmented images. The method was accurate for both the healthy and patient groups.
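In a Dixon acquisition, the fat fraction is the fat signal over the total (fat + water) signal, averaged inside the muscle mask; the RMSE then compares automated and manual FF values per muscle. A minimal illustrative sketch (names and the small epsilon are assumptions, not the paper's code):

```python
import numpy as np

def fat_fraction(fat, water, mask, eps=1e-8):
    """Mean fat fraction (%) inside a muscle mask, from Dixon
    fat-only and water-only images; eps guards against division by zero."""
    ff = fat / (fat + water + eps)
    return 100.0 * ff[mask].mean()

def rmse(a, b):
    """Root mean square error between two sequences of FF values."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))
```

With a fat image of constant 2 and a water image of constant 8, the fat fraction over any mask is 20%.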
11. Automatic labeling of cortical sulci using patch- or CNN-based segmentation techniques combined with bottom-up geometric constraints. Med Image Anal 2020;62:101651. PMID: 32163879. DOI: 10.1016/j.media.2020.101651.
Abstract
The extreme variability of the folding pattern of the human cortex makes the recognition of cortical sulci, both automatic and manual, particularly challenging. Reliable identification of the human cortical sulci in their entirety is extremely difficult and is practiced by only a few experts. Moreover, the sulci correspond to more than a hundred different structures, which makes manual labeling long and tedious and therefore limits the availability of the large labeled databases needed to train machine learning models. Here, we seek to improve the current model proposed in the Morphologist toolbox, a widely used sulcus recognition toolbox included in the BrainVISA package. Two novel approaches are proposed: patch-based multi-atlas segmentation (MAS) techniques and convolutional neural network (CNN)-based approaches. Both are widely applied to anatomical segmentation because they embed much better representations of inter-subject variability than approaches based on a single template atlas. However, these methods typically focus on voxel-wise labeling, disregarding geometrical and topological properties of interest for sulcus morphometry. Therefore, we propose to refine these approaches with domain-specific bottom-up geometric constraints provided by the Morphologist toolbox. These constraints are used to assign a single sulcus label to each topologically elementary fold, the building blocks of the pattern-recognition problem. To eliminate the shortcomings associated with Morphologist's pre-segmentation into elementary folds, we complement this regularization scheme with a top-down perspective that triggers an additional cleavage of the elementary folds when required. All the newly proposed models outperform the current Morphologist model, the most efficient being a CNN U-Net-based approach that carries out sulcus recognition within a few seconds.
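The constraint of one sulcus label per elementary fold can be illustrated by collapsing voxel-wise labels with a majority vote inside each fold. This is a deliberate simplification of the geometric regularization described above, shown only to make the "one label per building block" idea concrete:

```python
import numpy as np

def fold_labels(voxel_labels, fold_ids):
    """Collapse voxel-wise sulcus labels to a single label per
    elementary fold by majority vote inside each fold.
    voxel_labels: (n_voxels,) integer sulcus labels from a voxel-wise model.
    fold_ids:     (n_voxels,) id of the elementary fold each voxel belongs to.
    Returns {fold_id: consensus_label}."""
    out = {}
    for f in np.unique(fold_ids):
        votes = voxel_labels[fold_ids == f]
        out[int(f)] = int(np.bincount(votes).argmax())
    return out
```

The real pipeline trades votes off against geometric plausibility rather than counting them blindly, but the fold-level decision structure is the same.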
12. Multi-atlas active contour segmentation method using template optimization algorithm. BMC Med Imaging 2019;19:42. PMID: 31126254. PMCID: PMC6534882. DOI: 10.1186/s12880-019-0340-6.
Abstract
Background Brain image segmentation is the basis of, and key to, brain disease diagnosis, treatment planning and 3D tissue reconstruction, and segmentation accuracy directly affects the therapeutic effect. Manual segmentation of these images is time-consuming and subjective, so it is important to investigate semi-automatic and automatic segmentation methods. In this paper, we propose a semi-automatic image segmentation method that combines a multi-atlas registration method with an active contour model (ACM). Method We propose a multi-atlas active contour segmentation method using a template optimization algorithm. First, a multi-atlas registration method is used to obtain the prior shape information of the target tissue, and a label fusion algorithm is used to generate the initial template. Second, a template optimization algorithm is used to reduce the multi-atlas registration errors and generate the initial active contour (IAC). Finally, an ACM is used to segment the target tissue. Results The proposed method was applied to the challenging, publicly available MR datasets IBSR and MRBrainS13. In the MRBrainS13 datasets, we obtained an average thalamus Dice similarity coefficient of 0.927 ± 0.014 and an average Hausdorff distance (HD) of 2.92 ± 0.53. In the IBSR datasets, we obtained an average white matter (WM) Dice similarity coefficient of 0.827 ± 0.04 and an average gray matter (GM) Dice similarity coefficient of 0.853 ± 0.03. Conclusion In this paper, we propose a semi-automatic brain image segmentation method. Its main contributions are as follows: 1) Our method uses a multi-atlas registration method based on affine transformation, which effectively reduces the multi-atlas registration time compared to complex nonlinear registration methods; the average registration time per target image is 255 s on the IBSR datasets and 409 s on the MRBrainS13 datasets. 2) We use a template optimization algorithm to reduce registration error and generate a continuous IAC. 3) Finally, we use an ACM to segment the target tissue and obtain a smooth, continuous target contour.
13. Thalamus Optimized Multi Atlas Segmentation (THOMAS): fast, fully automated segmentation of thalamic nuclei from structural MRI. Neuroimage 2019;194:272-282. PMID: 30894331. DOI: 10.1016/j.neuroimage.2019.03.021.
Abstract
The thalamus and its nuclei are largely indistinguishable on standard T1- or T2-weighted MRI. While diffusion-tensor-imaging-based methods have been proposed to segment the thalamic nuclei based on the angular orientation of the principal diffusion tensor, these rely on echo planar imaging, which is inherently limited in spatial resolution and suffers from distortion. We present a multi-atlas segmentation technique based on white-matter-nulled MP-RAGE imaging that segments the thalamus into 12 nuclei with computation times on the order of 10 min on a desktop PC; we call this method THOMAS (THalamus Optimized Multi Atlas Segmentation). THOMAS was rigorously evaluated on 7T MRI data acquired from healthy volunteers and patients with multiple sclerosis by comparing against manual segmentations delineated by a neuroradiologist, guided by the Morel atlas. Segmentation accuracy was very high, with uniformly high Dice indices: at least 0.85 for large nuclei like the pulvinar and mediodorsal nuclei and at least 0.7 even for small structures such as the habenular, centromedian, and lateral and medial geniculate nuclei. Volume similarity indices ranged from 0.82 for the smaller nuclei to 0.97 for the larger nuclei. Volumetry revealed that the volumes of the right anteroventral, right ventral posterior lateral, and both right and left pulvinar nuclei were significantly lower in MS patients compared to controls, after adjusting for age, sex and intracranial volume. Lastly, we evaluated the potential of this method for targeting the Vim nucleus for deep brain surgery and focused ultrasound thalamotomy by overlaying the Vim nucleus segmented from pre-operative data on post-operative data. The locations of the ablated region and active DBS contact corresponded well with the segmented Vim nucleus.
Our fast, direct structural MRI based segmentation method opens the door for MRI guided intra-operative procedures like thalamotomy and asleep DBS electrode placement as well as for accurate quantification of thalamic nuclear volumes to follow progression of neurological disorders.
14. Dual-modality multi-atlas segmentation of torso organs from [18F]FDG-PET/CT images. Int J Comput Assist Radiol Surg 2018;14:473-482. PMID: 30390179. DOI: 10.1007/s11548-018-1879-3.
Abstract
PURPOSE Automated segmentation of torso organs from positron emission tomography/computed tomography (PET/CT) images is a prerequisite for nuclear medicine image analysis. However, accurate organ segmentation from clinical PET/CT is challenging due to the poor soft-tissue contrast of the low-dose CT image and the low spatial resolution of the PET image. To overcome these challenges, we developed a multi-atlas segmentation (MAS) framework for torso organ segmentation from 2-deoxy-2-[18F]fluoro-D-glucose PET/CT images. METHOD Our key idea is to use PET information to compensate for the imperfect CT contrast and to use surface-based atlas fusion to overcome the low PET resolution. First, all organs are segmented from CT using a conventional MAS method, and the abdominal region of the PET image is automatically cropped. Within the cropped PET image, a refined MAS segmentation of the abdominal organs is performed, using a surface-based atlas fusion approach to reach subvoxel accuracy. RESULTS The method was validated on 69 PET/CT images. The Dice coefficients of the target organs were between 0.80 and 0.96, and the average surface distances (ASD) were between 1.58 and 2.44 mm. Compared to the CT-based segmentation, the PET-based segmentation gained a Dice increase of 0.06 and an ASD decrease of 0.38 mm. The surface-based atlas fusion led to significant accuracy improvements for the liver and kidneys and saved ~10 min of computation time compared to volumetric atlas fusion. CONCLUSIONS The presented method achieves better segmentation accuracy than a conventional MAS method, within a computation time acceptable for clinical applications.
15. Fast anatomy segmentation by combining coarse scale multi-atlas label fusion with fine scale corrective learning. Comput Med Imaging Graph 2018;68:16-24. PMID: 29870822. DOI: 10.1016/j.compmedimag.2018.05.002.
Abstract
Deformable-registration-based multi-atlas segmentation has been successfully applied in a broad range of anatomy segmentation applications. However, this excellent performance comes with a high computational burden due to the requirements of deformable image registration and voxel-wise label fusion. To address this problem, we investigate the role of corrective learning (Wang et al., 2011) in speeding up multi-atlas segmentation. We propose to combine multi-atlas segmentation with corrective learning in a multi-scale analysis fashion to achieve faster speeds. First, multi-atlas segmentation is applied at a low spatial resolution. After resampling the segmentation result back to the native image space, learning-based error correction is applied to fix the systematic errors introduced by performing multi-atlas segmentation at low resolution. In cardiac CT and brain MR segmentation experiments, we show that applying multi-atlas segmentation at a coarse scale, followed by learning-based error correction in the native space, substantially reduces the overall computational cost with only a modest or no sacrifice in segmentation accuracy.
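The speed-up comes from running the expensive fusion at low resolution and resampling the labels back to the native grid before error correction. A generic sketch of that resolution round-trip (the corrective-learning classifier itself is omitted, and `segment_fn` stands in for any coarse segmenter):

```python
import numpy as np
from scipy.ndimage import zoom

def coarse_then_upsample(image, segment_fn, factor):
    """Run an expensive segmenter at low resolution, then resample the
    label map back to the native grid.
    image:      native-resolution intensity array.
    segment_fn: callable mapping an image to an integer label map.
    factor:     integer downsampling factor per axis."""
    small = zoom(image, 1.0 / factor, order=1)   # downsample intensities
    labels = segment_fn(small)                   # cheap coarse segmentation
    return zoom(labels, factor, order=0)         # nearest-neighbor upsample of labels
```

Since registration and fusion cost scale with voxel count, a factor of 2 per axis cuts the dominant work roughly eightfold in 3D; the learned corrector then repairs the blocky boundaries this leaves behind.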
16
|
Superpixel and multi-atlas based fusion entropic model for the segmentation of X-ray images. Med Image Anal 2018; 48:58-74. [PMID: 29852311 DOI: 10.1016/j.media.2018.05.006] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2017] [Revised: 05/09/2018] [Accepted: 05/11/2018] [Indexed: 11/15/2022]
Abstract
X-ray image segmentation is an important and crucial step for three-dimensional (3D) bone reconstruction, whose ultimate goal is to increase the effectiveness of computer-aided diagnosis, surgery, and treatment planning. However, this segmentation task is rather challenging, particularly when dealing with complicated human structures in the lower limb such as the patella, talus and pelvis. In this work, we present a multi-atlas fusion framework for the automatic segmentation of these complex bone regions from a single X-ray view. The first originality of the proposed approach lies in the use of a (training) dataset of co-registered/pre-segmented X-ray images of the aforementioned bone regions (or multi-atlas) to estimate a collection of superpixels, allowing us to take into account all the nonlinear and local variability of bone regions existing in the training dataset and also to simplify the superpixel map pruning process related to our strategy. The second originality is a novel label propagation step based on the entropy concept for refining the resulting segmentation map into the most likely internal regions of the final consensus segmentation. In this framework, a leave-one-out cross-validation process was performed on a dataset of 31 manually segmented radiographic images for each bone structure in order to rigorously evaluate the efficiency of the proposed method. The proposed method resulted in more accurate segmentations compared to the probabilistic patch-based label fusion model (PB) and the classical patch-based majority voting fusion scheme (MV) using different registration strategies. Comparison with manual (gold standard) segmentations revealed that the classification accuracy of our unsupervised segmentation scheme is 93.79% for the patella, 88.3% for the talus, and 85.02% for the pelvis; scores that fall within the range of accuracy levels of manual segmentations (given intra-/inter-observer variability).
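The entropy concept used for label propagation rests on the Shannon entropy of the labels that the co-registered atlases propagate into a superpixel: low entropy means high atlas consensus. A minimal illustration with hypothetical "bone"/"background" votes (this is only the entropy measure, not the paper's pruning or propagation strategy):

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (bits) of the label distribution propagated into one
    superpixel from the registered atlases; low entropy = high consensus."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# one superpixel receives 'bone' from 7 of 8 atlases -> near-certain
print(round(label_entropy(["bone"] * 7 + ["background"]), 3))  # 0.544
# an ambiguous superpixel splits evenly -> maximal entropy of 1 bit
print(label_entropy(["bone"] * 4 + ["background"] * 4))        # 1.0
```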
17
|
Supervoxel based method for multi-atlas segmentation of brain MR images. Neuroimage 2018; 175:201-214. [PMID: 29625235 DOI: 10.1016/j.neuroimage.2018.04.001] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2017] [Revised: 03/30/2018] [Accepted: 04/01/2018] [Indexed: 01/01/2023] Open
Abstract
Multi-atlas segmentation has been widely applied to the analysis of brain MR images. However, the state-of-the-art techniques in multi-atlas segmentation, including both patch-based and learning-based methods, are strongly dependent on pairwise registration or exhibit large spatial inconsistency. This paper proposes a new segmentation framework based on supervoxels to address these challenges. A supervoxel is an aggregation of voxels with similar attributes, which can be used to replace the voxel grid. By formulating the segmentation as a tissue labeling problem associated with maximum-a-posteriori inference in a Markov random field, the problem is solved via a graphical model with supervoxels considered as the nodes. In addition, a dense labeling scheme is developed to refine the supervoxel labeling results, and spatial consistency is incorporated in the proposed method. The proposed approach is robust to pairwise registration errors and is highly computationally efficient. Extensive experimental evaluations on three publicly available brain MR datasets demonstrate the effectiveness and superior performance of the proposed approach.
18
|
Longitudinally and inter-site consistent multi-atlas based parcellation of brain anatomy using harmonized atlases. Neuroimage 2018; 166:71-78. [PMID: 29107121 PMCID: PMC5748021 DOI: 10.1016/j.neuroimage.2017.10.026] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2017] [Revised: 09/29/2017] [Accepted: 10/13/2017] [Indexed: 11/17/2022] Open
Abstract
As longitudinal and multi-site studies become increasingly frequent in neuroimaging, maintaining longitudinal and inter-scanner consistency of brain parcellation has become a major challenge due to variation in scanner models and/or image acquisition protocols across scanners and sites. We present a new automated segmentation method specifically designed to achieve a consistent parcellation of anatomical brain structures in such heterogeneous datasets. Our method combines a site-specific atlas creation strategy with a state-of-the-art multi-atlas anatomical label fusion framework. Site-specific atlases are computed such that they preserve image intensity characteristics of each site's scanner and acquisition protocol, while atlas pairs share anatomical labels in a way consistent with inter-scanner acquisition variations. This harmonization of atlases improves inter-study and longitudinal consistency of segmentations in the subsequent consensus labeling step. We tested this approach on a large sample of older adults from the Baltimore Longitudinal Study of Aging (BLSA) who had longitudinal scans acquired using two scanners that vary with respect to vendor and image acquisition protocol. We compared the proposed method to standard multi-atlas segmentation for both cross-sectional and longitudinal analyses. The harmonization significantly reduced scanner-related differences in the age trends of ROI volumes, improved longitudinal consistency of segmentations, and resulted in higher across-scanner intra-class correlations, particularly in the white matter.
19
|
Abstract
Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.
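The Kalman filtering used to enforce temporal consistency across the 3DE frames can be illustrated with a minimal 1D filter under a random-walk state model, smoothing a hypothetical per-frame valve measurement (e.g. an annular diameter); the paper's actual state model and parameters are not reproduced here:

```python
def kalman_1d(measurements, q=1e-3, r=0.25):
    """Minimal 1D Kalman filter (random-walk state model) smoothing a noisy
    per-frame measurement across the cardiac cycle.
    q = process-noise variance, r = measurement-noise variance."""
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    out = [x]
    for z in measurements[1:]:
        p += q                    # predict: state unchanged, variance grows
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the new measurement
        p *= (1 - k)
        out.append(x)
    return out

noisy = [3.0, 3.4, 2.9, 3.3, 3.1]   # hypothetical frame-by-frame diameters (cm)
smooth = kalman_1d(noisy)           # temporally consistent estimates
```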
20
|
Learning non-linear patch embeddings with neural networks for label fusion. Med Image Anal 2017; 44:143-155. [PMID: 29247877 DOI: 10.1016/j.media.2017.11.013] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2017] [Revised: 10/05/2017] [Accepted: 11/27/2017] [Indexed: 12/29/2022]
Abstract
In brain structural segmentation, multi-atlas strategies are increasingly being used over single-atlas strategies because of their ability to capture a wider range of anatomical variability. Patch-based label fusion (PBLF) is one such multi-atlas approach that labels each target point as a weighted combination of neighboring atlas labels, where atlas points with higher local similarity to the target contribute more strongly to label fusion. PBLF can potentially be improved by increasing the discriminative capability of the local image similarity measurements. We propose a framework to compute patch embeddings using neural networks so as to increase the discriminative abilities of similarity-based weighted voting in PBLF. As particular cases, our framework includes embeddings of different complexities, namely, a simple scaling, an affine transformation, and non-linear transformations. We compare our method with state-of-the-art alternatives in whole hippocampus and hippocampal subfields segmentation experiments using publicly available datasets. Results show that even the simplest versions of our method outperform standard PBLF, thus evidencing the benefits of discriminative learning. More complex transformation models tended to achieve better results than simpler ones, obtaining a considerable increase in average Dice score compared to standard PBLF.
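Standard PBLF, the baseline this paper improves on, can be sketched as similarity-weighted voting in raw intensity space; the learned embeddings would transform the patches before the squared-distance kernel is applied. The patch values, labels, and bandwidth h below are purely illustrative:

```python
import numpy as np

def pblf_vote(target_patch, atlas_patches, atlas_labels, h=0.1):
    """Patch-based label fusion: each atlas patch votes for its centre label
    with weight exp(-||target - atlas||^2 / h), and the label with the
    largest accumulated weight wins."""
    weights = np.array([np.exp(-np.sum((target_patch - p) ** 2) / h)
                        for p in atlas_patches])
    votes = {}
    for w, lab in zip(weights, atlas_labels):
        votes[lab] = votes.get(lab, 0.0) + w
    return max(votes, key=votes.get)

target = np.array([0.9, 1.0, 0.8])
patches = [np.array([0.9, 1.0, 0.8]),    # nearly identical -> dominant weight
           np.array([0.1, 0.0, 0.2]),
           np.array([0.2, 0.1, 0.0])]
print(pblf_vote(target, patches, ["hippocampus", "bg", "bg"]))  # hippocampus
```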
21
|
Discriminative confidence estimation for probabilistic multi-atlas label fusion. Med Image Anal 2017; 42:274-287. [PMID: 28888171 DOI: 10.1016/j.media.2017.08.008] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2017] [Revised: 06/26/2017] [Accepted: 08/29/2017] [Indexed: 12/31/2022]
Abstract
Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently been shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists of propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion either rely on local patch similarity, probabilistic statistical frameworks or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors.
22
|
Image Segmentation and Modeling of the Pediatric Tricuspid Valve in Hypoplastic Left Heart Syndrome. FUNCTIONAL IMAGING AND MODELING OF THE HEART : ... INTERNATIONAL WORKSHOP, FIMH ..., PROCEEDINGS. FIMH 2017; 10263:95-105. [PMID: 29756127 DOI: 10.1007/978-3-319-59448-4_10] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Hypoplastic left heart syndrome (HLHS) is a single-ventricle congenital heart disease that is fatal if left unpalliated. In HLHS patients, the tricuspid valve is the only functioning atrioventricular valve, and its competence is therefore critical. This work demonstrates the first automated strategy for segmentation, modeling, and morphometry of the tricuspid valve in transthoracic 3D echocardiographic (3DE) images of pediatric patients with HLHS. After initial landmark placement, the automated segmentation step uses multi-atlas label fusion and the modeling approach uses deformable modeling with medial axis representation to produce patient-specific models of the tricuspid valve that can be comprehensively and quantitatively assessed. In a group of 16 pediatric patients, valve segmentation and modeling attains an accuracy (mean boundary displacement) of 0.8 ± 0.2 mm relative to manual tracing and shows consistency in annular and leaflet measurements. In the future, such image-based tools have the potential to improve understanding and evaluation of tricuspid valve morphology in HLHS and guide strategies for patient care.
23
|
Abstract
Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing the labels of atlas image patches with similar anatomical structures. However, such an assumption on image patch representation does not always hold in label fusion since (1) the image content within the patch may be corrupted due to noise and artifact; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced such that the majority patterns can dominate the label fusion result over other minority patterns. The violation of the above basic assumptions could significantly undermine the label fusion accuracy. To overcome these issues, we first form a label-specific group for the atlas patches with the same label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patchwise residual information at different scales. Thus, the label fusion follows the representation consensus across representative dictionaries. The representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns, and by using all residual patterns across groups collaboratively to overcome the issue that some groups might lack certain variation patterns present in the target image patch.
Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as basal ganglia and brainstem structures, compared to counterpart label fusion methods.
24
|
Cardiac atlas development and validation for automatic segmentation of cardiac substructures. Radiother Oncol 2016; 122:66-71. [PMID: 27939201 DOI: 10.1016/j.radonc.2016.11.016] [Citation(s) in RCA: 69] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2016] [Revised: 11/16/2016] [Accepted: 11/21/2016] [Indexed: 12/25/2022]
Abstract
PURPOSE To develop and validate a set of atlases for auto-contouring cardiac substructures. METHODS Eight radiation oncologists manually and independently delineated 15 cardiac substructures from noncontrast CT images of 6 patients by referring to their respective fused contrast CT images. Individual contours were fused together for each structure, edited by 2 physicians, and became atlases used to delineate 6 other patients. The auto-delineated contours of the 6 additional patients then served as templates for manual contouring. These 12 patients with well-defined contours composed the final atlases for multi-atlas segmentation. RESULTS The average time for manually contouring the 15 cardiac substructures was about 40 min. Inter-observer variability was small for the heart, the chambers, and the aorta compared with that for other structures that were not clearly distinguishable in CT images. The mean Dice similarity coefficient and mean surface distance of auto-segmented contours were within one standard deviation of expert contouring variability. Good agreement between auto-segmented and manual contours was observed for the heart, the chambers, and the great vessels. Independent validation on 19 additional patients showed reasonable agreement for the heart chambers. CONCLUSIONS A set of cardiac atlases was created for auto-contouring from noncontrast CT images. The accuracy of auto-contouring for the heart, chambers, and great vessels was validated for potential clinical use.
25
|
Multi-atlas and unsupervised learning approach to perirectal space segmentation in CT images. AUSTRALASIAN PHYSICAL & ENGINEERING SCIENCES IN MEDICINE 2016; 39:933-941. [PMID: 27844331 DOI: 10.1007/s13246-016-0496-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/18/2016] [Accepted: 10/31/2016] [Indexed: 11/27/2022]
Abstract
Perirectal space segmentation in computed tomography (CT) images aids in quantifying the radiation dose received by healthy tissues and the toxicity during radiation therapy treatment of the prostate. Radiation dose normalised by tissue volume facilitates predicting outcomes or possible harmful side effects of radiation therapy treatment. Manual segmentation of the perirectal space is time consuming and challenging in the presence of inter-patient anatomical variability and may suffer from inter- and intra-observer variabilities. However, automatic or semi-automatic segmentation of the perirectal space in CT images is also a challenging task due to inter-patient anatomical variability, contrast variability, and imaging artifacts. In the model presented here, a volume of interest is obtained with a multi-atlas based segmentation approach. Unsupervised learning in the volume of interest with a Gaussian-mixture-modeling-based clustering approach is adopted to achieve a soft segmentation of the perirectal space. Probabilities from soft clustering are further refined by rigid registration of the multi-atlas mask in a probabilistic domain. A maximum a posteriori approach is adopted to obtain a binary segmentation from the refined probabilities. A mean volume similarity value of 97% and a mean surface difference of 3.06 ± 0.51 mm is achieved in a leave-one-patient-out validation framework with a subset of a clinical trial dataset. Qualitative results show a good approximation of the perirectal space volume compared to the ground truth.
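The soft-clustering-then-MAP step can be sketched with a two-component 1D Gaussian mixture fitted by EM, where the MAP labelling is simply the argmax of the posterior responsibilities. This is a pure-NumPy illustration on hypothetical intensities; the paper works in 3D within a multi-atlas volume of interest and refines the probabilities before the MAP step:

```python
import numpy as np

def gmm_em_1d(x, iters=50):
    """Two-component 1D Gaussian mixture fitted by EM; returns per-sample
    posterior (soft) memberships, from which a MAP (hard) labelling follows."""
    mu = np.array([x.min(), x.max()], float)         # spread initial means
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities under the current parameters
        lik = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
              / np.sqrt(2 * np.pi * var)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return resp

x = np.array([0.1, 0.2, 0.15, 0.9, 1.0, 0.95])   # two intensity clusters
resp = gmm_em_1d(x)                               # soft segmentation
labels = resp.argmax(axis=1)                      # MAP binary segmentation
```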
26
|
Improving Spleen Volume Estimation Via Computer-assisted Segmentation on Clinically Acquired CT Scans. Acad Radiol 2016; 23:1214-20. [PMID: 27519156 DOI: 10.1016/j.acra.2016.05.015] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2016] [Revised: 04/26/2016] [Accepted: 05/04/2016] [Indexed: 12/11/2022]
Abstract
OBJECTIVES Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomic structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically acquired computed tomography (CT) scans. MATERIALS AND METHODS Under an institutional review board approval, we obtained 294 de-identified (Health Insurance Portability and Accountability Act-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1 - manual segmentation of all scans, Pipeline 2 - automated segmentation of all scans, Pipeline 3 - automated segmentation of all scans with manual segmentation for outliers on a rudimentary visual quality check, and Pipelines 4 and 5 - volumes derived from a unidimensional measurement of craniocaudal spleen length and three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracies of Pipelines 2-5 (Dice similarity coefficient, Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) were compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1-5. RESULTS Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, average absolute volume deviation of 23.7 cm³, and time cost of 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient of 0.98, absolute deviation of 46.92 cm³, and time cost of 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. CONCLUSION A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency.
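The volumetric comparison underlying Pipelines 1-5 reduces to counting segmented voxels, scaling by voxel size, and measuring deviation from the ground truth. A sketch with a hypothetical mask and voxel spacing (not the study's data):

```python
import numpy as np

def volume_cm3(mask: np.ndarray, spacing_mm=(0.8, 0.8, 3.0)) -> float:
    """Structure volume in cm^3 from a binary voxel mask and voxel spacing
    (mm per voxel along each axis; values here are illustrative)."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.sum() * voxel_mm3 / 1000.0

def pct_deviation(estimate: float, truth: float) -> float:
    """Percent deviation of an estimated volume from the ground truth."""
    return 100.0 * abs(estimate - truth) / truth

mask = np.zeros((100, 100, 40), bool)
mask[20:80, 20:80, 5:35] = True          # hypothetical splenic region
v = volume_cm3(mask)                     # 108000 voxels * 1.92 mm^3 = 207.36 cm^3
```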
27
|
A dynamic tree-based registration could handle possible large deformations among MR brain images. Comput Med Imaging Graph 2016; 52:1-7. [PMID: 27235894 PMCID: PMC4930896 DOI: 10.1016/j.compmedimag.2016.04.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2016] [Revised: 04/18/2016] [Accepted: 04/27/2016] [Indexed: 11/16/2022]
Abstract
Multi-atlas segmentation is a powerful approach to automated anatomy delineation via fusing label information from a set of spatially normalized atlases. For simplicity, many existing methods perform pairwise image registration, leading to inaccurate segmentation especially when shape variation is large. In this paper, we propose a dynamic tree-based strategy for effective large-deformation registration and multi-atlas segmentation. To deal with local minima caused by large shape variation, coarse estimates of deformations are first obtained via alignment of automatically localized landmark points. The dynamic tree capturing the structural relationships between images is then employed to further reduce misalignment errors. Evaluation based on two real human brain datasets, ADNI and LPBA40, shows that our method significantly improves registration and segmentation accuracy.
28
|
Consistent cortical reconstruction and multi-atlas brain segmentation. Neuroimage 2016; 138:197-210. [PMID: 27184203 DOI: 10.1016/j.neuroimage.2016.05.030] [Citation(s) in RCA: 71] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2016] [Accepted: 05/10/2016] [Indexed: 01/14/2023] Open
Abstract
Whole brain segmentation and cortical surface reconstruction are two essential techniques for investigating the human brain. Spatial inconsistencies, which can hinder further integrated analyses of brain structure, can arise because these two tasks are typically conducted independently of each other. FreeSurfer obtains self-consistent whole brain segmentations and cortical surfaces. It starts with subcortical segmentation, then carries out cortical surface reconstruction, and ends with cortical segmentation and labeling. However, this "segmentation to surface to parcellation" strategy has shown limitations in various cohorts such as older populations with large ventricles. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. A modification called MaCRUISE(+) is designed to perform well when white matter lesions are present. Compared to the benchmarks CRUISE and FreeSurfer, the surface accuracy of MaCRUISE and MaCRUISE(+) is validated using two independent datasets with expertly placed cortical landmarks. A third independent dataset with expertly delineated volumetric labels is employed to compare segmentation performance. Finally, 200 MR volumetric images from an older adult sample are used to assess the robustness of MaCRUISE and FreeSurfer. The advantages of MaCRUISE are: (1) MaCRUISE constructs self-consistent voxelwise segmentations and cortical surfaces, while MaCRUISE(+) is robust to white matter pathology. (2) MaCRUISE achieves more accurate whole brain segmentations than independently conducting the multi-atlas segmentation. (3) MaCRUISE is comparable in accuracy to FreeSurfer (when FreeSurfer does not exhibit global failures) while achieving greater robustness across an older adult population.
MaCRUISE has been made freely available in open source.
29
|
MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection. Neuroimage 2016; 127:186-195. [PMID: 26679328 PMCID: PMC4806537 DOI: 10.1016/j.neuroimage.2015.11.073] [Citation(s) in RCA: 171] [Impact Index Per Article: 21.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2015] [Revised: 11/30/2015] [Accepted: 11/30/2015] [Indexed: 11/21/2022] Open
Abstract
Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images.
30
|
Performance of single and multi-atlas based automated landmarking methods compared to expert annotations in volumetric microCT datasets of mouse mandibles. Front Zool 2015; 12:33. [PMID: 26628903 PMCID: PMC4666065 DOI: 10.1186/s12983-015-0127-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2015] [Accepted: 11/19/2015] [Indexed: 12/31/2022] Open
Abstract
BACKGROUND Here we present an application of the advanced registration and atlas building framework DRAMMS to the automated annotation of mouse mandibles through a series of tests using single- and multi-atlas segmentation paradigms, and compare the outcomes to the current gold standard, manual annotation. RESULTS Our results showed that the multi-atlas annotation procedure yields landmark precisions within the human observer error range. The mean shape estimates from the gold standard and the multi-atlas annotation procedure were statistically indistinguishable for both Euclidean Distance Matrix Analysis (mean form matrix) and Generalized Procrustes Analysis (Goodall F-test). Further research is needed to validate the consistency of variance-covariance matrix estimates from both methods with larger sample sizes. CONCLUSION The multi-atlas annotation procedure shows promise as a framework to facilitate truly high-throughput phenomic analyses by channeling investigators' efforts into annotating only a small portion of their datasets.
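The Procrustes superimposition underlying the shape comparison can be sketched with the SVD-based (Kabsch) solution for a single pair of landmark configurations; note this is ordinary Procrustes alignment of one pair, not the Generalized Procrustes Analysis over a whole sample used in the study, and the landmark coordinates are illustrative:

```python
import numpy as np

def procrustes_align(A: np.ndarray, B: np.ndarray):
    """Optimal rotation (Kabsch/SVD solution to the orthogonal Procrustes
    problem) superimposing landmark set B onto A after centring;
    returns the aligned copy of B and the residual RMSD."""
    A0 = A - A.mean(axis=0)
    B0 = B - B.mean(axis=0)
    U, _, Vt = np.linalg.svd(B0.T @ A0)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    B_aligned = B0 @ R + A.mean(axis=0)
    rmsd = np.sqrt(((B_aligned - A) ** 2).sum(axis=1).mean())
    return B_aligned, rmsd

# a square of landmarks vs. the same square rotated 90 degrees and shifted
A = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
B = A @ Rz.T + np.array([5.0, 3.0])
_, rmsd = procrustes_align(A, B)   # residual is ~0: shapes are identical
```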
31
|
Multi-atlas learner fusion: An efficient segmentation approach for large-scale data. Med Image Anal 2015; 26:82-91. [PMID: 26363845 DOI: 10.1016/j.media.2015.08.010] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2014] [Revised: 07/24/2015] [Accepted: 08/20/2015] [Indexed: 12/01/2022]
Abstract
We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that, under the MLF framework, the large-scale data model significantly improves segmentation over the small-scale model, and (5) indicate that the MLF framework has performance comparable to state-of-the-art multi-atlas segmentation algorithms without using non-local information.
32
|
Multi-atlas segmentation of biomedical images: A survey. Med Image Anal 2015; 24:205-219. [PMID: 26201875 PMCID: PMC4532640 DOI: 10.1016/j.media.2015.06.012] [Citation(s) in RCA: 353] [Impact Index Per Article: 39.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2014] [Revised: 06/12/2015] [Accepted: 06/15/2015] [Indexed: 10/23/2022]
Abstract
Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing, et al. (2004), Klein, et al. (2005), and Heckemann, et al. (2006), is becoming one of the most widely-used and successful image segmentation techniques in biomedical applications. By manipulating and utilizing the entire dataset of "atlases" (training images that have been previously labeled, e.g., manually by an expert), rather than some model-based average representation, MAS has the flexibility to better capture anatomical variation, thus offering superior segmentation accuracy. This benefit, however, typically comes at a high computational cost. Recent advancements in computer hardware and image processing software have been instrumental in addressing this challenge and facilitated the wide adoption of MAS. Today, MAS has come a long way and the approach includes a wide array of sophisticated algorithms that employ ideas from machine learning, probabilistic modeling, optimization, and computer vision, among other fields. This paper presents a survey of published MAS algorithms and studies that have applied these methods to various biomedical problems. In writing this survey, we have three distinct aims. Our primary goal is to document how MAS was originally conceived, later evolved, and now relates to alternative methods. Second, this paper is intended to be a detailed reference of past research activity in MAS, which now spans over a decade (2003-2014) and entails novel methodological developments and application-specific solutions. Finally, our goal is to also present a perspective on the future of MAS, which, we believe, will be one of the dominant approaches in biomedical image segmentation.
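The baseline fusion strategy surveyed here, per-voxel majority voting over propagated atlas labels, can be sketched in a few lines of Python. This is an illustrative toy (the function name is ours), assuming the atlas label maps have already been registered to the target image:

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Fuse registered atlas label maps by per-voxel majority vote.

    atlas_labels: array-like of shape (n_atlases, ...) of integer labels.
    """
    labels = np.asarray(atlas_labels)
    candidates = np.unique(labels)
    # Count, for each candidate label, how many atlases vote for it per voxel.
    votes = np.stack([(labels == c).sum(axis=0) for c in candidates])
    # The fused segmentation takes the most-voted label at every voxel.
    return candidates[np.argmax(votes, axis=0)]

# Three toy "atlases" voting on a 4-voxel image:
fused = majority_vote_fusion([[0, 1, 1, 2],
                              [0, 1, 2, 2],
                              [1, 1, 2, 0]])
print(fused.tolist())  # [0, 1, 2, 2]
```

The more sophisticated fusion algorithms the survey covers (weighted voting, statistical fusion, patch-based methods) replace the uniform vote counts with learned or locally estimated weights.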
33
Efficient multi-atlas abdominal segmentation on clinically acquired CT with SIMPLE context learning. Med Image Anal 2015; 24:18-27. [PMID: 26046403 DOI: 10.1016/j.media.2015.05.009] [Citation(s) in RCA: 62] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2014] [Revised: 04/14/2015] [Accepted: 05/13/2015] [Indexed: 11/16/2022]
Abstract
Abdominal segmentation on clinically acquired computed tomography (CT) has been a challenging problem given the inter-subject variance of human abdomens and complex 3-D relationships among organs. Multi-atlas segmentation (MAS) provides a potentially robust solution by leveraging label atlases via image registration and statistical fusion. We posit that the efficiency of atlas selection requires further exploration in the context of substantial registration errors. The selective and iterative method for performance level estimation (SIMPLE) method is a MAS technique integrating atlas selection and label fusion that has proven effective for prostate radiotherapy planning. Herein, we revisit atlas selection and fusion techniques for segmenting 12 abdominal structures using clinically acquired CT. Using a re-derived SIMPLE algorithm, we show that performance on multi-organ classification can be improved by accounting for exogenous information through Bayesian priors (so-called context learning). These innovations are integrated with the joint label fusion (JLF) approach to reduce the impact of correlated errors among selected atlases for each organ, and a graph cut technique is used to regularize the combined segmentation. In a study of 100 subjects, the proposed method outperformed other comparable MAS approaches, including majority vote, SIMPLE, JLF, and the Wolz locally weighted vote technique. The proposed technique provides consistent improvement over state-of-the-art approaches (median improvement of 7.0% and 16.2% in DSC over JLF and Wolz, respectively) and moves toward efficient segmentation of large-scale clinically acquired CT data for biomarker screening, surgical navigation, and data mining.
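The selective and iterative loop at the heart of SIMPLE can be sketched as follows: fuse the retained atlases, score each atlas against the consensus, and discard outliers. This is a simplified binary-mask toy under our own assumptions (a majority-vote consensus and a mean-minus-alpha-sigma Dice threshold); it does not reproduce the cited paper's Bayesian context priors or graph-cut regularization.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def simple_selection(atlas_masks, alpha=1.0, max_iter=10):
    """SIMPLE-style iterative atlas selection (sketch).

    Repeatedly fuse the retained masks by majority vote and drop atlases
    whose Dice against the consensus falls below mean - alpha * std.
    Returns the retained atlas indices and the final fused mask.
    """
    masks = [np.asarray(m, bool) for m in atlas_masks]
    keep = list(range(len(masks)))
    for _ in range(max_iter):
        consensus = np.mean([masks[i] for i in keep], axis=0) >= 0.5
        scores = np.array([dice(masks[i], consensus) for i in keep])
        thresh = scores.mean() - alpha * scores.std()
        new_keep = [i for i, s in zip(keep, scores) if s >= thresh]
        if len(new_keep) == len(keep) or len(new_keep) < 2:
            break
        keep = new_keep
    return keep, np.mean([masks[i] for i in keep], axis=0) >= 0.5
```

With three agreeing atlases and one outlier, the outlier is discarded and the fused mask follows the agreeing majority.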
34
Robust whole-brain segmentation: application to traumatic brain injury. Med Image Anal 2014; 21:40-58. [PMID: 25596765 DOI: 10.1016/j.media.2014.12.003] [Citation(s) in RCA: 95] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2014] [Revised: 12/14/2014] [Accepted: 12/15/2014] [Indexed: 11/23/2022]
Abstract
We propose a framework for the robust and fully-automatic segmentation of magnetic resonance (MR) brain images called "Multi-Atlas Label Propagation with Expectation-Maximisation based refinement" (MALP-EM). The presented approach is based on a robust registration approach (MAPER), highly performant label fusion (joint label fusion) and intensity-based label refinement using EM. We further adapt this framework to be applicable for the segmentation of brain images with gross changes in anatomy. We propose to account for consistent registration errors by relaxing anatomical priors obtained by multi-atlas propagation and a weighting scheme to locally combine anatomical atlas priors and intensity-refined posterior probabilities. The method is evaluated on a benchmark dataset used in a recent MICCAI segmentation challenge. In this context we show that MALP-EM is competitive for the segmentation of MR brain scans of healthy adults when compared to state-of-the-art automatic labelling techniques. To demonstrate the versatility of the proposed approach, we employed MALP-EM to segment 125 MR brain images into 134 regions from subjects who had sustained traumatic brain injury (TBI). We employ a protocol to assess segmentation quality when no manual reference labels are available. Based on this protocol, three independent, blinded raters confirmed on 13 MR brain scans with pathology that MALP-EM is superior to established label fusion techniques. We visually confirm the robustness of our segmentation approach on the full cohort and investigate the potential of derived symmetry-based imaging biomarkers that correlate with and predict clinically relevant variables in TBI such as the Marshall Classification (MC) or Glasgow Outcome Score (GOS). Specifically, we show that we are able to stratify TBI patients with favourable outcomes from non-favourable outcomes with 64.7% accuracy using acute-phase MR images and 66.8% accuracy using follow-up MR images. Furthermore, we are able to differentiate subjects with the presence of a mass lesion or midline shift from those with diffuse brain injury with 76.0% accuracy. The thalamus, putamen, pallidum and hippocampus are particularly affected. Their involvement predicts TBI disease progression.
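The EM-based intensity refinement step that gives MALP-EM its name can be sketched in one dimension: starting from spatial label priors (e.g., from multi-atlas propagation), alternately fit a Gaussian intensity model per label and recompute label posteriors. This is our own minimal illustration of generic EM refinement, not the paper's full formulation with relaxed priors and spatial weighting.

```python
import numpy as np

def em_refine(intensities, priors, n_iter=10):
    """Refine per-voxel label posteriors by EM over Gaussian intensity models.

    intensities: (n_voxels,) scalar intensities.
    priors: (n_voxels, n_labels) spatial label priors (rows sum to 1).
    Returns the refined hard labelling (argmax posterior).
    """
    x = np.asarray(intensities, float)
    priors = np.asarray(priors, float)
    post = priors.copy()
    for _ in range(n_iter):
        # M-step: posterior-weighted Gaussian parameters per label.
        w = post.sum(axis=0)
        mu = (post * x[:, None]).sum(axis=0) / w
        var = (post * (x[:, None] - mu) ** 2).sum(axis=0) / w + 1e-6
        # E-step: combine intensity likelihood with the spatial priors.
        like = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        post = priors * like
        post /= post.sum(axis=1, keepdims=True)
    return post.argmax(axis=1)
```

With weakly informative priors and well-separated intensities, the refinement pulls each voxel toward the label whose intensity model fits it best.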
35
Multi-atlas segmentation with augmented features for cardiac MR images. Med Image Anal 2014; 19:98-109. [PMID: 25299433 DOI: 10.1016/j.media.2014.09.005] [Citation(s) in RCA: 70] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2014] [Revised: 09/08/2014] [Accepted: 09/09/2014] [Indexed: 02/07/2023]
Abstract
Multi-atlas segmentation infers the target image segmentation by combining prior anatomical knowledge encoded in multiple atlases. It has been applied quite successfully to medical image segmentation in recent years, yielding highly accurate and robust segmentations for many anatomical structures. However, to guide the label fusion process, most existing multi-atlas segmentation methods utilise only the intensity information within a small patch and may neglect other useful information such as gradient and contextual information (the appearance of surrounding regions). This paper proposes to combine the intensity, gradient and contextual information into an augmented feature vector and incorporate it into multi-atlas segmentation. It also explores an alternative to the K nearest neighbour (KNN) classifier for multi-atlas label fusion, using the support vector machine (SVM) instead. Experimental results on a short-axis cardiac MR data set of 83 subjects demonstrate that the accuracy of multi-atlas segmentation can be significantly improved by using the augmented feature vector. The mean Dice metric of the proposed segmentation framework is 0.81 for the left ventricular myocardium on this data set, compared to 0.79 for conventional multi-atlas patch-based segmentation (Coupé et al., 2011; Rousseau et al., 2011). A major contribution of this paper is its demonstration that the performance of non-local patch-based segmentation can be improved by using augmented features.
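The idea of an augmented feature vector, concatenating a local intensity patch, its gradient, and a coarser contextual summary of the surrounding region, can be illustrated in one dimension. The function, patch sizes, and the choice of a mean-intensity context descriptor are our simplifications for illustration, not the paper's exact features:

```python
import numpy as np

def augmented_feature(image, idx, patch=1, context=3):
    """Build an augmented feature vector for an interior voxel `idx` of a
    1-D signal: intensity patch + gradient patch + contextual summary."""
    x = np.asarray(image, float)
    g = np.gradient(x)                      # central-difference gradient
    lo, hi = idx - patch, idx + patch + 1
    clo, chi = max(idx - context, 0), min(idx + context + 1, len(x))
    intensity = x[lo:hi]                    # local appearance
    gradient = g[lo:hi]                     # local edge information
    context_mean = np.array([x[clo:chi].mean()])  # surrounding appearance
    return np.concatenate([intensity, gradient, context_mean])
```

For label fusion, such vectors replace the plain intensity patches that KNN or SVM classifiers compare between target and atlas voxels.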
36
Hierarchical performance estimation in the statistical label fusion framework. Med Image Anal 2014; 18:1070-81. [PMID: 25033470 DOI: 10.1016/j.media.2014.06.005] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2014] [Revised: 04/17/2014] [Accepted: 06/16/2014] [Indexed: 10/25/2022]
Abstract
Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally - fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. The proposed approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, the primary contributions of this manuscript are: (1) we provide a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) confusion matrices for each rater, (2) we highlight the amenability of the proposed hierarchical formulation to many of the state-of-the-art advancements to the statistical fusion framework, and (3) we demonstrate statistically significant improvement on both simulated and empirical data. Specifically, both theoretically and empirically, we show that the proposed hierarchical performance model provides substantial and significant accuracy benefits when applied to two disparate multi-atlas segmentation tasks: (1) 133 label whole-brain anatomy on structural MR, and (2) orbital anatomy on CT.
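The central quantity in this statistical fusion framework, a per-rater confusion matrix, can be estimated directly when the true labels are known; the hierarchical model in the paper generalises this to multiple tiers of label groupings. A minimal flat-label sketch (our own illustration, not the paper's hierarchical estimator):

```python
import numpy as np

def confusion_matrix(truth, rater, n_labels):
    """Estimate a rater's performance matrix theta, where theta[s, t] is
    the probability the rater reports label t when the true label is s."""
    theta = np.zeros((n_labels, n_labels))
    for s, t in zip(truth, rater):
        theta[s, t] += 1
    # Normalise each true-label row; leave all-zero rows as zeros.
    row = theta.sum(axis=1, keepdims=True)
    return np.divide(theta, row, out=np.zeros_like(theta), where=row > 0)
```

In STAPLE-style fusion these matrices are not observed but estimated jointly with the latent segmentation via EM; the hierarchical extension estimates one such matrix per level of the anatomical label hierarchy.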
37
Groupwise multi-atlas segmentation of the spinal cord's internal structure. Med Image Anal 2014; 18:460-71. [PMID: 24556080 DOI: 10.1016/j.media.2014.01.003] [Citation(s) in RCA: 46] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2013] [Revised: 10/31/2013] [Accepted: 01/21/2014] [Indexed: 12/14/2022]
Abstract
The spinal cord is an essential and vulnerable component of the central nervous system. Differentiating and localizing the spinal cord internal structure (i.e., gray matter vs. white matter) is critical for assessment of therapeutic impacts and determining prognosis of relevant conditions. Fortunately, new magnetic resonance imaging (MRI) sequences enable clinical study of the in vivo spinal cord's internal structure. Yet, low contrast-to-noise ratio, artifacts, and imaging distortions have limited the applicability of tissue segmentation techniques pioneered elsewhere in the central nervous system. Additionally, due to the inter-subject variability exhibited on cervical MRI, typical deformable volumetric registrations perform poorly, limiting the applicability of a typical multi-atlas segmentation framework. Thus, to date, no automated algorithms have been presented for the spinal cord's internal structure. Herein, we present a novel slice-based groupwise registration framework for robustly segmenting cervical spinal cord MRI. Specifically, we provide a method for (1) pre-aligning the slice-based atlases into a groupwise-consistent space, (2) constructing a model of spinal cord variability, (3) projecting the target slice into the low-dimensional space using a model-specific registration cost function, and (4) estimating robust segmentations using geodesically appropriate atlas information. Moreover, the proposed framework provides a natural mechanism for performing atlas selection and initializing the free model parameters in an informed manner. In a cross-validation experiment using 67 MR volumes of the cervical spinal cord, we demonstrate sub-millimetric accuracy, significant quantitative and qualitative improvement over comparable multi-atlas frameworks, and provide insight into the sensitivity of the associated model parameters.
38
Fully automatic segmentation of the mitral leaflets in 3D transesophageal echocardiographic images using multi-atlas joint label fusion and deformable medial modeling. Med Image Anal 2014; 18:118-29. [PMID: 24184435 PMCID: PMC3897209 DOI: 10.1016/j.media.2013.10.001] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2012] [Revised: 09/18/2013] [Accepted: 10/02/2013] [Indexed: 10/26/2022]
Abstract
Comprehensive visual and quantitative analysis of in vivo human mitral valve morphology is central to the diagnosis and surgical treatment of mitral valve disease. Real-time 3D transesophageal echocardiography (3D TEE) is a practical, highly informative imaging modality for examining the mitral valve in a clinical setting. To facilitate visual and quantitative 3D TEE image analysis, we describe a fully automated method for segmenting the mitral leaflets in 3D TEE image data. The algorithm integrates complementary probabilistic segmentation and shape modeling techniques (multi-atlas joint label fusion and deformable modeling with continuous medial representation) to automatically generate 3D geometric models of the mitral leaflets from 3D TEE image data. These models are unique in that they establish a shape-based coordinate system on the valves of different subjects and represent the leaflets volumetrically, as structures with locally varying thickness. In this work, expert image analysis is the gold standard for evaluating automatic segmentation. Without any user interaction, we demonstrate that the automatic segmentation method accurately captures patient-specific leaflet geometry at both systole and diastole in 3D TEE data acquired from a mixed population of subjects with normal valve morphology and mitral valve disease.