1. Xu K, Khan MS, Li T, Gao R, Antic SL, Huo Y, Sandler KL, Maldonado F, Landman BA. Stratification of Lung Cancer Risk with Thoracic Imaging Phenotypes. Proc SPIE Int Soc Opt Eng 2023; 12464:1246407. [PMID: 37465098] [PMCID: PMC10353831] [DOI: 10.1117/12.2654018]
Abstract
In lung cancer screening, estimation of future lung cancer risk is usually guided by demographics and smoking status. The role of the constitutional profile of the human body, also known as body habitus, is increasingly understood to be important but has not been integrated into risk models. Chest low-dose computed tomography (LDCT) is the standard imaging study in lung cancer screening, with the capability to discriminate differences in body composition and organ arrangement in the thorax. We hypothesize that the primary phenotypes identified from lung screening chest LDCT can form a representation of body habitus and add predictive power for lung cancer risk stratification. In this pilot study, we evaluated the feasibility of image-based body habitus phenotyping on a large lung screening LDCT dataset. A thoracic imaging manifold was estimated from an intensity-based pairwise (dis)similarity metric computed for pairs of spatially normalized chest LDCT images. We applied hierarchical clustering on this manifold to identify the primary phenotypes. Body habitus features of each identified phenotype were evaluated and associated with future lung cancer risk using time-to-event analysis. We evaluated the method on the baseline LDCT scans of 1,200 male subjects sampled from the National Lung Screening Trial. Five primary phenotypes were identified, which were associated with highly distinguishable clinical and body habitus features. Time-to-event analysis against future lung cancer incidence showed that two of the five identified phenotypes were associated with elevated future lung cancer risk (HR=1.61, 95% CI [1.08, 2.38], p=0.019; HR=1.67, 95% CI [0.98, 2.86], p=0.057). These results indicate that it is feasible to capture body habitus by image-based phenotyping of lung screening LDCT and that the learned body habitus representation can potentially add value for future lung cancer risk stratification.
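A minimal sketch of the analysis pattern described above (pairwise dissimilarities, hierarchical clustering into phenotypes, then a Cox model for future cancer risk), using synthetic placeholder data and off-the-shelf SciPy/lifelines calls; this is not the authors' implementation.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from lifelines import CoxPHFitter

# Hypothetical inputs: a condensed vector of pairwise (dis)similarities between spatially
# normalized LDCT scans, plus follow-up time and lung-cancer event indicators.
rng = np.random.default_rng(0)
n = 200
condensed = rng.random(n * (n - 1) // 2)            # stand-in for pairwise LDCT dissimilarities
time_to_event = rng.exponential(5.0, n)             # years of follow-up (synthetic)
event = rng.integers(0, 2, n)                       # 1 = lung cancer diagnosed

# Hierarchical clustering on the pairwise-distance manifold to define primary phenotypes.
Z = linkage(condensed, method="average")
phenotype = fcluster(Z, t=5, criterion="maxclust")  # five primary phenotypes

# Time-to-event analysis: hazard ratios per phenotype (one-hot encoded, first as reference).
df = pd.DataFrame({"time": time_to_event, "event": event, "phenotype": phenotype})
df = pd.get_dummies(df, columns=["phenotype"], drop_first=True, dtype=float)
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```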
Affiliation(s)
- Kaiwen Xu: Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA
- Mirza S Khan: Vanderbilt University Medical Center, 1211 Medical Center Dr, Nashville, TN 37232, USA
- Thomas Li: Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA
- Riqiang Gao: Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA
- Sanja L Antic: Vanderbilt University Medical Center, 1211 Medical Center Dr, Nashville, TN 37232, USA
- Yuankai Huo: Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA
- Kim L Sandler: Vanderbilt University Medical Center, 1211 Medical Center Dr, Nashville, TN 37232, USA
- Fabien Maldonado: Vanderbilt University Medical Center, 1211 Medical Center Dr, Nashville, TN 37232, USA
- Bennett A Landman: Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA; Vanderbilt University Medical Center, 1211 Medical Center Dr, Nashville, TN 37232, USA
2. Markello RD, Hansen JY, Liu ZQ, Bazinet V, Shafiei G, Suárez LE, Blostein N, Seidlitz J, Baillet S, Satterthwaite TD, Chakravarty MM, Raznahan A, Misic B. neuromaps: structural and functional interpretation of brain maps. Nat Methods 2022; 19:1472-1479. [PMID: 36203018] [PMCID: PMC9636018] [DOI: 10.1038/s41592-022-01625-w]
Abstract
Imaging technologies are increasingly used to generate high-resolution reference maps of brain structure and function. Comparing experimentally generated maps to these reference maps facilitates cross-disciplinary scientific discovery. Although recent data sharing initiatives increase the accessibility of brain maps, data are often shared in disparate coordinate systems, precluding systematic and accurate comparisons. Here we introduce neuromaps, a toolbox for accessing, transforming and analyzing structural and functional brain annotations. We implement functionalities for generating high-quality transformations between four standard coordinate systems. The toolbox includes curated reference maps and biological ontologies of the human brain, such as molecular, microstructural, electrophysiological, developmental and functional ontologies. Robust quantitative assessment of map-to-map similarity is enabled via a suite of spatial autocorrelation-preserving null models. neuromaps combines open-access data with transparent functionality for standardizing and comparing brain maps, providing a systematic workflow for comprehensive structural and functional annotation enrichment analysis of the human brain.
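A short usage sketch of the workflow the abstract describes, written against the neuromaps Python API roughly as documented (transforms, nulls, stats); exact function signatures and keyword names should be checked against the toolbox documentation, and both input maps are placeholder NIfTI paths (in practice a curated reference map could be pulled with neuromaps.datasets.fetch_annotation instead).

```python
from neuromaps import transforms, nulls, stats

my_map = 'group_contrast_mni152.nii.gz'         # placeholder: user-generated volumetric map
ref_map = 'reference_annotation_mni152.nii.gz'  # placeholder: reference brain annotation

# Project both volumes from MNI152 space onto the fsaverage surface (10k vertices per hemisphere).
my_surf = transforms.mni152_to_fsaverage(my_map, '10k')
ref_surf = transforms.mni152_to_fsaverage(ref_map, '10k')

# Spatial-autocorrelation-preserving null maps (spin test), then map-to-map comparison.
spins = nulls.alexander_bloch(my_surf, atlas='fsaverage', density='10k',
                              n_perm=1000, seed=1234)
r, p = stats.compare_images(my_surf, ref_surf, nulls=spins)
print(f'r = {r:.3f}, p_spin = {p:.3f}')
```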
Affiliation(s)
- Ross D Markello: Montréal Neurological Institute, McGill University, Montréal, Quebec, Canada
- Justine Y Hansen: Montréal Neurological Institute, McGill University, Montréal, Quebec, Canada
- Zhen-Qi Liu: Montréal Neurological Institute, McGill University, Montréal, Quebec, Canada
- Vincent Bazinet: Montréal Neurological Institute, McGill University, Montréal, Quebec, Canada
- Golia Shafiei: Montréal Neurological Institute, McGill University, Montréal, Quebec, Canada
- Laura E Suárez: Montréal Neurological Institute, McGill University, Montréal, Quebec, Canada
- Nadia Blostein: Cerebral Imaging Center, Douglas Mental Health University Institute, McGill University, Montréal, Quebec, Canada
- Jakob Seidlitz: Lifespan Informatics and Neuroimaging Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sylvain Baillet: Montréal Neurological Institute, McGill University, Montréal, Quebec, Canada
- Theodore D Satterthwaite: Lifespan Informatics and Neuroimaging Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- M Mallar Chakravarty: Cerebral Imaging Center, Douglas Mental Health University Institute, McGill University, Montréal, Quebec, Canada
- Armin Raznahan: Section of Developmental Neurogenomics, National Institute of Mental Health, Bethesda, MD, USA
- Bratislav Misic: Montréal Neurological Institute, McGill University, Montréal, Quebec, Canada
3. Agier R, Valette S, Kéchichian R, Fanton L, Prost R. Hubless keypoint-based 3D deformable groupwise registration. Med Image Anal 2019; 59:101564. [PMID: 31590032] [DOI: 10.1016/j.media.2019.101564]
Abstract
We present a novel algorithm for Fast Registration Of image Groups (FROG), applied to large 3D image groups. Our approach extracts 3D SURF keypoints from images, computes matched pairs of keypoints, and registers the group by minimizing pair distances in a hubless way, i.e., without computing any central mean image. Using keypoints significantly reduces the problem complexity compared to voxel-based approaches and enables an in-core global optimization, similar to bundle adjustment for 3D reconstruction. As we aim to register images of different patients, the matching step yields many outliers. We therefore propose a new EM-weighting algorithm that efficiently discards outliers. Global optimization is carried out with a fast gradient descent algorithm. This allows our approach to robustly register large datasets. The result is a set of diffeomorphic half transforms which link the volumes together and can subsequently be exploited for computational anatomy and landmark detection. We show experimental results on whole-body CT scans, with groups of up to 103 volumes. On a benchmark based on anatomical landmarks, our algorithm compares favorably with the star-groupwise voxel-based ANTs and NiftyReg approaches while being much faster. We also discuss the limitations of our approach for lower-resolution images such as brain MRI.
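A toy sketch in the spirit of the hubless formulation (not the authors' code): each image receives its own translation, matched keypoint pairs are pulled together by gradient descent, and EM-style weights down-weight outlier matches. Keypoints, matches, and the translation-only model are synthetic simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)
n_images, n_matches = 5, 400
i_idx = rng.integers(0, n_images, n_matches)                        # image holding keypoint p
j_idx = (i_idx + rng.integers(1, n_images, n_matches)) % n_images   # image holding keypoint q
p = rng.normal(size=(n_matches, 3))
q = p + rng.normal(scale=0.05, size=(n_matches, 3))                 # mostly good matches...
q[:40] += rng.normal(scale=5.0, size=(40, 3))                       # ...plus gross outliers

t = np.zeros((n_images, 3))                                         # per-image translations, no hub image
for _ in range(50):
    residual = (p + t[i_idx]) - (q + t[j_idx])                      # pairwise disagreement after alignment
    d2 = np.sum(residual ** 2, axis=1)
    w = np.exp(-d2 / (2 * np.median(d2) + 1e-12))                   # EM-style weights: outliers get ~0
    grad = np.zeros_like(t)
    np.add.at(grad, i_idx, w[:, None] * residual)
    np.add.at(grad, j_idx, -w[:, None] * residual)
    scale = np.bincount(i_idx, w, n_images) + np.bincount(j_idx, w, n_images) + 1e-12
    t -= 0.5 * grad / scale[:, None]                                 # damped, weight-normalized step
    t -= t.mean(axis=0)                                              # hubless: fix the global translation only
print("estimated translations (should stay near zero here):\n", np.round(t, 3))
```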
Affiliation(s)
- R Agier: Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
- S Valette: Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
- R Kéchichian: Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
- L Fanton: Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France; Hospices Civils de Lyon, GHC, Hôpital Edouard-Herriot, Service de médecine légale, Lyon 69003, France
- R Prost: Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint-Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F-69621, France
4. Seo K, Pan R, Lee D, Thiyyagura P, Chen K. Visualizing Alzheimer's disease progression in low dimensional manifolds. Heliyon 2019; 5:e02216. [PMID: 31406946] [PMCID: PMC6684517] [DOI: 10.1016/j.heliyon.2019.e02216]
Abstract
While tomographic neuroimaging data are information rich, objective, and highly sensitive in the study of brain diseases such as Alzheimer's disease (AD), their direct use in clinical practice and in regulated clinical trials (CTs) still faces many challenges. Taking CTs as an example, unless the relevant policy and the perception of primary outcome measures change, the need to construct univariate indices (out of the 3D imaging data) to serve as a CT's primary outcome measures will remain the focus of active research. More relevant to the current study, an overall global index that summarizes multiple complicated features from neuroimages should be developed in order to provide high diagnostic accuracy and sensitivity in tracking AD progression over time in the clinical setting. Such an index should also be practically intuitive and logically explainable to patients and their families. In this research, we propose a new visualization tool, derived from manifold-based nonlinear dimension reduction of brain MRI features, to track AD progression over time. Specifically, we investigate the locally linear embedding (LLE) method using a dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI), which includes longitudinal MRIs from 562 subjects, about 20% of whom progressed to the next stage of dementia. Using only the baseline data of cognitively unimpaired (CU) and AD subjects, LLE reduces the feature dimension to two, and a subject's AD progression path can be plotted in this low-dimensional LLE feature space. In addition, the likelihood of being categorized as AD is indicated by color. This LLE map is a new data visualization tool that can assist in tracking AD progression over time.
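An illustrative sketch of the embedding step (not the authors' pipeline): fit LLE on baseline feature vectors, then project a subject's follow-up visits into the same 2-D space to trace a progression path. Features below are synthetic placeholders for regional MRI measures.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
X_baseline = rng.normal(size=(562, 50))          # e.g., regional volumes / thicknesses at baseline
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
Y_baseline = lle.fit_transform(X_baseline)       # 2-D map of the baseline cohort

X_followup = rng.normal(size=(4, 50))            # one subject's longitudinal visits
Y_followup = lle.transform(X_followup)           # progression path plotted in the learned map
print(Y_followup)
```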
Affiliation(s)
- Kangwon Seo: Department of Industrial and Manufacturing Systems Engineering and Department of Statistics, University of Missouri, USA
- Rong Pan: School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, USA
- Dongjin Lee: School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, USA
5. Li J, Yang Y. Clinical Study of Diffusion-Weighted Imaging in the Diagnosis of Liver Focal Lesion. J Med Syst 2019; 43:43. [PMID: 30649629] [DOI: 10.1007/s10916-019-1164-1]
Abstract
Apparent diffusion coefficient (ADC), derived from diffusion-weighted magnetic resonance imaging (DW-MRI), measures the motion of water molecules in vivo and can be used to quantify tumor response so as to determine the best therapy approach. In this paper, our goal was to determine whether DW-MRI can be used for qualitative and quantitative liver cancer analysis, and an automated method is proposed for improving the accuracy of liver segmentation in DW-MRI to increase its diagnostic value. We first analyzed the research status of liver cancer diagnosis, particularly the issues of liver image segmentation in MRI. Then, the imaging mechanism and image features of DW-MRI were analyzed, and the initial DW-MRI slice was segmented by a graph-cut algorithm. Finally, the obtained liver DW-MRI results were analyzed quantitatively and qualitatively. Experimental results show that DW-MRI has a great advantage in diagnosis: the DWI signal of the benign lesion group was lower than that of the malignant lesion group; thus, DW-MRI segmented by a graph-cut algorithm can, to some extent, provide important additional information for the differential diagnosis of specific liver cancers.
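A minimal sketch of the ADC calculation that underlies this kind of DW-MRI analysis (not the paper's code): for a two-point acquisition, ADC = ln(S0/Sb)/b, computed voxel-wise. The arrays below are synthetic placeholders for b=0 and diffusion-weighted images.

```python
import numpy as np

def adc_map(s0, sb, b_value, eps=1e-6):
    """Apparent diffusion coefficient map from a b=0 image and a b=b_value DW image."""
    s0 = np.maximum(np.asarray(s0, float), eps)   # guard against log(0)
    sb = np.maximum(np.asarray(sb, float), eps)
    return np.log(s0 / sb) / float(b_value)       # units: mm^2/s when b is in s/mm^2

# Synthetic example: signal halves at b = 800 s/mm^2, giving ADC ~ 0.87e-3 mm^2/s.
s0 = np.full((4, 4), 1000.0)
sb = np.full((4, 4), 500.0)
print(adc_map(s0, sb, 800).mean())
```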
Affiliation(s)
- Yue Yang: Tongde Hospital of Zhejiang Province, Hangzhou 310012, Zhejiang, China
6. Andersson T, Borga M, Dahlqvist Leinhard O. Geodesic registration for interactive atlas-based segmentation using learned multi-scale anatomical manifolds. Pattern Recognit Lett 2018. [DOI: 10.1016/j.patrec.2018.04.037]
7. Wu J, Ngo GH, Greve D, Li J, He T, Fischl B, Eickhoff SB, Yeo BTT. Accurate nonlinear mapping between MNI volumetric and FreeSurfer surface coordinate systems. Hum Brain Mapp 2018; 39:3793-3808. [PMID: 29770530] [PMCID: PMC6239990] [DOI: 10.1002/hbm.24213]
Abstract
The results of most neuroimaging studies are reported in volumetric (e.g., MNI152) or surface (e.g., fsaverage) coordinate systems. Accurate mappings between volumetric and surface coordinate systems can facilitate many applications, such as projecting fMRI group analyses from MNI152/Colin27 to fsaverage for visualization, or projecting resting-state fMRI parcellations from fsaverage to MNI152/Colin27 for volumetric analysis of new data. However, there has been surprisingly little research on this topic. Here, we evaluated three approaches for mapping data between MNI152/Colin27 and fsaverage coordinate systems by simulating the above applications: projection of group-average data from MNI152/Colin27 to fsaverage and projection of fsaverage parcellations to MNI152/Colin27. Two of the approaches are currently widely used. A third approach (registration fusion) was previously proposed but not widely adopted. Two implementations of the registration fusion (RF) approach were considered, with one implementation utilizing the Advanced Normalization Tools (ANTs). We found that RF-ANTs performed the best for mapping between fsaverage and MNI152/Colin27, even for new subjects registered to MNI152/Colin27 using a different software tool (FSL FNIRT). This suggests that RF-ANTs would be useful even for researchers not using ANTs. Finally, it is worth emphasizing that the optimal approach for mapping data to a coordinate system (e.g., fsaverage) is to register individual subjects directly to that coordinate system, rather than via another coordinate system. Only in scenarios where this is not possible (e.g., mapping previously published results from MNI152 to fsaverage) should the approaches evaluated in this manuscript be considered. In these scenarios, we recommend RF-ANTs (https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/registration/Wu2017_RegistrationFusion).
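For context, a sketch of one widely used way to project an MNI152 volumetric result onto the fsaverage surface with nilearn; this is not the registration-fusion (RF-ANTs) approach evaluated in the paper, which is distributed via the CBIG repository linked above, and the input path is a placeholder.

```python
from nilearn import datasets, surface

stat_img = 'group_zmap_mni152.nii.gz'                      # placeholder path to a volumetric result
fsaverage = datasets.fetch_surf_fsaverage('fsaverage5')    # standard surface meshes
texture_lh = surface.vol_to_surf(stat_img, fsaverage.pial_left)
texture_rh = surface.vol_to_surf(stat_img, fsaverage.pial_right)
print(texture_lh.shape)                                    # one value per fsaverage5 vertex
```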
Affiliation(s)
- Jianxiao Wu: Department of Electrical and Computer Engineering, ASTAR-NUS Clinical Imaging Research Centre, Singapore Institute for Neurotechnology and Memory Networks Program, National University of Singapore, Singapore City, Singapore
- Gia H Ngo: Department of Electrical and Computer Engineering, ASTAR-NUS Clinical Imaging Research Centre, Singapore Institute for Neurotechnology and Memory Networks Program, National University of Singapore, Singapore City, Singapore
- Douglas Greve: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Jingwei Li: Department of Electrical and Computer Engineering, ASTAR-NUS Clinical Imaging Research Centre, Singapore Institute for Neurotechnology and Memory Networks Program, National University of Singapore, Singapore City, Singapore
- Tong He: Department of Electrical and Computer Engineering, ASTAR-NUS Clinical Imaging Research Centre, Singapore Institute for Neurotechnology and Memory Networks Program, National University of Singapore, Singapore City, Singapore
- Bruce Fischl: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts; Harvard-MIT Division of Health Sciences and Technology, Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts
- Simon B Eickhoff: Medical Faculty, Heinrich-Heine University Düsseldorf, Institute for Systems Neuroscience, Düsseldorf, Germany; Institute of Neuroscience and Medicine (INM-7), Research Centre Jülich, Jülich, Germany
- B T Thomas Yeo: Department of Electrical and Computer Engineering, ASTAR-NUS Clinical Imaging Research Centre, Singapore Institute for Neurotechnology and Memory Networks Program, National University of Singapore, Singapore City, Singapore; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Cognitive Neuroscience, Duke-NUS Medical School, Singapore, Singapore
8. PCANet-Based Structural Representation for Nonrigid Multimodal Medical Image Registration. Sensors 2018; 18:1477. [PMID: 29738512] [PMCID: PMC5982469] [DOI: 10.3390/s18051477]
Abstract
Nonrigid multimodal image registration remains a challenging task in medical image processing and analysis. Structural representation (SR)-based registration methods have attracted much attention recently. However, existing SR methods cannot provide satisfactory registration accuracy due to their use of hand-designed features for structural representation. To address this problem, a structural representation method based on an improved version of the simple deep learning network PCANet is proposed for medical image registration. In the proposed method, PCANet is first trained on numerous medical images to learn convolution kernels for the network. Then, a pair of input medical images to be registered is processed by the learned PCANet. The features extracted by the various layers of the PCANet are fused to produce multilevel features, and structural representation images are constructed for the two input images by a nonlinear transformation of these multilevel features. The Euclidean distance between the structural representation images is calculated and used as the similarity metric. The objective function defined by this similarity metric is optimized by the L-BFGS method to obtain the parameters of the free-form deformation (FFD) model. Extensive experiments on simulated and real multimodal image datasets show that, compared with state-of-the-art registration methods such as the modality-independent neighborhood descriptor (MIND), normalized mutual information (NMI), the Weber local descriptor (WLD), and the sum of squared differences on entropy images (ESSD), the proposed method provides better registration performance in terms of target registration error (TRE) and subjective human vision.
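A heavily simplified sketch of the optimization loop only: a rigid translation stands in for the paper's FFD model, and smoothed synthetic images stand in for PCANet-derived structural representations, with the Euclidean (SSD) similarity minimized by SciPy's L-BFGS-B.

```python
import numpy as np
from scipy import ndimage, optimize

rng = np.random.default_rng(0)
fixed_sr = ndimage.gaussian_filter(rng.random((64, 64)), 3)       # placeholder "structural representation"
moving_sr = ndimage.shift(fixed_sr, (2.5, -1.5), order=3)         # misaligned copy of it

def ssd(params):
    warped = ndimage.shift(moving_sr, params, order=3)            # apply candidate translation
    return np.sum((warped - fixed_sr) ** 2)                       # Euclidean distance between SR images

res = optimize.minimize(ssd, x0=np.zeros(2), method='L-BFGS-B')
print('recovered translation:', np.round(res.x, 2))               # expected near (-2.5, 1.5)
```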
9. Comparison of image registration methods for composing spectral retinal images. Biomed Signal Process Control 2017. [DOI: 10.1016/j.bspc.2017.03.003]
10. Pei Y, Ma G, Chen G, Zhang X, Xu T, Zha H. Superimposition of Cone-Beam Computed Tomography Images by Joint Embedding. IEEE Trans Biomed Eng 2017; 64:1218-1227. [DOI: 10.1109/tbme.2016.2598584]
11. Zhang J, Zhang L, Xiang L, Shao Y, Wu G, Zhou X, Shen D, Wang Q. Brain Atlas Fusion from High-Thickness Diagnostic Magnetic Resonance Images by Learning-Based Super-Resolution. Pattern Recognition 2017; 63:531-541. [PMID: 29062159] [PMCID: PMC5650249] [DOI: 10.1016/j.patcog.2016.09.019]
Abstract
It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not yet been addressed. In this paper, we intend to fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent in clinical routine. The main idea of our work is to extend conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through a random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments show that the proposed framework can effectively solve the problem of atlas fusion from low-quality brain MR images.
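A toy 1-D sketch of the random-forest enhancement step: learn to predict, from local patches, the residual between a downsampled-then-interpolated signal and its high-quality counterpart. Synthetic signals stand in for MR image patches; this is illustrative only, not the paper's model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

x = np.arange(2000)
hq = np.sin(x / 30.0) + 0.3 * np.sin(x / 4.5)                        # "high-quality" signal
lq = np.interp(x, x[::4], hq[::4])                                   # downsample by 4, re-interpolate

radius = 3
idx = np.arange(radius, 2000 - radius)
patches = np.stack([lq[i - radius:i + radius + 1] for i in idx])
X = patches - patches[:, [radius]]                                   # level-invariant local shape
y = (hq - lq)[idx]                                                   # residual detail to restore

rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X[:1500], y[:1500])
enhanced = lq[idx[1500:]] + rf.predict(X[1500:])
print("mean abs error, interpolated: %.4f  enhanced: %.4f"
      % (np.abs(lq[idx[1500:]] - hq[idx[1500:]]).mean(),
         np.abs(enhanced - hq[idx[1500:]]).mean()))
```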
Affiliation(s)
- Jinpeng Zhang: Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Lichi Zhang: Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
- Lei Xiang: Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yeqin Shao: Nantong University, Nantong, Jiangsu 226019, China
- Guorong Wu: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
- Xiaodong Zhou: Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201815, China
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Qian Wang: Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
12. Xie L, Pluta JB, Das SR, Wisse LEM, Wang H, Mancuso L, Kliot D, Avants BB, Ding SL, Manjón JV, Wolk DA, Yushkevich PA. Multi-template analysis of human perirhinal cortex in brain MRI: Explicitly accounting for anatomical variability. Neuroimage 2016; 144:183-202. [PMID: 27702610] [DOI: 10.1016/j.neuroimage.2016.09.070]
Abstract
RATIONALE The human perirhinal cortex (PRC) plays critical roles in episodic and semantic memory and visual perception. The PRC consists of Brodmann areas 35 and 36 (BA35, BA36). In Alzheimer's disease (AD), BA35 is the first cortical site affected by neurofibrillary tangle pathology, which is closely linked to neural injury in AD. Large anatomical variability, manifested in the form of different cortical folding and branching patterns, makes it difficult to segment the PRC in MRI scans. Pathology studies have found that in ~97% of specimens, the PRC falls into one of three discrete anatomical variants. However, current methods for PRC segmentation and morphometry in MRI are based on single-template approaches, which may not be able to accurately model these discrete variants. METHODS A multi-template analysis pipeline that explicitly accounts for anatomical variability is used to automatically label the PRC and measure its thickness in T2-weighted MRI scans. The pipeline uses multi-atlas segmentation to automatically label medial temporal lobe cortices, including the entorhinal cortex, PRC, and parahippocampal cortex. Pairwise registration between label maps and clustering based on residual dissimilarity after registration are used to construct separate templates for the anatomical variants of the PRC. An optimal path of deformations linking these templates is used to establish correspondences between all the subjects. Experimental evaluation focuses on the ability of single-template and multi-template analyses to detect differences in the thickness of medial temporal lobe cortices between patients with amnestic mild cognitive impairment (aMCI, n=41) and age-matched controls (n=44). RESULTS The proposed technique is able to generate templates that recover the three dominant discrete variants of the PRC and establish more meaningful correspondences between subjects than a single-template approach. The largest reduction in thickness associated with aMCI, in absolute terms, was found in left BA35 using both regional and summary thickness measures. Further, statistical maps of regional thickness difference between aMCI and controls revealed different patterns for the three anatomical variants.
Affiliation(s)
- Long Xie: Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA; Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- John B Pluta: Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Sandhitsu R Das: Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurology, University of Pennsylvania, Philadelphia, USA; Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Laura E M Wisse: Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Lauren Mancuso: Penn Memory Center, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurology, University of Pennsylvania, Philadelphia, USA
- Dasha Kliot: Penn Memory Center, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurology, University of Pennsylvania, Philadelphia, USA
- Brian B Avants: Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Song-Lin Ding: Allen Institute for Brain Science, Seattle, USA; School of Basic Sciences, Guangzhou Medical University, Guangzhou, China
- José V Manjón: Instituto de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas (ITACA), Universidad Politécnica de Valencia, Camino de Vera s/n, Valencia, Spain
- David A Wolk: Penn Memory Center, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurology, University of Pennsylvania, Philadelphia, USA
- Paul A Yushkevich: Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, University of Pennsylvania, Philadelphia, USA
13. Feng Q, Zhou Y, Li X, Mei Y, Lu Z, Zhang Y, Feng Y, Liu Y, Yang W, Chen W. Liver DCE-MRI Registration in Manifold Space Based on Robust Principal Component Analysis. Sci Rep 2016; 6:34461. [PMID: 27681452] [PMCID: PMC5041095] [DOI: 10.1038/srep34461]
Abstract
A technical challenge in the registration of dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging of the liver is the intensity variation caused by contrast agents. Such variation leads to the failure of traditional intensity-based registration methods. To address this problem, a manifold-based registration framework for liver DCE-MR time series is proposed. We assume that liver DCE-MR time series are located on a low-dimensional manifold and determine intrinsic similarities between frames. Based on the obtained manifold, the large deformation between two dissimilar images can be decomposed into a series of small deformations between adjacent images on the manifold, by gradually deforming each frame to the template image along the geodesic path. Furthermore, manifold construction is important in automating the selection of the template image, which is an approximation of the geodesic mean. Robust principal component analysis is performed to separate motion components from intensity changes induced by contrast agents; the components caused by motion are used to guide registration, eliminating the effect of contrast enhancement. Visual inspection and quantitative assessment are further performed on the registration of clinical datasets. Experiments show that the proposed method effectively reduces motion while preserving the topology of contrast-enhancing structures and provides improved registration performance.
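A generic sketch of robust PCA (principal component pursuit via inexact augmented Lagrange multipliers), the low-rank plus sparse decomposition used in the framework above; the paper uses such a separation so that motion-related components can guide registration, while this sketch only shows the decomposition itself on synthetic data, not the authors' registration pipeline.

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L plus sparse S (principal component pursuit, inexact ALM)."""
    M = np.asarray(M, float)
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    norm_M = np.linalg.norm(M)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)

    def shrink(X, tau):                                   # soft-thresholding operator
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

    for _ in range(max_iter):
        # Singular-value thresholding step for the low-rank component.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt
        # Element-wise soft-thresholding step for the sparse component.
        S = shrink(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y += mu * residual
        if np.linalg.norm(residual) / norm_M < tol:
            break
    return L, S

# Rows are time frames, columns are voxels (synthetic stand-in for a DCE-MR series).
frames = np.random.default_rng(0).random((20, 500))
L, S = robust_pca(frames)
print("rank(L):", np.linalg.matrix_rank(L), " sparsity of S:", np.count_nonzero(S) / S.size)
```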
Affiliation(s)
- Qianjin Feng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yujia Zhou: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Xueli Li: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yingjie Mei: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Zhentai Lu: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yu Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yanqiu Feng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yaqin Liu: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Wei Yang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Wufan Chen: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
14. Wu G, Kim M, Wang Q, Munsell BC, Shen D. Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning. IEEE Trans Biomed Eng 2016; 63:1505-1516. [PMID: 26552069] [DOI: 10.1016/b978-0-12-810408-8.00015-8]
Abstract
Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features from the observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked autoencoder to identify intrinsic deep feature representations in image patches. Since this deep learning step is unsupervised, no ground-truth label knowledge is required, which makes the proposed feature selection method more flexible for new imaging modalities: feature representations can be learned directly from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed framework, image registration experiments were also conducted on 7.0-T brain MR images. In all experiments, the results showed that the new image registration framework consistently produced more accurate registrations than the state-of-the-art methods.
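A minimal sketch of unsupervised patch-feature learning with a small convolutional autoencoder in PyTorch, standing in for the stacked autoencoder described above; the architecture, patch size, and training schedule here are illustrative choices, not the paper's.

```python
import torch
from torch import nn

class PatchAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)              # compact deep feature representation of the patch
        return self.decoder(z), z

patches = torch.rand(64, 1, 16, 16, 16)  # synthetic 16^3 intensity patches
model = PatchAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                       # reconstruction loss only: no labels needed
    recon, _ = model(patches)
    loss = nn.functional.mse_loss(recon, patches)
    opt.zero_grad()
    loss.backward()
    opt.step()
features = model(patches)[1].flatten(1)  # per-patch features for correspondence detection
print(features.shape)
```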
15. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods. Sci Rep 2016; 6:23470. [PMID: 27010238] [PMCID: PMC4806304] [DOI: 10.1038/srep23470]
Abstract
Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.
16. Veeraraghavan H, Do RKG, Reidy DL, Deasy JO. Simultaneous segmentation and iterative registration method for computing ADC with reduced artifacts from DW-MRI. Med Phys 2016; 42:2249-2260. [PMID: 25979019] [DOI: 10.1118/1.4916799]
Abstract
PURPOSE Apparent diffusion coefficient (ADC), derived from diffusion-weighted magnetic resonance images (DW-MRI), measures the motion of water molecules in vivo and can be used to quantify tumor response to therapy. The accurate measurement of ADC can be adversely affected by organ motion and imaging artifacts. In this paper, the authors' goal was to develop an automated method for reducing artifacts and thereby improve the accuracy of ADC measurements in moving organs such as the liver. METHODS The authors developed a novel method of computing ADC with fewer artifacts, through simultaneous image segmentation and iterative registration (SSIR) of multiple b-value DW-MRI. The approach reduces artifacts by automatically finding the best possible alignment between the individual b-value images and a reference DW image using a sequence of transformations. It selects such a sequence by an iterative choice of b-value DW images based on the accuracy of their alignment with the reference DW image. The approach quantifies the accuracy of alignment between a pair of images using the modified Hausdorff distance computed between the structures of interest. The structures of interest are identified by a user through strokes drawn in one or more slices of the reference DW image, which are then volumetrically segmented using GrowCut. The same structures are segmented in the remaining b-value images by transforming the user-drawn strokes through registration. The ADC values are computed from all the aligned b-value images. The images are aligned using affine registration followed by deformable B-spline registration with cubic B-spline resampling. RESULTS The authors compared the results of ADC computed using their approach with ADC computed (a) without registration and (b) with basic affine registration of all b-value images to a chosen reference. Their approach was the most effective in reducing artifacts compared to the other two methods, resulting in a mean artifact ratio (fraction of voxels in a structure with negative ADC over the total number of voxels in the structure) of 2.7%, versus 5.4% for affine registration and 32% for no registration, for >200 tumors. The approach also resulted in the lowest median standard deviation of the computed mean ADC for all tumors [0.05, 0.09, 0.07, 0.58], compared to affine image registration [0.02, 0.14, 0.58, 0.79] and no image registration [0.64, 0.83, 0.83, 1.09], in tests where random displacements of [8, 10, 12, 16] pixels were introduced in multiple trials in the b-value images. CONCLUSIONS The authors developed a novel approach for reducing artifacts in ADC maps through simultaneous registration and segmentation of multiple b-value DW images. The method explicitly employs a registration quality metric to align images. When compared to basic affine and no image registration, it produces registrations of greater accuracy, with the lowest artifact ratio and median standard deviation of the computed mean ADC values for a wide range of displacements.
Affiliation(s)
- Harini Veeraraghavan: Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065
- Richard K G Do: Radiology, Memorial Sloan Kettering Cancer Center, New York, New York 10065
- Diane L Reidy: Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, New York 10065
- Joseph O Deasy: Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065
17. Onofrey JA, Staib LH, Papademetris X. Learning intervention-induced deformations for non-rigid MR-CT registration and electrode localization in epilepsy patients. Neuroimage Clin 2015; 10:291-301. [PMID: 26900569] [PMCID: PMC4724039] [DOI: 10.1016/j.nicl.2015.12.001]
Abstract
This paper describes a framework for learning a statistical model of non-rigid deformations induced by interventional procedures. We make use of this learned model to perform constrained non-rigid registration of pre-procedural and post-procedural imaging. We demonstrate results applying this framework to non-rigidly register post-surgical computed tomography (CT) brain images to pre-surgical magnetic resonance images (MRIs) of epilepsy patients who had intra-cranial electroencephalography electrodes surgically implanted. Deformations caused by this surgical procedure, imaging artifacts caused by the electrodes, and the use of multi-modal imaging data make non-rigid registration challenging. Our results show that the use of our proposed framework to constrain the non-rigid registration process results in significantly improved and more robust registration performance compared to using standard rigid and non-rigid registration methods.
Affiliation(s)
- John A. Onofrey: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Lawrence H. Staib: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xenophon Papademetris: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
18. Shalbaf A, AlizadehSani Z, Behnam H. Echocardiography without electrocardiogram using nonlinear dimensionality reduction methods. J Med Ultrason (2001) 2015; 42:137-149. [PMID: 26576567] [DOI: 10.1007/s10396-014-0588-y]
Abstract
PURPOSE The aim of this study is to evaluate the efficiency of a new automatic image processing technique, based on nonlinear dimensionality reduction (NLDR), to separate a cardiac cycle and to detect the end-diastole (ED, cardiac cycle start) and end-systole (ES) frames in an echocardiography system without using the ECG. METHODS Isometric feature mapping (Isomap) and locally linear embedding (LLE) are the most popular NLDR algorithms. First, the Isomap algorithm is applied to the recorded echocardiography images. In this way, the nonlinear information embedded in the sequential images is represented in a two-dimensional manifold, and each image is characterized by a symbol on the constructed manifold. Cyclicity analysis of the resulting manifold, which derives from the cyclic nature of heart motion, is used to estimate the cardiac cycle length. Then, the LLE algorithm is applied to the extracted left ventricle (LV) echocardiography images of one cardiac cycle. Finally, the relationship between consecutive symbols of the resulting manifold, which reflects LV volume changes, is used to estimate the ED (cycle start) and ES frames. The proposed algorithms are quantitatively compared to assessments made by a highly experienced echocardiographer from the ECG as a reference in 20 healthy volunteers and 12 subjects with pathology. RESULTS The mean differences in cardiac cycle length, ED, and ES frame estimation between our method and ECG-based detection by the experienced echocardiographer are approximately 7, 17, and 17 ms (0.4, 1, and 1 frames), respectively. CONCLUSION The proposed image-based method, based on NLDR, can be used as a useful tool for estimating cardiac cycle length and the ED and ES frames in echocardiography systems, with good agreement with ECG assessment by an experienced echocardiographer in routine clinical evaluation.
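An illustrative sketch (not the authors' implementation) of the cycle-length idea: embed an image sequence with Isomap and read the period off the cyclic trajectory in the 2-D manifold. Synthetic frames with a known 25-frame cycle stand in for real echocardiographic images.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
n_frames, period = 200, 25
phase = 2 * np.pi * np.arange(n_frames) / period
xs = np.linspace(0, 3, 256)
frames = np.stack([np.concatenate([np.sin(p + xs), np.cos(p + xs)]) for p in phase])
frames += 0.05 * rng.normal(size=frames.shape)                 # acquisition noise

emb = Isomap(n_neighbors=10, n_components=2).fit_transform(frames)
angle = np.unwrap(np.arctan2(emb[:, 1], emb[:, 0]))            # angular position on the cyclic manifold
est_period = 2 * np.pi / np.abs(np.diff(angle)).mean()         # frames per cardiac cycle
print(f"estimated cycle length: {est_period:.1f} frames (true: {period})")
```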
Affiliation(s)
- Ahmad Shalbaf: Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Zahra AlizadehSani: Cardiovascular Imaging, Shaheed Rajaei Cardiovascular Medical and Research Center, Iran University of Medical Science, Tehran, Iran
- Hamid Behnam: Department of Biomedical Engineering, School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
20. Cardoso MJ, Modat M, Wolz R, Melbourne A, Cash D, Rueckert D, Ourselin S. Geodesic Information Flows: Spatially-Variant Graphs and Their Application to Segmentation and Fusion. IEEE Trans Med Imaging 2015; 34:1976-1988. [PMID: 25879909] [DOI: 10.1109/tmi.2015.2418298]
Abstract
Clinical annotations, such as voxel-wise binary or probabilistic tissue segmentations, structural parcellations, pathological regions-of-interest and anatomical landmarks are key to many clinical studies. However, due to the time consuming nature of manually generating these annotations, they tend to be scarce and limited to small subsets of data. This work explores a novel framework to propagate voxel-wise annotations between morphologically dissimilar images by diffusing and mapping the available examples through intermediate steps. A spatially-variant graph structure connecting morphologically similar subjects is introduced over a database of images, enabling the gradual diffusion of information to all the subjects, even in the presence of large-scale morphological variability. We illustrate the utility of the proposed framework on two example applications: brain parcellation using categorical labels and tissue segmentation using probabilistic features. The application of the proposed method to categorical label fusion showed highly statistically significant improvements when compared to state-of-the-art methodologies. Significant improvements were also observed when applying the proposed framework to probabilistic tissue segmentation of both synthetic and real data, mainly in the presence of large morphological variability.
21. Onofrey JA, Papademetris X, Staib LH. Low-Dimensional Non-Rigid Image Registration Using Statistical Deformation Models From Semi-Supervised Training Data. IEEE Trans Med Imaging 2015; 34:1522-1532. [PMID: 25720017] [PMCID: PMC8802338] [DOI: 10.1109/tmi.2015.2404572]
Abstract
Accurate and robust image registration is a fundamental task in medical image analysis applications and requires non-rigid transformations with a large number of degrees of freedom. Statistical deformation models (SDMs) attempt to learn the distribution of non-rigid deformations and can be used both to reduce the transformation dimensionality and to constrain the registration process. However, high-dimensional SDMs are difficult to train when the number of training samples is orders of magnitude smaller than the transformation dimensionality. In this paper, we utilize both a small set of annotated imaging data and a large set of unlabeled data to effectively learn an SDM of non-rigid transformations in a semi-supervised training (SST) framework. We demonstrate results applying this framework to inter-subject registration of skull-stripped, magnetic resonance (MR) brain images. Our approach makes use of 39 labeled MR datasets to create a set of supervised registrations, which we augment with a set of over 1200 unsupervised registrations using unlabeled MRIs. Through leave-one-out cross validation, we show that SST of a non-rigid SDM results in a robust registration algorithm with significantly improved accuracy compared to standard, intensity-based registration, and does so with a 99% reduction in transformation dimensionality.
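A sketch of the core idea of a statistical deformation model (not the paper's semi-supervised training scheme): run PCA on flattened displacement fields from many registrations, then describe any new non-rigid transformation by a handful of principal-component coefficients. The fields below are random placeholders; real training fields would come from prior registrations.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_train, grid = 1200, (16, 16, 16)                      # e.g., a B-spline control-point grid
fields = rng.normal(size=(n_train, np.prod(grid) * 3))  # flattened (dx, dy, dz) per control point

sdm = PCA(n_components=50).fit(fields)                  # low-dimensional deformation subspace
new_field = rng.normal(size=(1, fields.shape[1]))
coeffs = sdm.transform(new_field)                       # compact description of the deformation
approx = sdm.inverse_transform(coeffs)                  # constrained, model-plausible deformation
print(f"{fields.shape[1]} deformation parameters -> {coeffs.shape[1]} PCA coefficients")
```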
Affiliation(s)
- John A. Onofrey: Department of Diagnostic Radiology, Yale University, New Haven, CT 06520, USA
- Xenophon Papademetris: Departments of Diagnostic Radiology and Biomedical Engineering, Yale University, New Haven, CT 06520, USA
- Lawrence H. Staib: Departments of Diagnostic Radiology, Electrical Engineering, and Biomedical Engineering, Yale University, New Haven, CT 06520, USA
22. Li XW, Li QL, Li SY, Li DY. Local manifold learning for multiatlas segmentation: application to hippocampal segmentation in healthy population and Alzheimer's disease. CNS Neurosci Ther 2015; 21:826-836. [PMID: 26122409] [DOI: 10.1111/cns.12415]
Abstract
AIMS Automated hippocampal segmentation is an important issue in many neuroscience studies. METHODS We presented and evaluated a novel segmentation method that utilized a manifold learning technique under the multiatlas-based segmentation scenario. A manifold representation of local patches for each voxel was achieved by applying an Isomap algorithm, which can then be used to obtain spatially local weights of atlases for label fusion. The obtained atlas weights potentially depended on all pairwise similarities of the population, which is in contrast to most existing label fusion methods that only rely on similarities between the target image and the atlases. The performance of the proposed method was evaluated for hippocampal segmentation and compared with two representative local weighted label fusion methods, that is, local majority voting and local weighted inverse distance voting, on an in-house dataset of 28 healthy adolescents (age range: 10-17 years) and two ADNI datasets of 100 participants (age range: 60-89 years). We also implemented hippocampal volumetric analysis and evaluated segmentation performance using atlases from a different dataset. RESULTS The median Dice similarities obtained by our proposed method were approximately 0.90 for healthy subjects and above 0.88 for two mixed diagnostic groups of ADNI subjects. CONCLUSION The experimental results demonstrated that the proposed method could obtain consistent and significant improvements over label fusion strategies that are implemented in the original space.
Affiliation(s)
- Xin-Wei Li: State Key Laboratory of Software Development Environment, Beihang University, Beijing, China; Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science & Medical Engineering, Beihang University, Beijing, China
- Qiong-Ling Li: State Key Laboratory of Software Development Environment, Beihang University, Beijing, China; Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science & Medical Engineering, Beihang University, Beijing, China
- Shu-Yu Li: State Key Laboratory of Software Development Environment, Beihang University, Beijing, China; Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science & Medical Engineering, Beihang University, Beijing, China
- De-Yu Li: Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science & Medical Engineering, Beihang University, Beijing, China
23. Yan P, Cao Y, Yuan Y, Turkbey B, Choyke PL. Label image constrained multiatlas selection. IEEE Trans Cybern 2015; 45:1158-1168. [PMID: 25415994] [PMCID: PMC8323590] [DOI: 10.1109/tcyb.2014.2346394]
Abstract
Multiatlas-based methods are commonly used in medical image segmentation. In multiatlas-based image segmentation, atlas selection and combination are considered two key factors affecting performance. Recently, manifold-learning-based atlas selection methods have emerged as very promising. However, due to the complexity of prostate structures in raw images, it is difficult to obtain accurate atlas selection results by only measuring the distance between raw images on the manifolds. Although the distance between the regions to be segmented across images can readily be obtained from the label images, it is infeasible to directly compute the distance between the test image (gray) and the label images (binary). This paper addresses this problem by proposing a label-image-constrained atlas selection method, which exploits the label images to constrain the manifold projection of raw images. By analyzing the distribution of the selected atlases in the manifold subspace, a novel weight computation method for atlas combination is proposed. Compared with other related existing methods, experimental results on prostate segmentation from T2w MRI show that the selected atlases are closer to the target structure and that more accurate segmentations are obtained with the proposed method.
24. Bansal R, Hao X, Peterson BS. Morphological covariance in anatomical MRI scans can identify discrete neural pathways in the brain and their disturbances in persons with neuropsychiatric disorders. Neuroimage 2015; 111:215-227. [PMID: 25700952] [DOI: 10.1016/j.neuroimage.2015.02.022]
Abstract
We hypothesize that coordinated functional activity within discrete neural circuits induces morphological organization and plasticity within those circuits. Identifying regions of morphological covariation that are independent of morphological covariation in other regions may therefore allow us to identify discrete neural systems within the brain. Comparing the magnitude of these variations in individuals who have psychiatric disorders with the magnitude of variations in healthy controls may allow us to identify aberrant neural pathways in psychiatric illnesses. We measured surface morphological features by applying nonlinear, high-dimensional warping algorithms to manually defined brain regions. We transferred those measures onto the surface of a unit sphere via conformal mapping and then used spherical wavelets and their scaling coefficients to simplify the data structure representing these surface morphological features of each brain region. We used principal component analysis (PCA) to calculate covariation in these morphological measures, as represented by their scaling coefficients, across several brain regions. We then assessed whether brain subregions that covaried in morphology, as identified by large eigenvalues in the PCA, identified specific neural pathways of the brain. To do so, we spatially registered the subnuclei for each eigenvector into the coordinate space of a Diffusion Tensor Imaging dataset; we used these subnuclei as seed regions to track and compare fiber pathways with known fiber pathways identified in neuroanatomical atlases. We applied these procedures to anatomical MRI data in a cohort of 82 healthy participants (42 children, 18 males, age 10.5 ± 2.43 years; 40 adults, 22 males, age 32.42 ± 10.7 years) and 107 participants with Tourette's Syndrome (TS) (71 children, 59 males, age 11.19 ± 2.2 years; 36 adults, 21 males, age 37.34 ± 10.9 years). We evaluated the construct validity of the identified covariation in morphology using DTI data from a different set of 20 healthy adults (10 males, mean age 29.7 ± 7.7 years). The PCA identified portions of structures that covaried across the brain, the eigenvalues measuring the magnitude of the covariation in morphology along the respective eigenvectors. Our results showed that the eigenvectors, and the DTI fibers tracked from their associated brain regions, corresponded with known neural pathways in the brain. In addition, the eigenvectors that captured morphological covariation across regions, and the principal components along those eigenvectors, identified neural pathways with aberrant morphological features associated with TS. These findings suggest that covariations in brain morphology can identify aberrant neural pathways in specific neuropsychiatric disorders.
Collapse
Affiliation(s)
- Ravi Bansal
- Institute for the Developing Mind, Children's Hospital Los Angeles, Los Angeles CA, USA; Keck School of Medicine, University of Southern California, Los Angeles, CA 90027, USA.
| | - Xuejun Hao
- Department of Psychiatry, Columbia University, New York, NY 10032, USA; New York State Psychiatric Institute, New York, NY 10032, USA
| | - Bradley S Peterson
- Institute for the Developing Mind, Children's Hospital Los Angeles, Los Angeles CA, USA; Keck School of Medicine, University of Southern California, Los Angeles, CA 90027, USA
| |
Collapse
|
25
|
Wachinger C, Golland P, Kremen W, Fischl B, Reuter M. BrainPrint: a discriminative characterization of brain morphology. Neuroimage 2015; 109:232-48. [PMID: 25613439 PMCID: PMC4340729 DOI: 10.1016/j.neuroimage.2015.01.032] [Citation(s) in RCA: 77] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2014] [Revised: 11/29/2014] [Accepted: 01/10/2015] [Indexed: 01/18/2023] Open
Abstract
We introduce BrainPrint, a compact and discriminative representation of brain morphology. BrainPrint captures shape information of an ensemble of cortical and subcortical structures by solving the eigenvalue problem of the 2D and 3D Laplace-Beltrami operator on triangular (boundary) and tetrahedral (volumetric) meshes. This discriminative characterization enables new ways to study the similarity between brains; the focus can either be on a specific brain structure of interest or on the overall brain similarity. We highlight four applications for BrainPrint in this article: (i) subject identification, (ii) age and sex prediction, (iii) brain asymmetry analysis, and (iv) potential genetic influences on brain morphology. The properties of BrainPrint require the derivation of new algorithms to account for the heterogeneous mix of brain structures with varying discriminative power. We conduct experiments on three datasets, including over 3000 MRI scans from the ADNI database, 436 MRI scans from the OASIS dataset, and 236 MRI scans from the VETSA twin study. All processing steps for obtaining the compact representation are fully automated, making this processing framework particularly attractive for handling large datasets.
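The sketch below conveys the spectral-signature idea on a toy triangulated surface. It uses a uniform graph Laplacian of the mesh connectivity as a crude stand-in for the cotangent-weighted Laplace-Beltrami operator that BrainPrint actually uses, so the mesh, its size, and the resulting numbers are illustrative only.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import laplacian

rng = np.random.default_rng(2)

# Toy triangular mesh: Delaunay triangulation of random 2D points
# (in practice the input would be a cortical or subcortical surface mesh).
points = rng.uniform(size=(60, 2))
faces = Delaunay(points).simplices

# Collect the unique undirected edges of the mesh.
edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
edges = np.unique(np.sort(edges, axis=1), axis=0)

# Uniform graph Laplacian as a crude stand-in for the Laplace-Beltrami operator.
n = len(points)
adj = coo_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])), shape=(n, n))
adj = adj + adj.T
L = laplacian(adj.tocsr())

# The smallest eigenvalues form a compact, isometry-invariant shape signature.
eigenvalues = np.linalg.eigvalsh(L.toarray())
print(np.round(eigenvalues[:6], 4))
```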
Collapse
Affiliation(s)
- Christian Wachinger
- Computer Science and Artificial Intelligence Lab, MIT, USA; Massachusetts General Hospital, Harvard Medical School, USA.
| | - Polina Golland
- Computer Science and Artificial Intelligence Lab, MIT, USA
| | - William Kremen
- University of California, San Diego, USA; VA San Diego, Center of Excellence for Stress and Mental Health, USA
| | - Bruce Fischl
- Computer Science and Artificial Intelligence Lab, MIT, USA; Massachusetts General Hospital, Harvard Medical School, USA
| | - Martin Reuter
- Computer Science and Artificial Intelligence Lab, MIT, USA; Massachusetts General Hospital, Harvard Medical School, USA
| |
Collapse
|
26
|
Toews M, Wachinger C, Estepar RSJ, Wells WM. A Feature-Based Approach to Big Data Analysis of Medical Images. INFORMATION PROCESSING IN MEDICAL IMAGING : PROCEEDINGS OF THE ... CONFERENCE 2015. [PMID: 26221685 DOI: 10.1007/978-3-319-19992-4_26] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic obstructive pulmonary disease (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct.
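A toy sketch of the indexing-and-voting idea, using a KD-tree over synthetic 64-dimensional descriptors and a plain k-nearest-neighbour vote in place of the paper's generative kernel/KNN density estimator; the dataset sizes, descriptor dimension, and k are arbitrary assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

# Toy feature database: descriptors pooled from many images, each tagged with
# the label (e.g., GOLD stage) of its source image.
features = rng.normal(size=(10_000, 64))
labels = rng.integers(0, 5, size=10_000)

# Index the database once; queries then cost roughly O(log N) per feature.
tree = cKDTree(features)

# Query features from a new image and vote over the labels of their nearest
# neighbours, a simple KNN stand-in for the paper's density estimator.
query = rng.normal(size=(200, 64))
_, idx = tree.query(query, k=5)
votes = np.bincount(labels[idx].ravel(), minlength=5)
print("predicted class:", votes.argmax(), "vote histogram:", votes)
```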
Collapse
|
27
|
Abstract
The reconstruction of 4D images from 2D navigator and data slices requires sufficient observations per motion state to avoid blurred images and motion artifacts between slices. Especially images from rare motion states, like deep inhalations during free-breathing, suffer from too few observations. To address this problem, we propose to actively generate more suitable images instead of only selecting from the available images. The method is based on learning the relationship between navigator and data-slice motion by linear regression after dimensionality reduction. This can then be used to predict new data slices for a given navigator by warping existing data slices by their predicted displacement field. The method was evaluated for 4D-MRIs of the liver under free-breathing, where sliding boundaries pose an additional challenge for image registration. Leave-one-out tests for five short sequences of ten volunteers showed that the proposed prediction method improved on average the residual mean (95%) motion between the ground truth and predicted data slice from 0.9mm (1.9mm) to 0.8mm (1.6mm) in comparison to the best selection method. The approach was particularly suited for unusual motion states, where the mean error was reduced by 40% (2.2mm vs. 1.3mm).
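A minimal sketch of the prediction step under stated assumptions: navigator features and data-slice displacement fields are synthetic vectors, both are reduced with PCA, and a linear regression maps one space to the other, mirroring the "regression after dimensionality reduction" idea at a toy scale.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Toy training pairs: navigator-slice features and the corresponding
# displacement fields of a data slice (both flattened to vectors).
n_train = 80
navigators = rng.normal(size=(n_train, 1024))
displacements = navigators @ rng.normal(size=(1024, 2048)) * 0.01 \
    + rng.normal(scale=0.05, size=(n_train, 2048))

# Reduce both spaces, learn a linear map between them, and predict the
# displacement field (hence a new data slice, by warping) for a new navigator.
pca_nav = PCA(n_components=10).fit(navigators)
pca_dis = PCA(n_components=10).fit(displacements)
reg = LinearRegression().fit(pca_nav.transform(navigators),
                             pca_dis.transform(displacements))

new_navigator = rng.normal(size=(1, 1024))
predicted_field = pca_dis.inverse_transform(
    reg.predict(pca_nav.transform(new_navigator)))
print(predicted_field.shape)  # (1, 2048): one flattened displacement field
```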
Collapse
|
28
|
Wang Q, Kim M, Shi Y, Wu G, Shen D. Predict brain MR image registration via sparse learning of appearance and transformation. Med Image Anal 2014; 20:61-75. [PMID: 25476412 DOI: 10.1016/j.media.2014.10.007] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2014] [Revised: 10/11/2014] [Accepted: 10/23/2014] [Indexed: 10/24/2022]
Abstract
We propose a new approach to register the subject image with the template by leveraging a set of intermediate images that are pre-aligned to the template. We argue that, if points in the subject and the intermediate images share similar local appearances, they may have common correspondence in the template. In this way, we learn the sparse representation of a certain subject point to reveal several similar candidate points in the intermediate images. Each selected intermediate candidate can bridge the correspondence from the subject point to the template space, thus predicting the transformation associated with the subject point at the confidence level that relates to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points, and retain multiple predictions on each key point, instead of allowing only a single correspondence. Then, by utilizing all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. We further embed the prediction-reconstruction protocol above into a multi-resolution hierarchy. Finally, we refine the estimated transformation field with an existing registration method. We apply our method to registering brain MR images and conclude that the proposed framework substantially improves registration performance.
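To make the sparse-representation step concrete, here is a hedged toy sketch: the patch around one subject key point is coded over patches from intermediate images with a Lasso, and the non-zero coefficients weight the candidates' known displacements into a predicted transformation. The patch size, Lasso penalty, and averaging rule are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)

# Toy setup: patches around one key point in 50 intermediate images, and the
# 3D displacement each intermediate image maps that point to in template space.
patches = rng.normal(size=(50, 125))          # 5x5x5 patches, flattened
displacements = rng.normal(size=(50, 3))
subject_patch = patches[:5].mean(axis=0) + rng.normal(scale=0.1, size=125)

# Sparse representation of the subject patch over the intermediate patches.
lasso = Lasso(alpha=0.05, positive=True, max_iter=10_000)
lasso.fit(patches.T, subject_patch)           # dictionary columns are patches
coef = lasso.coef_

# Predict the subject point's transformation as a coefficient-weighted average
# over the selected intermediate candidates.
if coef.sum() > 0:
    predicted = (coef[:, None] * displacements).sum(axis=0) / coef.sum()
    print("selected candidates:", np.flatnonzero(coef),
          "predicted displacement:", np.round(predicted, 3))
else:
    print("no candidates selected; decrease the Lasso penalty")
```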
Collapse
Affiliation(s)
- Qian Wang
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
| | - Minjeong Kim
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
| | - Yonghong Shi
- Digital Medical Research Center, Shanghai Key Lab of MICCAI, School of Basic Medical Sciences, Fudan University, Shanghai, China
| | - Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea.
| | | |
Collapse
|
29
|
Ou Y, Akbari H, Bilello M, Da X, Davatzikos C. Comparative evaluation of registration algorithms in different brain databases with varying difficulty: results and insights. IEEE TRANSACTIONS ON MEDICAL IMAGING 2014; 33:2039-65. [PMID: 24951685 PMCID: PMC4371548 DOI: 10.1109/tmi.2014.2330355] [Citation(s) in RCA: 111] [Impact Index Per Article: 10.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question of whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well, and while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms, for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for the algorithm development and evaluations.
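The accuracy measures mentioned here are standard; as a reminder of how they are computed, below is a small self-contained sketch of Dice overlap between label volumes and mean landmark error in millimetres, evaluated on synthetic spheres and perturbed points rather than on any of the seven databases.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary label volumes."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def landmark_error(warped_points, target_points, spacing=(1.0, 1.0, 1.0)):
    """Mean Euclidean distance (in mm) between warped and expert landmarks."""
    diff = (warped_points - target_points) * np.asarray(spacing)
    return np.linalg.norm(diff, axis=1).mean()

# Toy example: two overlapping spheres and slightly perturbed landmarks.
grid = np.indices((64, 64, 64)).transpose(1, 2, 3, 0)
seg_a = np.linalg.norm(grid - 32, axis=-1) < 12
seg_b = np.linalg.norm(grid - 34, axis=-1) < 12
landmarks = np.random.default_rng(6).uniform(0, 64, size=(20, 3))
print("Dice:", round(dice(seg_a, seg_b), 3),
      "landmark error:", round(landmark_error(landmarks + 1.5, landmarks), 3))
```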
Collapse
|
30
|
Manifold population modeling as a neuro-imaging biomarker: Application to ADNI and ADNI-GO. Neuroimage 2014; 94:275-286. [PMID: 24657351 DOI: 10.1016/j.neuroimage.2014.03.036] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2013] [Revised: 01/21/2014] [Accepted: 03/12/2014] [Indexed: 01/18/2023] Open
|
31
|
Piella G. Diffusion maps for multimodal registration. SENSORS 2014; 14:10562-77. [PMID: 24936947 PMCID: PMC4118417 DOI: 10.3390/s140610562] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/14/2014] [Revised: 06/06/2014] [Accepted: 06/06/2014] [Indexed: 11/16/2022]
Abstract
Multimodal image registration is a difficult task, due to the significant intensity variations between the images. A common approach is to use sophisticated similarity measures, such as mutual information, that are robust to those intensity variations. However, these similarity measures are computationally expensive and, moreover, often fail to capture the geometry and the associated dynamics linked with the images. Another approach is the transformation of the images into a common space where modalities can be directly compared. Within this approach, we propose to register multimodal images by using diffusion maps to describe the geometric and spectral properties of the data. Through diffusion maps, the multimodal data is transformed into a new set of canonical coordinates that reflect its geometry uniformly across modalities, so that meaningful correspondences can be established between them. Images in this new representation can then be registered using a simple Euclidean distance as a similarity measure. Registration accuracy was evaluated on both real and simulated brain images with known ground-truth for both rigid and non-rigid registration. Results showed that the proposed approach achieved higher accuracy than the conventional approach using mutual information.
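A minimal diffusion-map sketch, assuming tiny synthetic "modalities" derived from a shared 1D structure: a Gaussian affinity matrix is row-normalised into a Markov matrix and its leading non-trivial eigenvectors give the diffusion coordinates. Aligning the two embeddings so that Euclidean distances are directly comparable across modalities (which the registration itself requires) is not shown.

```python
import numpy as np

def diffusion_map(X, n_components=2, epsilon=1.0, t=1):
    """Minimal diffusion-map embedding of the row vectors in X."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / epsilon)                      # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)           # row-normalised Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial constant eigenvector; scale by eigenvalues^t.
    return vecs[:, 1:n_components + 1] * (vals[1:n_components + 1] ** t)

rng = np.random.default_rng(7)
# Toy "multimodal" data: two modalities of the same underlying 1D structure.
s = np.sort(rng.uniform(size=40))
mod_a = np.column_stack([s, s ** 2]) + rng.normal(scale=0.01, size=(40, 2))
mod_b = np.column_stack([np.exp(s), -s]) + rng.normal(scale=0.01, size=(40, 2))

# Each modality is mapped into its own diffusion coordinates; after alignment,
# correspondences can be scored with plain Euclidean distance.
emb_a, emb_b = diffusion_map(mod_a), diffusion_map(mod_b)
print(emb_a.shape, emb_b.shape)
```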
Collapse
Affiliation(s)
- Gemma Piella
- Department of Information & Communication Technologies, Universitat Pompeu Fabra, Barcelona 08018, Spain.
| |
Collapse
|
32
|
Ye DH, Desjardins B, Hamm J, Litt H, Pohl KM. Regional manifold learning for disease classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2014; 33:1236-1247. [PMID: 24893254 PMCID: PMC5450500 DOI: 10.1109/tmi.2014.2305751] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
While manifold learning from images itself has become widely used in medical image analysis, the accuracy of existing implementations suffers from viewing each image as a single data point. To address this issue, we parcellate images into regions and then separately learn the manifold for each region. We use the regional manifolds as low-dimensional descriptors of high-dimensional morphological image features, which are then fed into a classifier to identify regions affected by disease. We produce a single ensemble decision for each scan by the weighted combination of these regional classification results. Each weight is determined by the regional accuracy of detecting the disease. When applied to cardiac magnetic resonance imaging of 50 normal controls and 50 patients with reconstructive surgery of Tetralogy of Fallot, our method achieves significantly better classification accuracy than approaches learning a single manifold across the entire image domain.
Collapse
Affiliation(s)
| | - Benoit Desjardins
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104 USA
| | - Jihun Hamm
- Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 USA
| | - Harold Litt
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104 USA
| | - Kilian M. Pohl
- Center for Health Sciences, SRI International, Menlo Park, CA 94025 USA, and also with the Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94304 USA
| |
Collapse
|
33
|
Ye DH, Desjardins B, Ferrari V, Metaxas D, Pohl KM. AUTO-ENCODING OF DISCRIMINATING MORPHOMETRY FROM CARDIAC MRI. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2014; 2014:217-221. [PMID: 28593032 PMCID: PMC5459374 DOI: 10.1109/isbi.2014.6867848] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
We propose a fully-automatic morphometric encoding targeted towards differentiating diseased from healthy cardiac MRI. Existing encodings rely on accurate segmentations of each scan. Segmentation generally includes labour-intensive editing and increases the risk associated with intra- and inter-rater variability. Our morphometric framework only requires the segmentation of a template scan. This template is non-rigidly registered to the other scans. We then confine the resulting deformation maps to the regions outlined by the segmentations. We learn a manifold for each region and identify the most informative coordinates with respect to distinguishing diseased from healthy scans. Compared with volumetric measurements and a deformation-based score, this encoding is much more accurate in capturing morphometric patterns distinguishing healthy subjects from those with Tetralogy of Fallot, diastolic dysfunction, and hypertrophic cardiomyopathy.
Collapse
Affiliation(s)
- Dong Hye Ye
- Department of Electrical and Computer Engineering, Purdue University
| | | | | | | | - Kilian M Pohl
- SRI International & Department of Psychiatry and Behavioral Sciences, Stanford University
| |
Collapse
|
34
|
Lee J, Lyu I, Styner M. Multi-atlas segmentation with particle-based group-wise image registration. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2014; 9034:903447. [PMID: 25075158 PMCID: PMC4112129 DOI: 10.1117/12.2043333] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
We propose a novel multi-atlas segmentation method that employs a group-wise image registration method for brain segmentation on rodent magnetic resonance (MR) images. The core element of the proposed segmentation is the use of a particle-guided image registration method that extends the concept of particle correspondence into the volumetric image domain. The registration method performs a group-wise image registration that simultaneously registers a set of images toward the space defined by the average of particles. The particle-guided image registration method is robust to images with low signal-to-noise ratio as well as to the differing sizes and shapes observed in the developing rodent brain. Also, the use of an implicit common reference frame can prevent potential bias induced by the use of a single template in the segmentation process. We show that particle-guided image registration can be naturally extended to a novel multi-atlas segmentation method and improved to explicitly use the provided template labels as an additional constraint. In our experiments, we show that the segmentation algorithm achieves higher accuracy with multi-atlas label fusion and greater stability than pair-wise image registration. A comparison with a previous group-wise registration method is provided as well.
Collapse
Affiliation(s)
- Joohwi Lee
- University of North Carolina at Chapel Hill, Department of Computer Science
| | - Ilwoo Lyu
- University of North Carolina at Chapel Hill, Department of Computer Science
| | - Martin Styner
- University of North Carolina at Chapel Hill, Department of Computer Science
- University of North Carolina at Chapel Hill, Department of Psychiatry
| |
Collapse
|
35
|
Erus G, Zacharaki EI, Davatzikos C. Individualized statistical learning from medical image databases: application to identification of brain lesions. Med Image Anal 2014; 18:542-54. [PMID: 24607564 DOI: 10.1016/j.media.2014.02.003] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2012] [Revised: 11/27/2013] [Accepted: 02/08/2014] [Indexed: 11/25/2022]
Abstract
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated.
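A stripped-down sketch of the subspace-sampling idea with synthetic vectors: random low-dimensional feature subsets are modelled by PCA on normative data, and the test sample's accumulated reconstruction error highlights abnormal features. The target-specific feature selection and the "estimability" criterion described above are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)

# Normative training images (flattened) and one abnormal test image.
normals = rng.normal(size=(200, 300))
test = rng.normal(size=300)
test[40:60] += 6.0                      # a localised "lesion"

# Sample random low-dimensional feature subspaces; within each, model normal
# variation with PCA and accumulate the test image's reconstruction error.
abnormality = np.zeros(300)
for _ in range(200):
    idx = rng.choice(300, size=30, replace=False)
    pca = PCA(n_components=5).fit(normals[:, idx])
    recon = pca.inverse_transform(pca.transform(test[idx][None, :]))[0]
    abnormality[idx] += np.abs(test[idx] - recon)

print("most abnormal features:", np.argsort(-abnormality)[:10])
```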
Collapse
Affiliation(s)
- Guray Erus
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA.
| | - Evangelia I Zacharaki
- Department of Medical Physics, School of Medicine, University of Patras, Patras, Greece
| | - Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| |
Collapse
|
36
|
Wang J, Vachet C, Rumple A, Gouttard S, Ouziel C, Perrot E, Du G, Huang X, Gerig G, Styner M. Multi-atlas segmentation of subcortical brain structures via the AutoSeg software pipeline. Front Neuroinform 2014; 8:7. [PMID: 24567717 PMCID: PMC3915103 DOI: 10.3389/fninf.2014.00007] [Citation(s) in RCA: 82] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2013] [Accepted: 01/16/2014] [Indexed: 11/13/2022] Open
Abstract
Automated segmentation and labeling of individual brain anatomical regions in MRI is challenging due to individual structural variability. Although atlas-based segmentation has shown its potential for both tissue and structure segmentation, due to the inherent natural variability as well as disease-related changes in MR appearance, a single atlas image is often inappropriate to represent the full population of datasets processed in a given neuroimaging study. As an alternative to single-atlas segmentation, the use of multiple atlases alongside label fusion techniques has been introduced, using a set of individual “atlases” that encompasses the expected variability in the studied population. In our study, we proposed a multi-atlas segmentation scheme with a novel graph-based atlas selection technique. We first paired and co-registered all atlases and the subject MR scans. A directed graph with edge weights based on intensity and shape similarity between all MR scans is then computed. The set of neighboring templates is selected via clustering of the graph. Finally, weighted majority voting is employed to create the final segmentation over the selected atlases. This multi-atlas segmentation scheme is used to extend a single-atlas-based segmentation toolkit entitled AutoSeg, which is an open-source, extensible C++ based software pipeline employing BatchMake for its pipeline scripting, developed at the Neuro Image Research and Analysis Laboratories of the University of North Carolina at Chapel Hill. AutoSeg performs N4 intensity inhomogeneity correction, rigid registration to a common template space, automated brain tissue classification based skull-stripping, and the multi-atlas segmentation. The multi-atlas-based AutoSeg has been evaluated on subcortical structure segmentation with a testing dataset of 20 adult brain MRI scans and 15 atlas MRI scans. The AutoSeg achieved mean Dice coefficients of 81.73% for the subcortical structures.
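For concreteness, a toy sketch of the final fusion step only: weighted majority voting over already-warped atlas label maps, with arbitrary label maps and weights standing in for the graph-selected atlases and their similarity-derived weights.

```python
import numpy as np

def weighted_majority_vote(warped_labels, weights, n_labels):
    """Fuse warped atlas label maps with per-atlas weights."""
    votes = np.zeros(warped_labels.shape[1:] + (n_labels,))
    for label_map, w in zip(warped_labels, weights):
        for lab in range(n_labels):
            votes[..., lab] += w * (label_map == lab)
    return votes.argmax(axis=-1)

rng = np.random.default_rng(9)
# Toy: 5 selected atlases, each a 32x32x32 label map with 3 structures,
# and weights derived from atlas-to-subject similarity (here arbitrary).
warped_labels = rng.integers(0, 3, size=(5, 32, 32, 32))
weights = np.array([0.3, 0.25, 0.2, 0.15, 0.1])
fused = weighted_majority_vote(warped_labels, weights, n_labels=3)
print(fused.shape, np.bincount(fused.ravel()))
```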
Collapse
Affiliation(s)
- Jiahui Wang
- Department of Psychiatry, University of North Carolina Chapel Hill, NC, USA
| | - Clement Vachet
- Scientific Computing and Imaging Institute, University of Utah Salt Lake City, UT, USA
| | - Ashley Rumple
- Department of Psychiatry, University of North Carolina Chapel Hill, NC, USA
| | | | - Clémentine Ouziel
- Department of Psychiatry, University of North Carolina Chapel Hill, NC, USA
| | - Emilie Perrot
- Department of Psychiatry, University of North Carolina Chapel Hill, NC, USA
| | - Guangwei Du
- Department of Neurology, Neurosurgery and Radiology, Pennsylvania State University Milton Hershey Medical Center Hershey, PA, USA
| | - Xuemei Huang
- Department of Neurology, Neurosurgery and Radiology, Pennsylvania State University Milton Hershey Medical Center Hershey, PA, USA
| | - Guido Gerig
- Scientific Computing and Imaging Institute, University of Utah Salt Lake City, UT, USA
| | - Martin Styner
- Department of Psychiatry, University of North Carolina Chapel Hill, NC, USA ; Department of Computer Science, University of North Carolina Chapel Hill, NC, USA
| |
Collapse
|
37
|
Cerveri P, Manzotti A, Vanzulli A, Baroni G. Local Shape Similarity and Mean-Shift Curvature for Deformable Surface Mapping of Anatomical Structures. IEEE Trans Biomed Eng 2014; 61:16-24. [DOI: 10.1109/tbme.2013.2274672] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
38
|
Wachinger C, Golland P, Reuter M. BrainPrint: identifying subjects by their brain. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2014; 17:41-8. [PMID: 25320780 PMCID: PMC4216735 DOI: 10.1007/978-3-319-10443-0_6] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
We introduce BrainPrint, a compact and discriminative representation of anatomical structures in the brain. BrainPrint captures shape information of an ensemble of cortical and subcortical structures by solving the eigenvalue problem of the 2D and 3D Laplace-Beltrami operator on triangular (boundary) and tetrahedral (volumetric) meshes. We derive a robust classifier for this representation that identifies the subject in a new scan, based on a database of brain scans. In an example dataset containing over 3000 MRI scans, we show that BrainPrint captures unique information about the subject's anatomy and permits a scan to be correctly classified with an accuracy of over 99.8%. All processing steps for obtaining the compact representation are fully automated, making this processing framework particularly attractive for handling large datasets.
Collapse
|
39
|
Nasreddine K, Benzinou A, Fablet R. Geodesics-based image registration: applications to biological and medical images depicting concentric ring patterns. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2013; 22:4436-4446. [PMID: 23880058 DOI: 10.1109/tip.2013.2273670] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
In many biological or medical applications, images that contain sequences of shapes are common. The existence of high inter-individual variability makes their interpretation complex. In this paper, we address the computer-assisted interpretation of such images and we investigate how we can remove or reduce these image variabilities. The proposed approach relies on the development of an efficient image registration technique. We first show the inadequacy of state-of-the-art intensity-based and feature-based registration techniques for the considered image datasets. Then, we propose a robust variational method which benefits from the geometrical information present in this type of images. In the proposed non-rigid geodesics-based registration, the successive shapes are represented by a level-set representation, which we rely on to carry out the registration. The successive level sets are regarded as elements in a shape space and the corresponding matching is that of the optimal geodesic path. The proposed registration scheme is tested on synthetic and real images. The comparison against results of state-of-the-art methods proves the relevance of the proposed method for this type of images.
Collapse
|
40
|
Konukoglu E, Glocker B, Zikic D, Criminisi A. Neighbourhood approximation using randomized forests. Med Image Anal 2013; 17:790-804. [DOI: 10.1016/j.media.2013.04.013] [Citation(s) in RCA: 51] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2012] [Revised: 04/23/2013] [Accepted: 04/24/2013] [Indexed: 11/29/2022]
|
41
|
Abstract
Magnetic resonance imaging has become an important noninvasive technique to gain insight into fetal brain development. Its capabilities go beyond ultrasound when diagnosing high-risk pregnancies. To summarize observations across a population in magnetic resonance imaging studies, reference systems such as atlases that establish correspondences across a cohort are key. In this article, we review the evolution of atlas-building methods in light of their relevance, limitations, and benefits for the modeling of human brain development. Starting with single anatomical templates to which brain scans were mapped, such as Talairach and Montreal Neurological Institute space, we explore the uses of atlases as a means to establish correspondences across a cohort and as a model that captures the population characteristics of the cases the atlas is built from. We discuss methods that capture features of increasingly heterogeneous populations and approaches that are able to generalize with only minimal annotation. The main focus of this review is methods that explicitly model the variability in the population with regard to time, such as in the modeling of disease progression and brain development. We highlight the applicability and limitations of state-of-the-art approaches, how insights from the study of disease progression are helpful in developmental studies, and point to the directions of future research that is still necessary.
Collapse
|
42
|
Anderson R, Stenger B, Cipolla R. Using Bounded Diameter Minimum Spanning Trees to Build Dense Active Appearance Models. Int J Comput Vis 2013. [DOI: 10.1007/s11263-013-0661-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
43
|
Hoang Duc AK, Modat M, Leung KK, Cardoso MJ, Barnes J, Kadir T, Ourselin S. Using manifold learning for atlas selection in multi-atlas segmentation. PLoS One 2013; 8:e70059. [PMID: 23936376 PMCID: PMC3732273 DOI: 10.1371/journal.pone.0070059] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2012] [Accepted: 06/15/2013] [Indexed: 11/23/2022] Open
Abstract
Multi-atlas segmentation has been widely used to segment various anatomical structures. The success of this technique partly relies on the selection of atlases that are best mapped to a new target image after registration. Recently, manifold learning has been proposed as a method for atlas selection. Each manifold learning technique seeks to optimize a unique objective function. Therefore, different techniques produce different embeddings even when applied to the same data set. Previous studies used a single technique in their method and gave no reason for the choice of the manifold learning technique employed nor the theoretical grounds for the choice of the manifold parameters. In this study, we compare side-by-side the results given by 3 manifold learning techniques (Isomap, Laplacian Eigenmaps and Locally Linear Embedding) on the same data set. We assess the ability of those 3 different techniques to select the best atlases to combine in the framework of multi-atlas segmentation. First, a leave-one-out experiment is used to optimize our method on a set of 110 manually segmented atlases of hippocampi and find the manifold learning technique and associated manifold parameters that give the best segmentation accuracy. Then, the optimal parameters are used to automatically segment 30 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI). For our dataset, the selection of atlases with Locally Linear Embedding gives the best results. Our findings show that selection of atlases with manifold learning leads to segmentation accuracy close to or significantly higher than the state-of-the-art method and that accuracy can be increased by fine tuning the manifold learning process.
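A small sketch of the side-by-side comparison, using scikit-learn's Isomap, Laplacian eigenmaps (SpectralEmbedding), and LLE on synthetic image vectors; in the study itself the selected atlases would then be fused and judged by segmentation accuracy, which is not reproduced here. The neighbourhood sizes and embedding dimensions are arbitrary assumptions.

```python
import numpy as np
from sklearn.manifold import Isomap, SpectralEmbedding, LocallyLinearEmbedding

rng = np.random.default_rng(10)

# Toy: 110 flattened atlas images plus 1 target image.
atlases = rng.normal(size=(110, 400))
target = rng.normal(size=(1, 400))
data = np.vstack([atlases, target])

embeddings = {
    "isomap": Isomap(n_neighbors=10, n_components=3).fit_transform(data),
    "laplacian_eigenmaps": SpectralEmbedding(n_neighbors=10,
                                             n_components=3).fit_transform(data),
    "lle": LocallyLinearEmbedding(n_neighbors=10,
                                  n_components=3).fit_transform(data),
}

# For each technique, pick the atlases closest to the target in embedded space;
# segmentation accuracy of the fused result would then decide which works best.
for name, emb in embeddings.items():
    d = np.linalg.norm(emb[:-1] - emb[-1], axis=1)
    print(name, "selected atlases:", np.argsort(d)[:5])
```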
Collapse
Affiliation(s)
- Albert K Hoang Duc
- Centre for Medical Image Computing, University College London, London, United Kingdom.
| | | | | | | | | | | | | |
Collapse
|
44
|
Crum WR, Modo M, Vernon AC, Barker GJ, Williams SCR. Registration of challenging pre-clinical brain images. J Neurosci Methods 2013; 216:62-77. [PMID: 23558335 PMCID: PMC3683149 DOI: 10.1016/j.jneumeth.2013.03.015] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2012] [Revised: 02/27/2013] [Accepted: 03/24/2013] [Indexed: 01/15/2023]
Abstract
The size and complexity of brain imaging studies in pre-clinical populations are increasing, and automated image analysis pipelines are urgently required. Pre-clinical populations can be subjected to controlled interventions (e.g., targeted lesions), which significantly change the appearance of the brain obtained by imaging. Existing systems for registration (the systematic alignment of scans into a consistent anatomical coordinate system), which assume image similarity to a reference scan, may fail when applied to these images. However, affine registration is a particularly vital pre-processing step for subsequent image analysis which is assumed to be an effective procedure in recent literature describing sophisticated techniques such as manifold learning. Therefore, in this paper, we present an affine registration solution that uses a graphical model of a population to decompose difficult pairwise registrations into a composition of steps using other members of the population. We developed this methodology in the context of a pre-clinical model of stroke in which large, variable hyper-intense lesions significantly impact registration performance. We tested this technique systematically in a simulated human population of brain tumour images before applying it to pre-clinical models of Parkinson's disease and stroke.
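A hedged sketch of the graph idea with a synthetic cost matrix: each scan is a node, edge weights encode how hard a direct pairwise registration is, and a difficult registration is decomposed into a composition of easier steps by following the cheapest path through the population.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(11)

# Toy pairwise registration "difficulty" between 8 scans (e.g., 1 - similarity);
# the direct registration from scan 0 to scan 7 is made deliberately hard.
n = 8
cost = rng.uniform(0.1, 0.5, size=(n, n))
cost = (cost + cost.T) / 2
np.fill_diagonal(cost, 0)
cost[0, 7] = cost[7, 0] = 5.0

# Decompose the difficult pairwise registration into a composition of easier
# steps through intermediate scans: the cheapest path in the population graph.
dist, predecessors = shortest_path(cost, directed=False, return_predecessors=True)
path, node = [7], 7
while node != 0:
    node = predecessors[0, node]
    path.append(node)
print("registration path 0 -> 7:", path[::-1], "total cost:", round(dist[0, 7], 3))
```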
Collapse
Affiliation(s)
- William R Crum
- Kings College London, Department of Neuroimaging, Institute of Psychiatry, De Crespigny Park, London SE5 8AF, United Kingdom.
| | | | | | | | | |
Collapse
|
45
|
Xie Y, Ho J, Vemuri BC. Multiple Atlas construction from a heterogeneous brain MR image collection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2013; 32:628-35. [PMID: 23335665 PMCID: PMC3595350 DOI: 10.1109/tmi.2013.2239654] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
In this paper, we propose a novel framework for computing single or multiple atlases (templates) from a large population of images. Unlike many existing methods, our proposed approach is distinguished by its emphasis on the sharpness of the computed atlases and the requirement of rotational invariance. In particular, we argue that sharp atlas images that retain crucial and important anatomical features with high fidelity are more useful for many medical imaging applications when compared with the blurry and fuzzy atlas images computed by most existing methods. The geometric notion that underlies our approach is the idea of manifold learning in a quotient space, the quotient space of the image space by the rotations. We present an extension of the existing manifold learning approach to quotient spaces by using invariant metrics, and utilizing the manifold structure for partitioning the images into more homogeneous sub-collections, each of which can be represented by a single atlas image. Specifically, we propose a three-step algorithm. First, we partition the input images into subgroups using unsupervised or semi-supervised learning methods on manifolds. Then we formulate a convex optimization problem in each subgroup to locate the atlases and determine the crucial neighbors that are used in the realization step to form the template images. We have evaluated our algorithm using whole brain MR volumes from the OASIS database. Experimental results demonstrate that the atlases computed using the proposed algorithm not only discover the brain structural changes in different age groups but also preserve important structural details and generally enjoy better image quality.
Collapse
Affiliation(s)
- Yuchen Xie
- Department of Computer and Information Science and Engineering (CISE), University of Florida, Gainesville, FL 32611, USA.
| | | | | |
Collapse
|
46
|
Li J, Shi Y, Dinov ID, Toga AW. Locally Weighted Multi-atlas Construction. MULTIMODAL BRAIN IMAGE ANALYSIS : THIRD INTERNATIONAL WORKSHOP, MBIA 2013, HELD IN CONJUNCTION WITH MICCAI 2013, NAGOYA, JAPAN, SEPTEMBER 22, 2013 : PROCEEDINGS. MBIA (WORKSHOP) (3RD : 2013 : NAGOYA-SHI, JAPAN) 2013; 8159:1-8. [PMID: 25392851 PMCID: PMC4225708 DOI: 10.1007/978-3-319-02126-3_1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
In image-based medical research, atlases are widely used in many tasks, for example, spatial normalization and segmentation. If atlases are regarded as representative patterns for a population of images, then multiple atlases are required for a heterogeneous population. In conventional atlas construction methods, the "unit" of representative patterns is images. Every input image is associated with its most similar atlas. As the number of subjects increases, the heterogeneity increases accordingly, and a large number of atlases may be needed. In this paper, we explore using region-wise, instead of image-wise, patterns to represent a population. Different parts of an input image are fuzzily associated with different atlases according to voxel-level association weights. In this way, regional structure patterns from different atlases can be combined together. Based on this model, we design a variational framework for multi-atlas construction. In the application to two T1-weighted MRI data sets, the method shows promising performance, in comparison with a conventional unbiased atlas construction method.
Collapse
|
47
|
Multi-atlas segmentation with robust label transfer and label fusion. INFORMATION PROCESSING IN MEDICAL IMAGING : PROCEEDINGS OF THE ... CONFERENCE 2013; 23:548-59. [PMID: 24683998 DOI: 10.1007/978-3-642-38868-2_46] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
Multi-atlas segmentation has been widely applied in medical image analysis. This technique relies on image registration to transfer segmentation labels from pre-labeled atlases to a novel target image and applies label fusion to reduce errors produced by registration-based label transfer. To improve the performance of registration-based label transfer against registration errors, our first contribution is to propose a label transfer scheme that generates multiple warped versions of each atlas to one target image through registration paths obtained by composing inter-atlas registrations and atlas-target registrations. The problem of decreasing quality of warped atlases caused by accumulative errors in composing multiple registrations is properly addressed by an atlas selection method that is guided by atlas segmentations. To improve the performance of label fusion against registration errors, our second contribution is to integrate the probabilistic correspondence model employed by the non-local mean approach with the joint label fusion technique, both of which have shown excellent performance for label fusion. Experiments on mitral-valve segmentation in 3D transesophageal echocardiography (TEE) show the effectiveness of the proposed techniques.
Collapse
|
48
|
Brosch T, Tam R. Manifold learning of brain MRIs by deep learning. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2013; 16:633-40. [PMID: 24579194 DOI: 10.1007/978-3-642-40763-5_78] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
Manifold learning of medical images plays a potentially important role for modeling anatomical variability within a population, with applications that include segmentation, registration, and prediction of clinical parameters. This paper describes a novel method for learning the manifold of 3D brain images that, unlike most existing manifold learning methods, does not require the manifold space to be locally linear, and does not require a predefined similarity measure or a prebuilt proximity graph. Our manifold learning method is based on deep learning, a machine learning approach that uses layered networks (called deep belief networks, or DBNs) and has received much attention recently in the computer vision field due to their success in object recognition tasks. DBNs have traditionally been too computationally expensive for application to 3D images due to the large number of trainable parameters. Our primary contributions are (1) a much more computationally efficient training method for DBNs that makes training on 3D medical images with a resolution of up to 128 x 128 x 128 practical, and (2) the demonstration that DBNs can learn a low-dimensional manifold of brain volumes that detects modes of variations that correlate to demographic and disease parameters.
Collapse
Affiliation(s)
- Tom Brosch
- Electrical and Computer Engineering, MS/MRI Research Group, University of British Columbia, Vancouver, Canada
| | - Roger Tam
- Department of Radiology, MS/MRI Research Group, University of British Columbia, Vancouver, Canada
| | | |
Collapse
|
49
|
Gauthier JF, Varfalvy N, Tremblay D, Cyr MF, Archambault L. Characterization of lung tumors motion baseline using cone-beam computed tomography. Med Phys 2012; 39:7062-70. [DOI: 10.1118/1.4762563] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
|
50
|
Ye DH, Hamm J, Pohl KM. COMBINING REGIONAL METRICS FOR DISEASE-RELATED BRAIN POPULATION ANALYSIS. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2012; 2012:1515-1518. [PMID: 28593031 DOI: 10.1109/isbi.2012.6235860] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
In this paper, we present a new metric combining regional measurements to improve image-based population studies that use manifold learning techniques. These studies currently rely on a single score over the whole brain image domain. Thus, they require a large amount of training data to uncover spatially complex variation in the whole brain impacted by diseases. We reduce the impact of this issue by first computing pairwise measurements in local regions separately and then combining regional measurements into a single pairwise metric. We apply the new metric to learn the manifold of ADNI data and evaluate the resulting morphological representation by fitting multiple linear regression models to the mini-mental state examination (MMSE) score. The regression models show that the morphological representations from the proposed metric achieve higher estimation accuracy of the MMSE score compared to those from the conventional global scores.
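A toy sketch of the combination step, assuming synthetic regional distance matrices and MMSE scores: regional pairwise distances are merged into one metric (a plain average here, where the paper combines regions more carefully), embedded with MDS, and regressed against MMSE.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(12)

# Toy: pairwise distance matrices computed separately in 3 brain regions
# for 60 subjects, plus their MMSE scores.
n_subjects, n_regions = 60, 3
coords = rng.normal(size=(n_regions, n_subjects, 5))
regional_d = np.array([np.linalg.norm(c[:, None] - c[None, :], axis=-1) for c in coords])
mmse = rng.uniform(15, 30, size=n_subjects)

# Combine regional distances into a single pairwise metric, embed the subjects,
# and fit a linear regression of MMSE on the embedded coordinates.
combined = regional_d.mean(axis=0)
embedding = MDS(n_components=4, dissimilarity="precomputed",
                random_state=0).fit_transform(combined)
reg = LinearRegression().fit(embedding, mmse)
print("training R^2:", round(reg.score(embedding, mmse), 3))
```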
Collapse
Affiliation(s)
- Dong Hye Ye
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104
| | - Jihun Hamm
- Department of Computer Science and Engineering, Ohio State University, Columbus, OH, 43210
| | - Kilian M Pohl
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104
| |
Collapse
|