1. Zhang Y, Peng C, Wang Q, Song D, Li K, Kevin Zhou S. Unified Multi-Modal Image Synthesis for Missing Modality Imputation. IEEE Trans Med Imaging 2025; 44:4-18. [PMID: 38976465] [DOI: 10.1109/tmi.2024.3424785]
Abstract
Multi-modal medical images provide complementary soft-tissue characteristics that aid in the screening and diagnosis of diseases. However, limited scanning time, image corruption and varying imaging protocols often result in incomplete multi-modal images, limiting the usage of multi-modal data for clinical purposes. To address this issue, we propose a novel unified multi-modal image synthesis method for missing modality imputation. Our method adopts a generative adversarial architecture, which aims to synthesize missing modalities from any combination of available ones with a single model. To this end, we specifically design a Commonality- and Discrepancy-Sensitive Encoder for the generator to exploit both the modality-invariant and modality-specific information contained in the input modalities. The incorporation of both types of information facilitates the generation of images with consistent anatomy and realistic details of the desired distribution. In addition, we propose a Dynamic Feature Unification Module to integrate information from a varying number of available modalities, which makes the network robust to randomly missing modalities. The module performs both hard and soft integration, ensuring the effectiveness of feature combination while avoiding information loss. Verified on two public multi-modal magnetic resonance datasets, the proposed method effectively handles various synthesis tasks and shows superior performance compared to previous methods.
2. Liu J, Pasumarthi S, Duffy B, Gong E, Datta K, Zaharchuk G. One Model to Synthesize Them All: Multi-Contrast Multi-Scale Transformer for Missing Data Imputation. IEEE Trans Med Imaging 2023; 42:2577-2591. [PMID: 37030684] [PMCID: PMC10543020] [DOI: 10.1109/tmi.2023.3261707]
Abstract
Multi-contrast magnetic resonance imaging (MRI) is widely used in clinical practice as each contrast provides complementary information. However, the availability of each imaging contrast may vary amongst patients, which poses challenges to radiologists and automated image analysis algorithms. A general approach for tackling this problem is missing data imputation, which aims to synthesize the missing contrasts from existing ones. While several convolutional neural networks (CNN) based algorithms have been proposed, they suffer from the fundamental limitations of CNN models, such as the requirement for fixed numbers of input and output channels, the inability to capture long-range dependencies, and the lack of interpretability. In this work, we formulate missing data imputation as a sequence-to-sequence learning problem and propose a multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize those that are missing. MMT consists of a multi-scale Transformer encoder that builds hierarchical representations of inputs combined with a multi-scale Transformer decoder that generates the outputs in a coarse-to-fine fashion. The proposed multi-contrast Swin Transformer blocks can efficiently capture intra- and inter-contrast dependencies for accurate image synthesis. Moreover, MMT is inherently interpretable as it allows us to understand the importance of each input contrast in different regions by analyzing the in-built attention maps of Transformer blocks in the decoder. Extensive experiments on two large-scale multi-contrast MRI datasets demonstrate that MMT outperforms the state-of-the-art methods quantitatively and qualitatively.
3. Bermudez C, Remedios SW, Ramadass K, McHugo M, Heckers S, Huo Y, Landman BA. Generalizing deep whole-brain segmentation for post-contrast MRI with transfer learning. J Med Imaging (Bellingham) 2020; 7:064004. [PMID: 33381612] [PMCID: PMC7757519] [DOI: 10.1117/1.jmi.7.6.064004]
Abstract
Purpose: Generalizability is an important problem in deep neural networks, especially given the variability of data acquisition in clinical magnetic resonance imaging (MRI). Recently, the spatially localized atlas network tiles (SLANT) method was shown to effectively segment whole-brain, non-contrast T1w MRI into 132 volumetric labels. Transfer learning (TL) is a commonly used domain adaptation tool for updating neural network weights to accommodate local factors, yet it risks degrading performance on the original validation/test cohorts. Approach: We explore TL using unlabeled clinical data to address these concerns in the context of adapting SLANT to scanning protocol variations. We optimize whole-brain segmentation on heterogeneous clinical data by leveraging 480 unlabeled pairs of clinically acquired T1w MRI with and without intravenous contrast. We use labels generated on the pre-contrast image to train on the post-contrast image in a five-fold cross-validation framework. We further validate on a withheld test set of 29 paired scans from a different acquisition domain. Results: Using TL, we improve reproducibility across imaging pairs, measured by the reproducibility Dice coefficient (rDSC) between the pre- and post-contrast images. We show an increase over the original SLANT algorithm (rDSC 0.82 versus 0.72) and the FreeSurfer v6.0.1 segmentation pipeline (rDSC = 0.53). We demonstrate the impact of this work by decreasing the root-mean-squared error of volumetric estimates of the hippocampus between paired images of the same subject by 67%. Conclusion: This work demonstrates a pipeline that uses unlabeled clinical data to translate algorithms optimized for research data so that they generalize toward heterogeneous clinical acquisitions.
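The volumetric comparison above boils down to a root-mean-squared error between paired estimates from the same subjects. A minimal sketch (the volumes below are made-up illustrative numbers, not the paper's data):

```python
import math

def paired_rmse(pre, post):
    """RMSE between paired volume estimates (e.g. hippocampus, in mm^3) of the same subjects."""
    assert len(pre) == len(post)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pre, post)) / len(pre))

# Hypothetical pre-/post-contrast hippocampal volume estimates for three subjects.
pre_contrast  = [3500.0, 4100.0, 3800.0]
post_contrast = [3450.0, 4180.0, 3790.0]
print(paired_rmse(pre_contrast, post_contrast))
```

A lower RMSE between pre- and post-contrast estimates indicates more reproducible segmentation across the contrast change.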
Affiliation(s)
- Camilo Bermudez
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Samuel W. Remedios
- Henry Jackson Foundation, Center for Neuroscience and Regenerative Medicine, Bethesda, Maryland, United States
- Karthik Ramadass
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee, United States
- Maureen McHugo
- Vanderbilt University Medical Center, Department of Psychiatry and Behavioral Sciences, Nashville, Tennessee, United States
- Stephan Heckers
- Vanderbilt University Medical Center, Department of Psychiatry and Behavioral Sciences, Nashville, Tennessee, United States
- Yuankai Huo
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Bennett A. Landman
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee, United States
- Vanderbilt University Medical Center, Department of Psychiatry and Behavioral Sciences, Nashville, Tennessee, United States
4. Carass A, Roy S, Gherman A, Reinhold JC, Jesson A, Arbel T, Maier O, Handels H, Ghafoorian M, Platel B, Birenbaum A, Greenspan H, Pham DL, Crainiceanu CM, Calabresi PA, Prince JL, Roncal WRG, Shinohara RT, Oguz I. Evaluating White Matter Lesion Segmentations with Refined Sørensen-Dice Analysis. Sci Rep 2020; 10:8242. [PMID: 32427874] [PMCID: PMC7237671] [DOI: 10.1038/s41598-020-64803-w]
Abstract
The Sørensen-Dice index (SDI) is a widely used measure for evaluating medical image segmentation algorithms. It offers a standardized measure of segmentation accuracy that has proven useful. However, it offers diminishing insight when the number of objects is unknown, such as in white matter lesion segmentation of multiple sclerosis (MS) patients. We present a refinement of the SDI for finer-grained parsing of results in situations where the number of objects is unknown, and we explore these ideas with two case studies. The first is an inter-rater comparison, showing that smaller lesions cannot be reliably identified. In the second, we fuse multiple MS lesion segmentation algorithms, based on the insights into each algorithm provided by our analysis, to generate a segmentation with improved performance. This work demonstrates the wealth of information that can be learned from refined analysis of medical image segmentations.
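The SDI itself is straightforward to compute; the paper's refinement then operates on individual lesions rather than the whole mask. A minimal sketch of the standard index on binary masks (toy coordinates, not the paper's per-lesion analysis):

```python
def dice(a, b):
    """Sørensen-Dice index between two binary masks, each a set of voxel coordinates."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

manual    = {(0, 0), (0, 1), (5, 5)}   # toy rater delineation
automatic = {(0, 0), (0, 1), (9, 9)}   # toy algorithm output
print(dice(manual, automatic))  # 2*2 / (3+3) ≈ 0.667
```

Note how a single small missed lesion, like (5, 5) above, barely moves the global score — the motivation for a finer-grained, per-object analysis.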
Affiliation(s)
- Aaron Carass
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Snehashis Roy
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20817, USA
- Adrian Gherman
- Department of Biostatistics, The Johns Hopkins University, Baltimore, MD 21205, USA
- Jacob C Reinhold
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Andrew Jesson
- Centre For Intelligent Machines, McGill University, Montréal, QC H3A 0E9, Canada
- Tal Arbel
- Centre For Intelligent Machines, McGill University, Montréal, QC H3A 0E9, Canada
- Oskar Maier
- Institute of Medical Informatics, University of Lübeck, 23538 Lübeck, Germany
- Heinz Handels
- Institute of Medical Informatics, University of Lübeck, 23538 Lübeck, Germany
- Mohsen Ghafoorian
- Institute for Computing and Information Sciences, Radboud University, 6525 HP Nijmegen, Netherlands
- Bram Platel
- Diagnostic Image Analysis Group, Radboud University Medical Center, 6525 GA Nijmegen, Netherlands
- Ariel Birenbaum
- Department of Electrical Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel
- Hayit Greenspan
- Department of Biomedical Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel
- Dzung L Pham
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20817, USA
- Ciprian M Crainiceanu
- Department of Biostatistics, The Johns Hopkins University, Baltimore, MD 21205, USA
- Peter A Calabresi
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Jerry L Prince
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- William R Gray Roncal
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Russell T Shinohara
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics & Epidemiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Ipek Oguz
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37203, USA
5. Dar SU, Yurt M, Karacan L, Erdem A, Erdem E, Cukur T. Image Synthesis in Multi-Contrast MRI With Conditional Generative Adversarial Networks. IEEE Trans Med Imaging 2019; 38:2375-2388. [PMID: 30835216] [DOI: 10.1109/tmi.2019.2901750]
Abstract
Acquiring images of the same anatomy with multiple different contrasts increases the diversity of diagnostic information available in an MR exam. Yet, scan time limitations may prohibit the acquisition of certain contrasts, and some contrasts may be corrupted by noise and artifacts. In such cases, the ability to synthesize unacquired or corrupted contrasts can improve diagnostic utility. For multi-contrast synthesis, current methods learn a nonlinear intensity transformation between the source and target images, either via nonlinear regression or deterministic neural networks. These methods can, in turn, suffer from the loss of structural details in synthesized images. In this paper, we propose a new approach for multi-contrast MRI synthesis based on conditional generative adversarial networks. The proposed approach preserves intermediate-to-high frequency details via an adversarial loss, and it offers enhanced synthesis performance via pixel-wise and perceptual losses for registered multi-contrast images and a cycle-consistency loss for unregistered images. Information from neighboring cross-sections is utilized to further improve synthesis quality. Demonstrations on T1- and T2-weighted images from healthy subjects and patients clearly indicate the superior performance of the proposed approach compared to previous state-of-the-art methods. Our synthesis approach can help improve the quality and versatility of multi-contrast MRI exams without the need for prolonged or repeated examinations.
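The loss combination described above can be sketched numerically. The following toy code treats images as flat lists of floats and uses pix2pix-style weighting; the λ values and the omission of the perceptual and cycle-consistency terms are assumptions for brevity, not the paper's exact configuration:

```python
import math

def l1_loss(pred, target):
    """Pixel-wise L1 loss between a synthesized and a registered target image."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def adversarial_loss(disc_scores):
    """Generator-side non-saturating GAN loss, -log D(G(x)), averaged over samples."""
    return -sum(math.log(s) for s in disc_scores) / len(disc_scores)

def generator_loss(pred, target, disc_scores, lam_pix=100.0, lam_adv=1.0):
    # Weighted sum of pixel-wise and adversarial terms (pix2pix-style lambdas).
    return lam_pix * l1_loss(pred, target) + lam_adv * adversarial_loss(disc_scores)

synth  = [0.2, 0.8, 0.5]    # toy synthesized contrast
target = [0.25, 0.75, 0.5]  # toy registered ground-truth contrast
print(generator_loss(synth, target, disc_scores=[0.9, 0.8]))
```

The pixel-wise term anchors anatomy to the registered target, while the adversarial term pushes the output toward the realistic-image distribution that preserves high-frequency detail.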
6. Wei W, Poirion E, Bodini B, Durrleman S, Colliot O, Stankoff B, Ayache N. Fluid-attenuated inversion recovery MRI synthesis from multisequence MRI using three-dimensional fully convolutional networks for multiple sclerosis. J Med Imaging (Bellingham) 2019; 6:014005. [PMID: 30820439] [DOI: 10.1117/1.jmi.6.1.014005]
Abstract
Multiple sclerosis (MS) is a white matter (WM) disease characterized by the formation of WM lesions, which can be visualized by magnetic resonance imaging (MRI). The fluid-attenuated inversion recovery (FLAIR) MRI pulse sequence is used clinically and in research for the detection of WM lesions. However, in clinical settings, some MRI pulse sequences may be missing because of various constraints. The use of three-dimensional fully convolutional neural networks is proposed to predict the FLAIR pulse sequence from other MRI pulse sequences. In addition, the contribution of each input pulse sequence is evaluated with a pulse sequence-specific saliency map. The approach is tested on a real MS image dataset and evaluated by comparison with other methods and by assessing the lesion contrast in the synthetic FLAIR pulse sequence. Both the qualitative and quantitative results show that the method is competitive for FLAIR synthesis.
Affiliation(s)
- Wen Wei
- Université Côte d'Azur, Inria, Epione Project Team, Sophia Antipolis, France
- Sorbonne Université, Inserm, CNRS, Institut du cerveau et la moelle (ICM), AP-HP-Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, Paris, France
- Inria, Aramis Project Team, Paris, France
- Emilie Poirion
- Sorbonne Université, Inserm, CNRS, Institut du cerveau et la moelle (ICM), AP-HP-Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, Paris, France
- Benedetta Bodini
- Sorbonne Université, Inserm, CNRS, Institut du cerveau et la moelle (ICM), AP-HP-Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, Paris, France
- Stanley Durrleman
- Sorbonne Université, Inserm, CNRS, Institut du cerveau et la moelle (ICM), AP-HP-Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, Paris, France
- Inria, Aramis Project Team, Paris, France
- Olivier Colliot
- Sorbonne Université, Inserm, CNRS, Institut du cerveau et la moelle (ICM), AP-HP-Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, Paris, France
- Inria, Aramis Project Team, Paris, France
- Bruno Stankoff
- Sorbonne Université, Inserm, CNRS, Institut du cerveau et la moelle (ICM), AP-HP-Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, Paris, France
- Nicholas Ayache
- Université Côte d'Azur, Inria, Epione Project Team, Sophia Antipolis, France
7. Xiang L, Wang Q, Nie D, Zhang L, Jin X, Qiao Y, Shen D. Deep embedding convolutional neural network for synthesizing CT image from T1-weighted MR image. Med Image Anal 2018; 47:31-44. [PMID: 29674235] [PMCID: PMC6410565] [DOI: 10.1016/j.media.2018.03.011]
Abstract
Recently, increasing attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, the synthesis of computed tomography (CT) images from T1-weighted magnetic resonance (MR) images is of great importance, although the mapping between them is highly complex due to the large appearance gap between the two modalities. In this work, we tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and transform these feature maps forward through the convolutional layers of the network. We can further compute a tentative CT synthesis midway through the flow of feature maps and then embed this tentative synthesis result back into the feature maps. This embedding operation yields better feature maps, which are further transformed forward in the DECNN. After repeating this embedding procedure several times in the network, we eventually synthesize a final CT image at the end of the DECNN. We validated the proposed method on both brain and prostate imaging datasets, comparing it with state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) demonstrates superior performance, in terms of both the perceptual quality of the synthesized CT images and the run-time cost of synthesizing a CT image.
Affiliation(s)
- Lei Xiang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Qian Wang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Dong Nie
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Lichi Zhang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Xiyao Jin
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Yu Qiao
- Shenzhen Key Lab of Computer Vision & Pattern Recognition, Shenzhen Institutes of Advanced Technology, CAS, Shenzhen, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
8. Bermudez C, Plassard AJ, Davis TL, Newton AT, Resnick SM, Landman BA. Learning Implicit Brain MRI Manifolds with Deep Learning. Proc SPIE Int Soc Opt Eng 2018; 10574:105741L. [PMID: 29887659] [PMCID: PMC5990281] [DOI: 10.1117/12.2293515]
Abstract
An important task in image processing and neuroimaging is to extract quantitative information from acquired images in order to make observations about the presence of disease or markers of development in populations. A low-dimensional manifold representation of an image allows for easier statistical comparisons between groups and the synthesis of group representatives. Previous studies have sought to identify the best mapping of brain MRI to a low-dimensional manifold but have been limited by assumptions of explicit similarity measures. In this work, we use deep learning techniques to investigate implicit manifolds of normal brains and generate new, high-quality images. We explore implicit manifolds by addressing the problems of image synthesis and image denoising as important tools in manifold learning. First, we propose the unsupervised synthesis of T1-weighted brain MRI using a generative adversarial network (GAN) trained on 528 examples of 2D axial slices of brain MRI. Synthesized images were first shown to be unique by performing a cross-correlation with the training set. Real and synthesized images were then assessed in a blinded manner by two imaging experts, who provided image quality scores of 1-5. The quality scores of the synthetic images showed substantial overlap with those of the real images. Moreover, we use an autoencoder with skip connections for image denoising, showing that the proposed method results in higher PSNR than FSL SUSAN. This work shows the power of artificial networks to synthesize realistic imaging data, which can be used to improve image processing techniques and provide a quantitative framework for studying structural changes in the brain.
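The PSNR comparison mentioned above is easy to reproduce on toy data. A minimal sketch, with flat float lists standing in for image arrays and an assumed intensity range of [0, 1]:

```python
import math

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means the test image is closer to the reference."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

clean    = [0.0, 0.5, 1.0, 0.5]  # toy ground-truth slice
denoised = [0.1, 0.5, 0.9, 0.5]  # toy denoiser output
print(psnr(clean, denoised))  # ≈ 23.01 dB
```

Comparing the PSNR of two denoisers against the same clean reference is exactly the kind of head-to-head evaluation the abstract describes.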
Affiliation(s)
- Camilo Bermudez
- Department of Biomedical Engineering, Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA
- Andrew J Plassard
- Department of Computer Science, Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA
- Taylor L Davis
- Department of Radiology, Vanderbilt University Medical Center, 2201 West End Ave, Nashville, TN 37235, USA
- Allen T Newton
- Department of Radiology, Vanderbilt University Medical Center, 2201 West End Ave, Nashville, TN 37235, USA
- Susan M Resnick
- Department of Computer Science, Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA
- Bennett A Landman
- Department of Biomedical Engineering, Vanderbilt University, 2201 West End Ave, Nashville, TN 37235, USA
9. Cao X, Yang J, Gao Y, Guo Y, Wu G, Shen D. Dual-core steered non-rigid registration for multi-modal images via bi-directional image synthesis. Med Image Anal 2017; 41:18-31. [PMID: 28533050] [PMCID: PMC5896773] [DOI: 10.1016/j.media.2017.05.004]
Abstract
In prostate cancer radiotherapy, computed tomography (CT) is widely used for dose planning. However, because CT has low soft tissue contrast, manual contouring of the major pelvic organs is difficult. In contrast, magnetic resonance imaging (MRI) provides high soft tissue contrast, which makes it ideal for accurate manual contouring. Contouring accuracy on CT can therefore be significantly improved if the contours in MRI can be mapped to the CT domain by registering the MRI with the CT of the same subject, which would eventually lead to higher treatment efficacy. In this paper, we propose a bi-directional image synthesis based approach for MRI-to-CT pelvic image registration. First, we use a patch-wise random forest with an auto-context model to learn the appearance mapping from the CT to the MRI domain, and then vice versa. Consequently, we can synthesize a pseudo-MRI whose anatomical structures are exactly the same as the CT but with MRI-like appearance, as well as a pseudo-CT. Our MRI-to-CT registration is then steered in a dual manner by simultaneously estimating two deformation pathways: 1) from the pseudo-CT to the actual CT and 2) from the actual MRI to the pseudo-MRI. Next, a dual-core deformation fusion framework is developed to iteratively and effectively combine these two registration pathways using complementary information from both modalities. Experiments on a dataset with real pelvic CT and MRI show improved registration performance of the proposed method compared with conventional registration methods, indicating its high potential for translation to routine radiation therapy.
Affiliation(s)
- Xiaohuan Cao
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, China
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Yanrong Guo
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
10. Bowles C, Qin C, Guerrero R, Gunn R, Hammers A, Dickie DA, Valdés Hernández M, Wardlaw J, Rueckert D. Brain lesion segmentation through image synthesis and outlier detection. Neuroimage Clin 2017; 16:643-658. [PMID: 29868438] [PMCID: PMC5984574] [DOI: 10.1016/j.nicl.2017.09.003]
Abstract
Cerebral small vessel disease (SVD) can manifest in a number of ways. Many of these result in hyperintense regions visible on T2-weighted magnetic resonance (MR) images. The automatic segmentation of these lesions has been the focus of many studies. However, previous methods tended to be limited to certain types of pathology, as a consequence of either restricting the search to the white matter or training on an individual pathology. Here we present an unsupervised abnormality detection method able to detect abnormally hyperintense regions on FLAIR regardless of the underlying pathology or location. The method uses a combination of image synthesis, Gaussian mixture models and one-class support vector machines, and needs only to be trained on healthy tissue. We evaluate our method by comparing segmentation results from 127 subjects with SVD against three established methods and report significantly superior performance across a number of metrics.
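As a drastically simplified stand-in for the Gaussian-mixture/one-class-SVM pipeline above (a single-Gaussian healthy-tissue model with a hypothetical cut-off k, not the paper's actual method), abnormally hyperintense voxels can be flagged by thresholding against a model fitted on healthy tissue only:

```python
import math

def flag_hyperintense(intensities, healthy, k=3.0):
    """Flag indices whose intensity exceeds the healthy-tissue mean by more than k std devs."""
    mean = sum(healthy) / len(healthy)
    std = math.sqrt(sum((v - mean) ** 2 for v in healthy) / len(healthy))
    return [i for i, v in enumerate(intensities) if v > mean + k * std]

healthy_tissue = [0.48, 0.50, 0.52, 0.49, 0.51]  # toy healthy FLAIR intensities
flair_voxels   = [0.50, 0.95, 0.51, 0.90]        # toy test voxels
print(flag_hyperintense(flair_voxels, healthy_tissue))  # [1, 3]
```

The key property shared with the paper's approach is that only healthy tissue is needed for training; anything the healthy model cannot explain is flagged, regardless of pathology type.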
Affiliation(s)
- Chen Qin
- Department of Computing, Imperial College London, UK
- Roger Gunn
- Imanova Ltd., London, UK
- Department of Medicine, Imperial College London, UK
- Alexander Hammers
- Department of Computing, Imperial College London, UK
- King's College London & Guy's and St Thomas' PET Centre, Division of Imaging Sciences and Biomedical Engineering, St Thomas' Hospital, King's College London, UK
- Joanna Wardlaw
- Department of Neuroimaging Sciences, University of Edinburgh, UK
11. Bahrami K, Shi F, Rekik I, Gao Y, Shen D. 7T-guided super-resolution of 3T MRI. Med Phys 2017; 44:1661-1677. [PMID: 28177548] [DOI: 10.1002/mp.12132]
Abstract
PURPOSE: High-resolution MR images can depict rich details of brain anatomical structures and show subtle changes in longitudinal data. 7T MRI scanners can acquire MR images with higher resolution and better tissue contrast than routine 3T MRI scanners. However, 7T MRI scanners are currently more expensive and less available in clinical and research centers. To this end, we propose a method to generate super-resolution 3T MRI that resembles 7T MRI, referred to as a 7T-like MR image in this paper. METHODS: First, we propose a mapping from the 3T MRI to the 7T MRI space using regression random forests. The mapped 3T MR images serve as intermediate results with an appearance similar to 7T MR images. Second, we predict the final higher-resolution 7T-like MR images based on sparse representation, using paired local dictionaries for the mapped 3T MR images and the 7T MR images. RESULTS: Based on 15 subjects with both 3T and 7T MR images, the 7T-like MR images predicted by our method best match the ground-truth 7T MR images, compared to other methods. Meanwhile, a brain tissue segmentation experiment shows that our 7T-like MR images lead to the highest accuracy in the segmentation of WM, GM, and CSF brain tissues, compared to segmentations of 3T MR images as well as the 7T-like MR images reconstructed by other methods. CONCLUSIONS: We propose a novel method for predicting high-resolution 7T-like MR images from low-resolution 3T MR images. Our predicted 7T-like MR images demonstrate better spatial resolution than the 3T MR images and than the predictions of other comparison methods. Such high-quality 7T-like MR images could better facilitate disease diagnosis and intervention.
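The paired-dictionary prediction step can be caricatured as follows. True sparse representation solves for a sparse combination of dictionary atoms; this sketch substitutes a single nearest-atom lookup (an assumption made for brevity, not the paper's sparse-coding formulation):

```python
def predict_hr_patch(lr_patch, lr_atoms, hr_atoms):
    """Find the low-res dictionary atom closest to the input patch; return its paired high-res atom."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(lr_atoms)), key=lambda i: sq_dist(lr_atoms[i], lr_patch))
    return hr_atoms[best]

# Toy paired dictionaries: 2-voxel low-res patches mapped to 4-voxel high-res patches.
lr_dict = [[0.1, 0.1], [0.9, 0.9]]
hr_dict = [[0.1, 0.1, 0.1, 0.1], [0.9, 0.9, 0.9, 0.9]]
print(predict_hr_patch([0.8, 0.85], lr_dict, hr_dict))  # [0.9, 0.9, 0.9, 0.9]
```

Because the dictionaries are paired, recovering which atoms explain the mapped 3T patch directly tells us which high-resolution content to synthesize.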
Affiliation(s)
- Khosro Bahrami
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27510, USA
- Feng Shi
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27510, USA
- Islem Rekik
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27510, USA
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27510, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27510, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
12. Carass A, Roy S, Jog A, Cuzzocreo JL, Magrath E, Gherman A, Button J, Nguyen J, Prados F, Sudre CH, Jorge Cardoso M, Cawley N, Ciccarelli O, Wheeler-Kingshott CAM, Ourselin S, Catanese L, Deshpande H, Maurel P, Commowick O, Barillot C, Tomas-Fernandez X, Warfield SK, Vaidya S, Chunduru A, Muthuganapathy R, Krishnamurthi G, Jesson A, Arbel T, Maier O, Handels H, Iheme LO, Unay D, Jain S, Sima DM, Smeets D, Ghafoorian M, Platel B, Birenbaum A, Greenspan H, Bazin PL, Calabresi PA, Crainiceanu CM, Ellingsen LM, Reich DS, Prince JL, Pham DL. Longitudinal multiple sclerosis lesion segmentation: Resource and challenge. Neuroimage 2017; 148:77-102. [PMID: 28087490] [PMCID: PMC5344762] [DOI: 10.1016/j.neuroimage.2016.12.064]
Abstract
In conjunction with the ISBI 2015 conference, we organized a longitudinal lesion segmentation challenge providing training and test data to registered participants. The training data consisted of five subjects with a mean of 4.4 time-points, and the test data of fourteen subjects with a mean of 4.4 time-points. All 82 data sets had the white matter lesions associated with multiple sclerosis delineated by two human expert raters. Eleven teams submitted results using state-of-the-art lesion segmentation algorithms to the challenge, with ten teams presenting their results at the conference. We present a quantitative evaluation comparing the consistency of the two raters as well as exploring the performance of the eleven submitted results in addition to three other lesion segmentation algorithms. The challenge presented three unique opportunities: (1) the sharing of a rich data set; (2) collaboration and comparison of the various avenues of research being pursued in the community; and (3) a review and refinement of the evaluation metrics currently in use. We report on the performance of the challenge participants, as well as the construction and evaluation of a consensus delineation. The image data and manual delineations will continue to be available for download through an evaluation website as a resource for future researchers in the area. This data resource provides a platform to compare existing methods in a fair and consistent manner, both to each other and to multiple manual raters.
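Overlap between two raters' delineations (and between an algorithm and the consensus) is conventionally scored with the Dice coefficient. A minimal version, using invented toy masks, is:

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two binary masks: 0 = disjoint, 1 = identical."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rater1 = np.array([[1, 1, 0], [0, 1, 0]])  # toy lesion masks
rater2 = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(rater1, rater2), 3))  # 2*2 / (3+3) = 0.667
```

The convention of returning 1.0 when both masks are empty avoids a zero division on lesion-free scans; challenge evaluations handle that edge case in various ways.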
Affiliation(s)
- Aaron Carass, Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Snehashis Roy, CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20892, USA
- Amod Jog, Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Jennifer L Cuzzocreo, Department of Radiology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Elizabeth Magrath, CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20892, USA
- Adrian Gherman, Department of Biostatistics, The Johns Hopkins University, Baltimore, MD 21205, USA
- Julia Button, Department of Radiology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- James Nguyen, Department of Radiology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Ferran Prados, Translational Imaging Group, CMIC, UCL, NW1 2HE London, UK; NMR Research Unit, UCL Institute of Neurology, WC1N 3BG London, UK
- Carole H Sudre, Translational Imaging Group, CMIC, UCL, NW1 2HE London, UK
- Manuel Jorge Cardoso, Translational Imaging Group, CMIC, UCL, NW1 2HE London, UK; Dementia Research Centre, UCL Institute of Neurology, WC1N 3BG London, UK
- Niamh Cawley, NMR Research Unit, UCL Institute of Neurology, WC1N 3BG London, UK
- Olga Ciccarelli, NMR Research Unit, UCL Institute of Neurology, WC1N 3BG London, UK
- Sébastien Ourselin, Translational Imaging Group, CMIC, UCL, NW1 2HE London, UK; Dementia Research Centre, UCL Institute of Neurology, WC1N 3BG London, UK
- Laurence Catanese, VisAGeS: INSERM U746, CNRS UMR6074, INRIA, University of Rennes I, France
- Pierre Maurel, VisAGeS: INSERM U746, CNRS UMR6074, INRIA, University of Rennes I, France
- Olivier Commowick, VisAGeS: INSERM U746, CNRS UMR6074, INRIA, University of Rennes I, France
- Christian Barillot, VisAGeS: INSERM U746, CNRS UMR6074, INRIA, University of Rennes I, France
- Xavier Tomas-Fernandez, Computational Radiology Laboratory, Boston Childrens Hospital, Boston, MA 02115, USA; Harvard Medical School, Boston, MA 02115, USA
- Simon K Warfield, Computational Radiology Laboratory, Boston Childrens Hospital, Boston, MA 02115, USA; Harvard Medical School, Boston, MA 02115, USA
- Suthirth Vaidya, Biomedical Imaging Lab, Department of Engineering Design, Indian Institute of Technology, Chennai 600036, India
- Abhijith Chunduru, Biomedical Imaging Lab, Department of Engineering Design, Indian Institute of Technology, Chennai 600036, India
- Ramanathan Muthuganapathy, Biomedical Imaging Lab, Department of Engineering Design, Indian Institute of Technology, Chennai 600036, India
- Ganapathy Krishnamurthi, Biomedical Imaging Lab, Department of Engineering Design, Indian Institute of Technology, Chennai 600036, India
- Andrew Jesson, Centre For Intelligent Machines, McGill University, Montréal, QC H3A 0E9, Canada
- Tal Arbel, Centre For Intelligent Machines, McGill University, Montréal, QC H3A 0E9, Canada
- Oskar Maier, Institute of Medical Informatics, University of Lübeck, 23538 Lübeck, Germany
- Heinz Handels, Institute of Medical Informatics, University of Lübeck, 23538 Lübeck, Germany
- Leonardo O Iheme, Bahçeşehir University, Faculty of Engineering and Natural Sciences, 34349 Beşiktaş, Turkey
- Devrim Unay, Bahçeşehir University, Faculty of Engineering and Natural Sciences, 34349 Beşiktaş, Turkey
- Mohsen Ghafoorian, Institute for Computing and Information Sciences, Radboud University, 6525 HP Nijmegen, Netherlands
- Bram Platel, Diagnostic Image Analysis Group, Radboud University Medical Center, 6525 GA Nijmegen, Netherlands
- Ariel Birenbaum, Department of Electrical Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel
- Hayit Greenspan, Department of Biomedical Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel
- Pierre-Louis Bazin, Department of Neurophysics, Max Planck Institute, 04103 Leipzig, Germany
- Peter A Calabresi, Department of Radiology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Lotta M Ellingsen, Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Electrical and Computer Engineering, University of Iceland, 107 Reykjavík, Iceland
- Daniel S Reich, Department of Radiology, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA; Translational Neuroradiology Unit, National Institute of Neurological Disorders and Stroke, Bethesda, MD 20892, USA
- Jerry L Prince, Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Dzung L Pham, CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20892, USA
13
Chen M, Carass A, Jog A, Lee J, Roy S, Prince JL. Cross contrast multi-channel image registration using image synthesis for MR brain images. Med Image Anal 2017; 36:2-14. [PMID: 27816859 PMCID: PMC5239759 DOI: 10.1016/j.media.2016.10.005] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2015] [Revised: 10/13/2016] [Accepted: 10/17/2016] [Indexed: 11/21/2022]
Abstract
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information.
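The mono-modal similarity measures named above are simple to state. Assuming synthesized proxy images are already in hand, sum of squared differences and normalized cross-correlation reduce to a few lines (toy arrays, not the paper's registration pipeline):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: 0 when the images are identical."""
    return float(np.sum((a - b) ** 2))

def ncc(a, b):
    """Normalized cross-correlation: 1 when the images agree up to an
    affine intensity rescaling, which is why it tolerates contrast shifts."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

fixed = np.array([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]])
moving = 2.0 * fixed + 1.0           # same structure, rescaled intensities
print(ssd(fixed, fixed))             # 0.0
print(round(ncc(fixed, moving), 6))  # 1.0
```

The contrast between the two prints shows why synthesis matters: SSD is only meaningful once both channels are in the same intensity space, while NCC already forgives linear intensity differences.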
Affiliation(s)
- Min Chen, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Aaron Carass, Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Amod Jog, Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Junghoon Lee, Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Radiation Oncology and Molecular Radiation Sciences, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Snehashis Roy, CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20892, USA
- Jerry L Prince, Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA; Radiation Oncology and Molecular Radiation Sciences, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
14
Roy S, Butman JA, Pham DL. Robust skull stripping using multiple MR image contrasts insensitive to pathology. Neuroimage 2017; 146:132-147. [PMID: 27864083 PMCID: PMC5321800 DOI: 10.1016/j.neuroimage.2016.11.017] [Citation(s) in RCA: 68] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2016] [Revised: 10/31/2016] [Accepted: 11/04/2016] [Indexed: 01/18/2023] Open
Abstract
Automatic skull-stripping or brain extraction of magnetic resonance (MR) images is often a fundamental step in many neuroimage processing pipelines. The accuracy of subsequent image processing relies on the accuracy of the skull-stripping. Although many automated stripping methods have been proposed in the past, it is still an active area of research, particularly in the context of brain pathology. Most stripping methods are validated on T1-w MR images of normal brains, especially because high-resolution T1-w sequences are widely acquired and ground-truth manual brain mask segmentations are publicly available for normal brains. However, different MR acquisition protocols can provide complementary information about the brain tissues, which can be exploited for better distinction between brain, cerebrospinal fluid, and unwanted tissues such as skull, dura, marrow, or fat. This is especially true in the presence of pathology, where hemorrhages or other types of lesions can have similar intensities as skull in a T1-w image. In this paper, we propose a sparse patch-based Multi-cONtrast brain STRipping method (MONSTR), in which non-local patch information from one or more atlases, which contain multiple MR sequences and reference delineations of brain masks, is combined to generate a target brain mask. We compared MONSTR with four state-of-the-art, publicly available methods: BEaST, SPECTRE, ROBEX, and OptiBET. We evaluated the performance of these methods on 6 datasets consisting of both healthy subjects and patients with various pathologies. Three datasets (ADNI, MRBrainS, NAMIC) are publicly available, consisting of 44 healthy volunteers and 10 patients with schizophrenia. The other three in-house datasets, comprising 87 subjects in total, consisted of patients with mild to severe traumatic brain injury, brain tumors, and various movement disorders. A combination of T1-w and T2-w images was used to skull-strip these datasets.
We show significant improvement in stripping over the competing methods on both healthy and pathological brains. We also show that our multi-contrast framework is robust and maintains accurate performance across different types of acquisitions and scanners, even when using normal brains as atlases to strip pathological brains, demonstrating that our algorithm is applicable even when reference segmentations of pathological brains are not available to be used as atlases.
Affiliation(s)
- Snehashis Roy, Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, United States
- John A Butman, Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, United States; Diagnostic Radiology Department, National Institutes of Health, United States
- Dzung L Pham, Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, United States
15
Jog A, Carass A, Roy S, Pham DL, Prince JL. Random forest regression for magnetic resonance image synthesis. Med Image Anal 2017; 35:475-488. [PMID: 27607469 PMCID: PMC5099106 DOI: 10.1016/j.media.2016.08.009] [Citation(s) in RCA: 85] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2015] [Revised: 08/24/2016] [Accepted: 08/26/2016] [Indexed: 02/02/2023]
Abstract
By choosing different pulse sequences and their parameters, magnetic resonance imaging (MRI) can generate a large variety of tissue contrasts. This very flexibility, however, can yield inconsistencies with MRI acquisitions across datasets or scanning sessions that can in turn cause inconsistent automated image analysis. Although image synthesis of MR images has been shown to be helpful in addressing this problem, an inability to synthesize both T2-weighted brain images that include the skull and FLuid Attenuated Inversion Recovery (FLAIR) images has been reported. The method described herein, called REPLICA, addresses these limitations. REPLICA is a supervised random forest image synthesis approach that learns a nonlinear regression to predict intensities of alternate tissue contrasts given specific input tissue contrasts. Experimental results include direct image comparisons between synthetic and real images, results from image analysis tasks on both synthetic and real images, and comparison against other state-of-the-art image synthesis methods. REPLICA is computationally fast, and is shown to be comparable to other methods on tasks they are able to perform. Additionally REPLICA has the capability to synthesize both T2-weighted images of the full head and FLAIR images, and perform intensity standardization between different imaging datasets.
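The core idea, regressing one contrast's intensities from another's, can be sketched without the paper's full feature set. The toy "forest" below is an ensemble of bagged one-split regression trees fit to an invented nonlinear contrast mapping; it is a stand-in for the idea of supervised nonlinear intensity regression, not REPLICA itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(x, y):
    """Fit a one-split regression tree: the threshold minimizing squared error."""
    best = None
    for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lo, hi = best
    return lambda q: np.where(q <= t, lo, hi)

def fit_forest(x, y, n_trees=50, frac=0.7):
    """Bagged stumps: each stump sees a bootstrap subsample (a toy forest)."""
    stumps = []
    for _ in range(n_trees):
        idx = rng.choice(len(x), int(frac * len(x)), replace=True)
        stumps.append(fit_stump(x[idx], y[idx]))
    return lambda q: np.mean([s(q) for s in stumps], axis=0)

# invented contrast mapping: "target" intensity is a nonlinear
# function of the "source" intensity, as in cross-contrast regression
x = rng.uniform(0.0, 1.0, 400)
y = 1.0 - x ** 2
forest = fit_forest(x, y)
pred = forest(np.array([0.2, 0.8]))
print(pred)  # bright source voxels map to dark target voxels
```

Real implementations regress from multi-scale patch features rather than a single voxel intensity, and grow deep trees rather than stumps; the averaging over randomized trees is the shared ingredient.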
Affiliation(s)
- Amod Jog, Dept. of Computer Science, The Johns Hopkins University, United States
- Aaron Carass, Dept. of Computer Science, The Johns Hopkins University, United States; Dept. of Electrical and Computer Engineering, The Johns Hopkins University, United States
- Snehashis Roy, The Henry M. Jackson Foundation for the Advancement of Military Medicine, United States
- Dzung L Pham, The Henry M. Jackson Foundation for the Advancement of Military Medicine, United States
- Jerry L Prince, Dept. of Electrical and Computer Engineering, The Johns Hopkins University, United States
16
Cordier N, Delingette H, Le M, Ayache N. Extended Modality Propagation: Image Synthesis of Pathological Cases. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:2598-2608. [PMID: 27411217 DOI: 10.1109/tmi.2016.2589760] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
This paper describes a novel generative model for the synthesis of multi-modal medical images of pathological cases based on a single label map. Our model builds upon i) a generative model commonly used for label fusion and multi-atlas patch-based segmentation of healthy anatomical structures, ii) the Modality Propagation iterative strategy used for a spatially-coherent synthesis of subject-specific scans of desired image modalities. The expression Extended Modality Propagation is coined to refer to the extension of Modality Propagation to the synthesis of images of pathological cases. Moreover, image synthesis uncertainty is estimated. An application to Magnetic Resonance Imaging synthesis of glioma-bearing brains is i) validated on the training dataset of a Multimodal Brain Tumor Image Segmentation challenge, ii) compared to the state-of-the-art in glioma image synthesis, and iii) illustrated using the output of two different tumor growth models. Such a generative model allows the generation of a large dataset of synthetic cases, which could prove useful for the training, validation, or benchmarking of image processing algorithms.
17
Yang X, Han X, Park E, Aylward S, Kwitt R, Niethammer M. Registration of Pathological Images. ACTA ACUST UNITED AC 2016; 9968:97-107. [PMID: 29896582 DOI: 10.1007/978-3-319-46630-9_10] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/21/2023]
Abstract
This paper proposes an approach to improve atlas-to-image registration accuracy with large pathologies. Instead of directly registering an atlas to a pathological image, the method learns a mapping from the pathological image to a quasi-normal image, for which more accurate registration is possible. Specifically, the method uses a deep variational convolutional encoder-decoder network to learn the mapping. Furthermore, the method estimates local mapping uncertainty through network inference statistics and uses those estimates to down-weight the image registration similarity measure in areas of high uncertainty. The performance of the method is quantified using synthetic brain tumor images and images from the brain tumor segmentation challenge (BRATS 2015).
Affiliation(s)
- Xu Han, UNC Chapel Hill, Chapel Hill, USA
- Roland Kwitt, Department of Computer Science, University of Salzburg, Austria
- Marc Niethammer, UNC Chapel Hill, Chapel Hill, USA; Biomedical Research Imaging Center, Chapel Hill, USA
18
Patch Based Synthesis of Whole Head MR Images: Application to EPI Distortion Correction. ACTA ACUST UNITED AC 2016; 9968:146-156. [PMID: 28367541 DOI: 10.1007/978-3-319-46630-9_15] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/21/2023]
Abstract
Different magnetic resonance imaging pulse sequences are used to generate image contrasts based on physical properties of tissues, which provide different and often complementary information about them. Multiple image contrasts are therefore useful for multimodal analysis of medical images. Often, medical image processing algorithms are optimized for particular image contrasts. If a desirable contrast is unavailable, contrast synthesis (or modality synthesis) methods try to "synthesize" the unavailable contrasts from the available ones. Most recent image synthesis methods generate synthetic brain images, although whole head magnetic resonance (MR) images can also be useful for many applications. We propose an atlas-based patch matching algorithm to synthesize T2-w whole head (including brain, skull, eyes, etc.) images from T1-w images for the purpose of distortion correction of diffusion-weighted MR images. The geometric distortion in diffusion MR images due to the inhomogeneous B0 magnetic field is often corrected by non-linearly registering the corresponding b = 0 image with zero diffusion gradient to an undistorted T2-w image. We show that our synthetic T2-w images can be used as a template in the absence of a real T2-w image. Our patch-based method requires multiple atlases with T1 and T2 to be registered to a given target T1. Then, for every patch on the target, multiple similar-looking matching patches are found on the atlas T1 images, and the corresponding patches on the atlas T2 images are combined to generate a synthetic T2 of the target. We experimented on image data obtained from 44 patients with traumatic brain injury (TBI), and showed that our synthesized T2 images produce more accurate distortion correction than a state-of-the-art registration-based image synthesis method.
19
Huynh T, Gao Y, Kang J, Wang L, Zhang P, Lian J, Shen D. Estimating CT Image From MRI Data Using Structured Random Forest and Auto-Context Model. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:174-83. [PMID: 26241970 PMCID: PMC4703527 DOI: 10.1109/tmi.2015.2461533] [Citation(s) in RCA: 164] [Impact Index Per Article: 18.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
Computed tomography (CT) imaging is an essential tool in various clinical diagnoses and radiotherapy treatment planning. Since CT image intensities are directly related to positron emission tomography (PET) attenuation coefficients, they are indispensable for attenuation correction (AC) of the PET images. However, due to the relatively high dose of radiation exposure in a CT scan, it is advised to limit the acquisition of CT images. In addition, in the new PET and magnetic resonance (MR) imaging scanners, only MR images are available, which are unfortunately not directly applicable to AC. These issues greatly motivate the development of methods for reliable estimation of a CT image from the corresponding MR image of the same subject. In this paper, we propose a learning-based method to tackle this challenging problem. Specifically, we first partition a given MR image into a set of patches. Then, for each patch, we use a structured random forest to directly predict a CT patch as a structured output, where a new ensemble model is also used to ensure robust prediction. Image features are innovatively crafted to achieve multi-level sensitivity, with spatial information integrated through only rigid-body alignment to help avoid the error-prone inter-subject deformable registration. Moreover, we use an auto-context model to iteratively refine the prediction. Finally, we combine all of the predicted CT patches to obtain the final prediction for the given MR image. We demonstrate the efficacy of our method on two datasets: human brain and prostate images. Experimental results show that our method can accurately predict CT images in various scenarios, even for images undergoing large shape variation, and also outperforms two state-of-the-art methods.
Affiliation(s)
- Tri Huynh, IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Yaozong Gao, Department of Computer Science, and also with the IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Jiayin Kang, IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Li Wang, IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Pei Zhang, IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Jun Lian, Department of Radiation Oncology, University of North Carolina at Chapel Hill, NC, USA
- Dinggang Shen, Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 136-071, Korea
20
Kim WH, Bendlin BB, Chung MK, Johnson SC, Singh V. Statistical Inference Models for Image Datasets with Systematic Variations. PROCEEDINGS. IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION 2015; 2015:4795-4803. [PMID: 26989336 PMCID: PMC4792194 DOI: 10.1109/cvpr.2015.7299112] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Statistical analysis of longitudinal or cross sectional brain imaging data to identify effects of neurodegenerative diseases is a fundamental task in various studies in neuroscience. However, when there are systematic variations in the images due to parameter changes such as changes in the scanner protocol, hardware changes, or when combining data from multi-site studies, the statistical analysis becomes problematic. Motivated by this scenario, the goal of this paper is to develop a unified statistical solution to the problem of systematic variations in statistical image analysis. Based in part on recent literature in harmonic analysis on diffusion maps, we propose an algorithm which compares operators that are resilient to the systematic variations. These operators are derived from the empirical measurements of the image data and provide an efficient surrogate to capturing the actual changes across images. We also establish a connection between our method to the design of wavelets in non-Euclidean space. To evaluate the proposed ideas, we present various experimental results on detecting changes in simulations as well as show how the method offers improved statistical power in the analysis of real longitudinal PIB-PET imaging data acquired from participants at risk for Alzheimer's disease (AD).
Affiliation(s)
- Won Hwa Kim, Dept. of Computer Sciences, University of Wisconsin, Madison, WI; Wisconsin Alzheimer's Disease Research Center, University of Wisconsin, Madison, WI
- Barbara B Bendlin, GRECC, William S. Middleton VA Hospital, Madison, WI; Wisconsin Alzheimer's Disease Research Center, University of Wisconsin, Madison, WI
- Moo K Chung, Dept. of Biostatistics & Med. Informatics, University of Wisconsin, Madison, WI
- Sterling C Johnson, GRECC, William S. Middleton VA Hospital, Madison, WI; Wisconsin Alzheimer's Disease Research Center, University of Wisconsin, Madison, WI
- Vikas Singh, Dept. of Biostatistics & Med. Informatics, University of Wisconsin, Madison, WI; Dept. of Computer Sciences, University of Wisconsin, Madison, WI; Wisconsin Alzheimer's Disease Research Center, University of Wisconsin, Madison, WI
21
MR image synthesis by contrast learning on neighborhood ensembles. Med Image Anal 2015; 24:63-76. [PMID: 26072167 DOI: 10.1016/j.media.2015.05.002] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2014] [Revised: 02/21/2015] [Accepted: 05/04/2015] [Indexed: 01/24/2023]
Abstract
Automatic processing of magnetic resonance images is a vital part of neuroscience research. Yet even the best and most widely used medical image processing methods will not produce consistent results when their input images are acquired with different pulse sequences. Although intensity standardization and image synthesis methods have been introduced to address this problem, their performance remains dependent on knowledge and consistency of the pulse sequences used to acquire the images. In this paper, an image synthesis approach that first estimates the pulse sequence parameters of the subject image is presented. The estimated parameters are then used with a collection of atlas or training images to generate a new atlas image having the same contrast as the subject image. This additional image provides an ideal source from which to synthesize any other target pulse sequence image contained in the atlas. In particular, a nonlinear regression intensity mapping is trained from the new atlas image to the target atlas image and then applied to the subject image to yield the particular target pulse sequence within the atlas. Both intensity standardization and synthesis of missing tissue contrasts can be achieved using this framework. The approach was evaluated on both simulated and real data, and shown to be superior in both intensity standardization and synthesis to other established methods.
22
Jog A, Carass A, Pham DL, Prince JL. Multi-Output Decision Trees for Lesion Segmentation in Multiple Sclerosis. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2015; 9413. [PMID: 27695155 DOI: 10.1117/12.2082157] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Multiple Sclerosis (MS) is a disease of the central nervous system in which the protective myelin sheath of the neurons is damaged. MS leads to the formation of lesions, predominantly in the white matter of the brain and the spinal cord. The number and volume of lesions visible in magnetic resonance (MR) imaging (MRI) are important criteria for diagnosing and tracking the progression of MS. Locating and delineating lesions manually requires the tedious and expensive efforts of highly trained raters. In this paper, we propose an automated algorithm to segment lesions in MR images using multi-output decision trees. We evaluated our algorithm on the publicly available MICCAI 2008 MS Lesion Segmentation Challenge training dataset of 20 subjects, and showed improved results in comparison to state-of-the-art methods. We also evaluated our algorithm on an in-house dataset of 49 subjects, achieving a true positive rate of 0.41 and a positive predictive value of 0.36.
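The two reported scores are standard detection metrics. With invented flattened masks, they can be computed as:

```python
import numpy as np

def tpr_ppv(pred, truth):
    """True positive rate (sensitivity) and positive predictive value
    (precision) over lesion voxels, the two scores in the abstract."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    return tp / truth.sum(), tp / pred.sum()

auto_mask = np.array([1, 1, 0, 0, 1])    # toy automated lesion mask
manual_mask = np.array([1, 0, 1, 0, 1])  # toy expert delineation
tpr, ppv = tpr_ppv(auto_mask, manual_mask)
print(round(tpr, 3), round(ppv, 3))  # 2/3 of true lesions found; 2/3 of detections real
```

TPR penalizes missed lesions while PPV penalizes false detections, so a pair like (0.41, 0.36) conveys both under- and over-segmentation in one evaluation.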
Affiliation(s)
- Amod Jog, Image Analysis and Communications Laboratory, The Johns Hopkins University
- Aaron Carass, Image Analysis and Communications Laboratory, The Johns Hopkins University
- Dzung L Pham, Henry M. Jackson Foundation for the Advancement of Military Medicine
- Jerry L Prince, Image Analysis and Communications Laboratory, The Johns Hopkins University
23
Jog A, Carass A, Pham DL, Prince JL. RANDOM FOREST FLAIR RECONSTRUCTION FROM T1, T2, AND PD-WEIGHTED MRI. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2014; 2014:1079-1082. [PMID: 25405002 DOI: 10.1109/isbi.2014.6868061] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Fluid Attenuated Inversion Recovery (FLAIR) is a commonly acquired pulse sequence for multiple sclerosis (MS) patients. MS white matter lesions appear hyperintense in FLAIR images and have excellent contrast with the surrounding tissue. Hence, FLAIR images are commonly used in automated lesion segmentation algorithms to easily and quickly delineate the lesions. This expedites the lesion load computation and correlation with disease progression. Unfortunately, for numerous reasons, the acquired FLAIR images can be of poor quality and suffer from various artifacts. In the most extreme cases the data is absent, which poses a problem when consistently processing a large data set. We propose to fill in this gap by reconstructing a FLAIR image given the corresponding T1-weighted, T2-weighted, and PD-weighted images of the same subject using random forest regression. We show that the images we produce are similar to true high-quality FLAIR images and also provide a good surrogate for tissue segmentation.
Affiliation(s)
- Amod Jog, Image Analysis and Communications Laboratory, The Johns Hopkins University
- Aaron Carass, Image Analysis and Communications Laboratory, The Johns Hopkins University
- Dzung L Pham, Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation for the Advancement of Military Medicine
- Jerry L Prince, Image Analysis and Communications Laboratory, The Johns Hopkins University
Collapse
|
24
|
Roy S, He Q, Carass A, Jog A, Cuzzocreo JL, Reich DS, Prince J, Pham D. Example Based Lesion Segmentation. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2014; 9034. [PMID: 27795605 DOI: 10.1117/12.2043917] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Automatic and accurate detection of white matter lesions is a significant step toward understanding the progression of many diseases, such as Alzheimer's disease or multiple sclerosis. Multi-modal MR images are often used to segment T2 white matter lesions that can represent regions of demyelination or ischemia. Some automated lesion segmentation methods describe the lesion intensities using generative models and then classify the lesions with some combination of heuristics and cost minimization. In contrast, we propose a patch-based method in which lesions are found using examples from an atlas containing multi-modal MR images and corresponding manual delineations of lesions. Patches from subject MR images are matched to patches from the atlas, and lesion memberships are found based on patch similarity weights. We experiment on 43 subjects with MS, whose scans show various levels of lesion load. We demonstrate significant improvement in Dice coefficient and total lesion volume compared to a state-of-the-art model-based lesion segmentation method, indicating more accurate delineation of lesions.
Collapse
Affiliation(s)
- Snehashis Roy, Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation for the Advancement of Military Medicine, USA
- Qing He, Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation for the Advancement of Military Medicine, USA
- Aaron Carass, Department of Electrical and Computer Engineering, The Johns Hopkins University, USA
- Amod Jog, Department of Electrical and Computer Engineering, The Johns Hopkins University, USA
- Daniel S Reich, Translational Neuroradiology Unit, Neuroimmunology Branch, National Institute of Neurological Disorders and Stroke, USA
- Jerry Prince, Department of Electrical and Computer Engineering, The Johns Hopkins University, USA
- Dzung Pham, Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation for the Advancement of Military Medicine, USA
Collapse
|
25
|
Modality propagation: coherent synthesis of subject-specific scans with data-driven regularization. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2014; 16:606-13. [PMID: 24505717 DOI: 10.1007/978-3-642-40811-3_76] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
We propose a general database-driven framework for coherent synthesis of subject-specific scans of a desired modality, which adopts and generalizes the patch-based label propagation (LP) strategy. While modality synthesis has received increased attention lately, current methods are mainly tailored to specific applications. On the other hand, the LP framework has been extremely successful for certain segmentation tasks; however, so far it has not been used for estimation of entities other than categorical segmentation labels. We approach the synthesis task as modality propagation and demonstrate that, with certain modifications, the LP framework can be generalized to continuous settings, providing coherent synthesis of different modalities beyond segmentation labels. To achieve high-quality estimates we introduce a new data-driven regularization scheme, in which we integrate intermediate estimates within an iterative search-and-synthesis strategy. To efficiently leverage population data and ensure coherent synthesis, we employ a spatio-population search space restriction. In experiments, we demonstrate the quality of synthesis of different MRI signals (T2 and DTI-FA) from a T1 input, and show a novel application of modality synthesis for abnormality detection in multi-channel MRI of brain tumor patients.
Collapse
|
26
|
Roy S, Carass A, Prince JL. Magnetic Resonance Image Example-Based Contrast Synthesis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2013; 32:2348-63. [PMID: 24058022 PMCID: PMC3955746 DOI: 10.1109/tmi.2013.2282126] [Citation(s) in RCA: 70] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
The performance of image analysis algorithms applied to magnetic resonance images is strongly influenced by the pulse sequences used to acquire the images. Algorithms are typically optimized for a targeted tissue contrast obtained from a particular implementation of a pulse sequence on a specific scanner. There are many practical situations, including multi-institution trials, rapid emergency scans, and scientific use of historical data, where the images are not acquired according to an optimal protocol or the desired tissue contrast is entirely missing. This paper introduces an image restoration technique that recovers images with both the desired tissue contrast and a normalized intensity profile. This is done using patches in the acquired images and an atlas containing patches of the acquired and desired tissue contrasts. The method is an example-based approach relying on sparse reconstruction from image patches. Its performance is demonstrated using several examples, including image intensity normalization, missing tissue contrast recovery, automatic segmentation, and multimodal registration. These examples demonstrate potential practical uses and also illustrate limitations of our approach.
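The sparse-reconstruction step behind this kind of example-based synthesis can be sketched as follows: a subject patch is coded as a sparse combination of atlas patches in the acquired contrast, and the same coefficients are applied to the paired atlas patches in the desired contrast. The dictionaries below are random placeholders, and orthogonal matching pursuit is one common solver, not necessarily the paper's exact optimization.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(2)
n_atoms, patch_len = 200, 27

# Paired atlas dictionaries: each column is one atlas patch, in the
# acquired contrast (D_src) and the desired contrast (D_tgt).
D_src = rng.normal(size=(patch_len, n_atoms))
D_tgt = rng.normal(size=(patch_len, n_atoms))

# A subject patch in the acquired contrast (here: a mix of 3 known atoms).
subject_patch = D_src[:, :3].sum(axis=1)

# Sparse-code the subject patch against the source dictionary...
coef = orthogonal_mp(D_src, subject_patch, n_nonzero_coefs=5)

# ...then transfer the coefficients to the target dictionary to
# synthesize the corresponding patch in the desired contrast.
synth_patch = D_tgt @ coef
```

Repeating this over all (overlapping) patches and averaging the results would produce the full synthesized image.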
Collapse
Affiliation(s)
- Aaron Carass, Department of Electrical and Computer Engineering, Johns Hopkins University, USA
- Jerry L. Prince, Department of Electrical and Computer Engineering, Johns Hopkins University, USA
Collapse
|
27
|
Iglesias JE, Konukoglu E, Zikic D, Glocker B, Van Leemput K, Fischl B. Is synthesizing MRI contrast useful for inter-modality analysis? MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2013; 16:631-8. [PMID: 24505720 DOI: 10.1007/978-3-642-40811-3_79] [Citation(s) in RCA: 58] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
Abstract
Availability of multi-modal magnetic resonance imaging (MRI) databases opens up the opportunity to synthesize different MRI contrasts without actually acquiring the images. In theory, such synthetic images have the potential to reduce the number of acquisitions needed to perform certain analyses. However, to what extent they can substitute for real acquisitions in the respective analyses is an open question. In this study, we used a synthesis method based on patch matching to test whether synthetic images can be useful in segmentation and inter-modality cross-subject registration of brain MRI. Thirty-nine T1 scans with 36 manually labeled structures of interest were used in the registration and segmentation of eight proton density (PD) scans, for which ground truth T1 data were also available. The results show that synthesized T1 contrast can considerably enhance the quality of non-linear registration compared with using the original PD data, and it is only marginally worse than using the original T1 scans. In segmentation, the relative improvement with respect to using the PD is smaller, but still statistically significant.
Collapse
Affiliation(s)
- Ender Konukoglu, Martinos Center for Biomedical Imaging, MGH, Harvard Medical School, USA
- Koen Van Leemput, Martinos Center for Biomedical Imaging, MGH, Harvard Medical School, USA
- Bruce Fischl, Martinos Center for Biomedical Imaging, MGH, Harvard Medical School, USA
Collapse
|
28
|
Asman AJ, Landman BA. Non-local statistical label fusion for multi-atlas segmentation. Med Image Anal 2012; 17:194-208. [PMID: 23265798 DOI: 10.1016/j.media.2012.10.002] [Citation(s) in RCA: 152] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2012] [Revised: 10/19/2012] [Accepted: 10/29/2012] [Indexed: 11/19/2022]
Abstract
Multi-atlas segmentation provides a general purpose, fully-automated approach for transferring spatial information from an existing dataset ("atlases") to a previously unseen context ("target") through image registration. The method to resolve voxelwise label conflicts between the registered atlases ("label fusion") has a substantial impact on segmentation quality. Ideally, statistical fusion algorithms (e.g., STAPLE) would result in accurate segmentations as they provide a framework to elegantly integrate models of rater performance. The accuracy of statistical fusion hinges upon accurately modeling the underlying process of how raters err. Despite success on human raters, current approaches inaccurately model multi-atlas behavior as they fail to seamlessly incorporate exogenous intensity information into the estimation process. As a result, locally weighted voting algorithms represent the de facto standard fusion approach in clinical applications. Moreover, regardless of the approach, fusion algorithms are generally dependent upon large atlas sets and highly accurate registration as they implicitly assume that the registered atlases form a collectively unbiased representation of the target. Herein, we propose a novel statistical fusion algorithm, Non-Local STAPLE (NLS). NLS reformulates the STAPLE framework from a non-local means perspective in order to learn what label an atlas would have observed, given perfect correspondence. Through this reformulation, NLS (1) seamlessly integrates intensity into the estimation process, (2) provides a theoretically consistent model of multi-atlas observation error, and (3) largely diminishes the need for large atlas sets and very high-quality registrations. We assess the sensitivity and optimality of the approach and demonstrate significant improvement in two empirical multi-atlas experiments.
Collapse
Affiliation(s)
- Andrew J Asman, Electrical Engineering, Vanderbilt University, Nashville, TN 37235-1679, USA
Collapse
|
29
|
Rousseau F, Habas PA, Studholme C. A supervised patch-based approach for human brain labeling. IEEE TRANSACTIONS ON MEDICAL IMAGING 2011; 30:1852-62. [PMID: 21606021 PMCID: PMC3318921 DOI: 10.1109/tmi.2011.2156806] [Citation(s) in RCA: 166] [Impact Index Per Article: 11.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
We propose in this work a patch-based image labeling method relying on a label propagation framework. Based on image intensity similarities between the input image and an anatomy textbook, we present an original strategy that does not require any nonrigid registration. Following recent developments in nonlocal image denoising, the similarity between images is represented by a weighted graph computed from an intensity-based distance between patches. Experiments on simulated and in vivo magnetic resonance images show that the proposed method is very successful in providing automated human brain labeling.
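The nonlocal weighting at the heart of such patch-based labeling can be sketched in a few lines: atlas labels are fused with weights derived from an intensity-based patch distance, in the spirit of nonlocal means. The patches, labels, and bandwidth below are synthetic placeholders, not the paper's actual data or parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
patch_len, n_atlas = 27, 50

target_patch = rng.normal(size=patch_len)            # patch around the target voxel
atlas_patches = rng.normal(size=(n_atlas, patch_len))
atlas_labels = rng.integers(0, 2, size=n_atlas)      # binary structure labels

# Nonlocal-means style weights: w = exp(-||P(x) - P(y)||^2 / h^2).
h = 5.0
d2 = ((atlas_patches - target_patch) ** 2).sum(axis=1)
w = np.exp(-d2 / h ** 2)

# Weighted vote: soft membership for label 1, then a hard decision.
membership = (w * atlas_labels).sum() / w.sum()
label = int(membership > 0.5)
```

In practice the atlas patches would be drawn from a search window around the corresponding location in each atlas image rather than sampled at random.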
Collapse
Affiliation(s)
- François Rousseau, Laboratoire des Sciences de l’Image, de l’Informatique et de la Télédétection (LSIIT), UMR 7005 CNRS-University of Strasbourg, 67412 Illkirch, France
Collapse
|