1. Liu J, Pasumarthi S, Duffy B, Gong E, Datta K, Zaharchuk G. One Model to Synthesize Them All: Multi-Contrast Multi-Scale Transformer for Missing Data Imputation. IEEE Transactions on Medical Imaging 2023; 42:2577-2591. PMID: 37030684; PMCID: PMC10543020; DOI: 10.1109/tmi.2023.3261707.
Abstract
Multi-contrast magnetic resonance imaging (MRI) is widely used in clinical practice as each contrast provides complementary information. However, the availability of each imaging contrast may vary amongst patients, which poses challenges to radiologists and automated image analysis algorithms. A general approach to this problem is missing data imputation, which aims to synthesize the missing contrasts from existing ones. While several convolutional neural network (CNN)-based algorithms have been proposed, they suffer from the fundamental limitations of CNN models, such as the requirement for fixed numbers of input and output channels, the inability to capture long-range dependencies, and the lack of interpretability. In this work, we formulate missing data imputation as a sequence-to-sequence learning problem and propose a multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize those that are missing. MMT consists of a multi-scale Transformer encoder that builds hierarchical representations of the inputs, combined with a multi-scale Transformer decoder that generates the outputs in a coarse-to-fine fashion. The proposed multi-contrast Swin Transformer blocks efficiently capture intra- and inter-contrast dependencies for accurate image synthesis. Moreover, MMT is inherently interpretable: it allows us to understand the importance of each input contrast in different regions by analyzing the built-in attention maps of the Transformer blocks in the decoder. Extensive experiments on two large-scale multi-contrast MRI datasets demonstrate that MMT outperforms state-of-the-art methods quantitatively and qualitatively.
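The decoder-attention interpretability described in this abstract can be made concrete with a small sketch: given a row-normalized attention map whose key tokens are grouped by input contrast, the attention mass falling on each contrast's tokens gives a per-contrast importance score. This is a minimal numpy illustration of the idea, not the authors' implementation; the function name and the contiguous-grouping assumption are ours.

```python
import numpy as np

def contrast_importance(attn, n_contrasts):
    """Average attention weight that output tokens assign to each input
    contrast. `attn` has shape (n_queries, n_keys), each row sums to 1,
    and the key tokens are grouped contiguously by contrast."""
    keys_per_contrast = attn.shape[1] // n_contrasts
    scores = []
    for c in range(n_contrasts):
        block = attn[:, c * keys_per_contrast:(c + 1) * keys_per_contrast]
        scores.append(block.sum(axis=1).mean())  # attention mass on contrast c
    return np.array(scores)

# Toy attention map: 4 query tokens over 6 key tokens (2 per contrast).
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 6))
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
imp = contrast_importance(attn, n_contrasts=3)
```

Because each attention row sums to one, the per-contrast scores form a distribution, which is what makes them readable as regional importance.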
2. Zhang X, He X, Guo J, Ettehadi N, Aw N, Semanek D, Posner J, Laine A, Wang Y. PTNet3D: A 3D High-Resolution Longitudinal Infant Brain MRI Synthesizer Based on Transformers. IEEE Transactions on Medical Imaging 2022; 41:2925-2940. PMID: 35560070; PMCID: PMC9529847; DOI: 10.1109/tmi.2022.3174827.
Abstract
Interest in longitudinal neurodevelopment during the first few years after birth has grown in recent years. Noninvasive magnetic resonance imaging (MRI) can provide crucial information about the development of brain structures in the early months of life. Despite the success of MRI collection and analysis for adults, it remains a challenge for researchers to collect high-quality multimodal MRIs from developing infant brains because of their irregular sleep patterns, limited attention, and inability to follow instructions to stay still during scanning. In addition, limited analytic approaches are available. These challenges often lead to a significant reduction in usable MRI scans and pose a problem for modeling neurodevelopmental trajectories. Researchers have explored solving this problem by synthesizing realistic MRIs to replace corrupted ones. Among synthesis methods, convolutional neural network-based (CNN-based) generative adversarial networks (GANs) have demonstrated promising performance. In this study, we introduce a novel 3D MRI synthesis framework, the pyramid transformer network (PTNet3D), which relies on attention mechanisms through transformer and performer layers. We conducted extensive experiments on the high-resolution Developing Human Connectome Project (dHCP) and longitudinal Baby Connectome Project (BCP) datasets. Compared with CNN-based GANs, PTNet3D consistently shows superior synthesis accuracy and superior generalization on two independent, large-scale infant brain MRI datasets. Notably, PTNet3D synthesized more realistic scans than CNN-based models when the input came from multi-age subjects. Potential applications of PTNet3D include synthesizing corrupted or missing images. By replacing corrupted scans with synthesized ones, we observed significant improvement in infant whole-brain segmentation.
3. Pan Y, Liu M, Xia Y, Shen D. Disease-Image-Specific Learning for Diagnosis-Oriented Neuroimage Synthesis With Incomplete Multi-Modality Data. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022; 44:6839-6853. PMID: 34156939; PMCID: PMC9297233; DOI: 10.1109/tpami.2021.3091214.
Abstract
The incomplete data problem commonly arises in classification tasks with multi-source data, particularly in disease diagnosis with multi-modality neuroimages. To tackle it, several methods have been proposed that utilize all available subjects by imputing missing neuroimages. However, these methods usually treat image synthesis and disease diagnosis as two standalone tasks, thus ignoring the specificity conveyed by different modalities, i.e., different modalities may highlight different disease-relevant regions in the brain. To this end, we propose a disease-image-specific deep learning (DSDL) framework for joint neuroimage synthesis and disease diagnosis using incomplete multi-modality neuroimages. Specifically, with each whole-brain scan as input, we first design a Disease-image-Specific Network (DSNet) with a spatial cosine module to implicitly model the disease-image specificity. We then develop a Feature-consistency Generative Adversarial Network (FGAN) to impute missing neuroimages, where feature maps (generated by DSNet) of a synthetic image and its respective real image are encouraged to be consistent while preserving the disease-image-specific information. Since our FGAN is coupled with DSNet, missing neuroimages can be synthesized in a diagnosis-oriented manner. Experimental results on three datasets suggest that our method can not only generate reasonable neuroimages, but also achieve state-of-the-art performance in both Alzheimer's disease identification and mild cognitive impairment conversion prediction.
4. Bi-MGAN: Bidirectional T1-to-T2 MRI images prediction using multi-generative multi-adversarial nets. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103994.
5. Tomar D, Lortkipanidze M, Vray G, Bozorgtabar B, Thiran JP. Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation. IEEE Transactions on Medical Imaging 2021; 40:2926-2938. PMID: 33577450; DOI: 10.1109/tmi.2021.3059265.
Abstract
Despite the successes of deep neural networks on many challenging vision tasks, they often fail to generalize to new test domains that are not distributed identically to the training data. Domain adaptation becomes more challenging for cross-modality medical data with a notable domain shift, given that specific annotated imaging modalities may not be accessible or complete. Our proposed solution is based on the cross-modality synthesis of medical images to reduce the costly annotation burden on radiologists and bridge the domain gap in radiological images. We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups. Built upon adversarial training, we propose a learnable self-attentive spatial normalization of the deep convolutional generator network's intermediate activations. Unlike previous attention-based image-to-image translation approaches, which are either domain-specific or require distortion of the source domain's structures, we show the importance of auxiliary semantic information for handling geometric changes and preserving anatomical structures during image translation. We achieve superior results for cross-modality segmentation between unpaired MRI and CT data on multi-modality whole-heart and multi-modal brain tumor MRI (T1/T2) datasets compared to state-of-the-art methods. We also observe encouraging results in cross-modality conversion for paired MRI and CT images on a brain dataset. Furthermore, a detailed analysis of the cross-modality image translation and thorough ablation studies confirm the efficacy of our proposed method.
6. Fei Y, Zhan B, Hong M, Wu X, Zhou J, Wang Y. Deep learning-based multi-modal computing with feature disentanglement for MRI image synthesis. Med Phys 2021; 48:3778-3789. PMID: 33959965; DOI: 10.1002/mp.14929.
Abstract
PURPOSE Different magnetic resonance imaging (MRI) modalities of the same anatomical structure are required because each presents different pathological information for diagnostic needs. However, it is often difficult to obtain full-sequence MRI images of patients owing to limitations such as time consumption and high cost. The purpose of this work is to develop an algorithm that predicts target MRI sequences with high accuracy and provides more information for clinical diagnosis. METHODS We propose a deep learning-based multi-modal computing model for MRI synthesis with a feature disentanglement strategy. To take full advantage of the complementary information provided by different modalities, multi-modal MRI sequences are utilized as input. Notably, the proposed approach decomposes each input modality into a modality-invariant space with shared information and a modality-specific space with specific information, so that features are extracted separately to effectively process the input data. Subsequently, both are fused through the adaptive instance normalization (AdaIN) layer in the decoder. In addition, to address the lack of specific information of the target modality in the test phase, a local adaptive fusion (LAF) module is adopted to generate a modality-like pseudo-target with specific information similar to the ground truth. RESULTS To evaluate synthesis performance, we verify our method on the BRATS2015 dataset of 164 subjects. The experimental results demonstrate that our approach significantly outperforms the benchmark method and other state-of-the-art medical image synthesis methods in both quantitative and qualitative measures. Compared with the pix2pixGANs method, the PSNR improves from 23.68 to 24.8. Moreover, ablation studies verified the effectiveness of important components of the proposed method.
CONCLUSION The proposed method could be effective in predicting target MRI sequences and useful for clinical diagnosis and treatment.
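The AdaIN fusion step mentioned in METHODS normalizes the shared (modality-invariant) features and re-styles them with statistics from the modality-specific branch. Below is a minimal numpy sketch of a plain AdaIN layer; it is the generic operation, not the paper's exact decoder code, and the variable names are illustrative.

```python
import numpy as np

def adain(content, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization over a (C, H, W) feature map:
    normalize each channel of `content` to zero mean / unit variance,
    then re-scale and shift with per-channel style statistics."""
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mu) / (sigma + eps)
    return style_std[:, None, None] * normalized + style_mean[:, None, None]

# Two-channel toy feature map restyled with given statistics.
x = np.random.default_rng(1).normal(size=(2, 4, 4))
out = adain(x, style_mean=np.array([1.0, -1.0]), style_std=np.array([2.0, 0.5]))
```

After the call, each output channel carries the requested style mean and standard deviation regardless of the content statistics, which is exactly why AdaIN is a convenient fusion point for modality-specific information.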
Affiliation(s)
- Yuchen Fei
- School of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
- Bo Zhan
- School of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
- Mei Hong
- School of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
- Xi Wu
- School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, China
- Jiliu Zhou
- School of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China; School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, China
- Yan Wang
- School of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
7. Wang C, Yang G, Papanastasiou G, Tsaftaris SA, Newby DE, Gray C, Macnaught G, MacGillivray TJ. DiCyc: GAN-based deformation invariant cross-domain information fusion for medical image synthesis. Information Fusion 2021; 67:147-160. PMID: 33658909; PMCID: PMC7763495; DOI: 10.1016/j.inffus.2020.10.015.
Abstract
The cycle-consistent generative adversarial network (CycleGAN) has been widely used for cross-domain medical image synthesis tasks, particularly because of its ability to deal with unpaired data. However, most CycleGAN-based synthesis methods cannot achieve good alignment between the synthesized images and data from the source domain, even with additional image alignment losses. This is because the CycleGAN generator network can encode the relative deformations and noise associated with different domains. This can be detrimental for downstream applications that rely on the synthesized images, such as generating pseudo-CT for PET-MR attenuation correction. In this paper, we present a deformation-invariant cycle-consistency model that can filter out these domain-specific deformations. The deformation is globally parameterized by a thin-plate spline (TPS) and locally learned by modified deformable convolutional layers. Robustness to domain-specific deformations has been evaluated through experiments on multi-sequence brain MR data and multi-modality abdominal CT and MR data. Experimental results demonstrate that our method achieves better alignment between the source and target data while maintaining superior image quality compared to several state-of-the-art CycleGAN-based methods.
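The cycle-consistency constraint that DiCyc builds on can be sketched in a few lines: translate source to target and back, then penalize the L1 difference from the input (DiCyc additionally factors out a TPS-parameterized deformation before this comparison, which is not shown here). The toy "generators" below are linear stand-ins, purely for illustration.

```python
import numpy as np

def cycle_loss(x, g_ab, g_ba):
    """L1 cycle-consistency: translate A -> B -> A and compare with the
    input. `g_ab` / `g_ba` stand in for the two generators."""
    return np.abs(g_ba(g_ab(x)) - x).mean()

g_ab = lambda x: 2.0 * x + 1.0    # toy "generator" A -> B
g_ba = lambda x: (x - 1.0) / 2.0  # its exact inverse, B -> A
x = np.linspace(0.0, 1.0, 8)
loss = cycle_loss(x, g_ab, g_ba)  # exact inverse, so loss is ~0
```

When the second generator fails to invert the first (for instance because it has absorbed a domain-specific deformation), this loss grows, which is the failure mode DiCyc's deformation model is designed to separate out.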
Affiliation(s)
- Chengjia Wang
- BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Corresponding author.
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, UK
- Sotirios A. Tsaftaris
- Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh, UK
- David E. Newby
- BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Calum Gray
- Edinburgh Imaging Facility QMRI, University of Edinburgh, Edinburgh, UK
- Gillian Macnaught
- Edinburgh Imaging Facility QMRI, University of Edinburgh, Edinburgh, UK
8. Kawahara D, Nagata Y. T1-weighted and T2-weighted MRI image synthesis with convolutional generative adversarial networks. Rep Pract Oncol Radiother 2021; 26:35-42. PMID: 33948300; DOI: 10.5603/rpor.a2021.0005.
Abstract
Background The objective of this study was to propose an optimal input image quality for a conditional generative adversarial network (GAN) applied to T1-weighted and T2-weighted magnetic resonance imaging (MRI) images. Materials and methods A total of 2,024 images scanned from 2017 to 2018 in 104 patients were used. Prediction frameworks from T1-weighted to T2-weighted MRI images and from T2-weighted to T1-weighted MRI images were created with a GAN. Two image sizes (512 × 512 and 256 × 256) and two grayscale-level conversion methods (simple and adaptive) were used for the input images. In the simple conversion method, the images were converted from 16-bit to 8-bit by dividing the full range into 256 levels. In the adaptive conversion method, the unused levels were eliminated from the 16-bit images, which were then converted to 8-bit by dividing by the value obtained from dividing the maximum pixel value by 256. Results The relative mean absolute error (rMAE) was 0.15 for T1-weighted to T2-weighted MRI images and 0.17 for T2-weighted to T1-weighted MRI images with the adaptive conversion method, which was the smallest. Moreover, the adaptive conversion method had the smallest relative mean square error (rMSE) and relative root mean square error (rRMSE), and the largest peak signal-to-noise ratio (PSNR) and mutual information (MI). The computation time depended on the image size. Conclusions Input resolution and image size affect the accuracy of prediction. The proposed model and prediction framework can help improve the versatility and quality of multi-contrast MRI examinations without the need for prolonged scans.
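The two grayscale conversions compared above can be written down directly: the simple method collapses the full 16-bit range into 256 levels, while the adaptive method rescales by the occupied range (maximum pixel value divided by 256) so the 8-bit output uses all levels. A small numpy sketch, with function names of our own choosing:

```python
import numpy as np

def simple_8bit(img16):
    """Simple conversion: divide the full 16-bit range into 256 levels."""
    return (img16 // 256).astype(np.uint8)

def adaptive_8bit(img16):
    """Adaptive conversion as described: divide by (max pixel value / 256)
    so the occupied intensity range spreads across all 256 output levels."""
    scale = img16.max() / 256.0
    out = np.minimum(img16 / scale, 255)  # clamp the maximum into uint8 range
    return out.astype(np.uint8)

# MRI intensities often occupy only the low end of the 16-bit range.
img = np.array([[0, 100, 500], [1000, 2000, 4000]], dtype=np.uint16)
simple = simple_8bit(img)     # values collapse into 0..15
adaptive = adaptive_8bit(img) # values spread across 0..255
```

On this toy image the simple conversion wastes most of the 8-bit range, while the adaptive conversion preserves contrast, which is consistent with the adaptive method's better rMAE in the study.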
Affiliation(s)
- Daisuke Kawahara
- Department of Radiation Oncology, Institute of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
- Yasushi Nagata
- Department of Radiation Oncology, Institute of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan; Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, Japan
9. Hermann I, Martínez-Heras E, Rieger B, Schmidt R, Golla AK, Hong JS, Lee WK, Yu-Te W, Nagtegaal M, Solana E, Llufriu S, Gass A, Schad LR, Weingärtner S, Zöllner FG. Accelerated white matter lesion analysis based on simultaneous T1 and T2* quantification using magnetic resonance fingerprinting and deep learning. Magn Reson Med 2021; 86:471-486. PMID: 33547656; DOI: 10.1002/mrm.28688.
Abstract
PURPOSE To develop an accelerated postprocessing pipeline for reproducible and efficient assessment of white matter lesions using quantitative magnetic resonance fingerprinting (MRF) and deep learning. METHODS MRF scans using echo-planar imaging (EPI) with varying repetition and echo times were acquired for whole-brain quantification of T1 and T2* in 50 subjects with multiple sclerosis (MS) and 10 healthy volunteers at two centers. MRF T1 and T2* parametric maps were distortion corrected and denoised. A CNN was trained to reconstruct the T1 and T2* parametric maps as well as the white matter (WM) and gray matter (GM) probability maps. RESULTS Deep learning-based postprocessing reduced reconstruction and image processing times from hours to a few seconds while maintaining high accuracy, reliability, and precision. The mean absolute error performed best for T1 (deviations 5.6%) and the logarithmic hyperbolic cosine loss best for T2* (deviations 6.0%). CONCLUSIONS MRF is a fast and robust tool for quantitative T1 and T2* mapping. Its long reconstruction and several postprocessing steps can be facilitated and accelerated using deep learning.
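The logarithmic hyperbolic cosine loss that worked best for T2* here is a standard regression loss: approximately quadratic for small errors and approximately linear for large ones, making it robust to outlier voxels. A minimal numpy version (illustrative, not the paper's training code):

```python
import numpy as np

def log_cosh_loss(pred, target):
    """Log-cosh loss: log(cosh(e)) ~ e^2/2 for small errors and
    ~ |e| - log(2) for large errors, so it blends L2 and L1 behavior."""
    err = pred - target
    return np.mean(np.log(np.cosh(err)))

pred = np.array([1.0, 2.0, 3.0])
target = np.array([1.1, 1.9, 3.0])
loss = log_cosh_loss(pred, target)
```

For numerically large errors, `np.cosh` can overflow; production implementations usually use the stable identity log(cosh(e)) = |e| + log1p(exp(-2|e|)) - log(2).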
Affiliation(s)
- Ingo Hermann
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Department of Imaging Physics, Delft University of Technology, Delft, the Netherlands
- Eloy Martínez-Heras
- Center of Neuroimmunology, Laboratory of Advanced Imaging in Neuroimmunological Diseases, Hospital Clinic Barcelona, Institut d'Investigacions Biomédiques August Pi i Sunyer (IDIBAPS) and Universitat de Barcelona, Barcelona, Spain
- Benedikt Rieger
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Ralf Schmidt
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Alena-Kathrin Golla
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Jia-Sheng Hong
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan
- Wei-Kai Lee
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan
- Yu-Te Wu
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan; Institute of Biophotonics and Brain Research Center, National Yang-Ming University, Taipei, Taiwan
- Martijn Nagtegaal
- Department of Imaging Physics, Delft University of Technology, Delft, the Netherlands
- Elisabeth Solana
- Center of Neuroimmunology, Laboratory of Advanced Imaging in Neuroimmunological Diseases, Hospital Clinic Barcelona, Institut d'Investigacions Biomédiques August Pi i Sunyer (IDIBAPS) and Universitat de Barcelona, Barcelona, Spain
- Sara Llufriu
- Center of Neuroimmunology, Laboratory of Advanced Imaging in Neuroimmunological Diseases, Hospital Clinic Barcelona, Institut d'Investigacions Biomédiques August Pi i Sunyer (IDIBAPS) and Universitat de Barcelona, Barcelona, Spain
- Achim Gass
- Department of Neurology, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Lothar R Schad
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Sebastian Weingärtner
- Department of Imaging Physics, Delft University of Technology, Delft, the Netherlands
- Frank G Zöllner
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
10. Dai X, Lei Y, Fu Y, Curran WJ, Liu T, Mao H, Yang X. Multimodal MRI synthesis using unified generative adversarial networks. Med Phys 2020; 47:6343-6354. PMID: 33053202; PMCID: PMC7796974; DOI: 10.1002/mp.14539.
Abstract
PURPOSE Complementary information obtained from multiple tissue contrasts helps physicians assess, diagnose, and plan treatment for a variety of diseases. However, acquiring multi-contrast magnetic resonance images (MRI) for every patient using multiple pulse sequences is time-consuming and expensive, and medical image synthesis has been demonstrated as an effective alternative. The purpose of this study is to develop a unified framework for multimodal MR image synthesis. METHODS A unified generative adversarial network consisting of only a single generator and a single discriminator was developed to learn the mappings among images of four different modalities. The generator took an image and its modality label as inputs and learned to synthesize the image in the target modality, while the discriminator was trained to distinguish between real and synthesized images and classify them into their corresponding modalities. The network was trained and tested using multimodal brain MRI consisting of four different contrasts: T1-weighted (T1), T1-weighted and contrast-enhanced (T1c), T2-weighted (T2), and fluid-attenuated inversion recovery (FLAIR). Quantitative assessments of the proposed method were made by computing the normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), visual information fidelity (VIF), and naturalness image quality evaluator (NIQE). RESULTS The proposed model was trained and tested on a cohort of 274 glioma patients with well-aligned multi-type MRI scans. After training, tests were conducted using each of T1, T1c, T2, and FLAIR as a single input modality to generate the respective remaining modalities. The proposed method shows high accuracy and robustness for image synthesis with any single MRI modality available in the database as input.
For example, with T1 as the input modality, the NMAEs for the generated T1c, T2, and FLAIR are 0.034 ± 0.005, 0.041 ± 0.006, and 0.041 ± 0.006, respectively; the PSNRs are 32.353 ± 2.525 dB, 30.016 ± 2.577 dB, and 29.091 ± 2.795 dB; the SSIMs are 0.974 ± 0.059, 0.969 ± 0.059, and 0.959 ± 0.059; the VIFs are 0.750 ± 0.087, 0.706 ± 0.097, and 0.654 ± 0.062; and the NIQEs are 1.396 ± 0.401, 1.511 ± 0.460, and 1.259 ± 0.358. CONCLUSIONS We proposed a novel multimodal MR image synthesis method based on a unified generative adversarial network. The network takes an image and its modality label as inputs and synthesizes multimodal images in a single forward pass. The results demonstrate that the proposed method is able to accurately synthesize multimodal MR images from a single MR image.
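Two of the metrics reported above are easy to state exactly; a short numpy sketch of NMAE and PSNR follows. Note that normalization conventions for NMAE vary, and dividing by the target's intensity range is our assumption here, not necessarily the paper's.

```python
import numpy as np

def nmae(pred, target):
    """Normalized mean absolute error: MAE divided by the target's
    intensity range (one common normalization convention)."""
    return np.abs(pred - target).mean() / (target.max() - target.min())

def psnr(pred, target, data_range=None):
    """Peak signal-to-noise ratio in dB, with the peak taken as the
    target's intensity range unless given explicitly."""
    if data_range is None:
        data_range = target.max() - target.min()
    mse = ((pred - target) ** 2).mean()
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy check: a uniform +0.01 error on a unit-range target.
target = np.array([0.0, 0.5, 1.0, 0.25])
pred = target + 0.01
```

On this toy pair, NMAE is 0.01 and PSNR is 40 dB, which illustrates how the two metrics move together for a fixed uniform error.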
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
11. Zhou T, Fu H, Chen G, Shen J, Shao L. Hi-Net: Hybrid-Fusion Network for Multi-Modal MR Image Synthesis. IEEE Transactions on Medical Imaging 2020; 39:2772-2781. PMID: 32086202; DOI: 10.1109/tmi.2020.2975344.
Abstract
Magnetic resonance imaging (MRI) is a widely used neuroimaging technique that can provide images of different contrasts (i.e., modalities). Fusing this multi-modal data has proven particularly effective for boosting model performance in many tasks. However, due to poor data quality and frequent patient dropout, collecting all modalities for every patient remains a challenge. Medical image synthesis has been proposed as an effective solution, whereby any missing modalities are synthesized from the existing ones. In this paper, we propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis, which learns a mapping from multi-modal source images (i.e., existing modalities) to target images (i.e., missing modalities). In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality, and a fusion network is employed to learn the common latent representation of multi-modal data. A multi-modal synthesis network is then designed to densely combine the latent representation with hierarchical features from each modality, acting as a generator to synthesize the target images. Moreover, a layer-wise multi-modal fusion strategy effectively exploits the correlations among multiple modalities, where a Mixed Fusion Block (MFB) is proposed to adaptively weight different fusion strategies. Extensive experiments demonstrate that the proposed model outperforms other state-of-the-art medical image synthesis methods.
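The adaptive weighting idea behind the Mixed Fusion Block can be illustrated simply: softmax a set of learned logits into weights and take the weighted sum of the candidate feature maps. This is a schematic numpy sketch of adaptively weighted fusion, not the actual MFB, which weights whole fusion strategies inside a trained network.

```python
import numpy as np

def weighted_fusion(features, logits):
    """Adaptively weighted fusion: softmax the learned logits into
    normalized weights, then take the weighted sum of the candidate
    per-modality feature maps (all of the same shape)."""
    w = np.exp(logits) / np.exp(logits).sum()
    return sum(wi * f for wi, f in zip(w, features))

# Two toy 2x2 feature maps; equal logits give equal weights.
f1 = np.ones((2, 2))
f2 = np.full((2, 2), 3.0)
fused = weighted_fusion([f1, f2], logits=np.array([0.0, 0.0]))
```

In a trained network the logits would come from a learnable parameter or a small sub-network, so the balance between fusion candidates adapts to the data rather than being fixed.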
12. Kim S, Jang H, Jang J, Lee YH, Hwang D. Deep-learned short tau inversion recovery imaging using multi-contrast MR images. Magn Reson Med 2020; 84:2994-3008. DOI: 10.1002/mrm.28327.
Affiliation(s)
- Sewon Kim
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
- Hanbyol Jang
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
- Jinseong Jang
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
- Young Han Lee
- Department of Radiology and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
- Dosik Hwang
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
13. Sharma A, Hamarneh G. Missing MRI Pulse Sequence Synthesis Using Multi-Modal Generative Adversarial Network. IEEE Transactions on Medical Imaging 2020; 39:1170-1183. PMID: 31603773; DOI: 10.1109/tmi.2019.2945521.
Abstract
Magnetic resonance imaging (MRI) is increasingly utilized to assess, diagnose, and plan treatment for a variety of diseases. The ability to visualize tissue in varied contrasts, in the form of MR pulse sequences within a single scan, provides valuable insights to physicians as well as to automated systems performing downstream analysis. However, many issues such as prohibitive scan time, image corruption, different acquisition protocols, or allergies to certain contrast materials may hinder the process of acquiring multiple sequences for a patient. This poses challenges to both physicians and automated systems, since the complementary information provided by the missing sequences is lost. In this paper, we propose a variant of the generative adversarial network (GAN) capable of leveraging redundant information contained within multiple available sequences in order to generate one or more missing sequences for a patient scan. The proposed network is designed as a multi-input, multi-output network that combines information from all available pulse sequences and synthesizes the missing ones in a single forward pass. We demonstrate and validate our method on two brain MRI datasets, each with four sequences, and show its applicability in simultaneously synthesizing all missing sequences in any possible scenario where one, two, or three of the four sequences may be missing. We compare our approach with competing unimodal and multi-modal methods and show that we outperform both quantitatively and qualitatively.
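The "any possible scenario" claim enumerates every subset of one, two, or three missing sequences out of four, i.e. 4 + 6 + 4 = 14 input/output configurations. A short Python sketch of that enumeration (the sequence names follow common brain-MRI protocols and are our assumption, not necessarily the paper's labels):

```python
from itertools import combinations

SEQUENCES = ("T1", "T1c", "T2", "FLAIR")

def missing_scenarios(sequences=SEQUENCES):
    """All ways that one, two, or three of the four sequences can be
    missing; the remaining sequences are the network's inputs."""
    scenarios = []
    for k in (1, 2, 3):  # at least one sequence must remain as input
        for missing in combinations(sequences, k):
            available = tuple(s for s in sequences if s not in missing)
            scenarios.append((available, missing))
    return scenarios

cases = missing_scenarios()  # 4 + 6 + 4 = 14 scenarios in total
```

A multi-input, multi-output network trained across all of these cases can then serve any of them with a single forward pass, which is the practical appeal of the design.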
14. Yang Q, Li N, Zhao Z, Fan X, Chang EIC, Xu Y. MRI Cross-Modality Image-to-Image Translation. Sci Rep 2020; 10:3753. PMID: 32111966; PMCID: PMC7048849; DOI: 10.1038/s41598-020-60520-6.
Abstract
We present a cross-modality generation framework that learns to generate translated modalities from given modalities in MR images. Our proposed method performs Image Modality Translation (abbreviated as IMT) by means of a deep learning model that leverages conditional generative adversarial networks (cGANs). Our framework jointly exploits the low-level features (pixel-wise information) and high-level representations (e.g., brain tumors and brain structures such as gray matter) across modalities, which are important for resolving the challenging complexity of brain structures. Based on this framework, we first propose a method for cross-modality registration that fuses deformation fields to incorporate the cross-modality information from translated modalities. Second, we propose an approach for MRI segmentation, translated multichannel segmentation (TMS), in which the given modalities, along with the translated modalities, are segmented by fully convolutional networks (FCN) in a multichannel manner. Both methods successfully exploit the cross-modality information to improve performance without adding any extra data. Experiments demonstrate that our proposed framework advances the state of the art on five brain MRI datasets. We also observe encouraging results in cross-modality registration and segmentation on several widely adopted brain datasets. Overall, our work can serve as an auxiliary method in medical use and be applied to various tasks in medical fields.
Grants
- This work is supported by Microsoft Research under the eHealth program, the National Natural Science Foundation in China under Grant 81771910, the National Science and Technology Major Project of the Ministry of Science and Technology in China under Grant 2017YFC0110903, the Beijing Natural Science Foundation in China under Grant 4152033, the Technology and Innovation Commission of Shenzhen in China under Grant shenfagai2016-627, Beijing Young Talent Project in China, the Fundamental Research Funds for the Central Universities of China under Grant SKLSDE-2017ZX-08 from the State Key Laboratory of Software Development Environment in Beihang University in China, the 111 Project in China under Grant B13003.
Affiliation(s)
- Qianye Yang
- State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and Research Institute of Beihang University in Shenzhen, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Nannan Li
- State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and Research Institute of Beihang University in Shenzhen, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Ping An Technology (Shenzhen) Co., Ltd., Shanghai, 200030, China
- Zixu Zhao
- State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and Research Institute of Beihang University in Shenzhen, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Xingyu Fan
- Bioengineering College of Chongqing University, Chongqing, 400044, China
- Yan Xu
- State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and Research Institute of Beihang University in Shenzhen, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China.
- Microsoft Research Asia, Beijing, 100080, China.
15
Dar SU, Yurt M, Karacan L, Erdem A, Erdem E, Cukur T. Image Synthesis in Multi-Contrast MRI With Conditional Generative Adversarial Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2375-2388. [PMID: 30835216 DOI: 10.1109/tmi.2019.2901750] [Citation(s) in RCA: 233] [Impact Index Per Article: 38.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Acquiring images of the same anatomy with multiple different contrasts increases the diversity of diagnostic information available in an MR exam. Yet, scan time limitations may prohibit the acquisition of certain contrasts, and some contrasts may be corrupted by noise and artifacts. In such cases, the ability to synthesize unacquired or corrupted contrasts can improve diagnostic utility. For multi-contrast synthesis, current methods learn a nonlinear intensity transformation between the source and target images, either via nonlinear regression or deterministic neural networks. These methods can, in turn, suffer from the loss of structural details in synthesized images. In this paper, we propose a new approach for multi-contrast MRI synthesis based on conditional generative adversarial networks. The proposed approach preserves intermediate-to-high-frequency details via an adversarial loss, and it offers enhanced synthesis performance via pixel-wise and perceptual losses for registered multi-contrast images and a cycle-consistency loss for unregistered images. Information from neighboring cross-sections is utilized to further improve synthesis quality. Demonstrations on T1- and T2-weighted images from healthy subjects and patients clearly indicate the superior performance of the proposed approach compared to previous state-of-the-art methods. Our synthesis approach can help improve the quality and versatility of multi-contrast MRI exams without the need for prolonged or repeated examinations.
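The loss combination described above can be sketched as a weighted sum of adversarial, pixel-wise, and perceptual terms. A toy numpy illustration, assuming a fixed random linear map as a stand-in for the perceptual feature extractor and illustrative loss weights:

```python
import numpy as np

rng = np.random.default_rng(2)
feat = rng.standard_normal((32, 256))  # stand-in for a fixed perceptual feature extractor

def generator_loss(fake, real, d_fake_score, lam_pix=100.0, lam_perc=10.0):
    """Composite generator objective: adversarial + pixel-wise L1 + perceptual L1.
    The weights and the non-saturating adversarial form are illustrative."""
    adv = -np.log(d_fake_score + 1e-8)               # adversarial term
    pix = np.abs(fake - real).mean()                 # pixel-wise loss
    perc = np.abs(feat @ fake - feat @ real).mean()  # perceptual (feature-space) loss
    return adv + lam_pix * pix + lam_perc * perc

fake, real = rng.standard_normal(256), rng.standard_normal(256)
loss = generator_loss(fake, real, d_fake_score=0.3)
```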
16
Spira AP, An Y, Wu MN, Owusu JT, Simonsick EM, Bilgel M, Ferrucci L, Wong DF, Resnick SM. Excessive daytime sleepiness and napping in cognitively normal adults: associations with subsequent amyloid deposition measured by PiB PET. Sleep 2019; 41:5088807. [PMID: 30192978 DOI: 10.1093/sleep/zsy152] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2018] [Indexed: 11/14/2022] Open
Abstract
Study Objectives To determine the association of excessive daytime sleepiness (EDS) and napping with subsequent brain β-amyloid (Aβ) deposition in cognitively normal persons. Methods We studied 124 community-dwelling participants in the Baltimore Longitudinal Study of Aging Neuroimaging Substudy who completed self-report measures of EDS and napping at our study baseline and underwent [11C] Pittsburgh compound B positron emission tomography (PiB PET) scans of the brain, an average ±standard deviation of 15.7 ± 3.4 years later (range 6.9 to 24.6). Scans with a cortical distribution volume ratio of >1.06 were considered Aβ-positive. Results Participants were aged 60.1 ± 9.8 years (range 36.2 to 82.7) at study baseline; 24.4% had EDS and 28.5% napped. In unadjusted analyses, compared with participants without EDS, those with EDS had more than 3 times the odds of being Aβ+ at follow-up (odds ratio [OR] = 3.37, 95% confidence interval [CI]: 1.44, 7.90, p = 0.005), and 2.75 times the odds after adjustment for age, age2, sex, education, and body mass index (OR = 2.75, 95% CI: 1.09, 6.95, p = 0.033). There was a trend-level unadjusted association between napping and Aβ status (OR = 2.01, 95% CI: 0.90, 4.50, p = 0.091) that became nonsignificant after adjustment (OR = 1.86, 95% CI: 0.73, 4.75, p = 0.194). Conclusions EDS is associated with more than 2.5 times the odds of Aβ deposition an average of 15.7 years later. If common EDS causes (e.g., sleep-disordered breathing, insufficient sleep) are associated with temporally distal AD biomarkers, this could have important implications for AD prevention.
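The unadjusted odds ratios reported above follow from a standard 2x2 table calculation with a Wald confidence interval; a small sketch with hypothetical counts (not the study's actual data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & outcome+, b = exposed & outcome-,
    c = unexposed & outcome+, d = unexposed & outcome-."""
    est = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(est) - z * se_log)
    hi = math.exp(math.log(est) + z * se_log)
    return est, lo, hi

# Hypothetical counts for illustration only (not the study's data)
est, lo, hi = odds_ratio_ci(12, 18, 20, 74)
```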
Affiliation(s)
- Adam P Spira
- Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins School of Medicine, Baltimore, MD
- Johns Hopkins Center on Aging and Health, Baltimore, MD
- Yang An
- National Institute on Aging Intramural Research Program, Baltimore, MD
- Mark N Wu
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD
- Solomon Snyder Department of Neuroscience, Johns Hopkins School of Medicine, Baltimore, MD
- Jocelynn T Owusu
- Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Murat Bilgel
- National Institute on Aging Intramural Research Program, Baltimore, MD
- Luigi Ferrucci
- National Institute on Aging Intramural Research Program, Baltimore, MD
- Dean F Wong
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD
- Solomon Snyder Department of Neuroscience, Johns Hopkins School of Medicine, Baltimore, MD
- Russell H Morgan Department of Radiology, Division of Nuclear Medicine and Molecular Imaging/High Resolution Brain PET, Johns Hopkins School of Medicine, Baltimore, MD
- Department of Environmental Health and Engineering, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD
- Susan M Resnick
- National Institute on Aging Intramural Research Program, Baltimore, MD
17
Schilling KG, Blaber J, Huo Y, Newton A, Hansen C, Nath V, Shafer AT, Williams O, Resnick SM, Rogers B, Anderson AW, Landman BA. Synthesized b0 for diffusion distortion correction (Synb0-DisCo). Magn Reson Imaging 2019; 64:62-70. [PMID: 31075422 DOI: 10.1016/j.mri.2019.05.008] [Citation(s) in RCA: 120] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2018] [Revised: 04/02/2019] [Accepted: 05/04/2019] [Indexed: 02/07/2023]
Abstract
Diffusion magnetic resonance images typically suffer from spatial distortions due to susceptibility-induced off-resonance fields, which may affect the geometric fidelity of the reconstructed volume and cause mismatches with anatomical images. State-of-the-art susceptibility correction (for example, FSL's TOPUP algorithm) typically requires data acquired twice with reversed phase-encoding directions, referred to as blip-up blip-down acquisitions, in order to estimate an undistorted volume. Unfortunately, not all imaging protocols include a blip-up blip-down acquisition, and those that do not cannot take advantage of state-of-the-art susceptibility and motion correction capabilities. In this study, we aim to enable TOPUP-like processing with historical and/or limited diffusion imaging data that include only a structural image and a single-blip diffusion image. We utilize deep learning to synthesize an undistorted non-diffusion-weighted image from the structural image, and use this undistorted synthetic image as an anatomical target for distortion correction. We evaluate the efficacy of this approach (named Synb0-DisCo) and show that our distortion correction process results in better matching of the geometry of undistorted anatomical images, reduces variation in diffusion modeling, and is practically equivalent to having both blip-up and blip-down non-diffusion-weighted images.
Affiliation(s)
- Kurt G Schilling
- Vanderbilt University Institute of Imaging Science, Vanderbilt University, Nashville, TN, United States of America.
- Justin Blaber
- Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN, United States of America
- Yuankai Huo
- Department of Electrical Engineering, Vanderbilt University, Nashville, TN, United States of America
- Allen Newton
- Vanderbilt University Institute of Imaging Science, Vanderbilt University, Nashville, TN, United States of America; Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, United States of America
- Colin Hansen
- Department of Electrical Engineering, Vanderbilt University, Nashville, TN, United States of America
- Vishwesh Nath
- Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN, United States of America
- Andrea T Shafer
- Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD, United States of America
- Owen Williams
- Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD, United States of America
- Susan M Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD, United States of America
- Baxter Rogers
- Vanderbilt University Institute of Imaging Science, Vanderbilt University, Nashville, TN, United States of America
- Adam W Anderson
- Vanderbilt University Institute of Imaging Science, Vanderbilt University, Nashville, TN, United States of America
- Bennett A Landman
- Vanderbilt University Institute of Imaging Science, Vanderbilt University, Nashville, TN, United States of America; Department of Electrical Engineering, Vanderbilt University, Nashville, TN, United States of America; Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN, United States of America
18
Chartsias A, Joyce T, Giuffrida MV, Tsaftaris SA. Multimodal MR Synthesis via Modality-Invariant Latent Representation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:803-814. [PMID: 29053447 PMCID: PMC5904017 DOI: 10.1109/tmi.2017.2764326] [Citation(s) in RCA: 131] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
We propose a multi-input multi-output fully convolutional neural network model for MRI synthesis. The model is robust to missing data, as it benefits from, but does not require, additional input modalities. The model is trained end-to-end, and learns to embed all input modalities into a shared modality-invariant latent space. These latent representations are then combined into a single fused representation, which is transformed into the target output modality with a learnt decoder. We avoid the need for curriculum learning by exploiting the fact that the various input modalities are highly correlated. We also show that by incorporating information from segmentation masks the model can both decrease its error and generate data with synthetic lesions. We evaluate our model on the ISLES and BRATS data sets and demonstrate statistically significant improvements over state-of-the-art methods for single input tasks. This improvement increases further when multiple input modalities are used, demonstrating the benefits of learning a common latent space, again resulting in a statistically significant improvement over the current best method. Finally, we demonstrate our approach on non-skull-stripped brain images, producing a statistically significant improvement over the previous best method. Code is made publicly available at https://github.com/agis85/multimodal_brain_synthesis.
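The encode-fuse-decode scheme described above can be sketched with toy linear maps standing in for the learned convolutional encoders and decoder; the elementwise-max fusion and the dimensions here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim, n_modalities, n_pix = 16, 3, 64

# Toy per-modality encoders and a shared decoder (random linear maps;
# in the paper these are learned convolutional networks)
encoders = [rng.standard_normal((latent_dim, n_pix)) for _ in range(n_modalities)]
decoder = rng.standard_normal((n_pix, latent_dim))

def synthesize(available):
    """available: dict mapping modality index -> flattened image vector.
    Embed each available modality, fuse the latents, decode the target."""
    latents = [encoders[i] @ x for i, x in available.items()]
    fused = np.max(latents, axis=0)   # elementwise fusion of available latents
    return decoder @ fused

x0, x1 = rng.standard_normal(n_pix), rng.standard_normal(n_pix)
out_two = synthesize({0: x0, 1: x1})  # works with two inputs...
out_one = synthesize({0: x0})         # ...or just one (robust to missing data)
```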
Affiliation(s)
- Mario Valerio Giuffrida
- School of Engineering at The University of Edinburgh. Giuffrida and Tsaftaris are also with The Alan Turing Institute of London. Giuffrida is also with IMT Lucca
- Sotirios A. Tsaftaris
- School of Engineering at The University of Edinburgh. Giuffrida and Tsaftaris are also with The Alan Turing Institute of London. Giuffrida is also with IMT Lucca
19
Fleishman GM, Valcarcel A, Pham DL, Roy S, Calabresi PA, Yushkevich P, Shinohara RT, Oguz I. Joint Intensity Fusion Image Synthesis Applied to Multiple Sclerosis Lesion Segmentation. BRAINLESION : GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES. BRAINLES (WORKSHOP) 2018; 10670:43-54. [PMID: 29714357 PMCID: PMC5920684] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
We propose a new approach to Multiple Sclerosis lesion segmentation that utilizes synthesized images. A new method of image synthesis is considered: joint intensity fusion (JIF). JIF synthesizes an image from a library of deformably registered and intensity normalized atlases. Each location in the synthesized image is a weighted average of the registered atlases; atlas weights vary spatially. The weights are determined using the joint label fusion (JLF) framework. The primary methodological contribution is the application of JLF to MRI signal directly rather than labels. Synthesized images are then used as additional features in a lesion segmentation task using the OASIS classifier, a logistic regression model on intensities from multiple modalities. The addition of JIF synthesized images improved the Dice-Sorensen coefficient (relative to manually drawn gold standards) of lesion segmentations over the standard model segmentations by 0.0462 ± 0.0050 (mean ± standard deviation) at optimal threshold over all subjects and 10 separate training/testing folds.
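The JIF fusion step, a per-voxel weighted average of registered atlases with spatially varying weights, can be sketched directly (the weight normalization shown is an assumption; in the paper the weights come from the JLF framework):

```python
import numpy as np

def joint_intensity_fusion(atlases, weights):
    """Fuse registered, intensity-normalized atlases with spatially varying weights.
    atlases: (K, H, W) atlas intensities; weights: (K, H, W) non-negative weights."""
    w = weights / weights.sum(axis=0, keepdims=True)  # normalize per voxel
    return (w * atlases).sum(axis=0)                  # voxel-wise weighted average

rng = np.random.default_rng(0)
atlases = rng.random((4, 8, 8))
weights = rng.random((4, 8, 8))
synth = joint_intensity_fusion(atlases, weights)
```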
Affiliation(s)
- Greg M Fleishman
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Alessandra Valcarcel
- Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Dzung L Pham
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20817, USA
- Snehashis Roy
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20817, USA
- Peter A Calabresi
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Paul Yushkevich
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Russell T Shinohara
- Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Ipek Oguz
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
20
Cao X, Yang J, Gao Y, Guo Y, Wu G, Shen D. Dual-core steered non-rigid registration for multi-modal images via bi-directional image synthesis. Med Image Anal 2017; 41:18-31. [PMID: 28533050 PMCID: PMC5896773 DOI: 10.1016/j.media.2017.05.004] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2017] [Revised: 05/05/2017] [Accepted: 05/09/2017] [Indexed: 12/20/2022]
Abstract
In prostate cancer radiotherapy, computed tomography (CT) is widely used for dose planning purposes. However, because CT has low soft-tissue contrast, manual contouring of the major pelvic organs is difficult. In contrast, magnetic resonance imaging (MRI) provides high soft-tissue contrast, which makes it ideal for accurate manual contouring. Therefore, the contouring accuracy on CT can be significantly improved if the contours in MRI can be mapped to the CT domain by registering MRI with CT of the same subject, which would eventually lead to high treatment efficacy. In this paper, we propose a bi-directional image synthesis based approach for MRI-to-CT pelvic image registration. First, we use a patch-wise random forest with an auto-context model to learn the appearance mapping from CT to the MRI domain, and then vice versa. Consequently, we can synthesize a pseudo-MRI whose anatomical structures are exactly the same as the CT but with MRI-like appearance, and a pseudo-CT as well. Then, our MRI-to-CT registration can be steered in a dual manner, by simultaneously estimating two deformation pathways: 1) one from the pseudo-CT to the actual CT and 2) another from the actual MRI to the pseudo-MRI. Next, a dual-core deformation fusion framework is developed to iteratively and effectively combine these two registration pathways by using complementary information from both modalities. Experiments on a dataset with real pelvic CT and MRI have shown improved registration performance of the proposed method compared to conventional registration methods, indicating its high potential for translation to routine radiation therapy.
Affiliation(s)
- Xiaohuan Cao
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, China
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Yanrong Guo
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea.
21
Bowles C, Qin C, Guerrero R, Gunn R, Hammers A, Dickie DA, Valdés Hernández M, Wardlaw J, Rueckert D. Brain lesion segmentation through image synthesis and outlier detection. Neuroimage Clin 2017; 16:643-658. [PMID: 29868438 PMCID: PMC5984574 DOI: 10.1016/j.nicl.2017.09.003] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2017] [Revised: 08/30/2017] [Accepted: 09/04/2017] [Indexed: 11/02/2022]
Abstract
Cerebral small vessel disease (SVD) can manifest in a number of ways. Many of these result in hyperintense regions visible on T2-weighted magnetic resonance (MR) images. The automatic segmentation of these lesions has been the focus of many studies. However, previous methods tended to be limited to certain types of pathology, either by restricting the search to the white matter or by training on an individual pathology. Here we present an unsupervised abnormality detection method which is able to detect abnormally hyperintense regions on FLAIR images regardless of the underlying pathology or location. The method uses a combination of image synthesis, Gaussian mixture models and one-class support vector machines, and needs to be trained only on healthy tissue. We evaluate our method by comparing segmentation results from 127 subjects with SVD with three established methods and report significantly superior performance across a number of metrics.
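As a rough illustration of outlier detection trained only on healthy tissue, a one-sided Gaussian threshold can stand in for the paper's GMM plus one-class SVM combination (the threshold rule and parameters are illustrative, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(5)
healthy = rng.normal(100.0, 10.0, 5000)  # healthy-tissue FLAIR intensities (simulated)

# Fit a simple Gaussian model to healthy tissue only; flag voxels that are
# abnormally hyperintense relative to it (simplified stand-in for GMM + OC-SVM)
mu, sigma = healthy.mean(), healthy.std()

def is_hyperintense(voxels, k=3.0):
    """One-sided outlier rule: voxel is abnormal if above mu + k*sigma."""
    return voxels > mu + k * sigma

test_voxels = np.array([95.0, 102.0, 150.0])
mask = is_hyperintense(test_voxels)
```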
Affiliation(s)
- Chen Qin
- Department of Computing, Imperial College London, UK
- Roger Gunn
- Imanova Ltd., London, UK
- Department of Medicine, Imperial College London, UK
- Alexander Hammers
- Department of Computing, Imperial College London, UK
- King's College London & Guy's and St Thomas' PET Centre, Division of Imaging Sciences and Biomedical Engineering, St Thomas' Hospital, King's College London, UK
- Joanna Wardlaw
- Department of Neuroimaging Sciences, University of Edinburgh, UK
22
Bahrami K, Shi F, Rekik I, Gao Y, Shen D. 7T-guided super-resolution of 3T MRI. Med Phys 2017; 44:1661-1677. [PMID: 28177548 DOI: 10.1002/mp.12132] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2016] [Revised: 12/22/2016] [Accepted: 01/13/2017] [Indexed: 11/11/2022] Open
Abstract
PURPOSE High-resolution MR images can depict rich details of brain anatomical structures and show subtle changes in longitudinal data. 7T MRI scanners can acquire MR images with higher resolution and better tissue contrast than routine 3T MRI scanners. However, 7T MRI scanners are currently more expensive and less available in clinical and research centers. To this end, we propose a method to generate super-resolution 3T MRI that resembles 7T MRI, referred to as 7T-like MR images in this paper. METHODS First, we propose a mapping from 3T MRI to 7T MRI space, using regression random forest. The mapped 3T MR images serve as intermediate results with similar appearance to 7T MR images. Second, we predict the final higher-resolution 7T-like MR images based on sparse representation, using paired local dictionaries for both the mapped 3T MR images and the 7T MR images. RESULTS Based on 15 subjects with both 3T and 7T MR images, the 7T-like MR images predicted by our method best match the ground-truth 7T MR images, compared to other methods. Meanwhile, the experiment on brain tissue segmentation shows that our 7T-like MR images lead to the highest accuracy in the segmentation of WM, GM, and CSF brain tissues, compared to segmentations of 3T MR images as well as of 7T-like MR images reconstructed by other methods. CONCLUSIONS We propose a novel method for prediction of high-resolution 7T-like MR images from low-resolution 3T MR images. Our predicted 7T-like MR images demonstrate better spatial resolution than both the 3T MR images and the predictions of other comparison methods. Such high-quality 7T-like MR images could better facilitate disease diagnosis and intervention.
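The paired-dictionary prediction step can be illustrated with a 1-sparse toy version: code a 3T patch against a 3T dictionary, then reconstruct with the paired 7T atoms (the 1-sparse coding and random dictionaries are simplifying assumptions; the paper uses full sparse representation over learned dictionaries):

```python
import numpy as np

rng = np.random.default_rng(6)
n_atoms, dim = 50, 9
D3 = rng.standard_normal((n_atoms, dim))
D3 /= np.linalg.norm(D3, axis=1, keepdims=True)  # unit-norm 3T patch dictionary
D7 = rng.standard_normal((n_atoms, dim))         # paired 7T patch dictionary

def predict_7t_patch(p3):
    """1-sparse coding: pick the best-matching 3T atom for the patch,
    then reconstruct the 7T-like patch with the paired 7T atom."""
    coeffs = D3 @ p3                   # correlations with unit-norm atoms
    k = np.argmax(np.abs(coeffs))
    return coeffs[k] * D7[k]

p3 = rng.standard_normal(dim)
p7 = predict_7t_patch(p3)              # predicted 7T-like patch
```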
Affiliation(s)
- Khosro Bahrami
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27510, USA
- Feng Shi
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27510, USA
- Islem Rekik
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27510, USA
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27510, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina, 27510, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea
23
Chen M, Carass A, Jog A, Lee J, Roy S, Prince JL. Cross contrast multi-channel image registration using image synthesis for MR brain images. Med Image Anal 2017; 36:2-14. [PMID: 27816859 PMCID: PMC5239759 DOI: 10.1016/j.media.2016.10.005] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2015] [Revised: 10/13/2016] [Accepted: 10/17/2016] [Indexed: 11/21/2022]
Abstract
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information.
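The core idea, using synthesis so that a mono-modal metric such as SSD applies across channels, can be sketched as follows (the channel pairing and names are illustrative):

```python
import numpy as np

def multichannel_ssd(moving_channels, fixed_channels):
    """Mono-modal SSD summed over channels. Each channel pairs same-modality
    images, e.g. (real T1, T1 synthesized from the other image's modality) and
    (synthesized T2, real T2), so a simple intensity metric applies."""
    return float(sum(((m - f) ** 2).sum()
                     for m, f in zip(moving_channels, fixed_channels)))

rng = np.random.default_rng(7)
t1_fixed = rng.random((32, 32))
t2_fixed_synth = rng.random((32, 32))   # synthesized proxy for the fixed image
t1_moving_synth = rng.random((32, 32))  # synthesized proxy for the moving image
t2_moving = rng.random((32, 32))

cost = multichannel_ssd([t1_moving_synth, t2_moving],
                        [t1_fixed, t2_fixed_synth])
```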
Affiliation(s)
- Min Chen
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA.
- Aaron Carass
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218 USA.
- Amod Jog
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218 USA.
- Junghoon Lee
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Radiation Oncology and Molecular Radiation Sciences, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA.
- Snehashis Roy
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20892, USA.
- Jerry L Prince
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218 USA; Radiation Oncology and Molecular Radiation Sciences, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA.
24
Jog A, Carass A, Roy S, Pham DL, Prince JL. Random forest regression for magnetic resonance image synthesis. Med Image Anal 2017; 35:475-488. [PMID: 27607469 PMCID: PMC5099106 DOI: 10.1016/j.media.2016.08.009] [Citation(s) in RCA: 85] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2015] [Revised: 08/24/2016] [Accepted: 08/26/2016] [Indexed: 02/02/2023]
Abstract
By choosing different pulse sequences and their parameters, magnetic resonance imaging (MRI) can generate a large variety of tissue contrasts. This very flexibility, however, can yield inconsistencies with MRI acquisitions across datasets or scanning sessions that can in turn cause inconsistent automated image analysis. Although image synthesis of MR images has been shown to be helpful in addressing this problem, an inability to synthesize both T2-weighted brain images that include the skull and FLuid Attenuated Inversion Recovery (FLAIR) images has been reported. The method described herein, called REPLICA, addresses these limitations. REPLICA is a supervised random forest image synthesis approach that learns a nonlinear regression to predict intensities of alternate tissue contrasts given specific input tissue contrasts. Experimental results include direct image comparisons between synthetic and real images, results from image analysis tasks on both synthetic and real images, and comparison against other state-of-the-art image synthesis methods. REPLICA is computationally fast, and is shown to be comparable to other methods on tasks they are able to perform. Additionally REPLICA has the capability to synthesize both T2-weighted images of the full head and FLAIR images, and perform intensity standardization between different imaging datasets.
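REPLICA's regression framing, predicting target-contrast intensities from source-contrast patch features, can be sketched as follows; a 1-nearest-neighbor regressor stands in for the random forest, and the toy images and contrast mapping are assumptions:

```python
import numpy as np

def extract_patches(img, r=1):
    """Flatten each (2r+1)x(2r+1) neighborhood into a per-voxel feature vector."""
    H, W = img.shape
    pad = np.pad(img, r, mode="edge")
    feats = np.stack([pad[i:i + H, j:j + W]
                      for i in range(2 * r + 1) for j in range(2 * r + 1)], axis=-1)
    return feats.reshape(-1, (2 * r + 1) ** 2)

# Train on a paired source/target contrast (toy images; REPLICA fits a
# random forest here -- 1-nearest-neighbor regression stands in for it)
rng = np.random.default_rng(3)
src_train = rng.random((16, 16))
tgt_train = 2.0 * src_train + 0.1           # pretend intensity transformation
X = extract_patches(src_train)
y = tgt_train.ravel()

src_test = rng.random((16, 16))
Xq = extract_patches(src_test)
nn = np.argmin(((Xq[:, None, :] - X[None]) ** 2).sum(-1), axis=1)
synth = y[nn].reshape(src_test.shape)       # predicted target contrast
```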
Affiliation(s)
- Amod Jog
- Dept. of Computer Science, The Johns Hopkins University, United States.
- Aaron Carass
- Dept. of Computer Science, The Johns Hopkins University, United States; Dept. of Electrical and Computer Engineering, The Johns Hopkins University, United States
- Snehashis Roy
- The Henry M. Jackson Foundation for the Advancement of Military Medicine, United States
- Dzung L Pham
- The Henry M. Jackson Foundation for the Advancement of Military Medicine, United States
- Jerry L Prince
- Dept. of Electrical and Computer Engineering, The Johns Hopkins University, United States
25
Pomann GM, Staicu AM, Lobaton EJ, Mejia AF, Dewey BE, Reich DS, Sweeney EM, Shinohara RT. A LAG FUNCTIONAL LINEAR MODEL FOR PREDICTION OF MAGNETIZATION TRANSFER RATIO IN MULTIPLE SCLEROSIS LESIONS. Ann Appl Stat 2016; 10:2325-2348. [PMID: 35791328 PMCID: PMC9252322 DOI: 10.1214/16-aoas981] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/14/2023]
Abstract
We propose a lag functional linear model to predict a response using multiple functional predictors observed at discrete grids with noise. Two procedures are proposed to estimate the regression parameter functions: (1) an approach that ensures smoothness for each value of time using generalized cross-validation; and (2) a global smoothing approach using a restricted maximum likelihood framework. Numerical studies are presented to analyze predictive accuracy in many realistic scenarios. The methods are employed to estimate a magnetic resonance imaging (MRI)-based measure of tissue damage (the magnetization transfer ratio, or MTR) in multiple sclerosis (MS) lesions, a disease that causes damage to the myelin sheaths around axons in the central nervous system. Our method of estimation of MTR within lesions is useful retrospectively in research applications where MTR was not acquired, as well as in clinical practice settings where acquiring MTR is not currently part of the standard of care. The model facilitates the use of commonly acquired imaging modalities to estimate MTR within lesions, and outperforms cross-sectional models that do not account for temporal patterns of lesion development and repair.
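Discretizing the functional predictors on a grid turns the model into a penalized least-squares problem; a numpy sketch under simulated data (the single-predictor form, ridge penalty, and simulation are illustrative, not the paper's estimators):

```python
import numpy as np

rng = np.random.default_rng(4)
n, T = 200, 50
t = np.linspace(0, 1, T)
dt = t[1] - t[0]
beta_true = np.sin(2 * np.pi * t)        # unknown coefficient function beta(t)

# Functional predictors observed on the grid; response via the discretized
# integral y_i = sum_t X_i(t) beta(t) dt, plus noise
X = rng.standard_normal((n, T))
y = dt * X @ beta_true + 0.01 * rng.standard_normal(n)

# Estimate beta(t) by ridge-penalized least squares on the discretized model
lam = 1e-6
A = dt * X
beta_hat = np.linalg.solve(A.T @ A + lam * np.eye(T), A.T @ y)
```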
Affiliation(s)
- Gina-Maria Pomann: Department of Biostatistics and Bioinformatics, Duke University, Durham, North Carolina 27710, USA
- Ana-Maria Staicu: Department of Statistics, North Carolina State University, Raleigh, North Carolina 27695, USA
- Edgar J Lobaton: Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, North Carolina 27695, USA
- Amanda F Mejia: Department of Statistics, Indiana University Bloomington, Bloomington, Indiana 47405, USA
- Blake E Dewey: National Institute of Neurological Disorders and Stroke, NIH, Bethesda, Maryland 20892, USA
- Daniel S Reich: National Institute of Neurological Disorders and Stroke, NIH, Bethesda, Maryland 20892, USA
- Russell T Shinohara: Department of Biostatistics and Epidemiology, Center for Clinical Epidemiology and Biostatistics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
26. Yang X, Han X, Park E, Aylward S, Kwitt R, Niethammer M. Registration of Pathological Images. Simulation and Synthesis in Medical Imaging 2016; 9968:97-107. [PMID: 29896582] [DOI: 10.1007/978-3-319-46630-9_10] [Citation(s) in RCA: 6]
Abstract
This paper proposes an approach to improve atlas-to-image registration accuracy with large pathologies. Instead of directly registering an atlas to a pathological image, the method learns a mapping from the pathological image to a quasi-normal image, for which more accurate registration is possible. Specifically, the method uses a deep variational convolutional encoder-decoder network to learn the mapping. Furthermore, the method estimates local mapping uncertainty through network inference statistics and uses those estimates to down-weight the image registration similarity measure in areas of high uncertainty. The performance of the method is quantified using synthetic brain tumor images and images from the brain tumor segmentation challenge (BRATS 2015).
Affiliation(s)
- Xu Han: UNC Chapel Hill, Chapel Hill, USA
- Roland Kwitt: Department of Computer Science, University of Salzburg, Austria
- Marc Niethammer: UNC Chapel Hill, Chapel Hill, USA; Biomedical Research Imaging Center, Chapel Hill, USA
27. Suttner LH, Mejia A, Dewey B, Sati P, Reich DS, Shinohara RT. Statistical estimation of white matter microstructure from conventional MRI. Neuroimage Clin 2016; 12:615-623. [PMID: 27722085] [PMCID: PMC5048084] [DOI: 10.1016/j.nicl.2016.09.010] [Citation(s) in RCA: 10]
Abstract
Diffusion tensor imaging (DTI) has become the predominant modality for studying white matter integrity in multiple sclerosis (MS) and other neurological disorders. Unfortunately, the use of DTI-based biomarkers in large multi-center studies is hindered by systematic biases that confound the study of disease-related changes. Furthermore, the site-to-site variability in multi-center studies is significantly higher for DTI than for conventional MRI-based markers. In our study, we apply the Quantitative MR Estimation Employing Normalization (QuEEN) model to estimate the four DTI measures: mean diffusivity (MD), fractional anisotropy (FA), radial diffusivity (RD), and axial diffusivity (AD). QuEEN uses a voxel-wise generalized additive regression model to relate the normalized intensities of one or more conventional MRI modalities to a quantitative modality, such as DTI. We assess the accuracy of the models by comparing the prediction error of estimated DTI images to the scan-rescan error in subjects with two sets of scans. Across the four DTI measures, the performance of the models is not consistent: both MD and RD estimations appear to be quite accurate, while AD estimation is less accurate than MD and RD; the accuracy of FA estimation is poor. Thus, in some cases when assessing white matter integrity, it may be sufficient to acquire conventional MRI sequences alone.
Affiliation(s)
- Leah H Suttner: Department of Biostatistics and Epidemiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, United States
- Amanda Mejia: Department of Biostatistics, The Johns Hopkins University, Baltimore, MD 21205, United States
- Blake Dewey: Translational Neuroradiology Unit, Division of Neuroimmunology and Neurovirology, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, United States
- Pascal Sati: Translational Neuroradiology Unit, Division of Neuroimmunology and Neurovirology, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, United States
- Daniel S Reich: Department of Biostatistics, The Johns Hopkins University, Baltimore, MD 21205, United States; Translational Neuroradiology Unit, Division of Neuroimmunology and Neurovirology, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892, United States
- Russell T Shinohara: Department of Biostatistics and Epidemiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, United States
28. Sevetlidis V, Giuffrida MV, Tsaftaris SA. Whole Image Synthesis Using a Deep Encoder-Decoder Network. Simulation and Synthesis in Medical Imaging 2016. [DOI: 10.1007/978-3-319-46630-9_13] [Citation(s) in RCA: 15]
29. MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans. Computational Intelligence and Neuroscience 2015; 2015:813696. [PMID: 26759553] [PMCID: PMC4680055] [DOI: 10.1155/2015/813696] [Citation(s) in RCA: 114]
Abstract
Many methods have been proposed for tissue segmentation in brain MRI scans. The multitude of proposed methods complicates choosing one method over the others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65–80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked based on their overall performance in segmenting GM, WM, and CSF, as measured by three evaluation metrics: the Dice coefficient, the 95th-percentile Hausdorff distance (H95), and the absolute volume difference (AVD). The results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand.
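For readers unfamiliar with the ranking metrics, the sketch below computes two of them, the Dice overlap and the absolute volume difference (AVD), on toy binary masks. It is a minimal illustration rather than challenge code, and H95 (the 95th-percentile Hausdorff distance) is omitted for brevity.

```python
import numpy as np

def dice(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

def avd(seg: np.ndarray, ref: np.ndarray) -> float:
    """Absolute volume difference as a percentage of the reference volume."""
    return abs(int(seg.sum()) - int(ref.sum())) / ref.sum() * 100.0

ref = np.zeros((10, 10), dtype=bool)
ref[2:8, 2:8] = True                # 36-voxel reference mask
seg = np.zeros_like(ref)
seg[3:8, 2:8] = True                # 30-voxel automatic segmentation

print(round(dice(seg, ref), 3))     # → 0.909
print(round(avd(seg, ref), 2))      # → 16.67
```

In the challenge these metrics are computed per tissue class (GM, WM, CSF) and combined into an overall rank.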
30. Bilgel M, An Y, Zhou Y, Wong DF, Prince JL, Ferrucci L, Resnick SM. Individual estimates of age at detectable amyloid onset for risk factor assessment. Alzheimers Dement 2015; 12:373-9. [PMID: 26588863] [DOI: 10.1016/j.jalz.2015.08.166] [Citation(s) in RCA: 63]
Abstract
INTRODUCTION: Individualized estimates of age at detectable amyloid-beta (Aβ) accumulation, distinct from amyloid positivity, allow for analysis of the onset age of Aβ accumulation as an outcome measure to understand risk factors. METHODS: Using longitudinal Pittsburgh compound B (PiB) positron emission tomography data from the Baltimore Longitudinal Study of Aging, we estimated the age at which each PiB+ individual began accumulating Aβ. We used survival analysis methods to quantify the risk of accumulating Aβ and differences in onset age of Aβ accumulation in relation to APOE ε4 status and sex among 36 APOE ε4 carriers and 83 noncarriers. RESULTS: Age at onset of Aβ accumulation for the APOE ε4- and ε4+ groups was 73.1 and 60.7 years, respectively. APOE ε4 positivity conferred a threefold risk of accumulating Aβ after adjusting for sex and education. DISCUSSION: Estimation of the onset age of amyloid accumulation may help gauge treatment efficacy in interventions to delay symptom onset in Alzheimer's disease.
Affiliation(s)
- Murat Bilgel: Laboratory of Behavioral Neuroscience, Intramural Research Program, National Institute on Aging, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Yang An: Laboratory of Behavioral Neuroscience, Intramural Research Program, National Institute on Aging, Baltimore, MD, USA
- Yun Zhou: Department of Radiology, Johns Hopkins Medical Institutions, Baltimore, MD, USA
- Dean F Wong: Department of Radiology, Johns Hopkins Medical Institutions, Baltimore, MD, USA
- Jerry L Prince: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Radiology, Johns Hopkins Medical Institutions, Baltimore, MD, USA; Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Luigi Ferrucci: Translational Gerontology Branch, Intramural Research Program, National Institute on Aging, Baltimore, MD, USA
- Susan M Resnick: Laboratory of Behavioral Neuroscience, Intramural Research Program, National Institute on Aging, Baltimore, MD, USA
31. Roy S, He Q, Sweeney E, Carass A, Reich DS, Prince JL, Pham DL. Subject-Specific Sparse Dictionary Learning for Atlas-Based Brain MRI Segmentation. IEEE J Biomed Health Inform 2015; 19:1598-609. [PMID: 26340685] [PMCID: PMC4562064] [DOI: 10.1109/jbhi.2015.2439242] [Citation(s) in RCA: 59]
Abstract
Quantitative measurements from segmentations of human brain magnetic resonance (MR) images provide important biomarkers for normal aging and disease progression. In this paper, we propose a patch-based tissue classification method from MR images that uses a sparse dictionary learning approach and atlas priors. Training data for the method consists of an atlas MR image, prior information maps depicting where different tissues are expected to be located, and a hard segmentation. Unlike most atlas-based classification methods that require deformable registration of the atlas priors to the subject, only affine registration is required between the subject and the training atlas. A subject-specific patch dictionary is created by learning relevant patches from the atlas. Then the subject patches are modeled as sparse combinations of learned atlas patches, leading to tissue memberships at each voxel. The combination of prior information in an example-based framework enables us to distinguish tissues having similar intensities but different spatial locations. We demonstrate the efficacy of the approach on whole-brain tissue segmentation in subjects with healthy anatomy and normal pressure hydrocephalus, as well as lesion segmentation in multiple sclerosis patients. For each application, quantitative comparisons are made against publicly available state-of-the-art approaches.
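The core mechanism, coding a subject patch as a sparse combination of atlas patches and pooling the selected atoms' labels into tissue memberships, can be sketched as below. This is an illustrative toy with a random dictionary and a tiny greedy orthogonal matching pursuit, not the authors' implementation; all sizes and names are invented.

```python
import numpy as np

def omp(D, x, k=3):
    """Greedy orthogonal matching pursuit: approximate x with k columns of D."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        atoms = D[:, support]
        coef, *_ = np.linalg.lstsq(atoms, x, rcond=None)
        residual = x - atoms @ coef
    return support, coef

rng = np.random.default_rng(1)
n_atoms, patch_dim, n_classes = 40, 27, 3          # e.g. 3x3x3 patches, 3 tissues
D = rng.normal(size=(patch_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)                     # unit-norm atlas patch dictionary
labels = rng.integers(0, n_classes, size=n_atoms)  # hard label of each atlas patch

x = D[:, 5] * 0.8 + D[:, 17] * 0.2                 # subject patch (mostly atom 5)
support, coef = omp(D, x, k=2)

# Memberships: distribute |coefficients| over the selected atoms' labels
m = np.zeros(n_classes)
for idx, c in zip(support, coef):
    m[labels[idx]] += abs(c)
m /= m.sum()
print(int(np.argmax(m)) == int(labels[5]))         # dominant atom drives the label
```

In the paper the dictionary is built from atlas patches near the (affinely aligned) voxel and the prior maps weight the memberships; the toy keeps only the sparse-coding step.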
32. MR image synthesis by contrast learning on neighborhood ensembles. Med Image Anal 2015; 24:63-76. [PMID: 26072167] [DOI: 10.1016/j.media.2015.05.002] [Citation(s) in RCA: 37]
Abstract
Automatic processing of magnetic resonance images is a vital part of neuroscience research. Yet even the best and most widely used medical image processing methods will not produce consistent results when their input images are acquired with different pulse sequences. Although intensity standardization and image synthesis methods have been introduced to address this problem, their performance remains dependent on knowledge and consistency of the pulse sequences used to acquire the images. In this paper, an image synthesis approach that first estimates the pulse sequence parameters of the subject image is presented. The estimated parameters are then used with a collection of atlas or training images to generate a new atlas image having the same contrast as the subject image. This additional image provides an ideal source from which to synthesize any other target pulse sequence image contained in the atlas. In particular, a nonlinear regression intensity mapping is trained from the new atlas image to the target atlas image and then applied to the subject image to yield the particular target pulse sequence within the atlas. Both intensity standardization and synthesis of missing tissue contrasts can be achieved using this framework. The approach was evaluated on both simulated and real data, and shown to be superior in both intensity standardization and synthesis to other established methods.
33. Chen M, Jog A, Carass A, Prince JL. Using image synthesis for multi-channel registration of different image modalities. Proc SPIE Int Soc Opt Eng 2015; 9413. [PMID: 26246653] [DOI: 10.1117/12.2082373] [Citation(s) in RCA: 12]
Abstract
This paper presents a multi-channel approach for performing registration between magnetic resonance (MR) images with different modalities. In general, a multi-channel registration cannot be used when the moving and target images do not have analogous modalities. In this work, we address this limitation by using a random forest regression technique to synthesize the missing modalities from the available ones. This allows a single channel registration between two different modalities to be converted into a multi-channel registration with two mono-modal channels. To validate our approach, two openly available registration algorithms and five cost functions were used to compare the label transfer accuracy of the registration with (and without) our multi-channel synthesis approach. Our results show that the proposed method produced statistically significant improvements in registration accuracy (at an α level of 0.001) for both algorithms and all cost functions when compared to a standard multi-modal registration using the same algorithms with mutual information.
Affiliation(s)
- Min Chen: Image Analysis and Communications Laboratory, Dept. of ECE, The Johns Hopkins University; Translational Neuroradiology Unit, National Institute of Neurological Disorders and Stroke
- Amod Jog: Image Analysis and Communications Laboratory, Dept. of ECE, The Johns Hopkins University
- Aaron Carass: Image Analysis and Communications Laboratory, Dept. of ECE, The Johns Hopkins University; Department of Computer Science, The Johns Hopkins University
- Jerry L Prince: Image Analysis and Communications Laboratory, Dept. of ECE, The Johns Hopkins University
34. He Q, Roy S, Jog A, Pham DL. An Example-Based Brain MRI Simulation Framework. Proc SPIE Int Soc Opt Eng 2015; 9412:94120P. [PMID: 28366973] [PMCID: PMC5374742] [DOI: 10.1117/12.2075687] [Citation(s) in RCA: 1]
Abstract
The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms, such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition, these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from a hard segmentation. The relationships between the MR image intensities and the anatomical models are learned using a patch-based regression that implicitly models the physics of MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on a statistical model of the training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more closely than simulations produced by a physics-based model.
Affiliation(s)
- Qing He: Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Snehashis Roy: Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Amod Jog: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Dzung L Pham: Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
35. Jog A, Carass A, Pham DL, Prince JL. Tree-Encoded Conditional Random Fields for Image Synthesis. Information Processing in Medical Imaging: Proceedings of the ... Conference 2015; 24:733-45. [PMID: 26221716] [PMCID: PMC4523797] [DOI: 10.1007/978-3-319-19992-4_58] [Citation(s) in RCA: 3]
Abstract
Magnetic resonance imaging (MRI) is the dominant modality for neuroimaging in clinical and research domains. The tremendous versatility of MRI as a modality can lead to large variability in terms of image contrast, resolution, noise, and artifacts. Variability can also manifest itself as missing or corrupt imaging data. Image synthesis has been recently proposed to homogenize and/or enhance the quality of existing imaging data in order to make them more suitable as consistent inputs for processing. We frame the image synthesis problem as an inference problem on a 3-D continuous-valued conditional random field (CRF). We model the conditional distribution as a Gaussian by defining quadratic association and interaction potentials encoded in leaves of a regression tree. The parameters of these quadratic potentials are learned by maximizing the pseudo-likelihood of the training data. Final synthesis is done by inference on this model. We applied this method to synthesize T2-weighted images from T1-weighted images, showing improved synthesis quality as compared to current image synthesis approaches. We also synthesized Fluid Attenuated Inversion Recovery (FLAIR) images, showing similar segmentations to those obtained from real FLAIRs. Additionally, we generated super-resolution FLAIRs showing improved segmentation.
Affiliation(s)
- Amod Jog: Image Analysis and Communications Laboratory, The Johns Hopkins University, Baltimore, USA
- Aaron Carass: Image Analysis and Communications Laboratory, The Johns Hopkins University, Baltimore, USA
- Dzung L. Pham: Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, USA
- Jerry L. Prince: Image Analysis and Communications Laboratory, The Johns Hopkins University, Baltimore, USA
36. Jog A, Carass A, Prince JL. Improving Magnetic Resonance Resolution with Supervised Learning. Proc IEEE Int Symp Biomed Imaging 2014; 2014:987-990. [PMID: 25405001] [DOI: 10.1109/isbi.2014.6868038] [Citation(s) in RCA: 24]
Abstract
Despite ongoing improvements in magnetic resonance (MR) imaging (MRI), considerable clinical and, to a lesser extent, research data are acquired at lower resolutions. For example, 1 mm isotropic acquisition of T1-weighted (T1-w) Magnetization Prepared Rapid Gradient Echo (MPRAGE) is standard practice; however, T2-weighted (T2-w) imaging, because of its longer relaxation times (and thus longer scan time), is still routinely acquired with slice thicknesses of 2-5 mm and in-plane resolution of 2-3 mm. This creates obvious fundamental problems when trying to process T1-w and T2-w data in concert. We present an automated supervised learning algorithm to generate high resolution data. The framework is similar to the brain hallucination work of Rousseau, taking advantage of new developments in regression-based image reconstruction. We present validation on phantom and real data, demonstrating the improvement over state-of-the-art super-resolution techniques.
Affiliation(s)
- Amod Jog: Image Analysis and Communications Laboratory, The Johns Hopkins University
- Aaron Carass: Image Analysis and Communications Laboratory, The Johns Hopkins University
- Jerry L Prince: Image Analysis and Communications Laboratory, The Johns Hopkins University
37. Jog A, Carass A, Pham DL, Prince JL. Random Forest FLAIR Reconstruction from T1-, T2-, and PD-Weighted MRI. Proc IEEE Int Symp Biomed Imaging 2014; 2014:1079-1082. [PMID: 25405002] [DOI: 10.1109/isbi.2014.6868061] [Citation(s) in RCA: 7]
Abstract
Fluid Attenuated Inversion Recovery (FLAIR) is a commonly acquired pulse sequence for multiple sclerosis (MS) patients. MS white matter lesions appear hyperintense in FLAIR images and have excellent contrast with the surrounding tissue. Hence, FLAIR images are commonly used in automated lesion segmentation algorithms to easily and quickly delineate the lesions. This expedites the computation of lesion load and its correlation with disease progression. Unfortunately, for numerous reasons, the acquired FLAIR images can be of poor quality and suffer from various artifacts. In the most extreme cases the data are absent, which poses a problem when consistently processing a large data set. We propose to fill in this gap by reconstructing a FLAIR image from the corresponding T1-weighted, T2-weighted, and PD-weighted images of the same subject using random forest regression. We show that the images we produce are similar to true high quality FLAIR images and also provide a good surrogate for tissue segmentation.
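The setup can be sketched in a few lines: learn a voxelwise mapping from (T1, T2, PD) intensity features to a FLAIR intensity on training subjects, then apply it to a subject whose FLAIR is missing. The toy below uses brute-force k-nearest-neighbour regression as a lightweight stand-in for the paper's random forest, on synthetic intensities; everything here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def knn_predict(train_X, train_y, X, k=5):
    """Brute-force k-NN regression over voxelwise feature vectors."""
    d2 = ((X[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=-1)
    idx = np.argsort(d2, axis=1)[:, :k]
    return train_y[idx].mean(axis=1)

# Synthetic "voxels": FLAIR intensity as some fixed function of (T1, T2, PD),
# unknown to the regressor and used only to generate training/test pairs
def flair_of(f):
    return 0.5 * f[:, 0] - 0.3 * f[:, 1] + 0.8 * f[:, 2]

train_X = rng.uniform(size=(2000, 3))   # training voxels: T1, T2, PD intensities
train_y = flair_of(train_X)             # corresponding true FLAIR intensities
test_X = rng.uniform(size=(100, 3))     # voxels of a subject missing FLAIR

pred = knn_predict(train_X, train_y, test_X)
rmse = np.sqrt(np.mean((pred - flair_of(test_X)) ** 2))
print(round(rmse, 3))                   # small: the mapping is locally smooth
```

In practice the paper's features are small image patches rather than single voxels, and a random forest replaces the k-NN lookup for speed and robustness.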
Affiliation(s)
- Amod Jog: Image Analysis and Communications Laboratory, The Johns Hopkins University
- Aaron Carass: Image Analysis and Communications Laboratory, The Johns Hopkins University
- Dzung L Pham: Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation for the Advancement of Military Medicine
- Jerry L Prince: Image Analysis and Communications Laboratory, The Johns Hopkins University