1. Hu S, Lei B, Wang S, Wang Y, Feng Z, Shen Y. Bidirectional Mapping Generative Adversarial Networks for Brain MR to PET Synthesis. IEEE Trans Med Imaging 2022; 41:145-157. [PMID: 34428138] [DOI: 10.1109/tmi.2021.3107013]
Abstract
Fusing multi-modality medical images, such as magnetic resonance (MR) imaging and positron emission tomography (PET), can provide complementary anatomical and functional information about the human body. However, PET data are not always available, owing to factors such as high cost and radiation hazard. This paper proposes a 3D end-to-end synthesis network, the Bidirectional Mapping Generative Adversarial Network (BMGAN), in which image contexts and latent vectors are jointly exploited for brain MR-to-PET synthesis. Specifically, a bidirectional mapping mechanism is designed to embed the semantic information of PET images into a high-dimensional latent space. Moreover, a 3D Dense-UNet generator architecture and hybrid loss functions are constructed to improve the visual quality of the cross-modality synthetic images. Notably, the proposed method synthesizes perceptually realistic PET images while preserving the diverse brain structures of different subjects. Experimental results demonstrate that the proposed method outperforms competing methods in terms of quantitative measures, qualitative comparisons, and evaluation metrics for classification.
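As a rough illustration of the bidirectional mapping idea described above (not the authors' code; the network shapes, names, and loss weights are invented for this sketch, and D stands for any discriminator returning logits), a generator maps MR to PET while an encoder maps the real PET back into the generator's latent space, and a latent reconstruction term ties the two directions together:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the paper's 3D Dense-UNet generator and its encoder.
class Generator(nn.Module):          # MR volume + latent code -> synthetic PET
    def __init__(self, z_dim=128):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(nn.Conv3d(1 + z_dim, 16, 3, padding=1),
                                 nn.ReLU(), nn.Conv3d(16, 1, 3, padding=1))
    def forward(self, mr, z):
        z_map = z.view(z.size(0), self.z_dim, 1, 1, 1).expand(
            -1, -1, *mr.shape[2:])           # broadcast latent code over the volume
        return self.net(torch.cat([mr, z_map], dim=1))

class Encoder(nn.Module):            # real PET -> latent code (the reverse mapping)
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                 nn.Linear(16, z_dim))
    def forward(self, pet):
        return self.net(pet)

def generator_loss(G, E, D, mr, pet, lam_rec=10.0, lam_lat=1.0):
    """Adversarial + voxel reconstruction + latent cycle terms (weights invented)."""
    z = E(pet)                        # embed the real PET into latent space
    fake = G(mr, z)
    adv = F.binary_cross_entropy_with_logits(D(fake), torch.ones_like(D(fake)))
    rec = F.l1_loss(fake, pet)        # voxel-wise fidelity
    lat = F.l1_loss(E(fake), z)       # bidirectional consistency in latent space
    return adv + lam_rec * rec + lam_lat * lat
```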
2. Jiang M, Zhi M, Wei L, Yang X, Zhang J, Li Y, Wang P, Huang J, Yang G. FA-GAN: Fused attentive generative adversarial networks for MRI image super-resolution. Comput Med Imaging Graph 2021; 92:101969. [PMID: 34411966] [PMCID: PMC8453331] [DOI: 10.1016/j.compmedimag.2021.101969]
Abstract
High-resolution magnetic resonance images provide fine-grained anatomical information, but acquiring such data requires a long scanning time. In this paper, a framework called the Fused Attentive Generative Adversarial Network (FA-GAN) is proposed to generate super-resolution MR images from low-resolution ones, effectively reducing scanning time while retaining high resolution. In FA-GAN, a local fusion feature block, consisting of three parallel paths with different convolution kernels, is proposed to extract image features at different scales, and a global feature fusion module, comprising a channel attention module, a self-attention module, and a fusion operation, is designed to enhance the important features of the MR image. Moreover, spectral normalization is introduced to stabilize the discriminator network. Forty sets of 3D magnetic resonance images (each containing 256 slices) were used to train the network, and ten sets were used to test the proposed method. The experimental results show that the PSNR and SSIM values of the super-resolution images generated by FA-GAN are higher than those of state-of-the-art reconstruction methods.
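Spectral normalization, mentioned above as the discriminator stabilizer, is available directly in PyTorch; a minimal discriminator sketch (layer sizes are illustrative, not the paper's):

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Each conv's weight matrix is rescaled by its largest singular value, bounding
# the discriminator's Lipschitz constant and stabilizing GAN training.
discriminator = nn.Sequential(
    spectral_norm(nn.Conv2d(1, 64, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(128, 1, 4)),  # patch-level real/fake logits
)
```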
Affiliation(s)
- Mingfeng Jiang
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China (corresponding author)
- Minghao Zhi
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Liying Wei
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Xiaocheng Yang
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Jucheng Zhang
- Department of Clinical Engineering, the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310019, China
- Yongming Li
- College of Communication Engineering, Chongqing University, Chongqing, China
- Pin Wang
- College of Communication Engineering, Chongqing University, Chongqing, China
- Jiahao Huang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK; National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK; National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK (corresponding author)
3. Zhao C, Dewey BE, Pham DL, Calabresi PA, Reich DS, Prince JL. SMORE: A Self-Supervised Anti-Aliasing and Super-Resolution Algorithm for MRI Using Deep Learning. IEEE Trans Med Imaging 2021; 40:805-817. [PMID: 33170776] [PMCID: PMC8053388] [DOI: 10.1109/tmi.2020.3037187]
Abstract
High-resolution magnetic resonance (MR) images are desired in many clinical and research applications. Acquiring such images with high signal-to-noise ratio (SNR), however, can require a long scan duration, which is hard on patient comfort, is more costly, and makes the images susceptible to motion artifacts. A very common practical compromise for both 2D and 3D MR imaging protocols is to acquire volumetric MR images with high in-plane resolution but lower through-plane resolution. In addition to having poor resolution in one orientation, 2D MRI acquisitions also suffer aliasing artifacts, which further degrade the appearance of these images. This paper presents SMORE, an approach based on convolutional neural networks (CNNs) that restores image quality by improving resolution and reducing aliasing in MR images. The approach is self-supervised, requiring no external training data, because the high-resolution and low-resolution data present in the image itself are used for training. For 3D MRI, the method consists of a single self-supervised super-resolution (SSR) deep CNN trained from the volumetric image data. For 2D MRI, a self-supervised anti-aliasing (SAA) deep CNN precedes the SSR CNN, also trained from the volumetric image data. Both methods were evaluated on a broad collection of MR data, including filtered and downsampled images, so that quantitative metrics could be computed and compared, and actually acquired low-resolution images, for which visual and sharpness measures could be computed and compared. The super-resolution method is shown to be visually and quantitatively superior to previously reported methods.
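The self-supervision trick described above can be sketched in a few lines: high-resolution in-plane slices are blurred and downsampled along one in-plane axis to mimic the through-plane degradation, yielding LR/HR training pairs from the subject's own scan (a simplified sketch; SMORE's actual degradation model and networks are more involved):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def make_training_pairs(volume, factor=4):
    """Build (LR, HR) slice pairs from a volume's own high-resolution plane.

    volume: 3D array with high in-plane (axes 0, 1) and low through-plane
    (axis 2) resolution. We degrade axis 0 of each in-plane slice so a network
    trained on these pairs learns to undo a through-plane-like blur.
    """
    pairs = []
    for k in range(volume.shape[2]):
        hr = volume[:, :, k]
        lr = gaussian_filter1d(hr, sigma=factor / 2.0, axis=0)  # slice-profile-like blur
        lr = lr[::factor, :]                                    # downsample one axis
        lr = np.repeat(lr, factor, axis=0)[: hr.shape[0], :]    # naive upsampling back
        pairs.append((lr, hr))
    return pairs
```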
4. Ma B, Zhao Y, Yang Y, Zhang X, Dong X, Zeng D, Ma S, Li S. MRI image synthesis with dual discriminator adversarial learning and difficulty-aware attention mechanism for hippocampal subfields segmentation. Comput Med Imaging Graph 2020; 86:101800. [PMID: 33130416] [DOI: 10.1016/j.compmedimag.2020.101800]
Abstract
BACKGROUND AND OBJECTIVE Hippocampal subfield (HS) segmentation is more accurate on high-resolution (HR) MRI images than on low-resolution (LR) MRI images. However, HR MRI data collection is more expensive and time-consuming. We therefore aim to generate HR MRI images from the corresponding LR MRI images for HS segmentation. METHODS AND RESULTS To generate high-quality HR MRI hippocampus-region images, we use a dual-discriminator adversarial learning model with a difficulty-aware attention mechanism in hippocampus regions (da-GAN). A local discriminator evaluates the visual quality of hippocampus-region voxels in the synthetic images, and the difficulty-aware attention mechanism built on this local discriminator better models the generation of hard-to-synthesize voxels in hippocampus regions. Additionally, we design a SemiDenseNet model with 3D dense CRF postprocessing and a Unet-based model to perform HS segmentation. The experiments were implemented on the Kulaga-Yoskovitz dataset. Compared with the conditional generative adversarial network (c-GAN), the PSNR of HR T2w images generated by our da-GAN improves by 0.406 and 0.347 in the left and right hippocampus regions, respectively. When using the two segmentation models to segment HS, the DSC values achieved on the generated HR T1w and T2w images are both higher than those on LR T1w images. CONCLUSION Experimental results show that the da-GAN model can generate higher-quality MRI images, especially in hippocampus regions, and that the generated images improve HS segmentation accuracy.
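One way to read the difficulty-aware mechanism above (a loose sketch, not the authors' exact formulation; local_disc is assumed to return per-voxel realism logits) is as a per-voxel reweighting of the reconstruction loss by how easily the local discriminator spots each synthesized voxel as fake:

```python
import torch

def difficulty_aware_l1(fake, real, local_disc):
    """Weight voxel-wise L1 by estimated synthesis difficulty.

    local_disc: network returning a per-voxel logit that its input is real
    (low values on fake voxels mean the voxel is poorly synthesized).
    """
    with torch.no_grad():
        p_real = torch.sigmoid(local_disc(fake))   # per-voxel realism score
    difficulty = 1.0 - p_real                      # hard voxels score near 1
    weights = 1.0 + difficulty                     # never below the base weight
    return (weights * (fake - real).abs()).mean()
```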
Affiliation(s)
- Baoqiang Ma
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
- Yan Zhao
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
- Yujing Yang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
- Xiaohui Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
- Xiaoxi Dong
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
- Debin Zeng
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
- Siyu Ma
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
- Shuyu Li
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
5. Chun J, Zhang H, Gach HM, Olberg S, Mazur T, Green O, Kim T, Kim H, Kim JS, Mutic S, Park JC. MRI super-resolution reconstruction for MRI-guided adaptive radiotherapy using cascaded deep learning: In the presence of limited training data and unknown translation model. Med Phys 2019; 46:4148-4164. [DOI: 10.1002/mp.13717]
Affiliation(s)
- Jaehee Chun
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Hao Zhang
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, USA
- H. Michael Gach
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Departments of Radiology and Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63110, USA
- Sven Olberg
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63110, USA
- Thomas Mazur
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Olga Green
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Taeho Kim
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Hyun Kim
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Sasa Mutic
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Justin C. Park
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63110, USA
6. Vyas K. Transfer Recurrent Feature Learning for Endomicroscopy Image Recognition. IEEE Trans Med Imaging 2019; 38:791-801. [PMID: 30273147] [DOI: 10.1109/tmi.2018.2872473]
Abstract
Probe-based confocal laser endomicroscopy (pCLE) is an emerging tool for epithelial cancer diagnosis that enables in-vivo microscopic imaging during endoscopic procedures and facilitates the development of automatic recognition algorithms to identify tissue status. In this paper, we propose a transfer recurrent feature learning framework for classification tasks on pCLE videos. In the first stage, discriminative features of single pCLE frames are learned via generative adversarial networks based on both pCLE and histology modalities. In the second stage, we use recurrent neural networks, taking the frame-based features as input, to handle the varying length and irregular shape of pCLE mosaics. Experiments on real pCLE datasets demonstrate that our approach outperforms state-of-the-art approaches with statistical significance; a binary classification accuracy of 84.1% has been achieved.
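Handling variable-length frame sequences, as the second stage above does, is routine in PyTorch with packed sequences; a small sketch (feature dimension and class count are invented):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

class SequenceClassifier(nn.Module):
    """GRU over per-frame feature vectors of varying sequence length."""
    def __init__(self, feat_dim=256, hidden=128, n_classes=2):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, feats, lengths):
        # feats: (batch, max_len, feat_dim), zero-padded; lengths: true lengths
        packed = pack_padded_sequence(feats, lengths.cpu(),
                                      batch_first=True, enforce_sorted=False)
        _, h_n = self.rnn(packed)          # final hidden state per sequence
        return self.head(h_n[-1])          # class logits

logits = SequenceClassifier()(torch.randn(4, 30, 256), torch.tensor([30, 12, 7, 21]))
```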
7. Yang X, Wang T, Lei Y, Higgins K, Liu T, Shim H, Curran WJ, Mao H, Nye JA. MRI-based attenuation correction for brain PET/MRI based on anatomic signature and machine learning. Phys Med Biol 2019; 64:025001. [PMID: 30524027] [PMCID: PMC7773209] [DOI: 10.1088/1361-6560/aaf5e0]
Abstract
Deriving accurate attenuation maps for PET/MRI remains a challenging problem because MRI voxel intensities are not related to photon attenuation properties, and bone/air interfaces have similarly low signal. This work presents a learning-based method to derive patient-specific computed tomography (CT) maps from routine T1-weighted MRI in their native space for attenuation correction of brain PET. We developed a machine-learning method using a sequence of alternating random forests under the framework of an iterative refinement model. Anatomical feature selection is included in both the training and prediction stages to achieve optimal performance. To evaluate its accuracy, we retrospectively investigated 17 patients, each of whom had undergone brain PET/CT and MR scans. The PET images were corrected for attenuation on CT images as ground truth, as well as on pseudo-CT (PCT) images generated from MR images. The PCT images showed a mean absolute error of 66.1 ± 8.5 HU, an average correlation coefficient of 0.974 ± 0.018, and average Dice similarity coefficients (DSC) larger than 0.85 for air, bone, and soft tissue. Side-by-side image comparisons and joint histograms demonstrated very good agreement between PET images corrected by PCT and by CT: the mean differences of voxel values in selected VOIs were less than 4%, the mean absolute difference over all active areas was around 2.5%, and the mean linear correlation coefficient was 0.989 ± 0.017. This work demonstrates a novel learning-based approach to automatically generate CT images from routine T1-weighted MR images, based on random forest regression with patch-based anatomical signatures that effectively capture the relationship between the CT and MR images. Reconstructed PET images using the PCT exhibit errors well below the accepted test/retest reliability of PET/CT, indicating high quantitative equivalence.
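A stripped-down sketch of the alternating-forest refinement loop described above (using scikit-learn; the feature extraction, stage count, and forest sizes are placeholders, not the paper's):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_refinement_stages(mri_feats, ct_targets, n_stages=3):
    """Train a chain of forests; each stage also sees the previous prediction.

    mri_feats: (n_voxels, n_features) patch/anatomical features per voxel.
    ct_targets: (n_voxels,) CT intensities (HU) at the same voxels.
    """
    stages, context = [], np.zeros((mri_feats.shape[0], 1))
    for _ in range(n_stages):
        X = np.hstack([mri_feats, context])          # features + current estimate
        rf = RandomForestRegressor(n_estimators=50).fit(X, ct_targets)
        context = rf.predict(X).reshape(-1, 1)       # refined pseudo-CT estimate
        stages.append(rf)
    return stages

def predict_pct(stages, mri_feats):
    context = np.zeros((mri_feats.shape[0], 1))
    for rf in stages:
        context = rf.predict(np.hstack([mri_feats, context])).reshape(-1, 1)
    return context.ravel()
```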
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Kristin Higgins
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Hyunsuk Shim
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Walter J Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Hui Mao
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Jonathon A Nye
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, United States of America
8. Nie D, Trullo R, Lian J, Wang L, Petitjean C, Ruan S, Wang Q, Shen D. Medical Image Synthesis with Deep Convolutional Adversarial Networks. IEEE Trans Biomed Eng 2018; 65:2720-2730. [PMID: 29993445] [PMCID: PMC6398343] [DOI: 10.1109/tbme.2018.2814538]
Abstract
Medical imaging plays a critical role in various clinical applications. However, due to considerations such as cost and radiation dose, the acquisition of certain image modalities may be limited. Medical image synthesis can thus be of great benefit, estimating a desired imaging modality without incurring an actual scan. In this paper, we propose a generative adversarial approach to this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a target image given a source image, and use an adversarial learning strategy to better model the nonlinear source-to-target mapping and produce more realistic target images. Moreover, the FCN incorporates an image-gradient-difference loss function to avoid generating blurry target images, and a long-term residual unit is explored to help train the network. We further apply the Auto-Context Model to implement a context-aware deep convolutional adversarial network. Experimental results show that our method is accurate and robust for synthesizing target images from the corresponding source images. In particular, we evaluate our method on three datasets, addressing the tasks of generating CT from MRI and generating 7T MRI from 3T MRI images; it outperforms the state-of-the-art methods under comparison on all datasets and tasks.
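The image-gradient-difference term mentioned above penalizes mismatched edges so the synthesized image stays sharp; a plausible 2D PyTorch rendering (the exact form and exponent in the paper may differ):

```python
import torch

def gradient_difference_loss(fake, real):
    """Penalize differences between spatial gradients of fake and real images.

    fake, real: (batch, channels, H, W) tensors. Blurry predictions have weak
    gradients, so matching gradient magnitudes discourages over-smoothing.
    """
    def grads(img):
        dy = img[:, :, 1:, :] - img[:, :, :-1, :]   # vertical finite differences
        dx = img[:, :, :, 1:] - img[:, :, :, :-1]   # horizontal finite differences
        return dx, dy
    fdx, fdy = grads(fake)
    rdx, rdy = grads(real)
    return ((fdx.abs() - rdx.abs()) ** 2).mean() + ((fdy.abs() - rdy.abs()) ** 2).mean()
```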
Affiliation(s)
- Dong Nie
- Department of Computer Science, Department of Radiology and BRIC, UNC-Chapel Hill, Chapel Hill, NC 27510, USA
- Roger Trullo
- Department of Radiology and BRIC, UNC-Chapel Hill, and the Department of Computer Science, University of Normandy
- Jun Lian
- Department of Radiation Oncology, UNC-Chapel Hill
- Li Wang
- Department of Radiology and BRIC, UNC-Chapel Hill
- Su Ruan
- Department of Computer Science, University of Normandy
- Qian Wang
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27510, USA, and the Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, South Korea
|
9
|
Lei Y, Shu HK, Tian S, Jeong JJ, Liu T, Shim H, Mao H, Wang T, Jani AB, Curran WJ, Yang X. Magnetic resonance imaging-based pseudo computed tomography using anatomic signature and joint dictionary learning. J Med Imaging (Bellingham) 2018; 5:034001. [PMID: 30155512 DOI: 10.1117/1.jmi.5.3.034001] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2018] [Accepted: 08/06/2018] [Indexed: 12/30/2022] Open
Abstract
Magnetic resonance imaging (MRI) provides a number of advantages over computed tomography (CT) for radiation therapy treatment planning; however, MRI lacks the electron density information necessary for accurate dose calculation. We propose a dictionary-learning-based method to derive electron density information from MRIs. Specifically, we first partition a given MR image into a set of patches, for which a joint dictionary learning method is used to directly predict a CT patch as a structured output. A feature selection method is then used to ensure prediction robustness. Finally, we combine all the predicted CT patches to obtain the final prediction for the given MR image. This prediction technique was validated in a clinical application using 14 patients with brain MR and CT images. The peak signal-to-noise ratio (PSNR), mean absolute error (MAE), normalized cross-correlation (NCC) indices, and similarity indices (SI) for air, soft-tissue, and bone regions were used to quantify prediction accuracy. The mean ± std of PSNR, MAE, and NCC were 22.4 ± 1.9 dB, 82.6 ± 26.1 HU, and 0.91 ± 0.03 across the 14 patients, and the SIs for air, soft-tissue, and bone regions were 0.98 ± 0.01, 0.88 ± 0.03, and 0.69 ± 0.08. These indices demonstrate the CT prediction accuracy of the proposed learning-based method. This CT image prediction technique could be used as a tool for MRI-based radiation treatment planning, or for PET attenuation correction in a PET/MRI scanner.
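The coupled-dictionary idea above can be sketched compactly: learn one dictionary over concatenated MR/CT patch pairs, then at test time sparse-code the MR half and reconstruct the CT half from the same codes (a toy sketch with scikit-learn; patch extraction and feature selection are omitted, and all data here are random placeholders):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

# Paired training patches, flattened: each row is (MR patch || CT patch).
mr_patches = np.random.rand(500, 64)     # placeholders for real extracted patches
ct_patches = np.random.rand(500, 64)
joint = np.hstack([mr_patches, ct_patches])

dico = DictionaryLearning(n_components=100, alpha=1.0, max_iter=20).fit(joint)
D_mr = dico.components_[:, :64]          # MR half of each joint atom
D_ct = dico.components_[:, 64:]          # CT half of each joint atom

# Test time: code an MR patch against the MR sub-dictionary ...
coder = SparseCoder(dictionary=D_mr, transform_algorithm='lasso_lars',
                    transform_alpha=0.1)
codes = coder.transform(np.random.rand(1, 64))
# ... and synthesize the CT patch with the same codes on the CT sub-dictionary.
ct_pred = codes @ D_ct
```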
Affiliation(s)
- Yang Lei
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Hui-Kuo Shu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Sibo Tian
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Jiwoong Jason Jeong
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Tian Liu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Hyunsuk Shim
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States; Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Hui Mao
- Emory University, Winship Cancer Institute, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Tonghe Wang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Ashesh B Jani
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Walter J Curran
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Xiaofeng Yang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
10. Dalca AV, Bouman KL, Freeman WT, Rost NS, Sabuncu MR, Golland P. Medical Image Imputation from Image Collections. IEEE Trans Med Imaging 2018; 38. [PMID: 30136936] [PMCID: PMC6393212] [DOI: 10.1109/tmi.2018.2866692]
Abstract
We present an algorithm for creating high resolution anatomically plausible images consistent with acquired clinical brain MRI scans with large inter-slice spacing. Although large data sets of clinical images contain a wealth of information, time constraints during acquisition result in sparse scans that fail to capture much of the anatomy. These characteristics often render computational analysis impractical as many image analysis algorithms tend to fail when applied to such images. Highly specialized algorithms that explicitly handle sparse slice spacing do not generalize well across problem domains. In contrast, we aim to enable application of existing algorithms that were originally developed for high resolution research scans to significantly undersampled scans. We introduce a generative model that captures fine-scale anatomical structure across subjects in clinical image collections and derive an algorithm for filling in the missing data in scans with large inter-slice spacing. Our experimental results demonstrate that the resulting method outperforms state-of-the-art upsampling super-resolution techniques, and promises to facilitate subsequent analysis not previously possible with scans of this quality. Our implementation is freely available at https://github.com/adalca/papago.
Affiliation(s)
- Adrian V. Dalca
- Computer Science and Artificial Intelligence Lab, MIT, and the Martinos Center for Biomedical Imaging, Massachusetts General Hospital, HMS
- Natalia S. Rost
- Department of Neurology, Massachusetts General Hospital, HMS
- Mert R. Sabuncu
- School of Electrical and Computer Engineering, and Meinig School of Biomedical Engineering, Cornell University
11. Cao X, Yang J, Gao Y, Wang Q, Shen D. Region-adaptive Deformable Registration of CT/MRI Pelvic Images via Learning-based Image Synthesis. IEEE Trans Image Process 2018; 27. [PMID: 29994091] [PMCID: PMC6165687] [DOI: 10.1109/tip.2018.2820424]
Abstract
Registration of pelvic CT and MRI is highly desired, as it can facilitate effective fusion of the two modalities for prostate cancer radiation therapy, i.e., using CT for dose planning and MRI for accurate organ delineation. However, due to the large inter-modality appearance gaps and the high shape/appearance variations of pelvic organs, pelvic CT/MRI registration is highly challenging. In this paper, we propose a region-adaptive deformable registration method for multi-modal pelvic image registration. Specifically, to handle the large appearance gaps, we first perform both CT-to-MRI and MRI-to-CT image synthesis by multi-target regression forest (MT-RF). Then, to use the complementary anatomical information in the two modalities for steering the registration, we select key points automatically from both modalities and use them together to guide correspondence detection in a region-adaptive fashion: CT mainly establishes correspondences for bone regions, while MRI establishes correspondences for soft-tissue regions. The number of key points is increased gradually during registration to hierarchically guide the symmetric estimation of the deformation fields. Experiments on both intra-subject and inter-subject deformable registration show improved performance compared with state-of-the-art multi-modal registration methods, demonstrating the potential of our method for routine prostate cancer radiation therapy.
12. Cao X, Yang J, Gao Y, Guo Y, Wu G, Shen D. Dual-core steered non-rigid registration for multi-modal images via bi-directional image synthesis. Med Image Anal 2017; 41:18-31. [PMID: 28533050] [PMCID: PMC5896773] [DOI: 10.1016/j.media.2017.05.004]
Abstract
In prostate cancer radiotherapy, computed tomography (CT) is widely used for dose planning. However, because CT has low soft-tissue contrast, manual contouring of the major pelvic organs is difficult. In contrast, magnetic resonance imaging (MRI) provides high soft-tissue contrast, making it ideal for accurate manual contouring. Contouring accuracy on CT can therefore be significantly improved if the contours in MRI can be mapped to the CT domain by registering the MRI with the CT of the same subject, eventually leading to higher treatment efficacy. In this paper, we propose a bi-directional image synthesis based approach for MRI-to-CT pelvic image registration. First, we use patch-wise random forest with an auto-context model to learn the appearance mapping from the CT to the MRI domain, and vice versa. Consequently, we can synthesize a pseudo-MRI whose anatomical structures are exactly the same as the CT's but with MRI-like appearance, and a pseudo-CT as well. Our MRI-to-CT registration can then be steered in a dual manner by simultaneously estimating two deformation pathways: 1) from the pseudo-CT to the actual CT, and 2) from the actual MRI to the pseudo-MRI. Next, a dual-core deformation fusion framework is developed to iteratively and effectively combine these two registration pathways using complementary information from both modalities. Experiments on a dataset with real pelvic CT and MRI show improved registration performance of the proposed method compared with conventional registration methods, indicating its high potential for translation to routine radiation therapy.
Affiliation(s)
- Xiaohuan Cao
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, China
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Yanrong Guo
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
13. Nie D, Trullo R, Lian J, Petitjean C, Ruan S, Wang Q, Shen D. Medical Image Synthesis with Context-Aware Generative Adversarial Networks. Med Image Comput Comput Assist Interv (MICCAI) 2017; 10435:417-425. [PMID: 30009283] [PMCID: PMC6044459] [DOI: 10.1007/978-3-319-66179-7_48]
Abstract
Computed tomography (CT) is critical for various clinical applications, e.g., radiation treatment planning and PET attenuation correction in MRI/PET scanners. However, CT exposes patients to radiation during acquisition, which may cause side effects. Compared with CT, magnetic resonance imaging (MRI) is much safer and involves no ionizing radiation. Researchers are therefore strongly motivated to estimate a CT image from the corresponding MR image of the same subject for radiation planning. In this paper, we propose a data-driven approach to this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate CT given the MR image. To better model the nonlinear mapping from MRI to CT and produce more realistic images, we use an adversarial training strategy to train the FCN, and propose an image-gradient-difference loss function to alleviate the blurriness of the generated CT. We further apply the Auto-Context Model (ACM) to implement a context-aware generative adversarial network. Experimental results show that our method is accurate and robust for predicting CT images from MR images, and outperforms three state-of-the-art methods under comparison.
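The Auto-Context Model mentioned above can be pictured as a cascade in which each stage receives the source image together with the previous stage's prediction as an extra input channel; a schematic sketch (stage count and the toy stage network are placeholders, not the paper's architecture):

```python
import torch
import torch.nn as nn

class AutoContextCascade(nn.Module):
    """Cascade of synthesis networks; each stage refines the previous prediction."""
    def __init__(self, make_stage, n_stages=3):
        super().__init__()
        # Every stage takes 2 channels: the MR image and the current CT estimate.
        self.stages = nn.ModuleList(make_stage() for _ in range(n_stages))

    def forward(self, mr):
        ct_est = torch.zeros_like(mr)            # stage 0 starts from a blank context
        for stage in self.stages:
            ct_est = stage(torch.cat([mr, ct_est], dim=1))
        return ct_est

# Toy stage: any 2-in, 1-out convolutional generator would do here.
make_stage = lambda: nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                   nn.Conv2d(16, 1, 3, padding=1))
pseudo_ct = AutoContextCascade(make_stage)(torch.randn(1, 1, 64, 64))
```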
Affiliation(s)
- Dong Nie
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Roger Trullo
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Normandie Univ, INSA Rouen, LITIS, 76000 Rouen, France
- Jun Lian
- Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Su Ruan
- Normandie Univ, INSA Rouen, LITIS, 76000 Rouen, France
- Qian Wang
- School of Biomedical Engineering, Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
14. Dalca AV, Bouman KL, Freeman WT, Rost NS, Sabuncu MR, Golland P. Population Based Image Imputation. Inf Process Med Imaging 2017; 10265:659-671. [PMID: 29379264] [PMCID: PMC5786165] [DOI: 10.1007/978-3-319-59050-9_52]
Abstract
We present an algorithm for creating high resolution anatomically plausible images consistent with acquired clinical brain MRI scans with large inter-slice spacing. Although large databases of clinical images contain a wealth of information, medical acquisition constraints result in sparse scans that miss much of the anatomy. These characteristics often render computational analysis impractical as standard processing algorithms tend to fail when applied to such images. Highly specialized or application-specific algorithms that explicitly handle sparse slice spacing do not generalize well across problem domains. In contrast, our goal is to enable application of existing algorithms that were originally developed for high resolution research scans to significantly undersampled scans. We introduce a model that captures fine-scale anatomical similarity across subjects in clinical image collections and use it to fill in the missing data in scans with large slice spacing. Our experimental results demonstrate that the proposed method outperforms current upsampling methods and promises to facilitate subsequent analysis not previously possible with scans of this quality.
Affiliation(s)
- Adrian V Dalca
- Computer Science and Artificial Intelligence Lab, MIT, Cambridge, USA
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, HMS, Charlestown, MA, USA
- William T Freeman
- Computer Science and Artificial Intelligence Lab, MIT, Cambridge, USA
- Google Research, Cambridge, MA, USA
- Natalia S Rost
- Department of Neurology, Massachusetts General Hospital, HMS, Boston, USA
- Mert R Sabuncu
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, HMS, Charlestown, MA, USA
- School of Electrical and Computer Engineering, Cornell, Ithaca, USA
- Polina Golland
- Computer Science and Artificial Intelligence Lab, MIT, Cambridge, USA
15. Chen M, Carass A, Jog A, Lee J, Roy S, Prince JL. Cross contrast multi-channel image registration using image synthesis for MR brain images. Med Image Anal 2017; 36:2-14. [PMID: 27816859] [PMCID: PMC5239759] [DOI: 10.1016/j.media.2016.10.005]
Abstract
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images of different modalities using modality-independent features or information-theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality, as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the choice of multi-channel deformable registration algorithm. With a single exception, all results demonstrated improvements when compared against single-channel registrations using the same algorithm with mutual information.
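The core trick above, using synthesis so that every channel compares like with like, reduces to a multi-channel mono-modal cost; a bare-bones sketch (numpy, nearest-neighbour warping and SSD only; a real registration would add interpolation, regularization, and an optimizer):

```python
import numpy as np

def two_channel_ssd(t1_fixed, synth_t2_fixed, t2_moving, synth_t1_moving, disp):
    """Mono-modal SSD summed over two synthesized channels.

    Channel 1 compares real T1 against the warped synthetic T1 (made from the
    moving T2 image); channel 2 compares synthetic T2 (made from the fixed T1
    image) against the warped real T2. disp: (2, H, W) displacement field.
    """
    H, W = t1_fixed.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    yw = np.clip((ys + disp[0]).round().astype(int), 0, H - 1)
    xw = np.clip((xs + disp[1]).round().astype(int), 0, W - 1)
    cost1 = np.sum((t1_fixed - synth_t1_moving[yw, xw]) ** 2)
    cost2 = np.sum((synth_t2_fixed - t2_moving[yw, xw]) ** 2)
    return cost1 + cost2
```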
Affiliation(s)
- Min Chen
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Aaron Carass
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Amod Jog
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA
- Junghoon Lee
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Radiation Oncology and Molecular Radiation Sciences, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Snehashis Roy
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20892, USA
- Jerry L Prince
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218, USA; Radiation Oncology and Molecular Radiation Sciences, The Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
16. Jog A, Carass A, Roy S, Pham DL, Prince JL. Random forest regression for magnetic resonance image synthesis. Med Image Anal 2017; 35:475-488. [PMID: 27607469] [PMCID: PMC5099106] [DOI: 10.1016/j.media.2016.08.009]
Abstract
By choosing different pulse sequences and their parameters, magnetic resonance imaging (MRI) can generate a large variety of tissue contrasts. This very flexibility, however, can yield inconsistencies across datasets or scanning sessions that in turn cause inconsistent automated image analysis. Although image synthesis of MR images has been shown to help address this problem, an inability to synthesize both T2-weighted brain images that include the skull and FLuid Attenuated Inversion Recovery (FLAIR) images has been reported. The method described herein, called REPLICA, addresses these limitations. REPLICA is a supervised random forest image synthesis approach that learns a nonlinear regression to predict intensities of alternate tissue contrasts given specific input tissue contrasts. Experimental results include direct image comparisons between synthetic and real images, results from image analysis tasks on both synthetic and real images, and comparisons against other state-of-the-art image synthesis methods. REPLICA is computationally fast and comparable to other methods on the tasks they are able to perform. Additionally, REPLICA can synthesize both T2-weighted images of the full head and FLAIR images, and can perform intensity standardization between different imaging datasets.
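At its core, the supervised regression above maps features of a source-contrast patch to the target-contrast intensity at the patch center; a minimal single-stage sketch (scikit-learn; REPLICA's multi-resolution features and forest ensembling are omitted, and the images here are random placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def patch_features(img, r=1):
    """Flatten the (2r+1)^2 neighborhood of every interior pixel into a row."""
    H, W = img.shape
    rows = [img[y - r:y + r + 1, x - r:x + r + 1].ravel()
            for y in range(r, H - r) for x in range(r, W - r)]
    return np.array(rows)

# src/tgt: co-registered images of the same subject in two contrasts.
src = np.random.rand(64, 64)             # placeholder for e.g. a T1-w slice
tgt = np.random.rand(64, 64)             # placeholder for e.g. a T2-w slice

X = patch_features(src)
y = tgt[1:-1, 1:-1].ravel()              # target intensity at each patch center
rf = RandomForestRegressor(n_estimators=30).fit(X, y)

synth = rf.predict(patch_features(src)).reshape(62, 62)  # synthesized contrast
```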
Affiliation(s)
- Amod Jog
- Dept. of Computer Science, The Johns Hopkins University, United States
- Aaron Carass
- Dept. of Computer Science, The Johns Hopkins University, United States; Dept. of Electrical and Computer Engineering, The Johns Hopkins University, United States
- Snehashis Roy
- The Henry M. Jackson Foundation for the Advancement of Military Medicine, United States
- Dzung L Pham
- The Henry M. Jackson Foundation for the Advancement of Military Medicine, United States
- Jerry L Prince
- Dept. of Electrical and Computer Engineering, The Johns Hopkins University, United States
17. Huynh T, Gao Y, Kang J, Wang L, Zhang P, Lian J, Shen D. Estimating CT Image From MRI Data Using Structured Random Forest and Auto-Context Model. IEEE Trans Med Imaging 2016; 35:174-83. [PMID: 26241970] [PMCID: PMC4703527] [DOI: 10.1109/tmi.2015.2461533]
Abstract
Computed tomography (CT) imaging is an essential tool in various clinical diagnoses and in radiotherapy treatment planning. Since CT image intensities are directly related to positron emission tomography (PET) attenuation coefficients, they are indispensable for attenuation correction (AC) of PET images. However, due to the relatively high radiation dose of CT scans, it is advisable to limit the acquisition of CT images. In addition, the new combined PET and magnetic resonance (MR) imaging scanners provide only MR images, which are unfortunately not directly applicable to AC. These issues strongly motivate the development of methods for reliably estimating a CT image from the corresponding MR image of the same subject. In this paper, we propose a learning-based method to tackle this challenging problem. Specifically, we first partition a given MR image into a set of patches. Then, for each patch, we use a structured random forest to directly predict a CT patch as a structured output, with a new ensemble model to ensure robust prediction. Image features are crafted to achieve multi-level sensitivity, with spatial information integrated through only rigid-body alignment, which helps avoid error-prone inter-subject deformable registration. Moreover, we use an auto-context model to iteratively refine the prediction. Finally, we combine all of the predicted CT patches to obtain the final prediction for the given MR image. We demonstrate the efficacy of our method on two datasets, human brain and prostate images. Experimental results show that our method can accurately predict CT images in various scenarios, even for images undergoing large shape variation, and it outperforms two state-of-the-art methods.
Affiliation(s)
- Tri Huynh
- IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Yaozong Gao
- Department of Computer Science and IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Jiayin Kang
- IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Li Wang
- IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Pei Zhang
- IDEA lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Jun Lian
- Department of Radiation Oncology, University of North Carolina at Chapel Hill, NC, USA
- Dinggang Shen
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 136-071, Korea
18. MR image synthesis by contrast learning on neighborhood ensembles. Med Image Anal 2015; 24:63-76. [PMID: 26072167] [DOI: 10.1016/j.media.2015.05.002]
Abstract
Automatic processing of magnetic resonance images is a vital part of neuroscience research. Yet even the best and most widely used medical image processing methods will not produce consistent results when their input images are acquired with different pulse sequences. Although intensity standardization and image synthesis methods have been introduced to address this problem, their performance remains dependent on knowledge and consistency of the pulse sequences used to acquire the images. In this paper, an image synthesis approach that first estimates the pulse sequence parameters of the subject image is presented. The estimated parameters are then used with a collection of atlas or training images to generate a new atlas image having the same contrast as the subject image. This additional image provides an ideal source from which to synthesize any other target pulse sequence image contained in the atlas. In particular, a nonlinear regression intensity mapping is trained from the new atlas image to the target atlas image and then applied to the subject image to yield the particular target pulse sequence within the atlas. Both intensity standardization and synthesis of missing tissue contrasts can be achieved using this framework. The approach was evaluated on both simulated and real data, and shown to be superior in both intensity standardization and synthesis to other established methods.