1
Wang T, Lei Y, Harms J, Ghavidel B, Lin L, Beitler JJ, McDonald M, Curran WJ, Liu T, Zhou J, Yang X. Learning-Based Stopping Power Mapping on Dual-Energy CT for Proton Radiation Therapy. Int J Part Ther 2021; 7:46-60. PMID: 33604415; PMCID: PMC7886267; DOI: 10.14338/ijpt-d-20-00020.1.
Abstract
Purpose Dual-energy computed tomography (DECT) has been used to derive relative stopping power (RSP) maps by obtaining the energy dependence of photon interactions. DECT-derived RSP maps can be compromised by image noise and artifacts when physics-based mapping techniques are used. This work presents a noise-robust learning-based method to predict RSP maps from DECT for proton radiation therapy. Materials and Methods The proposed method uses a residual attention cycle-consistent generative adversarial network to bring the DECT-to-RSP mapping close to a one-to-one mapping by introducing an inverse RSP-to-DECT mapping. To evaluate the proposed method, we retrospectively investigated 20 head-and-neck cancer patients with DECT scans acquired during proton radiation therapy simulation. Ground truth RSP values, calculated from chemical compositions, served as the learning targets for the DECT datasets during training; they were compared against the results of the proposed method using a leave-one-out cross-validation strategy. Results The predicted RSP maps showed an average normalized mean square error of 2.83% across the whole body volume and an average mean error of less than 3% in all volumes of interest. With additional simulated noise added to the DECT datasets, the proposed method maintained comparable performance, whereas the accuracy of the physics-based stoichiometric method degraded as the noise level increased. The average differences from ground truth in dose volume histogram metrics for clinical target volumes were less than 0.2 Gy for D95% and Dmax, with no statistical significance. The maximum difference in dose volume histogram metrics of organs at risk was around 1 Gy on average. Conclusion These results strongly indicate the high accuracy of RSP maps predicted by our machine-learning-based method and show its potential feasibility for proton treatment planning and dose calculation.
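The abstract reports a whole-body normalized mean square error (NMSE) but does not spell out the formula; a common convention, assumed here, normalizes the summed squared error by the summed squared ground truth. A minimal NumPy sketch with illustrative toy arrays, not the paper's data:

```python
import numpy as np

def nmse(pred: np.ndarray, truth: np.ndarray) -> float:
    """Normalized mean square error between a predicted and a ground-truth
    map, as a fraction (multiply by 100 for percent). This range-free
    normalization by the squared ground truth is one common convention."""
    return float(np.sum((pred - truth) ** 2) / np.sum(truth ** 2))

# Toy example: a uniform map with a small additive error.
truth = np.ones((4, 4))
pred = truth + 0.1
error_percent = nmse(pred, truth) * 100
```

A perfect prediction gives an NMSE of exactly zero, which makes the metric easy to sanity-check during training.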
Affiliation(s)
- Tonghe Wang, Yang Lei, Joseph Harms, Beth Ghavidel, Liyong Lin, Jonathan J Beitler, Mark McDonald, Walter J Curran, Tian Liu, Jun Zhou, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
2
Abstract
Accurate and efficient dose calculation is an important prerequisite for successful radiation therapy. However, all dose calculation algorithms commonly used in current clinical practice must compromise between accuracy and efficiency, which may result in unsatisfactory dose accuracy or highly intensive computation times in many clinical situations. The purpose of this work is to develop a novel dose calculation algorithm for radiation therapy based on deep learning. In this study we performed a feasibility investigation of fast and accurate dose calculation based on a deep learning technique. A two-dimensional (2D) fluence map was first converted into a three-dimensional (3D) volume using a ray traversal algorithm. A 3D U-Net-like deep residual network was then established to learn a mapping between this converted 3D volume, the CT, and the 3D dose distribution. An indirect relationship was thereby built between a fluence map and its corresponding 3D dose distribution without using overly complex neural networks. Two hundred patients, including nasopharyngeal, lung, rectum, and breast cancer cases, were collected and used to train the proposed network. An additional 47 patients were randomly selected to evaluate the accuracy of the proposed method by comparing dose distributions, dose volume histograms, and clinical indices with the results from a treatment planning system (TPS), which served as the ground truth in this study. The proposed deep learning-based dose calculation algorithm achieved good predictive performance. For the 47 tested patients, the average per-voxel bias and standard deviation of the deep learning-calculated dose (normalized to the prescription), relative to the TPS calculation, were 0.17% ± 2.28%. The average deep learning-calculated values and standard deviations for relevant clinical indices were compared with the TPS-calculated results, and t-test p-values demonstrated their consistency. In this study we developed a new deep learning-based dose calculation method. The approach was evaluated on clinical cases from different sites. Our results demonstrate its feasibility and reliability and indicate its great potential to improve the efficiency and accuracy of radiation dose calculation for different treatment modalities.
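The per-voxel bias statistic (mean ± standard deviation of the voxel-wise dose difference, normalized to the prescription dose) can be sketched as follows; the dose arrays below are toy values, not the paper's data:

```python
import numpy as np

def per_voxel_bias(dl_dose: np.ndarray, tps_dose: np.ndarray,
                   prescription: float) -> tuple[float, float]:
    """Mean and standard deviation of the per-voxel dose difference,
    expressed in percent of the prescription dose."""
    diff = 100.0 * (dl_dose - tps_dose) / prescription
    return float(diff.mean()), float(diff.std())

tps = np.array([50.0, 60.0, 70.0])   # TPS-calculated voxel doses (Gy)
dl = np.array([50.5, 59.5, 70.0])    # hypothetical DL-calculated doses (Gy)
mean_bias, sd = per_voxel_bias(dl, tps, prescription=70.0)
```

A near-zero mean with a small standard deviation, as in the reported 0.17% ± 2.28%, indicates no systematic over- or under-dosing.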
Affiliation(s)
- Jiawei Fan: Department of Radiation Oncology, Stanford University, 875 Blake Wilbur Drive, Stanford, CA 94305-5847, United States of America; on leave from the Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai 200032, People's Republic of China, and the Department of Oncology, Shanghai Medical College Fudan University, Shanghai 200032, People's Republic of China
- Lei Xing, Peng Dong, Yong Yang: Department of Radiation Oncology, Stanford University, 875 Blake Wilbur Drive, Stanford, CA 94305-5847, United States of America
- Jiazhou Wang, Weigang Hu: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai 200032, People's Republic of China; Department of Oncology, Shanghai Medical College Fudan University, Shanghai 200032, People's Republic of China
3
Dai X, Lei Y, Fu Y, Curran WJ, Liu T, Mao H, Yang X. Multimodal MRI synthesis using unified generative adversarial networks. Med Phys 2020; 47:6343-6354. PMID: 33053202; PMCID: PMC7796974; DOI: 10.1002/mp.14539.
Abstract
PURPOSE Complementary information obtained from multiple tissue contrasts helps physicians assess, diagnose, and plan treatment for a variety of diseases. However, acquiring multi-contrast magnetic resonance images (MRI) for every patient using multiple pulse sequences is time-consuming and expensive, and medical image synthesis has been demonstrated as an effective alternative. The purpose of this study is to develop a unified framework for multimodal MR image synthesis. METHODS A unified generative adversarial network consisting of only a single generator and a single discriminator was developed to learn the mappings among images of four different modalities. The generator took an image and its modality label as inputs and learned to synthesize the image in the target modality, while the discriminator was trained to distinguish between real and synthesized images and to classify them into their corresponding modalities. The network was trained and tested using multimodal brain MRI consisting of four different contrasts: T1-weighted (T1), T1-weighted contrast-enhanced (T1c), T2-weighted (T2), and fluid-attenuated inversion recovery (Flair). Quantitative assessments of the proposed method were made by computing the normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), visual information fidelity (VIF), and naturalness image quality evaluator (NIQE). RESULTS The proposed model was trained and tested on a cohort of 274 glioma patients with well-aligned multi-type MRI scans. After training, tests were conducted using each of T1, T1c, T2, and Flair as the single input modality to generate the remaining modalities. The proposed method showed high accuracy and robustness for image synthesis with any MRI modality available in the database as input. For example, with T1 as the input modality, the NMAEs for the generated T1c, T2, and Flair were 0.034 ± 0.005, 0.041 ± 0.006, and 0.041 ± 0.006; the PSNRs were 32.353 ± 2.525 dB, 30.016 ± 2.577 dB, and 29.091 ± 2.795 dB; the SSIMs were 0.974 ± 0.059, 0.969 ± 0.059, and 0.959 ± 0.059; the VIFs were 0.750 ± 0.087, 0.706 ± 0.097, and 0.654 ± 0.062; and the NIQEs were 1.396 ± 0.401, 1.511 ± 0.460, and 1.259 ± 0.358, respectively. CONCLUSIONS We proposed a novel multimodal MR image synthesis method based on a unified generative adversarial network. The network takes an image and its modality label as inputs and synthesizes multimodal images in a single forward pass. The results demonstrate that the proposed method can accurately synthesize multimodal MR images from a single MR image.
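Two of the reported image-quality metrics can be computed directly. Note that NMAE has several conventions in the literature; the range-normalized form below is an assumption, and the toy arrays are illustrative only:

```python
import numpy as np

def nmae(pred: np.ndarray, truth: np.ndarray) -> float:
    """Normalized mean absolute error; here normalized by the ground
    truth's intensity range (one common convention; papers may differ)."""
    return float(np.mean(np.abs(pred - truth)) / (truth.max() - truth.min()))

def psnr(pred: np.ndarray, truth: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB, with the peak taken as the
    ground truth's maximum intensity."""
    mse = np.mean((pred - truth) ** 2)
    return float(10.0 * np.log10(truth.max() ** 2 / mse))

truth = np.linspace(0.0, 1.0, 64).reshape(8, 8)  # toy normalized image
pred = truth + 0.01                              # small uniform error
```

Higher PSNR and lower NMAE both indicate a closer match; on this toy pair the uniform 0.01 error yields an NMAE of 0.01 and a PSNR of 40 dB.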
Affiliation(s)
- Xianjin Dai, Yang Lei, Yabo Fu, Walter J. Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Hui Mao: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
4
Wang T, Lei Y, Fu Y, Curran WJ, Liu T, Nye JA, Yang X. Machine learning in quantitative PET: A review of attenuation correction and low-count image reconstruction methods. Phys Med 2020; 76:294-306. PMID: 32738777; PMCID: PMC7484241; DOI: 10.1016/j.ejmp.2020.07.028.
Abstract
The rapid expansion of machine learning is offering a new wave of opportunities for nuclear medicine. This paper reviews applications of machine learning to attenuation correction (AC) and low-count image reconstruction in quantitative positron emission tomography (PET). Specifically, we present developments in machine learning methodology, ranging from random forests and dictionary learning to the latest convolutional neural network-based architectures. For PET attenuation correction, two general strategies are reviewed: (1) generating synthetic CT from MR or non-AC PET for the purpose of PET AC, and (2) direct conversion from non-AC PET to AC PET. For low-count PET reconstruction, recent deep learning-based studies and their potential advantages over conventional machine learning-based methods are presented and discussed. In each application, the proposed methods, study designs, and performance of published studies are listed, compared, and briefly discussed. Finally, the overall contributions and remaining challenges are summarized.
Affiliation(s)
- Tonghe Wang, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei, Yabo Fu: Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Jonathon A Nye: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
5
Dong X, Lei Y, Wang T, Higgins K, Liu T, Curran WJ, Mao H, Nye JA, Yang X. Deep learning-based attenuation correction in the absence of structural information for whole-body positron emission tomography imaging. Phys Med Biol 2020; 65:055011. PMID: 31869826; PMCID: PMC7099429; DOI: 10.1088/1361-6560/ab652c.
Abstract
Deriving accurate structural maps for attenuation correction (AC) of whole-body positron emission tomography (PET) remains challenging. Common problems include truncation, inter-scan motion, and erroneous transformation of structural voxel intensities to PET µ-map values (e.g. from modality artifacts, implanted devices, or contrast agents). This work presents a deep learning-based attenuation correction (DL-AC) method to generate attenuation-corrected PET (AC PET) from non-attenuation-corrected PET (NAC PET) images for whole-body PET imaging, without the use of structural information. A 3D patch-based cycle-consistent generative adversarial network (CycleGAN) is introduced, comprising a NAC-PET-to-AC-PET mapping and an inverse mapping from AC PET to NAC PET, which constrains the NAC-PET-to-AC-PET mapping to be closer to a one-to-one mapping. Since NAC PET images share similar anatomical structures with AC PET images but lack contrast information, residual blocks, which aim to learn the differences between NAC PET and AC PET, are used to construct the CycleGAN generators. After training, patches from NAC PET images were fed into the NAC-PET-to-AC-PET mapping to generate DL-AC PET patches, and the DL-AC PET image was then reconstructed through patch fusion. We conducted a retrospective study on 55 datasets of whole-body PET/CT scans to evaluate the proposed method. Comparing DL-AC PET with original AC PET, the average mean error (ME) and normalized mean square error (NMSE) over the whole body were 0.62% ± 1.26% and 0.72% ± 0.34%. The average intensity changes measured on sequential PET images with AC and DL-AC differed by less than 3% for both normal tissues and lesions, with no significant difference between AC and DL-AC PET, demonstrating that DL-AC PET images generated by the proposed method reach the same level as original AC PET images. The method demonstrates excellent quantification accuracy and reliability and is applicable to PET data collected on a single PET scanner or on hybrid platforms with computed tomography (PET/CT) or magnetic resonance imaging (PET/MRI).
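The cycle-consistency constraint described above can be illustrated with toy stand-ins for the two learned mappings. Real CycleGAN generators are residual CNNs trained adversarially; the linear gain/offset functions here are purely illustrative assumptions:

```python
import numpy as np

# Hypothetical stand-ins for the two learned mappings.
def forward_map(nac: np.ndarray) -> np.ndarray:
    """Toy NAC-PET -> AC-PET generator (linear for illustration)."""
    return 1.8 * nac + 0.2

def inverse_map(ac: np.ndarray) -> np.ndarray:
    """Toy AC-PET -> NAC-PET generator, the exact inverse here."""
    return (ac - 0.2) / 1.8

def cycle_loss(nac: np.ndarray, ac: np.ndarray) -> float:
    """L1 cycle-consistency: mapping forward then back (and back then
    forward) should reproduce the respective input."""
    loss_nac = np.mean(np.abs(inverse_map(forward_map(nac)) - nac))
    loss_ac = np.mean(np.abs(forward_map(inverse_map(ac)) - ac))
    return float(loss_nac + loss_ac)

nac = np.random.default_rng(0).random((8, 8))  # toy NAC PET patch
ac = forward_map(nac)                          # its "true" AC counterpart
```

When the two mappings are true inverses, as in this toy pair, the cycle loss vanishes; during training the loss pushes the learned generators toward that regime.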
Affiliation(s)
- Xue Dong, Yang Lei, Tonghe Wang: Department of Radiation Oncology, Emory University, Atlanta, GA
- Kristin Higgins, Tian Liu, Walter J. Curran, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Hui Mao: Winship Cancer Institute and Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Jonathon A. Nye: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
6
Lei Y, Dong X, Wang T, Higgins K, Liu T, Curran WJ, Mao H, Nye JA, Yang X. Whole-body PET estimation from low count statistics using cycle-consistent generative adversarial networks. Phys Med Biol 2019; 64:215017. PMID: 31561244; DOI: 10.1088/1361-6560/ab4891.
Abstract
Lowering either the administered activity or the scan time is desirable in PET imaging, as it decreases the patient's radiation burden or improves patient comfort and reduces motion artifacts. However, reducing these parameters lowers overall photon counts and increases noise, adversely impacting image contrast and quantification. To address this low-count-statistics problem, we propose a cycle-consistent generative adversarial network (CycleGAN) model to estimate diagnostic-quality PET images from low-count data. The CycleGAN learns a transformation that synthesizes, from low-count data, diagnostic PET images indistinguishable from our standard clinical protocol. The algorithm also learns an inverse transformation such that the cycle low-count PET data (the inverse of the synthetic estimate) generated from synthetic full-count PET is close to the true low-count PET. We introduced residual blocks into the generator to capture the differences between low-count and full-count PET in the training dataset and to better handle noise. The average mean error and normalized mean square error over the whole body were -0.14% ± 1.43% and 0.52% ± 0.19% with the CycleGAN model, compared with 5.59% ± 2.11% and 3.51% ± 4.14% for the original low-count PET images. Normalized cross-correlation improved from 0.970 to 0.996, and the peak signal-to-noise ratio increased from 39.4 dB to 46.0 dB with the proposed method. We developed a deep learning-based approach that accurately estimates diagnostic-quality PET datasets from one-eighth of the photon counts; it has great potential to improve low-count PET image quality to the level of diagnostic PET used in clinical settings.
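Normalized cross-correlation, one of the metrics reported above, is straightforward to compute; the zero-mean, unit-variance form below is the common convention and is assumed here, with toy images standing in for the PET data:

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two images: 1.0 means
    identical up to a linear intensity rescaling."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(1)
full = rng.random((16, 16))               # stand-in for full-count PET
low = full + 0.3 * rng.random((16, 16))   # noisier stand-in for low-count PET
```

By the Cauchy-Schwarz inequality NCC never exceeds 1, so values approaching 1 (0.996 in the abstract) indicate near-perfect structural agreement.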
Affiliation(s)
- Yang Lei (co-first author): Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
7
Liu Y, Lei Y, Wang Y, Shafai-Erfani G, Wang T, Tian S, Patel P, Jani AB, McDonald M, Curran WJ, Liu T, Zhou J, Yang X. Evaluation of a deep learning-based pelvic synthetic CT generation technique for MRI-based prostate proton treatment planning. Phys Med Biol 2019; 64:205022. PMID: 31487698; DOI: 10.1088/1361-6560/ab41af.
Abstract
The purpose of this work is to validate a deep learning-based method for pelvic synthetic CT (sCT) generation that can be used for prostate proton beam therapy treatment planning. We propose to integrate dense block minimization into a 3D cycle-consistent generative adversarial network (cycleGAN) framework to effectively learn the nonlinear mapping between MRI and CT pairs. A cohort of 17 patients with co-registered CT and MR pairs was used to test the deep learning-based sCT generation method by leave-one-out cross-validation. Image quality between the sCT and CT images, gamma analysis passing rates, dose-volume metrics, distal range displacement, and the individual pencil beam Bragg peak shift between sCT- and CT-based proton plans were evaluated. The average mean absolute error (MAE) was 51.32 ± 16.91 HU. The relative differences in the PTV dose-volume histogram (DVH) metrics between sCT and CT were generally less than 1%. The mean dose difference and mean absolute dose difference (as a percentage of the prescribed dose) were -0.07% ± 0.07% and 0.23% ± 0.08%. Mean gamma analysis pass rates for the 1 mm/1%, 2 mm/2%, and 3 mm/3% criteria with a 10% dose threshold were 92.39% ± 5.97%, 97.95% ± 2.95%, and 98.97% ± 1.62%, respectively. The median and mean ± standard deviation of the absolute maximum range differences were 0.09 cm and 0.23 ± 0.25 cm. The median and mean Bragg peak shifts among the 17 patients were 0.09 cm and 0.18 ± 0.07 cm. The image similarity, dosimetric agreement, and distal range agreement between sCT and original CT suggest the feasibility of further developing an MRI-only workflow for prostate proton radiotherapy.
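Gamma analysis combines a dose-difference and a distance-to-agreement criterion. A heavily simplified 1D global-gamma sketch (real implementations are 3D and interpolate between grid points) illustrates the idea; the profile values are toy data, not the paper's:

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing, dose_crit, dist_crit, threshold=0.1):
    """Simplified 1D global gamma analysis. For each reference point above
    the dose threshold, search all evaluated points for the minimum gamma
    index; the point passes if that minimum is <= 1. dose_crit is in the
    profiles' dose units, dist_crit and spacing in the same length units."""
    x = np.arange(len(ref)) * spacing
    passed, total = 0, 0
    for i, d_ref in enumerate(ref):
        if d_ref < threshold * ref.max():
            continue  # skip low-dose points, as with the 10% threshold above
        total += 1
        gamma_sq = ((eval_ - d_ref) / dose_crit) ** 2 \
                   + ((x - x[i]) / dist_crit) ** 2
        if gamma_sq.min() <= 1.0:
            passed += 1
    return 100.0 * passed / total

profile = np.array([0.0, 0.2, 0.6, 1.0, 1.0, 0.6, 0.2, 0.0])
rate = gamma_pass_rate(profile, profile, spacing=1.0,
                       dose_crit=0.03, dist_crit=3.0)
```

Identical profiles pass at 100%; tightening `dose_crit` and `dist_crit` (e.g. from 3 mm/3% to 1 mm/1%) lowers the pass rate, matching the trend in the reported results.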
Affiliation(s)
- Yingzi Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
8
Liu Y, Lei Y, Wang T, Kayode O, Tian S, Liu T, Patel P, Curran WJ, Ren L, Yang X. MRI-based treatment planning for liver stereotactic body radiotherapy: validation of a deep learning-based synthetic CT generation method. Br J Radiol 2019; 92:20190067. PMID: 31192695; DOI: 10.1259/bjr.20190067.
Abstract
OBJECTIVE The purpose of this work is to develop and validate a learning-based method to derive electron density from routine anatomical MRI for potential MRI-based SBRT treatment planning. METHODS We propose to integrate dense blocks into a cycle-consistent generative adversarial network (GAN) to effectively capture the relationship between CT and MRI for CT synthesis. A cohort of 21 patients with co-registered CT and MR pairs was used to evaluate the proposed method by leave-one-out cross-validation. Mean absolute error, peak signal-to-noise ratio, and normalized cross-correlation were used to quantify the imaging differences between the synthetic CT (sCT) and CT. The accuracy of the Hounsfield unit (HU) values in the sCT for dose calculation was evaluated by comparing the dose distributions in sCT-based and CT-based treatment plans. Clinically relevant dose-volume histogram metrics were then extracted from the sCT-based and CT-based plans for quantitative comparison. RESULTS The mean absolute error, peak signal-to-noise ratio, and normalized cross-correlation of the sCT were 72.87 ± 18.16 HU, 22.65 ± 3.63 dB, and 0.92 ± 0.04, respectively. No significant differences were observed in the majority of the planning target volume and organ-at-risk dose-volume histogram metrics (p > 0.05). The average pass rate of γ analysis was over 99% with 1%/1 mm acceptance criteria on the coronal plane intersecting the isocenter. CONCLUSION The image similarity and dosimetric agreement between sCT and original CT warrant further development of an MRI-only workflow for liver stereotactic body radiation therapy. ADVANCES IN KNOWLEDGE This work is the first deep-learning-based approach to generating abdominal sCT through a dense cycle GAN. The method can successfully generate small bony structures such as the ribs and can predict HU values for dose calculation with accuracy comparable to reference CT images.
Affiliation(s)
- Yingzi Liu, Yang Lei, Tonghe Wang, Oluwatosin Kayode, Sibo Tian, Tian Liu, Pretesh Patel, Walter J Curran, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia
- Lei Ren: Department of Radiation Oncology, Duke University, Durham, North Carolina
9
Lei Y, Harms J, Wang T, Liu Y, Shu HK, Jani AB, Curran WJ, Mao H, Liu T, Yang X. MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. Med Phys 2019; 46:3565-3581. PMID: 31112304; DOI: 10.1002/mp.13617.
Abstract
PURPOSE Automated synthetic computed tomography (sCT) generation from magnetic resonance imaging (MRI) would allow MRI-only treatment planning in radiation therapy, eliminating the need for CT simulation and simplifying the patient treatment workflow. In this work, the authors propose a novel method for generating sCT based on dense cycle-consistent generative adversarial networks (cycle GAN), a deep learning-based model that trains two transformation mappings (MRI to CT and CT to MRI) simultaneously. METHODS AND MATERIALS The cycle GAN-based model was developed to generate sCT images in a patch-based framework. Cycle GAN was applied to this problem because it includes an inverse transformation from CT to MRI, which helps constrain the model to learn a one-to-one mapping. Dense block-based networks were used to construct the generator of the cycle GAN. The network weights and variables were optimized via a gradient difference (GD) loss and a novel distance loss metric between sCT and original CT. RESULTS Leave-one-out cross-validation was performed to validate the proposed model. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) indices were used to quantify the differences between the sCT and original planning CT images. For the proposed method, the mean MAE between sCT and CT was 55.7 Hounsfield units (HU) for 24 brain cancer patients and 50.8 HU for 20 prostate cancer patients. The mean PSNR and NCC were 26.6 dB and 0.963 in the brain cases, and 24.5 dB and 0.929 in the pelvis. CONCLUSION We developed and validated a novel learning-based approach that generates CT images from routine MRIs using a dense cycle GAN model to effectively capture the relationship between CT and MRI. The proposed method can generate robust, high-quality sCT in minutes and offers strong potential for supporting near-real-time MRI-only treatment planning in the brain and pelvis.
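The gradient difference (GD) loss penalizes mismatched edge strength between the synthetic and real CT. The paper's exact formulation is not reproduced here, so the squared-difference-of-gradient-magnitudes form below is an assumption, applied to a toy 2D HU array:

```python
import numpy as np

def gradient_difference_loss(sct: np.ndarray, ct: np.ndarray) -> float:
    """Gradient difference loss between a synthetic CT and the real CT:
    compares absolute finite-difference gradients along each image axis,
    so sharp bone/air edges must match in strength, not just mean intensity."""
    loss = 0.0
    for axis in (0, 1):
        g_sct = np.abs(np.diff(sct, axis=axis))
        g_ct = np.abs(np.diff(ct, axis=axis))
        loss += np.mean((g_sct - g_ct) ** 2)
    return float(loss)

ct = np.tile(np.linspace(-1000.0, 1000.0, 8), (8, 1))  # toy HU ramp
```

A perfectly matching sCT gives zero GD loss; a blurred or flattened sCT retains the right mean HU but loses gradient magnitude, so this term pushes the generator to preserve edges.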
Affiliation(s)
- Yang Lei, Joseph Harms, Tonghe Wang, Yingzi Liu, Hui-Kuo Shu, Ashesh B Jani, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Hui Mao: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
10
Lei Y, Harms J, Wang T, Tian S, Zhou J, Shu HK, Zhong J, Mao H, Curran WJ, Liu T, Yang X. MRI-based synthetic CT generation using semantic random forest with iterative refinement. Phys Med Biol 2019; 64:085001. PMID: 30818292; PMCID: PMC7778365; DOI: 10.1088/1361-6560/ab0b66.
Abstract
Target delineation for radiation therapy treatment planning often benefits from magnetic resonance imaging (MRI) in addition to x-ray computed tomography (CT) due to MRI's superior soft tissue contrast. MRI-based treatment planning could reduce systematic MR-CT co-registration errors, medical cost, radiation exposure, and simplify clinical workflow. However, MRI-only based treatment planning is not widely used to date because treatment-planning systems rely on the electron density information provided by CTs to calculate dose. Additionally, air and bone regions are difficult to separategiven their similar intensities in MR imaging. The purpose of this work is to develop a learning-based method to generate patient-specific synthetic CT (sCT) from a routine anatomical MRI for use in MRI-only radiotherapy treatment planning. An auto-context model with patch-based anatomical features was integrated into a classification random forest to generate and improve semantic information. The semantic information along with anatomical features was then used to train a series of regression random forests based on the auto-context model. After training, the sCT of a new MRI can be generated by feeding anatomical features extracted from the MRI into the well-trained classification and regression random forests. The proposed algorithm was evaluated using 14 patient datasets withT1-weighted MR and corresponding CT images of the brain. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross correlation (NCC) were 57.45 ± 8.45 HU, 28.33 ± 1.68 dB, and 0.97 ± 0.01. We also compared the difference between dose maps calculated on the sCT and those on the original CT, using the same plan parameters. The average DVH differences among all patients are less than 0.2 Gy for PTVs, and less than 0.02 Gy for OARs. 
The sCT generation by the proposed method allows for dose calculation based on MR imaging alone, and may be a useful tool for MRI-based radiation treatment planning.
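The classification-then-regression auto-context idea described above can be sketched as follows: a classification random forest produces semantic (tissue-class) probabilities, which are appended to the patch features before a regression random forest predicts CT intensity. This is a minimal illustration on synthetic data using scikit-learn; the feature construction and variable names are hypothetical, not the authors' code.

```python
# Auto-context sketch: a classification forest supplies semantic context
# (class probabilities) that augments the features of a regression forest.
# All data here are synthetic; this illustrates the idea only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "patch features" for three tissue classes (air, soft tissue, bone)
n = 600
labels = rng.integers(0, 3, size=n)                   # semantic class per voxel
features = rng.normal(size=(n, 8)) + labels[:, None]  # class-dependent features
ct_values = np.array([-1000.0, 40.0, 700.0])[labels] + rng.normal(0, 20, n)

# Stage 1: classification forest yields semantic context (class probabilities).
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels)
context = clf.predict_proba(features)

# Stage 2: regression forest sees original features plus semantic context.
aug = np.hstack([features, context])
reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(aug, ct_values)

pred = reg.predict(aug)
mae = np.mean(np.abs(pred - ct_values))
print(f"training MAE: {mae:.1f} HU")
```

In the paper's iterative-refinement variant, stage 2 would itself be repeated, with each forest's output fed back as additional context for the next.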
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
- Joseph Harms
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
- Jun Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
- Hui-Kuo Shu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
- Jim Zhong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
11
Wang B, Lei Y, Tian S, Wang T, Liu Y, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Deeply supervised 3D fully convolutional networks with group dilated convolution for automatic MRI prostate segmentation. Med Phys 2019; 46:1707-1718. [PMID: 30702759 DOI: 10.1002/mp.13416] [Citation(s) in RCA: 115] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2018] [Revised: 01/18/2019] [Accepted: 01/24/2019] [Indexed: 12/15/2022] Open
Abstract
PURPOSE Reliable automated segmentation of the prostate is indispensable for image-guided prostate interventions. However, the segmentation task is challenging due to inhomogeneous intensity distributions and variation in prostate anatomy, among other problems. Manual segmentation can be time-consuming and is subject to inter- and intraobserver variation. We developed an automated deep learning-based method to address this technical challenge. METHODS We propose a three-dimensional (3D) fully convolutional network (FCN) with deep supervision and group dilated convolution to segment the prostate on magnetic resonance imaging (MRI). In this method, a deeply supervised mechanism was introduced into a 3D FCN to effectively alleviate the common exploding or vanishing gradient problems in training deep models, which forces the update process of the hidden layer filters to favor highly discriminative features. A group dilated convolution, which aggregates multiscale contextual information for dense prediction, was proposed to enlarge the effective receptive field of the convolutional neural network and thereby improve prediction accuracy at the prostate boundary. In addition, we introduced a combined loss function including cosine and cross-entropy terms, which measures the similarity and dissimilarity between segmented and manual contours, to further improve the segmentation accuracy. Prostate volumes manually segmented by experienced physicians were used as a gold standard against which our segmentation accuracy was measured. RESULTS The proposed method was evaluated on an internal dataset comprising 40 T2-weighted prostate MR volumes. Our method achieved a Dice similarity coefficient (DSC) of 0.86 ± 0.04, a mean surface distance (MSD) of 1.79 ± 0.46 mm, a 95% Hausdorff distance (95%HD) of 7.98 ± 2.91 mm, and an absolute relative volume difference (aRVD) of 15.65 ± 10.82. A public dataset (PROMISE12) including 50 T2-weighted prostate MR volumes was also employed to evaluate our approach.
Our method yielded a DSC of 0.88 ± 0.05, an MSD of 1.02 ± 0.35 mm, a 95% HD of 9.50 ± 5.11 mm, and an aRVD of 8.93 ± 7.56. CONCLUSION We developed a novel deeply supervised deep learning-based approach with group dilated convolution to automatically segment the prostate on MRI, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for image-guided interventions in prostate cancer.
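The combined loss described above, pairing cross entropy (a dissimilarity measure) with a cosine term (a similarity measure), can be sketched as follows. This is an illustrative formulation on synthetic probabilities, not the authors' exact definition.

```python
# Sketch of a combined segmentation loss: binary cross entropy plus
# (1 - cosine similarity) between predicted probabilities and the mask.
# Lower values indicate better agreement for both terms.
import numpy as np

def combined_loss(pred, target, eps=1e-7):
    """pred: (N,) predicted foreground probabilities; target: (N,) binary mask."""
    pred = np.clip(pred, eps, 1 - eps)
    ce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    cos = np.dot(pred, target) / (np.linalg.norm(pred) * np.linalg.norm(target) + eps)
    return ce + (1.0 - cos)

target = np.array([1, 1, 0, 0, 1], dtype=float)
good = np.array([0.9, 0.8, 0.1, 0.2, 0.9])   # close to the mask
bad  = np.array([0.2, 0.3, 0.8, 0.7, 0.1])   # far from the mask
print(combined_loss(good, target), combined_loss(bad, target))
```

A close prediction yields a much smaller loss than a poor one, which is the behavior the combined formulation is meant to enforce during training.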
Affiliation(s)
- Bo Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA; School of Physics and Electronic-Electrical Engineering, Ningxia University, Yinchuan, Ningxia, 750021, P.R. China
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
12
Shafai-Erfani G, Wang T, Lei Y, Tian S, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Dose evaluation of MRI-based synthetic CT generated using a machine learning method for prostate cancer radiotherapy. Med Dosim 2019; 44:e64-e70. [PMID: 30713000 DOI: 10.1016/j.meddos.2019.01.002] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2018] [Revised: 01/07/2019] [Accepted: 01/16/2019] [Indexed: 11/24/2022]
Abstract
Magnetic resonance imaging (MRI)-only radiotherapy treatment planning is attractive because MRI provides superior soft tissue contrast over computed tomography (CT) without ionizing radiation exposure. However, it requires the generation of a synthetic CT (SCT) from MRI for patient setup and dose calculation. In this study, we aim to investigate the accuracy of dose calculation in prostate cancer radiotherapy using SCTs generated from MRIs with our learning-based method. We retrospectively investigated a total of 17 treatment plans from 10 patients, each having both planning CTs (pCT) and MRIs acquired before treatment. The SCT was registered to the pCT for generating SCT-based treatment plans. The original pCT-based plans served as ground truth. Clinically relevant dose volume histogram (DVH) metrics were extracted from both the ground truth and SCT-based plans for comparison and evaluation. Gamma analysis was performed to compare the absorbed dose distributions between the SCT- and pCT-based plans of each patient. Gamma analysis of the dose distributions on pCT and SCT within 1%/1 mm at a 10% dose threshold showed a greater than 99% pass rate. The average differences in DVH metrics for planning target volumes (PTVs) were less than 1%, and similar metrics for organs at risk (OARs) were not statistically different. The SCT images created from MR images using our proposed machine learning method are accurate for dose calculation in prostate cancer radiation treatment planning. This study also demonstrates the great potential for MRI to completely replace CT scans in the process of simulation and treatment planning. However, further analysis of geometric distortion effects in MR images is needed. Digitally reconstructed radiographs (DRRs) can be generated within our method, and their accuracy for patient setup needs further analysis.
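The gamma analysis used for the plan comparison can be illustrated with a simplified one-dimensional version of the 1%/1 mm criterion. Clinical gamma tools operate on full 3D dose grids, so the following is only a sketch of the underlying computation.

```python
# Simplified 1-D gamma analysis: each reference point passes if some evaluated
# point lies within the combined dose-difference / distance-to-agreement
# ellipsoid (gamma <= 1). Global dose difference, with a low-dose threshold.
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd=0.01, dta_mm=1.0, threshold=0.1):
    """Fraction of reference points (above threshold) with gamma <= 1."""
    max_dose = dose_ref.max()
    x = np.arange(len(dose_ref)) * spacing_mm
    passed, considered = 0, 0
    for i, d_ref in enumerate(dose_ref):
        if d_ref < threshold * max_dose:
            continue  # skip points below the 10% dose threshold
        considered += 1
        gamma_sq = ((dose_eval - d_ref) / (dd * max_dose)) ** 2 \
                 + ((x - x[i]) / dta_mm) ** 2
        if gamma_sq.min() <= 1.0:
            passed += 1
    return passed / considered if considered else 1.0

x = np.linspace(0, 50, 501)                 # 0.1 mm grid spacing
ref = np.exp(-((x - 25) / 8) ** 2)          # Gaussian "dose profile"
shifted = np.exp(-((x - 25.3) / 8) ** 2)    # 0.3 mm shift: well within 1 mm DTA
print(gamma_pass_rate(ref, shifted, spacing_mm=0.1))
```

A small spatial shift passes because the distance-to-agreement term absorbs it; a larger dose or position error would push gamma above 1 and lower the pass rate.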
Affiliation(s)
- Ghazal Shafai-Erfani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
13
Yang X, Wang T, Lei Y, Higgins K, Liu T, Shim H, Curran WJ, Mao H, Nye JA. MRI-based attenuation correction for brain PET/MRI based on anatomic signature and machine learning. Phys Med Biol 2019; 64:025001. [PMID: 30524027 PMCID: PMC7773209 DOI: 10.1088/1361-6560/aaf5e0] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Deriving accurate attenuation maps for PET/MRI remains a challenging problem because MRI voxel intensities are not related to the properties of photon attenuation, and bone/air interfaces have similarly low signal. This work presents a learning-based method to derive patient-specific computed tomography (CT) maps from routine T1-weighted MRI in their native space for attenuation correction of brain PET. We developed a machine-learning-based method using a sequence of alternating random forests under the framework of an iterative refinement model. Anatomical feature selection is included in both the training and prediction stages to achieve optimal performance. To evaluate its accuracy, we retrospectively investigated 17 patients, each of whom underwent brain PET/CT and MR scans. The PET images were corrected for attenuation using CT images as ground truth, as well as using pseudo CT (PCT) images generated from the MR images. The PCT images showed a mean average error of 66.1 ± 8.5 HU, an average correlation coefficient of 0.974 ± 0.018, and average Dice similarity coefficients (DSC) larger than 0.85 for air, bone, and soft tissue. Side-by-side image comparisons and joint histograms demonstrated very good agreement between PET images corrected by PCT and by CT. The mean differences of voxel values in selected VOIs were less than 4%, the mean absolute difference over all active areas was around 2.5%, and the mean linear correlation coefficient between PET images corrected by CT and by PCT was 0.989 ± 0.017. This work demonstrates a novel learning-based approach to automatically generate CT images from routine T1-weighted MR images based on random forest regression with patch-based anatomical signatures that effectively capture the relationship between the CT and MR images. Reconstructed PET images using the PCT exhibit errors well below the accepted test/retest reliability of PET/CT, indicating high quantitative equivalence.
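The error, correlation, and Dice metrics reported above can be computed as in the following sketch on synthetic volumes. The HU cutoff used to define the bone class here is an illustrative assumption, not a value from the paper.

```python
# Sketch of pseudo-CT evaluation metrics: mean absolute error, linear
# correlation, and Dice similarity on a thresholded tissue class.
# Synthetic voxel values stand in for real CT / pseudo-CT volumes.
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(1)
ct = rng.choice([-1000.0, 40.0, 700.0], size=10000)   # air / soft tissue / bone
pct = ct + rng.normal(0, 50, size=ct.shape)           # pseudo CT with noise

mae = np.mean(np.abs(pct - ct))
corr = np.corrcoef(ct, pct)[0, 1]
bone_dsc = dice(ct > 300, pct > 300)                  # illustrative HU cutoff
print(f"MAE {mae:.1f} HU, r {corr:.3f}, bone DSC {bone_dsc:.3f}")
```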
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Kristin Higgins
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Hyunsuk Shim
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Walter J Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Hui Mao
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, United States of America
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Jonathon A Nye
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, United States of America
14
Lei Y, Tang X, Higgins K, Lin J, Jeong J, Liu T, Dhabaan A, Wang T, Dong X, Press R, Curran WJ, Yang X. Learning-based CBCT correction using alternating random forest based on auto-context model. Med Phys 2018; 46:601-618. [PMID: 30471129 DOI: 10.1002/mp.13295] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2017] [Revised: 10/17/2018] [Accepted: 11/12/2018] [Indexed: 11/11/2022] Open
Abstract
PURPOSE Quantitative cone beam CT (CBCT) imaging is in increasing demand for precise image-guided radiotherapy because it provides a foundation for advanced image-guided techniques, including accurate treatment setup, online tumor delineation, and patient dose calculation. However, CBCT is currently limited to patient setup in the clinic because of severe issues in its image quality. In this study, we develop a learning-based approach to improve CBCT image quality for extended clinical applications. MATERIALS AND METHODS An auto-context model is integrated into a machine learning framework to iteratively generate corrected CBCT (CCBCT) with high image quality. The first step is preprocessing of the training dataset, in which uninformative image regions are removed, noise is reduced, and CT and CBCT images are aligned. After a CBCT image is divided into a set of patches, the most informative and salient anatomical features are extracted to train random forests. Within each patch, an alternating random forest is applied to create a CCBCT patch as the output. Moreover, an iterative refinement strategy is exercised to enhance the image quality of the CCBCT. Finally, all the CCBCT patches are integrated to reconstruct the final CCBCT images. RESULTS The learning-based CBCT correction algorithm was evaluated using the leave-one-out cross-validation method applied to a cohort of 12 patients' brain data and 14 patients' pelvis data. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) indexes, and spatial nonuniformity (SNU) in selected regions of interest (ROIs) were used to quantify the proposed algorithm's correction accuracy and generated the following results: mean MAE = 12.81 ± 2.04 and 19.94 ± 5.44 HU, mean PSNR = 40.22 ± 3.70 and 31.31 ± 2.85 dB, mean NCC = 0.98 ± 0.02 and 0.95 ± 0.01, and SNU = 2.07 ± 3.36% and 2.07 ± 3.36% for brain and pelvis data, respectively.
CONCLUSION Preliminary results demonstrated that the novel learning-based correction method can significantly improve CBCT image quality. Hence, the proposed algorithm has great potential for improving CBCT image quality to support its clinical utility in CBCT-guided adaptive radiotherapy.
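The final integration step above, merging corrected patches back into a full image, is commonly done by averaging overlapping patch contributions. A one-dimensional toy sketch follows; it is not the authors' code, and real implementations work on 2D/3D patches.

```python
# Overlapping-patch reconstruction: each output voxel averages the values
# contributed by every patch that covers it.
import numpy as np

def reconstruct_from_patches(patches, starts, length):
    out = np.zeros(length)
    weight = np.zeros(length)
    for patch, s in zip(patches, starts):
        out[s:s + len(patch)] += patch      # accumulate patch contributions
        weight[s:s + len(patch)] += 1       # count overlaps per position
    return out / np.maximum(weight, 1)

signal = np.linspace(0.0, 1.0, 20)          # stand-in for a corrected image row
size, stride = 8, 4                         # overlapping patches, 50% overlap
starts = list(range(0, len(signal) - size + 1, stride))
patches = [signal[s:s + size].copy() for s in starts]

recon = reconstruct_from_patches(patches, starts, len(signal))
print(np.allclose(recon, signal))
```

When the patches agree exactly (as here), the average reproduces the original; when corrected patches disagree slightly at their overlaps, the averaging suppresses seam artifacts.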
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiangyang Tang
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Jolinta Lin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Jiwoong Jeong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA; Department of Medical Physics, Georgia Institute of Technology, Atlanta, GA, 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Anees Dhabaan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xue Dong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Robert Press
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
15
Lei Y, Jeong JJ, Wang T, Shu HK, Patel P, Tian S, Liu T, Shim H, Mao H, Jani AB, Curran WJ, Yang X. MRI-based pseudo CT synthesis using anatomical signature and alternating random forest with iterative refinement model. J Med Imaging (Bellingham) 2018; 5:043504. [PMID: 30840748 PMCID: PMC6280993 DOI: 10.1117/1.jmi.5.4.043504] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2018] [Accepted: 11/12/2018] [Indexed: 12/20/2022] Open
Abstract
We develop a learning-based method to generate patient-specific pseudo computed tomography (CT) from routinely acquired magnetic resonance imaging (MRI) for potential MRI-based radiotherapy treatment planning. The proposed pseudo CT (PCT) synthesis method consists of a training stage and a synthesizing stage. During the training stage, patch-based features are extracted from MRIs. Using feature selection, the most informative features are identified as an anatomical signature to train a sequence of alternating random forests based on an iterative refinement model. During the synthesizing stage, we feed the anatomical signature extracted from a new MRI into the sequence of well-trained forests for PCT synthesis. Our PCT was compared with the original CT (ground truth) to quantitatively assess the synthesis accuracy. The mean absolute error, peak signal-to-noise ratio, and normalized cross-correlation indices were 60.87 ± 15.10 HU, 24.63 ± 1.73 dB, and 0.954 ± 0.013 for 14 patients' brain data, and 29.86 ± 10.4 HU, 34.18 ± 3.31 dB, and 0.980 ± 0.025 for 12 patients' pelvic data, respectively. We have investigated a learning-based approach to synthesize CTs from routine MRIs and demonstrated its feasibility and reliability. The proposed PCT synthesis technique can be a useful tool for MRI-based radiation treatment planning.
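The PSNR and NCC indices reported above can be computed as in the following generic sketch; normalization conventions and the choice of data range vary between papers, so the values here are only illustrative.

```python
# Sketch of two image-similarity indices used to score pseudo-CT quality:
# peak signal-to-noise ratio (PSNR) and normalized cross-correlation (NCC).
import numpy as np

def psnr(ref, test, data_range):
    """PSNR in dB for a given intensity range (e.g. the HU span)."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

rng = np.random.default_rng(2)
ct = rng.uniform(-1000, 1000, size=5000)        # synthetic HU values
pct = ct + rng.normal(0, 30, size=ct.shape)     # pseudo CT with mild noise

print(f"PSNR {psnr(ct, pct, 2000):.1f} dB, NCC {ncc(ct, pct):.3f}")
```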
Affiliation(s)
- Yang Lei
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Jiwoong Jason Jeong
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Tonghe Wang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Hui-Kuo Shu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Pretesh Patel
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Sibo Tian
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Tian Liu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Hyunsuk Shim
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Emory University, Winship Cancer Institute, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Hui Mao
- Emory University, Winship Cancer Institute, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Ashesh B. Jani
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Walter J. Curran
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Xiaofeng Yang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States