101. Research outlook and state-of-the-art methods in context awareness data modeling and retrieval. Evolutionary Intelligence 2019. DOI: 10.1007/s12065-019-00274-x
102. Elevator Fault Detection Using Profile Extraction and Deep Autoencoder Feature Extraction for Acceleration and Magnetic Signals. Applied Sciences (Basel) 2019. DOI: 10.3390/app9152990
Abstract
In this paper, we propose a new algorithm for extracting events from time-series data and automatically computing highly informative deep features for fault detection. In the data extraction step, elevator start and stop events are extracted from sensor data comprising both acceleration and magnetic signals. A generic deep autoencoder model is then developed for automated feature extraction from the extracted profiles, and the resulting deep features are classified with a random forest algorithm for fault detection. Sensor data are labelled as healthy or faulty based on the maintenance actions recorded. The remaining healthy data are used to validate the model and demonstrate its ability to avoid false positives. Using the new deep features, we achieved above 90% fault-detection accuracy while avoiding false positives, outperforming classification based on existing features, which were also classified with a random forest for comparison. Our algorithm performs better because of the new deep features extracted from the dataset. This research will help predictive maintenance systems to detect false alarms, which will in turn reduce unnecessary visits of service technicians to installation sites.
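The pipeline described above reduces to two reusable steps: compress each extracted start/stop profile with an autoencoder, then classify the bottleneck features with a random forest. The Python sketch below illustrates that pattern; the layer sizes, profile length, and names such as `ProfileAutoencoder` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class ProfileAutoencoder(nn.Module):
    """Compress a fixed-length 1D sensor profile into a small feature vector."""
    def __init__(self, n_samples=512, n_features=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_samples, 128), nn.ReLU(),
            nn.Linear(128, n_features))
        self.decoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_samples))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def train_autoencoder(profiles, epochs=100):
    """profiles: float tensor of shape (n_events, n_samples) -- assumed layout."""
    model = ProfileAutoencoder(profiles.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, _ = model(profiles)
        loss = nn.functional.mse_loss(recon, profiles)  # reconstruction error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def fit_fault_classifier(model, profiles, labels):
    """Deep (bottleneck) features -> random forest healthy/faulty classifier."""
    with torch.no_grad():
        _, feats = model(profiles)
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(feats.numpy(), labels)  # labels derived from maintenance records
    return clf
```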
103. Harms J, Lei Y, Wang T, Zhang R, Zhou J, Tang X, Curran WJ, Liu T, Yang X. Paired cycle-GAN-based image correction for quantitative cone-beam computed tomography. Med Phys 2019; 46:3998-4009. PMID: 31206709. DOI: 10.1002/mp.13656
Abstract
PURPOSE The incorporation of cone-beam computed tomography (CBCT) has allowed for enhanced image-guided radiation therapy. While CBCT allows for daily 3D imaging, the images suffer from severe artifacts that limit its clinical potential. In this work, a deep learning-based method for generating high-quality corrected CBCT (CCBCT) images is proposed. METHODS The proposed method integrates a residual block concept into a cycle-consistent adversarial network (cycle-GAN) framework, called res-cycle GAN, to learn a mapping between CBCT images and paired planning CT images. Compared with a GAN, a cycle-GAN includes an inverse transformation from CBCT to CT images, which constrains the model by forcing calculation of both a CCBCT and a synthetic CBCT. A fully convolutional neural network with residual blocks is used in the generator to enable end-to-end CBCT-to-CT transformations. The proposed algorithm was evaluated using 24 sets of patient data in the brain and 20 sets of patient data in the pelvis. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) indices, and spatial non-uniformity (SNU) were used to quantify the correction accuracy, and the proposed method was compared to both a conventional scatter correction and another machine learning-based CBCT correction method. RESULTS Overall, the MAE, PSNR, NCC, and SNU were 13.0 HU, 37.5 dB, 0.99, and 0.05 in the brain and 16.1 HU, 30.7 dB, 0.98, and 0.09 in the pelvis for the proposed method: improvements of 45%, 16%, 1%, and 93% in the brain, and 71%, 38%, 2%, and 65% in the pelvis, over the uncorrected CBCT images. The proposed method showed superior image quality compared to the scatter correction method, reducing noise and artifact severity, and produced images with less noise and fewer artifacts than the comparison machine learning-based method. CONCLUSIONS The authors have developed a novel deep learning-based method to generate high-quality corrected CBCT images. The proposed method increases onboard CBCT image quality, making it comparable to that of the planning CT. With further evaluation and clinical implementation, this method could enable quantitative adaptive radiation therapy.
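For readers unfamiliar with the res-cycle GAN idea the abstract describes, the sketch below shows its two ingredients in PyTorch: a generator built from residual blocks, and a cycle-consistency loss tying the CBCT-to-CT and CT-to-CBCT mappings together. This is a simplified 2D illustration under assumed layer counts and channel widths; the adversarial terms and the paper's exact architecture are omitted.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv block whose output is added back to its input (the 'res' in res-cycle GAN)."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Fully convolutional CBCT->CT (or CT->CBCT) mapping built from residual blocks."""
    def __init__(self, n_blocks=4, ch=64):
        super().__init__()
        layers = [nn.Conv2d(1, ch, 3, padding=1), nn.ReLU()]
        layers += [ResidualBlock(ch) for _ in range(n_blocks)]
        layers += [nn.Conv2d(ch, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def cycle_loss(G, F, cbct, ct, lam=10.0):
    """Cycle-consistency term: CBCT->CT->CBCT and CT->CBCT->CT must return to the input.
    A full objective would add adversarial terms from two discriminators, one per domain."""
    l1 = nn.functional.l1_loss
    return lam * (l1(F(G(cbct)), cbct) + l1(G(F(ct)), ct))
```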
Affiliation(s)
- Joseph Harms, Yang Lei, Tonghe Wang, Rongxiao Zhang, Jun Zhou, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiangyang Tang: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
104. Liu Y, Lei Y, Wang Y, Wang T, Ren L, Lin L, McDonald M, Curran WJ, Liu T, Zhou J, Yang X. MRI-based treatment planning for proton radiotherapy: dosimetric validation of a deep learning-based liver synthetic CT generation method. Phys Med Biol 2019; 64:145015. PMID: 31146267. PMCID: PMC6635951. DOI: 10.1088/1361-6560/ab25bc
Abstract
Magnetic resonance imaging (MRI) has been widely used alongside computed tomography (CT) in radiation therapy because MRI improves the accuracy and reliability of target delineation thanks to its superior soft tissue contrast over CT. The MRI-only treatment process is an active field of research because it could eliminate systematic MR-CT co-registration errors, reduce medical cost, avoid diagnostic radiation exposure, and simplify the clinical workflow. The purpose of this work is to validate a deep learning-based method for abdominal synthetic CT (sCT) generation through image evaluation and dosimetric assessment in a commercial proton pencil beam treatment planning system (TPS). This study proposes to integrate dense blocks into a 3D cycle-consistent generative adversarial network (cycle GAN) framework to effectively learn the nonlinear mapping between MRI and CT pairs. A cohort of 21 patients with co-registered CT and MR pairs was used to test the deep learning-based sCT image quality by leave-one-out cross-validation. The sCT image quality, dosimetric accuracy, and distal range fidelity were rigorously checked in side-by-side comparison against the corresponding original CT images. The average mean absolute error (MAE) was 72.87 ± 18.16 HU. The relative differences in the PTV dose volume histogram (DVH) metrics between sCT and CT were generally less than 1%. Mean 3D gamma analysis passing rates for 1 mm/1%, 2 mm/2%, and 3 mm/3% criteria with a 10% dose threshold were 90.76% ± 5.94%, 96.98% ± 2.93%, and 99.37% ± 0.99%, respectively. The median, mean, and standard deviation of the absolute maximum range differences were 0.170 cm, 0.186 cm, and 0.155 cm. The image similarity, dosimetric agreement, and distal range agreement between sCT and original CT suggest the feasibility of further developing an MRI-only workflow for liver proton radiotherapy.
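The dosimetric claims above hinge on DVH metrics compared between sCT-based and CT-based dose grids. As a rough illustration of what such a comparison computes, here is a small numpy sketch of a cumulative DVH and two common metrics; the array names, binning, and the D95 choice are illustrative assumptions, not the paper's evaluation code.

```python
import numpy as np

def cumulative_dvh(dose, mask, n_bins=200):
    """Cumulative DVH: fraction of the structure receiving at least each dose level.
    dose: 3D dose grid in Gy; mask: boolean mask of the structure (e.g. the PTV)."""
    d = dose[mask]
    levels = np.linspace(0.0, d.max(), n_bins)
    volume_frac = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_frac

def d95(dose, mask):
    """Minimum dose covering 95% of the structure volume (5th dose percentile)."""
    return np.percentile(dose[mask], 5.0)

def mean_dose(dose, mask):
    return dose[mask].mean()

def rel_diff(metric, dose_sct, dose_ct, mask):
    """Relative difference of a DVH metric between sCT- and CT-based dose,
    the kind of quantity the abstract reports as 'generally less than 1%'."""
    return (metric(dose_sct, mask) - metric(dose_ct, mask)) / metric(dose_ct, mask)
```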
Affiliation(s)
- Yingzi Liu, Yang Lei, Yinan Wang, Tonghe Wang, Liyong Lin, Mark McDonald, Walter J Curran, Tian Liu, Jun Zhou, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Lei Ren: Department of Radiation Oncology, Duke University, Durham, NC 27708, USA
105. Zhong L, Chen Y, Zhang X, Liu S, Wu Y, Liu Y, Lin L, Feng Q, Chen W, Yang W. Flexible Prediction of CT Images From MRI Data Through Improved Neighborhood Anchored Regression for PET Attenuation Correction. IEEE J Biomed Health Inform 2019; 24:1114-1124. PMID: 31295129. DOI: 10.1109/jbhi.2019.2927368
Abstract
Given the complicated relationship between magnetic resonance imaging (MRI) signals and attenuation values, attenuation correction in hybrid positron emission tomography (PET)/MRI systems remains a challenging task. Existing methods are either time-consuming or require large numbers of samples to train their models. In this paper, an efficient approach for predicting pseudo computed tomography (CT) images from T1- and T2-weighted MRI data with limited data is proposed. The approach uses improved neighborhood anchored regression (INAR) as a baseline method to pre-calculate projected matrices that flexibly predict pseudo CT patches. Techniques including augmentation of the MR/CT dataset, learning of nonlinear descriptors of MR images, hierarchical search for nearest neighbors, data-driven optimization, and a multi-regressor ensemble are adopted to improve the effectiveness of the approach. In total, 22 healthy subjects were enrolled in the study. The pseudo CT images obtained using INAR with the multi-regressor ensemble yielded a mean absolute error (MAE) of 92.73 ± 14.86 HU, a peak signal-to-noise ratio of 29.77 ± 1.63 dB, a Pearson linear correlation coefficient of 0.82 ± 0.05, a Dice similarity coefficient of 0.81 ± 0.03, and a relative mean absolute error (rMAE) in PET attenuation correction of 1.30 ± 0.20% compared with true CT images. Moreover, the proposed INAR method, without any refinement strategies, achieved competitive results with only seven subjects (MAE 106.89 ± 14.43 HU, rMAE 1.51 ± 0.21%). The experiments demonstrate the superior performance of the proposed method over six competing methods and show that it can rapidly generate pseudo CT images suitable for PET attenuation correction.
106. Yu B, Zhou L, Wang L, Shi Y, Fripp J, Bourgeat P. Ea-GANs: Edge-Aware Generative Adversarial Networks for Cross-Modality MR Image Synthesis. IEEE Trans Med Imaging 2019; 38:1750-1762. PMID: 30714911. DOI: 10.1109/tmi.2019.2895894
Abstract
Magnetic resonance (MR) imaging is a widely used medical imaging protocol that can be configured to provide different contrasts between the tissues of the human body. By setting different scanning parameters, each MR imaging modality reflects unique visual characteristics of the scanned body part, benefiting subsequent analysis from multiple perspectives. To exploit the complementary information from multiple imaging modalities, cross-modality MR image synthesis has attracted increasing research interest recently. However, most existing methods only focus on minimizing pixel/voxel-wise intensity differences and ignore the textural details of image content structure, which affects the quality of the synthesized images. In this paper, we propose edge-aware generative adversarial networks (Ea-GANs) for cross-modality MR image synthesis. Specifically, we integrate edge information, which reflects the textural structure of image content and depicts the boundaries of different objects, to bridge this gap. Corresponding to different learning strategies, two frameworks are proposed: a generator-induced Ea-GAN (gEa-GAN) and a discriminator-induced Ea-GAN (dEa-GAN). The gEa-GAN incorporates the edge information via its generator, while the dEa-GAN does so in both the generator and the discriminator, so that edge similarity is also adversarially learned. In addition, the proposed Ea-GANs are 3D-based and utilize hierarchical features to capture contextual information. The experimental results demonstrate that the proposed Ea-GANs, especially the dEa-GAN, outperform multiple state-of-the-art methods for cross-modality MR image synthesis in both qualitative and quantitative measures. Moreover, the dEa-GAN also generalizes well to generic image synthesis tasks on benchmark facade, map, and cityscape datasets.
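The core of the Ea-GAN idea, that edge maps extracted from synthetic and real images should match in addition to the intensities, can be sketched compactly. The snippet below uses 2D Sobel filters for brevity (the paper operates in 3D), and the loss weighting is an assumed placeholder.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """Approximate edge magnitude of a batch of 2D images (N, 1, H, W) via Sobel filters."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_aware_l1(fake, real, lam_edge=1.0):
    """Voxel-wise L1 plus an L1 penalty on the edge maps (gEa-GAN-style generator term).
    In the dEa-GAN variant the edge map is additionally fed to the discriminator,
    so edge similarity is also learned adversarially."""
    return F.l1_loss(fake, real) + lam_edge * F.l1_loss(sobel_edges(fake), sobel_edges(real))
```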
107. Arabi H, Zeng G, Zheng G, Zaidi H. Novel adversarial semantic structure deep learning for MRI-guided attenuation correction in brain PET/MRI. Eur J Nucl Med Mol Imaging 2019; 46:2746-2759. PMID: 31264170. DOI: 10.1007/s00259-019-04380-x
Affiliation(s)
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Guodong Zeng: Institute for Surgical Technology and Biomechanics, University of Bern, CH-3014 Bern, Switzerland
- Guoyan Zheng: Institute for Surgical Technology and Biomechanics, University of Bern, CH-3014 Bern, Switzerland; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, CH-1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, 9700 RB Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, 500 Odense, Denmark
108. [Radiotherapy treatment planning of prostate cancer using magnetic resonance imaging]. Cancer Radiother 2019; 23:281-289. PMID: 31151816. DOI: 10.1016/j.canrad.2018.09.005
Abstract
PURPOSE Magnetic resonance imaging (MRI) plays an increasing role in radiotherapy dose planning. MRI offers superior soft tissue contrast compared to computed tomography (CT) and could therefore provide better delineation of target volumes and organs at risk than CT for radiotherapy. Furthermore, an MRI-only radiotherapy workflow would eliminate the registration errors inherent in registering MRI to CT. However, estimating the electron density of tissues from MRI images remains a challenging issue. The purpose of this work was to design and evaluate a pseudo-CT generation method for prostate cancer treatments. MATERIALS AND METHODS A pseudo-CT was generated for ten prostate cancer patients using an elastic deformation-based method. For each patient, the dose delivered to the patient was calculated using both the planning CT and the pseudo-CT, and the dose differences between the two were investigated. RESULTS The mean relative dose difference in the planning target volume is 0.9% on average, ranging from 0.1% to 1.7%. In organs at risk, this value is 1.8%, 0.8%, 0.8%, and 1.0% on average in the rectum, the right and left femoral heads, and the bladder, respectively. CONCLUSION The dose calculated using the pseudo-CT is very close to the dose calculated using the CT for both organs at risk and the PTV. These results confirm that pseudo-CT images generated using the proposed method could be used to calculate radiotherapy treatment doses on MRI images.
109. DC2Anet: Generating Lumbar Spine MR Images from CT Scan Data Based on Semi-Supervised Learning. Applied Sciences (Basel) 2019. DOI: 10.3390/app9122521
Abstract
Magnetic resonance imaging (MRI) plays a significant role in the diagnosis of lumbar disc disease. However, the use of MRI is limited because of its high cost and significant operating and processing time. More importantly, MRI is contraindicated for some patients with claustrophobia or cardiac pacemakers due to the possibility of injury. In contrast, computed tomography (CT) scans are much less expensive, are faster, and do not face the same limitations. In this paper, we propose a method for estimating lumbar spine MR images based on CT images using a novel objective function and a dual cycle-consistent adversarial network (DC2Anet) with semi-supervised learning. The objective function includes six independent loss terms to balance quantitative and qualitative losses, enabling the generation of a realistic and accurate synthetic MR image. DC2Anet is also capable of semi-supervised learning, and the network is general enough for supervised or unsupervised setups. Experimental results prove that the method is accurate, being able to construct MR images that closely approximate reference MR images, while also outperforming four other state-of-the-art methods.
110. Lei Y, Harms J, Wang T, Liu Y, Shu HK, Jani AB, Curran WJ, Mao H, Liu T, Yang X. MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. Med Phys 2019; 46:3565-3581. PMID: 31112304. DOI: 10.1002/mp.13617
Abstract
PURPOSE Automated synthetic computed tomography (sCT) generation based on magnetic resonance imaging (MRI) would allow for MRI-only treatment planning in radiation therapy, eliminating the need for CT simulation and simplifying the patient treatment workflow. In this work, the authors propose a novel method for generating sCT based on dense cycle-consistent generative adversarial networks (cycle GAN), a deep learning-based model that trains two transformation mappings (MRI to CT and CT to MRI) simultaneously. METHODS AND MATERIALS The cycle GAN-based model was developed to generate sCT images in a patch-based framework. Cycle GAN was applied to this problem because it includes an inverse transformation from CT to MRI, which helps constrain the model to learn a one-to-one mapping. Dense block-based networks were used to construct the generator of the cycle GAN. The network weights and variables were optimized via a gradient difference (GD) loss and a novel distance loss metric between sCT and original CT. RESULTS Leave-one-out cross-validation was performed to validate the proposed model. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross correlation (NCC) indices were used to quantify the differences between the sCT and original planning CT images. For the proposed method, the mean MAE between sCT and CT was 55.7 Hounsfield units (HU) for 24 brain cancer patients and 50.8 HU for 20 prostate cancer patients. The mean PSNR and NCC were 26.6 dB and 0.963 in the brain cases, and 24.5 dB and 0.929 in the pelvis. CONCLUSION We developed and validated a novel learning-based approach to generate sCT images from routine MRIs based on a dense cycle GAN model that effectively captures the relationship between CT and MRI. The proposed method can generate robust, high-quality sCT in minutes and offers strong potential for supporting near real-time MRI-only treatment planning in the brain and pelvis.
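A dense block, the building unit this abstract places inside the cycle-GAN generator, concatenates every earlier feature map into the input of each new layer. Below is a minimal 2D PyTorch sketch of that connectivity; the growth rate, depth, and kernel sizes are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense block: each layer receives the concatenation of all previous feature
    maps, so features are reused and gradients flow through short paths."""
    def __init__(self, in_ch=16, growth=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1),
                nn.ReLU()))
            ch += growth  # channel count grows as outputs are concatenated

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)
```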
Affiliation(s)
- Yang Lei, Joseph Harms, Tonghe Wang, Yingzi Liu, Hui-Kuo Shu, Ashesh B Jani, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hui Mao: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
111. Jin CB, Kim H, Liu M, Jung W, Joo S, Park E, Ahn YS, Han IH, Lee JI, Cui X. Deep CT to MR Synthesis Using Paired and Unpaired Data. Sensors (Basel) 2019; 19:E2361. PMID: 31121961. PMCID: PMC6566351. DOI: 10.3390/s19102361
Abstract
Magnetic resonance (MR) imaging plays a highly important role in radiotherapy treatment planning for the segmentation of tumor volumes and organs. However, the use of MR is limited owing to its high cost and the increased use of metal implants in patients. This study is aimed at patients for whom MR is contraindicated owing to claustrophobia or cardiac pacemakers, and at the many scenarios in which only computed tomography (CT) images are available, such as emergencies, situations lacking an MR scanner, and situations in which the cost of obtaining an MR scan is prohibitive. In medical practice, our approach could be adopted by radiologists as a screening method for observing abnormal anatomical lesions in diseases that are difficult to diagnose by CT. The proposed approach can estimate an MR image based on a CT image using paired and unpaired training data. In contrast to existing synthesis methods for medical imaging, which depend on sparse pairwise-aligned data or plentiful unpaired data, the proposed approach alleviates the rigid registration requirement of paired training and overcomes the context-misalignment problem of unpaired training. A generative adversarial network was trained to transform two-dimensional (2D) brain CT image slices into 2D brain MR image slices, combining adversarial, dual cycle-consistent, and voxel-wise losses. Qualitative and quantitative comparisons against independent paired and unpaired training methods demonstrated the superiority of our approach.
Affiliation(s)
- Cheng-Bin Jin, Hakil Kim, Mingjie Liu, Xuenan Cui: School of Information and Communication Engineering, INHA University, Incheon 22212, Korea
- Wonmo Jung: Acupuncture and Meridian Science Research Center, Kyung Hee University, Seoul 02447, Korea
- Young Saem Ahn: Department of Computer Engineering, INHA University, Incheon 22212, Korea
- In Ho Han, Jae Il Lee: Department of Neurosurgery, Pusan National University Hospital, Pusan 49241, Korea
112. Mishra KM, Krogerus TR, Huhtala KJ. Fault Detection of Elevator Systems Using Deep Autoencoder Feature Extraction. 2019 13th International Conference on Research Challenges in Information Science (RCIS) 2019:1-6. DOI: 10.1109/rcis.2019.8876984
113. Lei Y, Harms J, Wang T, Tian S, Zhou J, Shu HK, Zhong J, Mao H, Curran WJ, Liu T, Yang X. MRI-based synthetic CT generation using semantic random forest with iterative refinement. Phys Med Biol 2019; 64:085001. PMID: 30818292. PMCID: PMC7778365. DOI: 10.1088/1361-6560/ab0b66
Abstract
Target delineation for radiation therapy treatment planning often benefits from magnetic resonance imaging (MRI) in addition to x-ray computed tomography (CT) due to MRI's superior soft tissue contrast. MRI-based treatment planning could reduce systematic MR-CT co-registration errors, medical cost, and radiation exposure, and simplify the clinical workflow. However, MRI-only treatment planning is not widely used to date because treatment-planning systems rely on the electron density information provided by CT to calculate dose. Additionally, air and bone regions are difficult to separate given their similar intensities in MR imaging. The purpose of this work is to develop a learning-based method to generate patient-specific synthetic CT (sCT) from a routine anatomical MRI for use in MRI-only radiotherapy treatment planning. An auto-context model with patch-based anatomical features was integrated into a classification random forest to generate and improve semantic information. The semantic information, along with anatomical features, was then used to train a series of regression random forests based on the auto-context model. After training, the sCT of a new MRI can be generated by feeding anatomical features extracted from the MRI into the trained classification and regression random forests. The proposed algorithm was evaluated using 14 patient datasets with T1-weighted MR and corresponding CT images of the brain. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross correlation (NCC) were 57.45 ± 8.45 HU, 28.33 ± 1.68 dB, and 0.97 ± 0.01. We also compared dose maps calculated on the sCT with those calculated on the original CT, using the same plan parameters. The average DVH differences among all patients are less than 0.2 Gy for PTVs and less than 0.02 Gy for OARs. The proposed sCT generation method allows dose calculation based on MR imaging alone and may be a useful tool for MRI-based radiation treatment planning.
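The auto-context scheme described here, in which a classification forest supplies semantic tissue probabilities that successive regression forests consume and refine, can be outlined in a few lines of scikit-learn. The feature construction and iteration count below are simplified assumptions; the paper's patch-based anatomical features are far richer.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def train_auto_context(features, labels, hu_values, n_iters=3):
    """Auto-context training sketch. `features` is an assumed (n_voxels, n_feat)
    matrix of patch features, `labels` are tissue classes, `hu_values` are the
    target CT numbers. The classification forest's probabilities serve as the
    semantic context appended to each successive regression forest's input."""
    clf = RandomForestClassifier(n_estimators=100).fit(features, labels)
    context = clf.predict_proba(features)          # semantic information
    regressors = []
    for _ in range(n_iters):
        aug = np.hstack([features, context])       # anatomical + context features
        reg = RandomForestRegressor(n_estimators=100).fit(aug, hu_values)
        regressors.append(reg)
        context = reg.predict(aug).reshape(-1, 1)  # refined estimate becomes context
    return clf, regressors

def predict_sct(clf, regressors, features):
    """Mirror the training sequence to produce a per-voxel HU prediction."""
    context = clf.predict_proba(features)
    for reg in regressors:
        context = reg.predict(np.hstack([features, context])).reshape(-1, 1)
    return context.ravel()
```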
Affiliation(s)
- Yang Lei, Joseph Harms, Tonghe Wang, Sibo Tian, Jun Zhou, Hui-Kuo Shu, Jim Zhong, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hui Mao: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
114. Wang T, Lei Y, Manohar N, Tian S, Jani AB, Shu HK, Higgins K, Dhabaan A, Patel P, Tang X, Liu T, Curran WJ, Yang X. Dosimetric study on learning-based cone-beam CT correction in adaptive radiation therapy. Med Dosim 2019; 44:e71-e79. PMID: 30948341. DOI: 10.1016/j.meddos.2019.03.001
Abstract
INTRODUCTION Cone-beam CT (CBCT) image quality is important for quantitative analysis in adaptive radiation therapy. However, due to severe artifacts, CBCTs have so far been used primarily for verifying patient setup. We have developed a learning-based image quality improvement method that can provide CBCTs with image quality comparable to planning CTs (pCTs). The accuracy of dose calculations based on these CBCTs is unknown. In this study, we investigate the dosimetric accuracy of our corrected CBCT (CCBCT) in brain stereotactic radiosurgery (SRS) and pelvic radiotherapy. MATERIALS AND METHODS We retrospectively investigated a total of 32 treatment plans from 22 patients, each of whom had both original treatment pCTs and CBCTs acquired during treatment setup. The CCBCT and original CBCT (OCBCT) were registered to the pCT to generate CCBCT-based and OCBCT-based treatment plans. The original pCT-based plans served as ground truth. Clinically relevant dose volume histogram (DVH) metrics were extracted from the ground truth, OCBCT-based, and CCBCT-based plans for comparison. Gamma analysis was also performed to compare the absorbed dose distributions between the pCT-based and OCBCT/CCBCT-based plans of each patient. RESULTS CCBCTs demonstrated better image contrast and more accurate HU ranges when compared side-by-side with OCBCTs. For pelvic radiotherapy plans, the mean dose error in DVH metrics for the planning target volume (PTV), bladder, and rectum was significantly reduced, from 1% to 0.3%, after CBCT correction. The gamma analysis showed the average pass rate increased from 94.5% before correction to 99.0% after correction. For brain SRS treatment plans, both original and corrected CBCT images were accurate enough for dose calculation, though CCBCT featured higher image quality. CONCLUSION CCBCTs can provide dose accuracy comparable to traditional pCTs for brain and prostate radiotherapy planning, and the correction method proposed here can be useful in CBCT-guided adaptive radiotherapy.
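Gamma analysis, the headline metric in this study, combines a dose-difference and a distance-to-agreement criterion per point. The following numpy sketch implements a brute-force global 2D gamma pass rate under assumed criteria; clinical tools use optimized 3D implementations with interpolation.

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing=1.0, dta=3.0, dd=0.03, threshold=0.10):
    """Simplified global 2D gamma analysis (e.g. 3%/3 mm with a 10% dose threshold).
    ref/eval_: 2D dose planes on the same grid; spacing: pixel size in mm.
    For each reference point, search a window of +/- dta for the best agreement."""
    dmax = ref.max()
    w = int(np.ceil(dta / spacing))                   # search half-width in pixels
    pad = np.pad(eval_, w, mode="edge")
    dy, dx = np.mgrid[-w:w + 1, -w:w + 1]
    dist2 = ((dy * spacing) ** 2 + (dx * spacing) ** 2) / dta ** 2
    ys, xs = np.where(ref >= threshold * dmax)        # only evaluate above threshold
    passed = 0
    for y, x in zip(ys, xs):
        window = pad[y:y + 2 * w + 1, x:x + 2 * w + 1]
        dose2 = ((window - ref[y, x]) / (dd * dmax)) ** 2
        passed += np.sqrt((dist2 + dose2).min()) <= 1.0
    return passed / len(ys)
```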
Affiliation(s)
- Tonghe Wang, Yang Lei, Nivedh Manohar, Sibo Tian, Ashesh B Jani, Hui-Kuo Shu, Kristin Higgins, Anees Dhabaan, Pretesh Patel, Tian Liu, Walter J Curran, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiangyang Tang: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
115. Wei W, Poirion E, Bodini B, Durrleman S, Colliot O, Stankoff B, Ayache N. Fluid-attenuated inversion recovery MRI synthesis from multisequence MRI using three-dimensional fully convolutional networks for multiple sclerosis. J Med Imaging (Bellingham) 2019; 6:014005. PMID: 30820439. DOI: 10.1117/1.jmi.6.1.014005
Abstract
Multiple sclerosis (MS) is a white matter (WM) disease characterized by the formation of WM lesions, which can be visualized by magnetic resonance imaging (MRI). The fluid-attenuated inversion recovery (FLAIR) MRI pulse sequence is used clinically and in research for the detection of WM lesions. However, in clinical settings, some MRI pulse sequences may be missed because of various constraints. We propose the use of three-dimensional fully convolutional neural networks to predict FLAIR pulse sequences from other MRI pulse sequences, and evaluate the contribution of each input pulse sequence with a pulse sequence-specific saliency map. The approach is tested on a real MS image dataset and evaluated by comparison with other methods and by assessing the lesion contrast in the synthetic FLAIR pulse sequence. Both the qualitative and quantitative results show that this method is competitive for FLAIR synthesis.
Affiliation(s)
- Wen Wei: Université Côte d'Azur, Inria, Epione Project Team, Sophia Antipolis, France; Sorbonne Université, Inserm, CNRS, Institut du cerveau et de la moelle (ICM), AP-HP Hôpital Pitié-Salpêtrière, Paris, France; Inria, Aramis Project Team, Paris, France
- Emilie Poirion, Benedetta Bodini, Bruno Stankoff: Sorbonne Université, Inserm, CNRS, Institut du cerveau et de la moelle (ICM), AP-HP Hôpital Pitié-Salpêtrière, Paris, France
- Stanley Durrleman, Olivier Colliot: Sorbonne Université, Inserm, CNRS, Institut du cerveau et de la moelle (ICM), AP-HP Hôpital Pitié-Salpêtrière, Paris, France; Inria, Aramis Project Team, Paris, France
- Nicholas Ayache: Université Côte d'Azur, Inria, Epione Project Team, Sophia Antipolis, France
116. Shafai-Erfani G, Wang T, Lei Y, Tian S, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Dose evaluation of MRI-based synthetic CT generated using a machine learning method for prostate cancer radiotherapy. Med Dosim 2019; 44:e64-e70. PMID: 30713000. DOI: 10.1016/j.meddos.2019.01.002
Abstract
Magnetic resonance imaging (MRI)-only radiotherapy treatment planning is attractive because MRI provides superior soft tissue contrast over computed tomography (CT) without ionizing radiation exposure. However, it requires the generation of a synthetic CT (SCT) from MRIs for patient setup and dose calculation. In this study, we investigate the accuracy of dose calculation in prostate cancer radiotherapy using SCTs generated from MRIs with our learning-based method. We retrospectively investigated a total of 17 treatment plans from 10 patients, each having both planning CTs (pCT) and MRIs acquired before treatment. The SCT was registered to the pCT to generate SCT-based treatment plans. The original pCT-based plans served as ground truth. Clinically relevant dose volume histogram (DVH) metrics were extracted from both the ground truth and SCT-based plans for comparison and evaluation. Gamma analysis was performed to compare the absorbed dose distributions between the SCT- and pCT-based plans of each patient. Gamma analysis within 1%/1 mm at a 10% dose threshold showed a greater than 99% pass rate. The average differences in DVH metrics for planning target volumes (PTVs) were less than 1%, and similar metrics for organs at risk (OARs) were not statistically different. The SCT images created from MR images using our proposed machine learning method are accurate for dose calculation in prostate cancer radiation treatment planning. This study also demonstrates the great potential for MRI to completely replace CT scans in the simulation and treatment planning process. However, geometric distortion effects in the MR images require further analysis. Digitally reconstructed radiographs (DRRs) can be generated within our method, and their accuracy for patient setup needs further analysis.
Affiliation(s)
- Ghazal Shafai-Erfani, Tonghe Wang, Yang Lei, Sibo Tian, Pretesh Patel, Ashesh B Jani, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
117. Yang X, Wang T, Lei Y, Higgins K, Liu T, Shim H, Curran WJ, Mao H, Nye JA. MRI-based attenuation correction for brain PET/MRI based on anatomic signature and machine learning. Phys Med Biol 2019; 64:025001. PMID: 30524027. PMCID: PMC7773209. DOI: 10.1088/1361-6560/aaf5e0
Abstract
Deriving accurate attenuation maps for PET/MRI remains a challenging problem because MRI voxel intensities are not related to the properties of photon attenuation, and bone/air interfaces have similarly low signal. This work presents a learning-based method to derive patient-specific computed tomography (CT) maps from routine T1-weighted MRI in their native space for attenuation correction of brain PET. We developed a machine learning-based method using a sequence of alternating random forests under the framework of an iterative refinement model. Anatomical feature selection is included in both the training and prediction stages to achieve optimal performance. To evaluate its accuracy, we retrospectively investigated 17 patients, each of whom had been scanned by PET/CT and MR for the brain. The PET images were corrected for attenuation on CT images as ground truth, as well as on pseudo CT (PCT) images generated from MR images. The PCT images showed a mean absolute error of 66.1 ± 8.5 HU, an average correlation coefficient of 0.974 ± 0.018, and average Dice similarity coefficients (DSC) larger than 0.85 for air, bone, and soft tissue. Side-by-side image comparisons and joint histograms demonstrated very good agreement between PET images corrected by PCT and by CT. The mean differences in voxel values in selected VOIs were less than 4%, the mean absolute difference over all active areas is around 2.5%, and the mean linear correlation coefficient is 0.989 ± 0.017 between PET images corrected by CT and PCT. This work demonstrates a novel learning-based approach to automatically generate CT images from routine T1-weighted MR images based on random forest regression with patch-based anatomical signatures to effectively capture the relationship between CT and MR images. Reconstructed PET images using the PCT exhibit errors well below the accepted test/retest reliability of PET/CT, indicating high quantitative equivalence.
Affiliation(s)
- Xiaofeng Yang, Kristin Higgins, Tian Liu, Walter J Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang, Yang Lei: Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Hyunsuk Shim: Department of Radiation Oncology, Department of Radiology and Imaging Sciences, and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Hui Mao: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Jonathon A Nye: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
118. Lei Y, Tang X, Higgins K, Lin J, Jeong J, Liu T, Dhabaan A, Wang T, Dong X, Press R, Curran WJ, Yang X. Learning-based CBCT correction using alternating random forest based on auto-context model. Med Phys 2018; 46:601-618. PMID: 30471129. DOI: 10.1002/mp.13295
Abstract
PURPOSE Quantitative cone beam CT (CBCT) imaging is in increasing demand for precise image-guided radiotherapy because it provides a foundation for advanced image-guided techniques, including accurate treatment setup, online tumor delineation, and patient dose calculation. However, CBCT is currently limited to patient setup in the clinic because of severe image quality issues. In this study, we develop a learning-based approach to improve CBCT image quality for extended clinical applications. MATERIALS AND METHODS An auto-context model is integrated into a machine learning framework to iteratively generate corrected CBCT (CCBCT) with high image quality. The first step is data preprocessing to build the training dataset, in which uninformative image regions are removed, noise is reduced, and CT and CBCT images are aligned. After a CBCT image is divided into a set of patches, the most informative and salient anatomical features are extracted to train random forests. Within each patch, an alternating random forest (RF) is applied to create a CCBCT patch as the output, and an iterative refinement strategy is exercised to enhance the image quality. All the CCBCT patches are then integrated to reconstruct the final CCBCT images. RESULTS The learning-based CBCT correction algorithm was evaluated using leave-one-out cross-validation applied to a cohort of 12 patients' brain data and 14 patients' pelvis data. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) indexes, and spatial nonuniformity (SNU) in selected regions of interest (ROIs) were used to quantify the correction accuracy, generating the following results: mean MAE = 12.81 ± 2.04 and 19.94 ± 5.44 HU, mean PSNR = 40.22 ± 3.70 and 31.31 ± 2.85 dB, mean NCC = 0.98 ± 0.02 and 0.95 ± 0.01, and SNU = 2.07 ± 3.36% and 2.07 ± 3.36% for the brain and pelvis data, respectively. CONCLUSION Preliminary results demonstrated that the novel learning-based correction method can significantly improve CBCT image quality. The proposed algorithm therefore has great potential to support the clinical utility of CBCT in CBCT-guided adaptive radiotherapy.
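The image-quality indices quoted throughout these CBCT-correction entries (MAE, PSNR, NCC, SNU) are straightforward to compute once the corrected and reference volumes are aligned. A numpy sketch follows; note that SNU definitions vary between papers, so the ROI-based form here is only one plausible reading, not necessarily the one used in this study.

```python
import numpy as np

def mae(ct, ccbct):
    """Mean absolute error in HU between the planning CT and corrected CBCT."""
    return np.abs(ct - ccbct).mean()

def psnr(ct, ccbct, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = ct.max() - ct.min()
    mse = ((ct - ccbct) ** 2).mean()
    return 10.0 * np.log10(data_range ** 2 / mse)

def ncc(ct, ccbct):
    """Normalized cross-correlation of the two aligned volumes."""
    a = (ct - ct.mean()) / ct.std()
    b = (ccbct - ccbct.mean()) / ccbct.std()
    return (a * b).mean()

def snu(roi_means):
    """Spatial nonuniformity from mean HU values of several uniform ROIs.
    Assumed form: max-min spread scaled by 1000 HU; definitions differ by paper."""
    return (max(roi_means) - min(roi_means)) / 1000.0
```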
Affiliation(s)
- Yang Lei, Kristin Higgins, Jolinta Lin, Tian Liu, Anees Dhabaan, Tonghe Wang, Xue Dong, Robert Press, Walter J Curran, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiangyang Tang: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Jiwoong Jeong: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA; Department of Medical Physics, Georgia Institute of Technology, Atlanta, GA 30322, USA
119. Image synthesis-based multi-modal image registration framework by using deep fully convolutional networks. Med Biol Eng Comput 2018; 57:1037-1048. PMID: 30523534. DOI: 10.1007/s11517-018-1924-y
Abstract
Multi-modal image registration has significant importance in clinical diagnosis, treatment planning, and image-guided surgery. Since different modalities exhibit different characteristics, finding a fast and accurate correspondence between images of different modalities is still a challenge. In this paper, we propose an image synthesis-based multi-modal registration framework. Image synthesis is performed by a ten-layer fully convolutional network (FCN) composed of 10 convolutional layers combined with batch normalization (BN) and rectified linear units (ReLU), which can be trained to learn an end-to-end mapping from one modality to the other. After cross-modality image synthesis, multi-modal registration can be transformed into mono-modal registration, which can be solved by methods with lower computational complexity, such as the sum of squared differences (SSD). We tested our method on T1-weighted vs T2-weighted, T1-weighted vs PD, and T2-weighted vs PD image registrations with BrainWeb phantom data and IXI real patient data. The results show that our framework achieves higher registration accuracy than state-of-the-art multi-modal image registration methods, such as local mutual information (LMI) and α-mutual information (α-MI). The average registration errors of our method on the IXI real patient data were 1.19, 2.23, and 1.57, compared to 1.53, 2.60, and 2.36 for LMI and 1.34, 2.39, and 1.76 for α-MI in T2-weighted vs PD, T1-weighted vs PD, and T1-weighted vs T2-weighted image registration, respectively. The deep FCN model captures the complex nonlinear relationship between different modalities and discovers complex structural representations automatically through a large number of trainable mappings and parameters, enabling accurate image synthesis. Combined with mono-modal registration methods (SSD), the framework achieves fast and robust multi-modal medical image registration. Graphical abstract: the workflow of the proposed multi-modal image registration framework.
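Once synthesis has reduced the problem to mono-modal registration, a simple intensity metric such as SSD suffices, which is the key computational saving this framework claims. The toy sketch below searches integer 2D translations by brute force; real registration would use a proper optimizer and a deformation model rather than wrap-around shifts.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences, the mono-modal similarity used after synthesis."""
    return ((a - b) ** 2).sum()

def register_translation(fixed, moving, max_shift=10):
    """Brute-force 2D translation search minimizing SSD. After the FCN maps the
    moving image into the fixed image's modality, this cheap metric can replace
    mutual information. np.roll wraps at the borders, acceptable for a toy demo."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            cost = ssd(fixed, shifted)
            if cost < best:
                best, best_shift = cost, (dy, dx)
    return best_shift
```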
120. Nie D, Trullo R, Lian J, Wang L, Petitjean C, Ruan S, Wang Q, Shen D. Medical Image Synthesis with Deep Convolutional Adversarial Networks. IEEE Trans Biomed Eng 2018; 65:2720-2730. PMID: 29993445. PMCID: PMC6398343. DOI: 10.1109/tbme.2018.2814538
Abstract
Medical imaging plays a critical role in various clinical applications. However, due to multiple considerations such as cost and radiation dose, the acquisition of certain image modalities may be limited. Thus, medical image synthesis can be of great benefit by estimating a desired imaging modality without incurring an actual scan. In this paper, we propose a generative adversarial approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a target image given a source image. To better model the nonlinear mapping from source to target and to produce more realistic target images, we use an adversarial learning strategy to train the FCN. Moreover, the FCN incorporates an image-gradient-difference-based loss function to avoid generating blurry target images. A long-term residual unit is also explored to aid network training, and the auto-context model is further applied to implement a context-aware deep convolutional adversarial network. Experimental results show that our method is accurate and robust for synthesizing target images from the corresponding source images. In particular, we evaluate our method on three datasets, addressing the tasks of generating CT from MRI and generating 7T MRI from 3T MRI images. Our method outperforms the state-of-the-art methods under comparison on all datasets and tasks.
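The image-gradient-difference loss mentioned in this abstract penalizes mismatched spatial gradients so that the synthesized images keep sharp structure. A minimal PyTorch version is shown below; the exact formulation and weighting in the paper may differ.

```python
import torch

def gradient_difference_loss(fake, real):
    """Penalize differences between the spatial gradient magnitudes of the
    synthetic and real images to discourage blurry outputs. Tensors are
    assumed to be (N, 1, H, W); a 3D version would add a depth gradient."""
    def grads(img):
        gy = img[:, :, 1:, :] - img[:, :, :-1, :]   # vertical finite difference
        gx = img[:, :, :, 1:] - img[:, :, :, :-1]   # horizontal finite difference
        return gx, gy

    fx, fy = grads(fake)
    rx, ry = grads(real)
    return ((fx.abs() - rx.abs()) ** 2).mean() + ((fy.abs() - ry.abs()) ** 2).mean()
```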
Affiliation(s)
- Dong Nie: Department of Computer Science, Department of Radiology and BRIC, UNC-Chapel Hill, Chapel Hill, NC 27510, USA
- Roger Trullo: Department of Radiology and BRIC, UNC-Chapel Hill; Department of Computer Science, University of Normandy
- Jun Lian: Department of Radiation Oncology, UNC-Chapel Hill
- Li Wang: Department of Radiology and BRIC, UNC-Chapel Hill
- Su Ruan: Department of Computer Science, University of Normandy
- Qian Wang: Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Dinggang Shen: Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27510, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, South Korea
121. Chen S, Qin A, Zhou D, Yan D. Technical Note: U-net-generated synthetic CT images for magnetic resonance imaging-only prostate intensity-modulated radiation therapy treatment planning. Med Phys 2018; 45:5659-5665. PMID: 30341917. DOI: 10.1002/mp.13247
Abstract
PURPOSE Clinical implementation of magnetic resonance imaging (MRI)-only radiotherapy requires a method to derive synthetic CT images (S-CT) for dose calculation. This study investigated the feasibility of building a deep convolutional neural network for MRI-based S-CT generation and evaluated its dosimetric accuracy for prostate IMRT planning. METHODS Paired CT and T2-weighted MR images were acquired from each of 51 prostate cancer patients. Fifteen pairs were randomly chosen as the test set and the remaining 36 pairs as the training set. The training subjects were augmented by applying artificial deformations and fed to a two-dimensional U-net containing 23 convolutional layers and 25.29 million trainable parameters. The U-net represents a nonlinear function whose input is an MR slice and whose output is the corresponding S-CT slice. The mean absolute error (MAE) of Hounsfield units (HU) between the true CT and S-CT images was used to evaluate the HU estimation accuracy. IMRT plans with a dose of 79.2 Gy prescribed to the PTV were created using the true CT images. The true CT images were then replaced by the S-CT images, and the dose matrices were recalculated on the same plan and compared to those obtained from the true CT using gamma index analysis and absolute point dose discrepancy. RESULTS The U-net was trained from scratch in 58.67 h using a GP100 GPU. The computation time for generating a new S-CT volume was 3.84-7.65 s. Within the body, the mean ± SD of the MAE was 29.96 ± 4.87 HU. The 1%/1 mm and 2%/2 mm gamma pass rates were over 98.03% and 99.36%, respectively. The DVH parameter discrepancy was less than 0.87%, and the maximum point dose discrepancy within the PTV was less than 1.01% of the prescription. CONCLUSION The U-net can generate S-CT images from conventional MR images within seconds with high dosimetric accuracy for prostate IMRT planning.
Affiliation(s)
- Shupeng Chen
- Department of Radiation Oncology, William Beaumont Hospital, 3601 W. 13 Mile Rd, Royal Oak, MI, 48073, USA
- An Qin
- Department of Radiation Oncology, William Beaumont Hospital, 3601 W. 13 Mile Rd, Royal Oak, MI, 48073, USA
- Dingyi Zhou
- Department of Oncology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Di Yan
- Department of Radiation Oncology, William Beaumont Hospital, 3601 W. 13 Mile Rd, Royal Oak, MI, 48073, USA
|
122
|
Bayisa FL, Liu X, Garpebring A, Yu J. Statistical learning in computed tomography image estimation. Med Phys 2018; 45:5450-5460. [PMID: 30242845 DOI: 10.1002/mp.13204] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2018] [Revised: 08/08/2018] [Accepted: 09/06/2018] [Indexed: 01/25/2023] Open
Abstract
PURPOSE There is increasing interest in computed tomography (CT) image estimation from magnetic resonance (MR) images. The estimated CT images can be utilized for attenuation correction, patient positioning, and dose planning in diagnostic and radiotherapy workflows. This study aims to introduce a novel statistical learning approach for improving CT estimation from MR images and to compare the performance of our method with existing model-based CT image estimation methods. METHODS The statistical learning approach proposed here consists of two stages. At the training stage, prior knowledge about tissue types from CT images was used together with a Gaussian mixture model (GMM) to explore CT image estimation from MR images. Since the prior knowledge is not available at the prediction stage, a classifier based on the RUSBoost algorithm was trained to estimate the tissue types from MR images. For a new patient, the trained classifier and GMMs were used to predict the CT image from MR images. The classifier and GMMs were validated by using voxel-level tenfold cross-validation and patient-level leave-one-out cross-validation, respectively. RESULTS The proposed approach outperformed the existing model-based methods in CT estimation quality, especially on bone tissues. Our method improved CT image estimation by 5% and 23% on the whole brain and bone tissues, respectively. CONCLUSIONS Evaluation of our method shows that it is a promising approach for generating CT image substitutes for the implementation of fully MR-based radiotherapy and PET/MRI applications.
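A minimal sketch of this two-stage idea, assuming 1D intensity features and using imbalanced-learn's RUSBoostClassifier; the feature design and tissue classes here are simplified placeholders, not the study's configuration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from imblearn.ensemble import RUSBoostClassifier

# --- Training stage: per-tissue joint GMMs over (MR, CT) intensities ---
def fit_tissue_gmms(mr, ct, tissue_labels, n_components=3):
    gmms = {}
    for t in np.unique(tissue_labels):
        joint = np.column_stack([mr[tissue_labels == t], ct[tissue_labels == t]])
        gmms[t] = GaussianMixture(n_components=n_components).fit(joint)
    return gmms

# Tissue classifier, since the CT-derived prior is unavailable at prediction:
# clf = RUSBoostClassifier().fit(mr.reshape(-1, 1), tissue_labels)

# --- Prediction stage: E[CT | MR] under the predicted tissue's GMM ---
def predict_ct(gmm, mr_value):
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    var_mr = cov[:, 0, 0]
    # responsibilities of each component given the MR intensity alone
    lik = w * np.exp(-0.5 * (mr_value - mu[:, 0]) ** 2 / var_mr) / np.sqrt(var_mr)
    r = lik / lik.sum()
    # component-wise conditional means of CT given MR (bivariate Gaussian)
    cond = mu[:, 1] + cov[:, 0, 1] / var_mr * (mr_value - mu[:, 0])
    return float(r @ cond)
```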
Affiliation(s)
- Fekadu L Bayisa
- Department of Mathematics and Mathematical Statistics, Umeå University, Umeå, 901 87, Sweden
- Xijia Liu
- Department of Mathematics and Mathematical Statistics, Umeå University, Umeå, 901 87, Sweden
- Anders Garpebring
- Department of Radiation Sciences, Umeå University, Umeå, 901 87, Sweden
- Jun Yu
- Department of Mathematics and Mathematical Statistics, Umeå University, Umeå, 901 87, Sweden
|
123
|
Largent A, Barateau A, Nunes JC, Lafond C, Greer PB, Dowling JA, Saint-Jalmes H, Acosta O, de Crevoisier R. Pseudo-CT Generation for MRI-Only Radiation Therapy Treatment Planning: Comparison Among Patch-Based, Atlas-Based, and Bulk Density Methods. Int J Radiat Oncol Biol Phys 2018; 103:479-490. [PMID: 30336265 DOI: 10.1016/j.ijrobp.2018.10.002] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2018] [Revised: 08/15/2018] [Accepted: 10/01/2018] [Indexed: 12/25/2022]
Abstract
PURPOSE Methods have been developed recently to generate pseudo-computed tomography (pCT) for dose calculation in magnetic resonance imaging (MRI)-only radiation therapy. This study aimed to propose an original nonlocal mean patch-based method (PBM) and to compare this PBM to an atlas-based method (ABM) and to a bulk density method (BDM) for prostate MRI-only radiation therapy. MATERIALS AND METHODS Thirty-nine patients received volumetric modulated arc therapy for prostate cancer. In addition to the planning computed tomography (CT) scans, T2-weighted MRI scans were acquired. pCTs were generated from the MRIs using 3 methods: the proposed nonlocal mean PBM, an ABM, and a BDM. The PBM was performed using feature extraction and approximate nearest neighbor search in a training cohort. The PBM accuracy was evaluated in a validation cohort by using imaging and dosimetric endpoints. Imaging endpoints included mean absolute error and mean error between Hounsfield units of the pCT and the reference CT (CTref). Dosimetric endpoints were based on dose-volume histograms calculated from the CTref and the pCTs for various volumes of interest, and on 3-dimensional gamma analyses. The PBM uncertainties were compared with those of the ABM and BDM. RESULTS The mean absolute error and mean error obtained from the PBM were 41.1 and -1.1 Hounsfield units. The PBM dose-volume histogram differences were 0.7% for prostate planning target volume V95%, 0.5% for rectum V70Gy, and 0.2% for bladder V50Gy. Compared with ABM and BDM, PBM provided significantly lower dose uncertainties for the prostate planning target volume (70-78 Gy), the rectum (8.5-29 Gy, 40-48 Gy, and 61-73 Gy), and the bladder (12-78 Gy). The PBM mean gamma pass rate (99.5%) was significantly higher than that of ABM (94.9%) or BDM (96.1%). CONCLUSIONS The proposed PBM yields small dose uncertainties relative to plans computed on the CTref. These uncertainties were smaller than those of ABM and BDM and are unlikely to be clinically significant.
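The core of a nonlocal-mean patch-based pseudo-CT method can be sketched as below, with approximate nearest-neighbor search via scikit-learn; the patch features, k, and the Gaussian bandwidth are illustrative assumptions:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nlm_patch_pct(train_mr_patches, train_ct_values, test_mr_patches,
                  k=10, h=0.5):
    """train_mr_patches: (N, d) MR patch features from the training cohort;
    train_ct_values: (N,) HU value at each patch centre.
    Returns one predicted HU value per test patch."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_mr_patches)
    dist, idx = nn.kneighbors(test_mr_patches)
    w = np.exp(-(dist ** 2) / (2 * h ** 2))      # nonlocal-means weights
    w /= w.sum(axis=1, keepdims=True)
    return (w * train_ct_values[idx]).sum(axis=1)
```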
Affiliation(s)
- Axel Largent
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Anaïs Barateau
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Jean-Claude Nunes
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Caroline Lafond
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Peter B Greer
- School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, Australia
- Jason A Dowling
- CSIRO Australian e-Health Research Centre, Herston, Queensland, Australia
- Hervé Saint-Jalmes
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Oscar Acosta
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Renaud de Crevoisier
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
|
124
|
Arabi H, Dowling JA, Burgos N, Han X, Greer PB, Koutsouvelis N, Zaidi H. Comparative study of algorithms for synthetic CT generation from MRI: Consequences for MRI-guided radiation planning in the pelvic region. Med Phys 2018; 45:5218-5233. [PMID: 30216462 DOI: 10.1002/mp.13187] [Citation(s) in RCA: 80] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2018] [Revised: 07/29/2018] [Accepted: 09/06/2018] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Magnetic resonance imaging (MRI)-guided radiation therapy (RT) treatment planning is limited by the fact that the electron density distribution required for dose calculation is not readily provided by MR imaging. We compare a selection of novel synthetic CT generation algorithms recently reported in the literature, including segmentation-based, atlas-based, and machine learning techniques, using the same cohort of patients and the same quantitative evaluation metrics. METHODS Six MRI-guided synthetic CT generation algorithms were evaluated: a technique segmenting into a single tissue class (water-only); four atlas-based techniques, namely, median value of atlas images (ALMedian), atlas-based local weighted voting (ALWV), bone-enhanced atlas-based local weighted voting (ALWV-Bone), and iterative atlas-based local weighted voting (ALWV-Iter); and a machine learning technique using a deep convolutional neural network (DCNN). RESULTS Organ auto-contouring from MR images was evaluated for bladder, rectum, bones, and body boundary. Overall, DCNN exhibited higher segmentation accuracy, with Dice indices (DSC) of 0.93 ± 0.17, 0.90 ± 0.04, and 0.93 ± 0.02 for bladder, rectum, and bones, respectively. On the other hand, ALMedian showed the lowest accuracy, with DSC of 0.82 ± 0.20, 0.81 ± 0.08, and 0.88 ± 0.04, respectively. DCNN reached the best performance in terms of accurate derivation of synthetic CT values within each organ, with a mean absolute error within the body contour of 32.7 ± 7.9 HU, followed by the advanced atlas-based methods (ALWV: 40.5 ± 8.2 HU; ALWV-Iter: 42.4 ± 8.1 HU; ALWV-Bone: 44.0 ± 8.9 HU). ALMedian led to the highest error (52.1 ± 11.1 HU). Considering the dosimetric evaluation results, ALWV-Iter, ALWV, DCNN, and ALWV-Bone led to similar mean dose estimates within each organ at risk and target volume, with less than 1% dose discrepancy. However, two-dimensional gamma analysis demonstrated higher pass rates for ALWV-Bone, DCNN, ALMedian, and ALWV-Iter at the 1%/1 mm criterion, with 94.99 ± 5.15%, 94.59 ± 5.65%, 93.68 ± 5.53%, and 93.10 ± 5.99% success, respectively, while ALWV and water-only resulted in 86.91 ± 13.50% and 80.77 ± 12.10%, respectively. CONCLUSIONS Overall, machine learning and advanced atlas-based methods exhibited promising performance by achieving reliable organ segmentation and synthetic CT generation. DCNN appears to perform slightly better, achieving accurate automated organ segmentation and relatively small dosimetric errors (followed closely by the advanced atlas-based methods, which in some cases achieved similar performance). However, the DCNN approach showed higher vulnerability to anatomical variation, with a greater number of outliers observed for this method. Considering the dosimetric results obtained from the evaluated methods, the challenge of electron density estimation from MR images can be resolved with a clinically tolerable error.
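Atlas-based local weighted voting of the kind compared here can be sketched as follows, assuming the atlas MR/CT pairs have already been deformably registered to the target MR; the patch radius and weighting kernel are illustrative:

```python
import numpy as np

def local_weighted_voting(target_mr, atlas_mrs, atlas_cts, h=50.0, r=2):
    """atlas_mrs, atlas_cts: lists of registered atlas volumes (same shape as
    target_mr). Each voxel's synthetic CT is a similarity-weighted average of
    the atlas CT values, with weights from local MR patch agreement."""
    pad = [(r, r)] * target_mr.ndim
    tgt = np.pad(target_mr, pad, mode="edge")
    num = np.zeros_like(target_mr, dtype=float)
    den = np.zeros_like(target_mr, dtype=float)
    for mr, ct in zip(atlas_mrs, atlas_cts):
        a = np.pad(mr, pad, mode="edge")
        # local sum of squared differences over a (2r+1)^ndim patch
        ssd = np.zeros_like(target_mr, dtype=float)
        for off in np.ndindex(*(2 * r + 1,) * target_mr.ndim):
            sl = tuple(slice(o, o + s) for o, s in zip(off, target_mr.shape))
            ssd += (tgt[sl] - a[sl]) ** 2
        w = np.exp(-ssd / (h ** 2))   # more similar atlas patch -> larger vote
        num += w * ct
        den += w
    return num / den
```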
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, CH-1211, Switzerland
- Jason A Dowling
- CSIRO Australian e-Health Research Centre, Herston, QLD, Australia
- Ninon Burgos
- Inria Paris, Aramis Project-Team, Institut du Cerveau et de la Moelle épinière, ICM, Inserm U 1127, CNRS, UMR 7225, Sorbonne Université, Paris, F-75013, France
- Xiao Han
- Elekta Inc., Maryland Heights, MO, 63043, USA
- Peter B Greer
- Calvary Mater Newcastle Hospital, Waratah, NSW, Australia; University of Newcastle, Callaghan, NSW, Australia
- Nikolaos Koutsouvelis
- Division of Radiation Oncology, Geneva University Hospital, Geneva, CH-1211, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, CH-1211, Switzerland; Geneva University Neurocenter, University of Geneva, Geneva, 1205, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, the Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, DK-500, Denmark
|
125
|
Lei Y, Jeong JJ, Wang T, Shu HK, Patel P, Tian S, Liu T, Shim H, Mao H, Jani AB, Curran WJ, Yang X. MRI-based pseudo CT synthesis using anatomical signature and alternating random forest with iterative refinement model. J Med Imaging (Bellingham) 2018; 5:043504. [PMID: 30840748 PMCID: PMC6280993 DOI: 10.1117/1.jmi.5.4.043504] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2018] [Accepted: 11/12/2018] [Indexed: 12/20/2022] Open
Abstract
We develop a learning-based method to generate patient-specific pseudo computed tomography (CT) from routinely acquired magnetic resonance imaging (MRI) for potential MRI-based radiotherapy treatment planning. The proposed pseudo CT (PCT) synthesis method consists of a training stage and a synthesizing stage. During the training stage, patch-based features are extracted from MRIs. Using a feature selection, the most informative features are identified as an anatomical signature to train a sequence of alternating random forests based on an iterative refinement model. During the synthesizing stage, we feed the anatomical signatures extracted from an MRI into the sequence of well-trained forests for a PCT synthesis. Our PCT was compared with the original CT (ground truth) to quantitatively assess the synthesis accuracy. The mean absolute error, peak signal-to-noise ratio, and normalized cross-correlation indices were 60.87 ± 15.10 HU, 24.63 ± 1.73 dB, and 0.954 ± 0.013 for 14 patients' brain data and 29.86 ± 10.4 HU, 34.18 ± 3.31 dB, and 0.980 ± 0.025 for 12 patients' pelvic data, respectively. We have investigated a learning-based approach to synthesize CTs from routine MRIs and demonstrated its feasibility and reliability. The proposed PCT synthesis technique can be a useful tool for MRI-based radiation treatment planning.
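A sequence of forests with iterative refinement, in the spirit of the method described, can be sketched as below; this simplified version replaces the paper's anatomical-signature features and alternating training schedule with a plain auto-context loop:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_refinement_forests(features, ct_targets, n_stages=3):
    """features: (N, d) per-voxel MR signatures; ct_targets: (N,) HU values.
    Each stage appends the previous stage's prediction as a context feature."""
    forests, X = [], features
    for _ in range(n_stages):
        rf = RandomForestRegressor(n_estimators=50).fit(X, ct_targets)
        pred = rf.predict(X)
        forests.append(rf)
        X = np.column_stack([features, pred])   # auto-context for next stage
    return forests

def predict_refinement(forests, features):
    X = features
    for rf in forests:
        pred = rf.predict(X)
        X = np.column_stack([features, pred])
    return pred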
Affiliation(s)
- Yang Lei
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Jiwoong Jason Jeong
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Tonghe Wang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Hui-Kuo Shu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Pretesh Patel
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Sibo Tian
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Tian Liu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Hyunsuk Shim
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Emory University, Winship Cancer Institute, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Hui Mao
- Emory University, Winship Cancer Institute, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Ashesh B. Jani
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Walter J. Curran
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Xiaofeng Yang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
|
126
|
Mackie TR, Jackson EF, Giger M. Opportunities and challenges to utilization of quantitative imaging: Report of the AAPM practical big data workshop. Med Phys 2018; 45:e820-e828. [PMID: 30248184 DOI: 10.1002/mp.13135] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2018] [Revised: 05/08/2018] [Accepted: 05/31/2018] [Indexed: 11/05/2022] Open
Abstract
BACKGROUND This article summarizes the quantitative imaging subgroup of the 2017 AAPM Practical Big Data Workshop (PBDW-2017) on progress and challenges in big data applied to cancer treatment and research, supplemented by a draft white paper following an American Association of Physicists in Medicine FOREM meeting on Imaging Genomics in 2014. AIMS The goal of PBDW-2017 was to close the gap between theoretical vision and practical experience in encountering and solving challenges in curating and analyzing data. CONCLUSIONS Recommendations based on the meetings are summarized.
Affiliation(s)
- Thomas R Mackie
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI, 53705, USA
- Edward F Jackson
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI, 53705, USA
- Maryellen Giger
- Department of Radiology, University of Chicago, Chicago, IL, 60637, USA
|
127
|
Iglesias JE, Modat M, Peter L, Stevens A, Annunziata R, Vercauteren T, Lein E, Fischl B, Ourselin S. Joint registration and synthesis using a probabilistic model for alignment of MRI and histological sections. Med Image Anal 2018; 50:127-144. [PMID: 30282061 PMCID: PMC6742511 DOI: 10.1016/j.media.2018.09.002] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2017] [Revised: 08/30/2018] [Accepted: 09/05/2018] [Indexed: 11/30/2022]
Abstract
Nonlinear registration of 2D histological sections with corresponding slices of MRI data is a critical step of 3D histology reconstruction algorithms. This registration is difficult due to the large differences in image contrast and resolution, as well as the complex nonrigid deformations and artefacts produced when sectioning the sample and mounting it on the glass slide. It has been shown in brain MRI registration that better spatial alignment across modalities can be obtained by synthesising one modality from the other and then using intra-modality registration metrics, rather than by using information theory based metrics to solve the problem directly. However, such an approach typically requires a database of aligned images from the two modalities, which is very difficult to obtain for histology and MRI. Here, we overcome this limitation with a probabilistic method that simultaneously solves for deformable registration and synthesis directly on the target images, without requiring any training data. The method is based on a probabilistic model in which the MRI slice is assumed to be a contrast-warped, spatially deformed version of the histological section. We use approximate Bayesian inference to iteratively refine the probabilistic estimate of the synthesis and the registration, while accounting for each other's uncertainty. Moreover, manually placed landmarks can be seamlessly integrated in the framework for increased performance and robustness. Experiments on a synthetic dataset of MRI slices show that, compared with mutual information based registration, the proposed method makes it possible to use a much more flexible deformation model in the registration to improve its accuracy, without compromising robustness. Moreover, our framework also exploits information in manually placed landmarks more efficiently than mutual information: landmarks constrain the deformation field in both methods, but in our algorithm they also have a positive effect on the synthesis, which further improves the registration. We also show results on two real, publicly available datasets: the Allen and BigBrain atlases. In both of them, the proposed method provides a clear improvement over mutual information based registration, both qualitatively (visual inspection) and quantitatively (registration error measured with pairs of manually annotated landmarks).
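The alternating estimation at the heart of such joint models can be illustrated on 1D signals. This is a toy sketch under strong simplifying assumptions: a linear contrast map and a pure integer translation stand in for the paper's Bayesian contrast and deformation models:

```python
import numpy as np

def joint_register_synthesize(hist, mri, max_shift=20, n_iter=5):
    """hist, mri: 1D arrays of equal length. Alternates between (1) fitting a
    linear intensity map hist -> mri given the current shift and (2) refitting
    the shift given the current synthesis."""
    shift = 0
    for _ in range(n_iter):
        moved = np.roll(hist, shift)
        # (1) synthesis step: least-squares contrast map a*moved + b ~ mri
        A = np.column_stack([moved, np.ones_like(moved)])
        (a, b), *_ = np.linalg.lstsq(A, mri, rcond=None)
        synth = a * hist + b
        # (2) registration step: best integer shift of the synthesized signal
        errs = [np.sum((np.roll(synth, s) - mri) ** 2)
                for s in range(-max_shift, max_shift + 1)]
        shift = int(np.argmin(errs)) - max_shift
    return shift, (a, b)
```

Each step improves the other's estimate, which is the same feedback loop the probabilistic formulation exploits, with uncertainty handled explicitly there.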
Affiliation(s)
- Juan Eugenio Iglesias
- Translational Imaging Group, Centre for Medical Image Computing, University College London, UK
- Marc Modat
- Translational Imaging Group, Centre for Medical Image Computing, University College London, UK
- Loïc Peter
- Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
- Allison Stevens
- Martinos Center for Biomedical Imaging, Harvard Medical School and Massachusetts General Hospital, USA
- Roberto Annunziata
- Translational Imaging Group, Centre for Medical Image Computing, University College London, UK
- Tom Vercauteren
- Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
- Ed Lein
- Allen Institute for Brain Science, USA
- Bruce Fischl
- Martinos Center for Biomedical Imaging, Harvard Medical School and Massachusetts General Hospital, USA; Computer Science and AI Lab, Massachusetts Institute of Technology, USA
- Sebastien Ourselin
- Wellcome EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
|
128
|
Chandarana H, Wang H, Tijssen RHN, Das IJ. Emerging role of MRI in radiation therapy. J Magn Reson Imaging 2018; 48:1468-1478. [PMID: 30194794 DOI: 10.1002/jmri.26271] [Citation(s) in RCA: 82] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2018] [Revised: 07/08/2018] [Accepted: 07/09/2018] [Indexed: 12/12/2022] Open
Abstract
Advances in multimodality imaging, which provide accurate information on the irradiated target volume and the adjacent critical structures or organs at risk (OAR), have enabled significant improvements in the delivery of external beam radiation dose. Radiation therapy has conventionally used computed tomography (CT) imaging for treatment planning and dose delivery. However, magnetic resonance imaging (MRI) provides unique advantages: added contrast information that can improve segmentation of the areas of interest, motion information that can help to better target and deliver radiation therapy, and posttreatment outcome analysis to better understand the biologic effect of radiation. To realize these and other potential advantages of MRI in radiation therapy, radiologists and MRI physicists will need to understand the current radiation therapy workflow and speak the same language as our radiation therapy colleagues. This review article highlights the emerging role of MRI in radiation dose planning and delivery, particularly for MR-only treatment planning and delivery. Some of the areas of interest and challenges in implementing MRI in the radiation therapy workflow are also briefly discussed. Level of Evidence: 5 Technical Efficacy: Stage 5 J. Magn. Reson. Imaging 2018;48:1468-1478.
Affiliation(s)
- Hersh Chandarana
- Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University School of Medicine, New York, New York, USA; Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, New York, USA
- Hesheng Wang
- Department of Radiation Oncology, New York University School of Medicine & Laura and Isaac Perlmutter Cancer Center, New York, New York, USA
- R H N Tijssen
- Department of Radiotherapy, University Medical Center Utrecht, the Netherlands
- Indra J Das
- Department of Radiation Oncology, New York University School of Medicine & Laura and Isaac Perlmutter Cancer Center, New York, New York, USA
|
129
|
Bradshaw TJ, Zhao G, Jang H, Liu F, McMillan AB. Feasibility of Deep Learning-Based PET/MR Attenuation Correction in the Pelvis Using Only Diagnostic MR Images. Tomography 2018; 4:138-147. [PMID: 30320213 PMCID: PMC6173790 DOI: 10.18383/j.tom.2018.00016] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
This study evaluated the feasibility of using only diagnostically relevant magnetic resonance (MR) images together with deep learning for positron emission tomography (PET)/MR attenuation correction (deepMRAC) in the pelvis. Such an approach could eliminate dedicated MRAC sequences that have limited diagnostic utility but can substantially lengthen acquisition times for multibed position scans. We used axial T2 and T1 LAVA Flex MR images that were acquired for diagnostic purposes as inputs to a 3D deep convolutional neural network. The network was trained to produce a discretized (air, water, fat, and bone) substitute CT (CTsub). Discretized (CTref-discrete) and continuously valued (CTref) reference CT images were created to serve as ground truth for network training and attenuation correction, respectively. Training was performed with data from 12 subjects. CTsub, CTref, and the system MRAC were used for PET/MR attenuation correction, and quantitative PET values of the resulting images were compared in 6 test subjects. Overall, the network produced CTsub with Dice coefficients of 0.79 ± 0.03 for cortical bone, 0.98 ± 0.01 for soft tissue (fat: 0.94 ± 0.0; water: 0.88 ± 0.02), and 0.49 ± 0.17 for bowel gas when compared with CTref-discrete. The root mean square error of the whole PET image was 4.9% using deepMRAC and 11.6% using the system MRAC. In evaluating 16 soft tissue lesions, the distribution of errors for maximum standardized uptake value was significantly narrower using deepMRAC (-1.0% ± 1.3%) than using the system MRAC (0.0% ± 6.4%) according to the Brown-Forsythe test (P < .05). These results indicate that improved PET/MR attenuation correction can be achieved in the pelvis using only diagnostically relevant MR images.
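Converting a discretized substitute CT into attenuation values, and scoring tissue overlap with the Dice coefficient, can be sketched as follows; the 511 keV linear attenuation coefficients below are typical textbook values, used here as illustrative assumptions:

```python
import numpy as np

# class indices: 0 = air, 1 = water/soft tissue, 2 = fat, 3 = bone
MU_511KEV = np.array([0.0, 0.096, 0.090, 0.130])   # cm^-1, approximate

def class_map_to_mu(ct_sub):
    """Map a (H, W, D) integer class volume to linear attenuation values."""
    return MU_511KEV[ct_sub]

def dice(pred_mask, ref_mask):
    inter = np.logical_and(pred_mask, ref_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + ref_mask.sum())

# e.g. bone Dice between the network output and the discretized reference CT:
# dice(ct_sub == 3, ct_ref_discrete == 3)
```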
Affiliation(s)
- Gengyan Zhao
- Medical Physics, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, WI
- Hyungseok Jang
- Department of Radiology, University of California, San Diego, San Diego, CA
|
130
|
Lei Y, Shu HK, Tian S, Jeong JJ, Liu T, Shim H, Mao H, Wang T, Jani AB, Curran WJ, Yang X. Magnetic resonance imaging-based pseudo computed tomography using anatomic signature and joint dictionary learning. J Med Imaging (Bellingham) 2018; 5:034001. [PMID: 30155512 DOI: 10.1117/1.jmi.5.3.034001] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2018] [Accepted: 08/06/2018] [Indexed: 12/30/2022] Open
Abstract
Magnetic resonance imaging (MRI) provides a number of advantages over computed tomography (CT) for radiation therapy treatment planning; however, MRI lacks the key electron density information necessary for accurate dose calculation. We propose a dictionary-learning-based method to derive electron density information from MRIs. Specifically, we first partition a given MR image into a set of patches, for which we use a joint dictionary learning method to directly predict a CT patch as a structured output. A feature selection method is then used to ensure prediction robustness. Finally, we combine all the predicted CT patches to obtain the final prediction for the given MR image. This prediction technique was validated for a clinical application using 14 patients with brain MR and CT images. The peak signal-to-noise ratio (PSNR), mean absolute error (MAE), normalized cross-correlation (NCC) indices, and similarity index (SI) for air, soft-tissue, and bone regions were used to quantify the prediction accuracy. The mean ± std of PSNR, MAE, and NCC were 22.4 ± 1.9 dB, 82.6 ± 26.1 HU, and 0.91 ± 0.03 for the 14 patients. The SIs for air, soft-tissue, and bone regions were 0.98 ± 0.01, 0.88 ± 0.03, and 0.69 ± 0.08. These indices demonstrate the CT prediction accuracy of the proposed learning-based method. This CT image prediction technique could be used as a tool for MRI-based radiation treatment planning, or for PET attenuation correction in a PET/MRI scanner.
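The coupled-dictionary idea, where one sparse code jointly reconstructs paired MR and CT patches, can be sketched with scikit-learn; atom counts and sparsity levels are illustrative assumptions, and the paper's feature selection step is omitted:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

def train_joint_dictionary(mr_patches, ct_patches, n_atoms=128):
    """Learn a dictionary over concatenated (MR, CT) patch pairs so that a
    single sparse code reconstructs both halves."""
    joint = np.hstack([mr_patches, ct_patches])          # (N, d_mr + d_ct)
    dl = DictionaryLearning(n_components=n_atoms,
                            transform_algorithm="omp").fit(joint)
    d_mr = mr_patches.shape[1]
    return dl.components_[:, :d_mr], dl.components_[:, d_mr:]

def predict_ct_patches(D_mr, D_ct, mr_patches, n_nonzero=5):
    # sparse-code each MR patch against the MR half of the dictionary, then
    # reconstruct the CT patch from the CT half using the same code
    coder = SparseCoder(dictionary=D_mr, transform_algorithm="omp",
                        transform_n_nonzero_coefs=n_nonzero)
    codes = coder.transform(mr_patches)
    return codes @ D_ct
```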
Affiliation(s)
- Yang Lei
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Hui-Kuo Shu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Sibo Tian
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Jiwoong Jason Jeong
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Tian Liu
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Hyunsuk Shim
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Emory University, Winship Cancer Institute, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Hui Mao
- Emory University, Winship Cancer Institute, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Tonghe Wang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Ashesh B Jani
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Walter J Curran
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
- Xiaofeng Yang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
|
131
|
Wang T, Manohar N, Lei Y, Dhabaan A, Shu HK, Liu T, Curran WJ, Yang X. MRI-based treatment planning for brain stereotactic radiosurgery: Dosimetric validation of a learning-based pseudo-CT generation method. Med Dosim 2018; 44:199-204. [PMID: 30115539 DOI: 10.1016/j.meddos.2018.06.008] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2017] [Revised: 03/19/2018] [Accepted: 06/22/2018] [Indexed: 01/23/2023]
Abstract
Magnetic resonance imaging (MRI)-only radiotherapy treatment planning is attractive since MRI provides superior soft tissue contrast without ionizing radiation compared with computed tomography (CT). However, it requires the generation of pseudo CT from MRI images for patient setup and dose calculation. Our machine-learning-based method for generating pseudo CT images has been shown to provide excellent image quality, but its dose calculation accuracy remains an open question. In this study, we investigate the accuracy of dose calculation in brain frameless stereotactic radiosurgery (SRS) using pseudo CT images generated from MRI images with the machine-learning-based method developed by our group. We retrospectively investigated a total of 19 treatment plans from 14 patients, each of whom had CT simulation and MRI images acquired before treatment. The dose distributions of the same treatment plans were calculated on the original CT simulation images as ground truth, as well as on the pseudo CT images generated from MRI. Clinically relevant DVH metrics and gamma analysis results were extracted from both the ground truth and pseudo CT calculations for comparison and evaluation. Side-by-side comparisons of image quality and dose distributions demonstrated very good agreement of image contrast and calculated dose between pseudo CT and original CT. The average differences in dose-volume histogram (DVH) metrics for planning target volumes (PTVs) were less than 0.6%, and there were no significant differences in those for organs at risk (significance level 0.05). The average pass rate of gamma analysis was 99%. These quantitative results strongly indicate that pseudo CT images created from MRI using our machine learning method are accurate enough to replace CT simulation images for dose calculation in brain SRS treatment. This study also demonstrates the great potential for MRI to completely replace CT scans in the simulation and treatment planning process.
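Gamma analysis of the kind used for these comparisons can be sketched as a brute-force 2D computation; this is illustrative only, as clinical tools use optimized search and sub-pixel interpolation:

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing=1.0, dose_tol=0.03, dist_tol=3.0):
    """ref, evl: 2D dose arrays on the same grid (Gy). dose_tol is a fraction
    of the reference maximum; dist_tol in mm; spacing in mm per pixel."""
    dd = dose_tol * ref.max()
    ys, xs = np.indices(ref.shape)
    passed, total = 0, 0
    # evaluate reference points above 10% of the maximum dose
    for i, j in zip(*np.nonzero(ref > 0.1 * ref.max())):
        dist2 = ((ys - i) ** 2 + (xs - j) ** 2) * spacing ** 2
        gamma2 = dist2 / dist_tol ** 2 + (evl - ref[i, j]) ** 2 / dd ** 2
        passed += gamma2.min() <= 1.0   # any evaluated point within tolerance
        total += 1
    return passed / total
```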
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Nivedh Manohar
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Anees Dhabaan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Hui-Kuo Shu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
|
132
|
Liu X, Fu T, Pan Z, Liu D, Hu W, Liu J, Zhang K. Automated Layer Segmentation of Retinal Optical Coherence Tomography Images Using a Deep Feature Enhanced Structured Random Forests Classifier. IEEE J Biomed Health Inform 2018; 23:1404-1416. [PMID: 30010602 DOI: 10.1109/jbhi.2018.2856276] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Optical coherence tomography (OCT) is a high-resolution, noninvasive imaging modality that has become one of the most prevalent techniques for ophthalmic diagnosis. Retinal layer segmentation is crucial for doctors to diagnose and study retinal diseases. However, manual segmentation is often a time-consuming and subjective process. In this work, we propose a new method for automatically segmenting retinal OCT images, which integrates deep features and hand-designed features to train a structured random forests classifier. The deep convolutional features are learned by a deep residual network. With the trained classifier, we obtain a contour probability map for each layer; finally, the shortest path is employed to achieve the final layer segmentation. The experimental results show that our method achieves good results, with a mean layer contour error of 1.215 pixels versus 1.464 pixels for the state-of-the-art method, and an F1-score of 0.885, which is also better than the 0.863 obtained by the state-of-the-art method.
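The final shortest-path step can be illustrated with a simple column-wise dynamic program over a layer-contour probability map; this is a sketch, and the paper's graph construction may differ:

```python
import numpy as np

def extract_layer(prob, max_jump=1):
    """prob: (rows, cols) probability that each pixel lies on the layer
    contour. Returns one row index per column, found as the minimum-cost
    path moving left to right with bounded vertical jumps."""
    cost = -np.log(prob + 1e-8)        # low cost where probability is high
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = np.argmin(acc[lo:hi, c - 1]) + lo
            acc[r, c] = cost[r, c] + acc[prev, c - 1]
            back[r, c] = prev
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):   # backtrack from the last column
        path.append(back[path[-1], c])
    return np.array(path[::-1])
```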
|
133
|
Xiang L, Wang Q, Nie D, Zhang L, Jin X, Qiao Y, Shen D. Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image. Med Image Anal 2018; 47:31-44. [PMID: 29674235 PMCID: PMC6410565 DOI: 10.1016/j.media.2018.03.011] [Citation(s) in RCA: 112] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2017] [Revised: 03/17/2018] [Accepted: 03/26/2018] [Indexed: 02/01/2023]
Abstract
Recently, more and more attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, the synthesis of computed tomography (CT) images from T1-weighted magnetic resonance (MR) images is of great importance, although the mapping between them is highly complex due to the large appearance gap between the two modalities. In this work, we aim to tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate the feature maps from MR images and then transform these feature maps forward through convolutional layers in the network. We can further compute a tentative CT synthesis midway through the flow of feature maps, and then embed this tentative CT synthesis result back into the feature maps. This embedding operation results in better feature maps, which are further transformed forward in the DECNN. After repeating this embedding procedure several times in the network, we can eventually synthesize a final CT image at the end of the DECNN. We have validated our proposed method on both brain and prostate imaging datasets, also comparing with the state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) demonstrates superior performance, in terms of both the perceptive quality of the synthesized CT image and the run-time cost for synthesizing a CT image.
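The embedding operation described above, where a tentative CT estimate is computed midway and concatenated back into the feature maps, can be sketched as a PyTorch module; the channel widths are illustrative assumptions:

```python
import torch
import torch.nn as nn

class EmbeddingBlock(nn.Module):
    """Computes a tentative CT from the current feature maps and embeds it
    back, so later layers see both the features and the rough synthesis."""
    def __init__(self, channels=64):
        super().__init__()
        self.to_ct = nn.Conv2d(channels, 1, 3, padding=1)    # tentative CT
        self.fuse = nn.Sequential(
            nn.Conv2d(channels + 1, channels, 3, padding=1), nn.ReLU())

    def forward(self, feats):
        tentative = self.to_ct(feats)
        feats = self.fuse(torch.cat([feats, tentative], dim=1))
        return feats, tentative   # tentative can also carry an auxiliary loss
```

Stacking several such blocks reproduces the repeated-embedding structure, with each tentative synthesis refining the next stage's features.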
Affiliation(s)
- Lei Xiang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Qian Wang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Dong Nie
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Lichi Zhang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Xiyao Jin
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Yu Qiao
- Shenzhen Key Lab of Computer Vision & Pattern Recognition, Shenzhen Institutes of Advanced Technology, CAS, Shenzhen, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
|
134
|
Deep-learned 3D black-blood imaging using automatic labelling technique and 3D convolutional neural networks for detecting metastatic brain tumors. Sci Rep 2018; 8:9450. [PMID: 29930257 PMCID: PMC6013490 DOI: 10.1038/s41598-018-27742-1] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2017] [Accepted: 06/05/2018] [Indexed: 11/16/2022] Open
Abstract
Black-blood (BB) imaging is used to complement contrast-enhanced 3D gradient-echo (CE 3D-GRE) imaging for detecting brain metastases but requires additional scan time. In this study, we proposed deep-learned 3D BB imaging with an auto-labelling technique and 3D convolutional neural networks for brain metastases detection without an additional BB scan. Patients were randomly selected for training (29 sets) and testing (36 sets). Two neuroradiologists independently evaluated deep-learned and original BB images, assessing the degree of blood vessel suppression and lesion conspicuity. Vessel signals were effectively suppressed in all patients. The figures of merit, which indicate the diagnostic performance of the radiologists, were 0.9708 with deep-learned and 0.9437 with original BB imaging, suggesting that deep-learned BB imaging is highly comparable to the original (the difference was not significant; p = 0.2142). In the per-patient analysis, sensitivities were 100% for both deep-learned and original BB imaging; however, the original BB imaging produced false positive results for two patients. In the per-lesion analysis, sensitivities were 90.3% for deep-learned and 100% for original BB images. There were eight false positive lesions with the original BB imaging but only one with the deep-learned BB imaging. Deep-learned 3D BB imaging can be effective for brain metastases detection.
|
135
|
Gong K, Yang J, Kim K, El Fakhri G, Seo Y, Li Q. Attenuation correction for brain PET imaging using deep neural network based on Dixon and ZTE MR images. Phys Med Biol 2018; 63:125011. [PMID: 29790857 PMCID: PMC6031313 DOI: 10.1088/1361-6560/aac763] [Citation(s) in RCA: 80] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Positron emission tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, PET attenuation correction is challenging, as magnetic resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis using forty patient data sets shows it is superior to other Dixon-based methods. When both Dixon and zero echo time (ZTE) images are available, we propose a modified U-net structure, named GroupU-net, to efficiently make use of both Dixon and ZTE information through group convolution modules as the network goes deeper. Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches can perform better than the standard methods, and the proposed network structure can further reduce the PET quantification error compared to the U-net structure.
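Group convolution, which lets Dixon-derived and ZTE-derived feature channels be processed by separate filter groups within one layer, can be sketched as follows; the channel counts are illustrative assumptions:

```python
import torch
import torch.nn as nn

# 32 channels from a Dixon branch and 32 from a ZTE branch, concatenated.
# groups=2 keeps the two halves separate: each group of 32 input channels
# is convolved only with its own set of filters.
grouped = nn.Conv3d(in_channels=64, out_channels=64, kernel_size=3,
                    padding=1, groups=2)

# A standard 1x1x1 convolution afterwards lets information mix across groups.
mixing = nn.Conv3d(64, 64, kernel_size=1)

x = torch.randn(1, 64, 8, 32, 32)       # (batch, channels, D, H, W)
y = mixing(grouped(x))
```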
Affiliation(s)
- Kuang Gong
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, United States of America; Department of Biomedical Engineering, University of California, Davis, CA 95616, United States of America
|
136
|
Panda R, Puhan N, Rao A, Padhy D, Panda G. Automated retinal nerve fiber layer defect detection using fundus imaging in glaucoma. Comput Med Imaging Graph 2018; 66:56-65. [DOI: 10.1016/j.compmedimag.2018.02.006] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2017] [Revised: 01/30/2018] [Accepted: 02/27/2018] [Indexed: 10/17/2022]
|
137
|
Fan F, Cong W, Wang G. Generalized backpropagation algorithm for training second-order neural networks. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING 2018; 34:e2956. [PMID: 29277960 DOI: 10.1002/cnm.2956] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/23/2017] [Revised: 12/10/2017] [Accepted: 12/11/2017] [Indexed: 06/07/2023]
Abstract
The artificial neural network is a popular framework in machine learning. To empower individual neurons, we recently suggested that the current type of neuron could be upgraded to a second-order counterpart, in which the linear operation between the inputs to a neuron and the associated weights is replaced with a nonlinear quadratic operation. A single second-order neuron already has a strong nonlinear modeling ability, such as implementing basic fuzzy logic operations. In this paper, we develop a general backpropagation algorithm to train networks consisting of second-order neurons. Numerical studies are performed to verify the generalized backpropagation algorithm.
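A quadratic (second-order) neuron and its gradient-based training can be sketched with automatic differentiation, which realizes the same chain-rule computation a hand-derived backpropagation would; the exact quadratic form below is one common variant, assumed for illustration:

```python
import torch

torch.manual_seed(0)
x = torch.randn(200, 2)
y = ((x[:, 0] * x[:, 1]) > 0).float()    # XOR-like target: needs nonlinearity

# One second-order neuron: sigma((w1.x + b1)*(w2.x + b2) + w3.(x*x) + c)
w1 = torch.randn(2, requires_grad=True)
w2 = torch.randn(2, requires_grad=True)
w3 = torch.randn(2, requires_grad=True)
b1 = torch.zeros(1, requires_grad=True)
b2 = torch.zeros(1, requires_grad=True)
c = torch.zeros(1, requires_grad=True)

opt = torch.optim.Adam([w1, w2, w3, b1, b2, c], lr=0.1)
for _ in range(300):
    z = (x @ w1 + b1) * (x @ w2 + b2) + (x * x) @ w3 + c
    loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final loss: {loss.item():.3f}")
```

A single linear neuron cannot fit this XOR-like target, while the quadratic term makes it learnable, which is the motivation the abstract describes.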
Affiliation(s)
- Fenglei Fan
- Biomedical Imaging Center, BME/CBIS, Rensselaer Polytechnic Institute, Troy, NY, USA
- Wenxiang Cong
- Biomedical Imaging Center, BME/CBIS, Rensselaer Polytechnic Institute, Troy, NY, USA
- Ge Wang
- Biomedical Imaging Center, BME/CBIS, Rensselaer Polytechnic Institute, Troy, NY, USA
|
138
|
Yang W, Zhong L, Chen Y, Lin L, Lu Z, Liu S, Wu Y, Feng Q, Chen W. Predicting CT Image From MRI Data Through Feature Matching With Learned Nonlinear Local Descriptors. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:977-987. [PMID: 29610076 DOI: 10.1109/tmi.2018.2790962] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Attenuation correction for positron emission tomography (PET)/magnetic resonance (MR) hybrid imaging systems and dose planning for MR-based radiation therapy remain challenging due to insufficient high-energy photon attenuation information. We present a novel approach that uses learned nonlinear local descriptors and feature matching to predict pseudo computed tomography (pCT) images from T1-weighted and T2-weighted magnetic resonance imaging (MRI) data. The nonlinear local descriptors are obtained by projecting the linear descriptors into a nonlinear high-dimensional space using an explicit feature map and low-rank approximation with supervised manifold regularization. The nearest neighbors of each local descriptor in the input MR images are searched within a constrained spatial range of the MR images in the training dataset. The pCT patches are then estimated through k-nearest-neighbor regression. The proposed method for pCT prediction is quantitatively analyzed on a dataset consisting of paired brain MRI and CT images from 13 subjects. Our method generates pCT images with a mean absolute error (MAE) of 75.25 ± 18.05 Hounsfield units, a peak signal-to-noise ratio of 30.87 ± 1.15 dB, a relative MAE of 1.56 ± 0.5% in PET attenuation correction, and a dose relative structure volume difference of 0.055 ± 0.107%, as compared with the true CT. The experimental results also show that our method outperforms four state-of-the-art methods.
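The combination of an explicit nonlinear feature map with low-rank approximation and k-nearest-neighbor regression can be sketched with scikit-learn; the Nyström map below stands in for the paper's supervised construction:

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.neighbors import KNeighborsRegressor

def fit_descriptor_knn(mr_descriptors, ct_values, n_components=100, k=5):
    """mr_descriptors: (N, d) linear local descriptors from training MR
    images; ct_values: (N,) pseudo-CT targets at the patch centres."""
    fmap = Nystroem(kernel="rbf", n_components=n_components)
    z = fmap.fit_transform(mr_descriptors)    # nonlinear, low-rank features
    knn = KNeighborsRegressor(n_neighbors=k,
                              weights="distance").fit(z, ct_values)
    return fmap, knn

def predict(fmap, knn, mr_descriptors):
    return knn.predict(fmap.transform(mr_descriptors))
```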
|
139
|
Cao X, Yang J, Gao Y, Wang Q, Shen D. Region-adaptive Deformable Registration of CT/MRI Pelvic Images via Learning-based Image Synthesis. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:10.1109/TIP.2018.2820424. [PMID: 29994091 PMCID: PMC6165687 DOI: 10.1109/tip.2018.2820424] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Registration of pelvic CT and MRI is highly desired, as it can facilitate effective fusion of the two modalities for prostate cancer radiation therapy, i.e., using CT for dose planning and MRI for accurate organ delineation. However, due to the large inter-modality appearance gap and the high shape/appearance variation of pelvic organs, pelvic CT/MRI registration is highly challenging. In this paper, we propose a region-adaptive deformable registration method for multi-modal pelvic image registration. Specifically, to handle the large appearance gap, we first perform both CT-to-MRI and MRI-to-CT image synthesis using a multi-target regression forest (MT-RF). Then, to use the complementary anatomical information in the two modalities for steering the registration, we select key points automatically from both modalities and use them together to guide correspondence detection in a region-adaptive fashion. That is, we mainly use CT to establish correspondences for bone regions, and MRI to establish correspondences for soft tissue regions. The number of key points is increased gradually during the registration to hierarchically guide the symmetric estimation of the deformation fields. Experiments on both intra-subject and inter-subject deformable registration show improved performance compared with state-of-the-art multi-modal registration methods, which demonstrates the potential of our method for routine prostate cancer radiation therapy.
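A multi-target regression forest, which predicts a whole output patch jointly rather than one voxel at a time, can be sketched with scikit-learn's native multi-output support; the feature dimensions and random placeholder arrays below are purely illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Multi-target regression forest: one forest jointly predicts every voxel of
# a CT patch (here 5x5x5 = 125 targets) from an MR feature vector.
mr_features = np.random.rand(1000, 60)        # placeholder training features
ct_patches = np.random.rand(1000, 125)        # flattened target CT patches

mtrf = RandomForestRegressor(n_estimators=50)
mtrf.fit(mr_features, ct_patches)             # multi-output y is supported
synth_patches = mtrf.predict(mr_features[:10])  # shape (10, 125)
```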
|
140
|
Ren X, Xiang L, Nie D, Shao Y, Zhang H, Shen D, Wang Q. Interleaved 3D-CNNs for joint segmentation of small-volume structures in head and neck CT images. Med Phys 2018; 45:2063-2075. [PMID: 29480928 DOI: 10.1002/mp.12837] [Citation(s) in RCA: 81] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2017] [Revised: 01/05/2018] [Accepted: 02/10/2018] [Indexed: 01/17/2023] Open
Abstract
PURPOSE Accurate 3D image segmentation is a crucial step in radiation therapy planning of head and neck tumors. These segmentation results are currently obtained by manual outlining of tissues, which is a tedious and time-consuming procedure. Automatic segmentation provides an alternative solution, which, however, is often difficult for small tissues (i.e., chiasm and optic nerves in head and neck CT images) because of their small volumes and highly diverse appearance/shape information. In this work, we propose to interleave multiple 3D convolutional neural networks (3D-CNNs) to attain automatic segmentation of small tissues in head and neck CT images. METHODS A 3D-CNN was designed to segment each structure of interest. To make full use of the image appearance information, multiscale patches are extracted to describe the center voxel under consideration and then input to the CNN architecture. Next, as neighboring tissues are often highly related in the physiological and anatomical perspectives, we interleave the CNNs designated for the individual tissues. In this way, the tentative segmentation result of a specific tissue can contribute to refining the segmentations of other neighboring tissues. Finally, as more CNNs are interleaved and cascaded, a complex network of CNNs can be derived, such that all tissues can be jointly segmented and iteratively refined. RESULTS Our method was validated on a set of 48 CT images, obtained from the Medical Image Computing and Computer Assisted Intervention (MICCAI) Challenge 2015. The Dice coefficient (DC) and the 95% Hausdorff distance (95HD) were computed to measure the accuracy of the segmentation results. The proposed method achieves higher segmentation accuracy (average DC: 0.58 ± 0.17 for optic chiasm, and 0.71 ± 0.08 for optic nerve; 95HD: 2.81 ± 1.56 mm for optic chiasm, and 2.23 ± 0.90 mm for optic nerve) than the MICCAI challenge winner (average DC: 0.38 for optic chiasm, and 0.68 for optic nerve; 95HD: 3.48 mm for optic chiasm, and 2.48 mm for optic nerve). CONCLUSION An accurate and automatic segmentation method has been proposed for small tissues in head and neck CT images, which is important for the planning of radiotherapy.
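Interleaving, in which one network's tentative segmentation becomes an extra input channel for the next network, can be sketched as follows; this is a two-tissue illustration with toy architectures, not the paper's networks:

```python
import torch
import torch.nn as nn

def small_cnn(in_ch):
    # toy 3D segmentation head: in_ch channels -> 1 probability map
    return nn.Sequential(nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(16, 1, 3, padding=1), nn.Sigmoid())

net_chiasm = small_cnn(in_ch=1)      # sees the CT only
net_nerve = small_cnn(in_ch=2)       # sees the CT + tentative chiasm map

ct = torch.randn(1, 1, 16, 64, 64)
chiasm_prob = net_chiasm(ct)
nerve_prob = net_nerve(torch.cat([ct, chiasm_prob], dim=1))
# A further round could feed nerve_prob back into a refined chiasm network,
# cascading the refinement as described in the abstract.
```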
Affiliation(s)
- Xuhua Ren
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Lei Xiang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Dong Nie
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Yeqin Shao
- Nantong University, Nantong, Jiangsu, 226019, China
- Huan Zhang
- Department of Radiology, Ruijin Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Korea
- Qian Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
|
141
|
Chartsias A, Joyce T, Giuffrida MV, Tsaftaris SA. Multimodal MR Synthesis via Modality-Invariant Latent Representation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:803-814. [PMID: 29053447 PMCID: PMC5904017 DOI: 10.1109/tmi.2017.2764326] [Citation(s) in RCA: 131] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
We propose a multi-input multi-output fully convolutional neural network model for MRI synthesis. The model is robust to missing data, as it benefits from, but does not require, additional input modalities. The model is trained end-to-end and learns to embed all input modalities into a shared modality-invariant latent space. These latent representations are then combined into a single fused representation, which is transformed into the target output modality with a learnt decoder. We avoid the need for curriculum learning by exploiting the fact that the various input modalities are highly correlated. We also show that by incorporating information from segmentation masks the model can both decrease its error and generate data with synthetic lesions. We evaluate our model on the ISLES and BRATS data sets and demonstrate statistically significant improvements over state-of-the-art methods for single-input tasks. This improvement increases further when multiple input modalities are used, demonstrating the benefits of learning a common latent space, again resulting in a statistically significant improvement over the current best method. Finally, we demonstrate our approach on non-skull-stripped brain images, producing a statistically significant improvement over the previous best method. Code is made publicly available at https://github.com/agis85/multimodal_brain_synthesis.
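The shared-latent-space design, with per-modality encoders, a fused representation, and a learned decoder, can be sketched as below; pixel-wise max fusion is assumed here as one simple fusion operator, and the architectures are illustrative:

```python
import torch
import torch.nn as nn

def encoder():
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())

class MultimodalSynth(nn.Module):
    def __init__(self, n_modalities=3):
        super().__init__()
        self.encoders = nn.ModuleList(encoder() for _ in range(n_modalities))
        self.decoder = nn.Sequential(
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, inputs):
        """inputs: list of modality tensors; this toy version pairs them
        with the first len(inputs) encoders, so fewer inputs still work."""
        latents = [enc(x) for enc, x in zip(self.encoders, inputs)]
        fused = torch.stack(latents).max(dim=0).values   # modality-invariant
        return self.decoder(fused)

model = MultimodalSynth()
t1, t2 = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
out = model([t1, t2])    # runs with any non-empty subset of inputs
```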
Affiliation(s)
- Mario Valerio Giuffrida
- School of Engineering at The University of Edinburgh. Giuffrida and Tsaftaris are also with The Alan Turing Institute of London. Giuffrida is also with IMT Lucca
- Sotirios A. Tsaftaris
- School of Engineering at The University of Edinburgh. Giuffrida and Tsaftaris are also with The Alan Turing Institute of London. Giuffrida is also with IMT Lucca
|
142
|
Hu Y, Zhang L. Pseudo CT Generation Based on 3D Group Feature Extraction and Alternative Regression Forest for MRI-Only Radiotherapy. INT J PATTERN RECOGN 2018. [DOI: 10.1142/s0218001418550091] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
In recent decades, magnetic resonance imaging (MRI) has attracted attention as the sole imaging modality for radiation therapy. This nontrivial task requires the application of pseudo computed tomography (PCT) generation methods. On the one hand, the electron density information provided by a CT scan is critical for calculating the 3D dose distribution in tissue; on the other hand, the bone image provided by CT is precise enough for constructing a radiograph. A combined MRI/CT workflow brings together the soft-tissue contrast contributed by MRI and the virtues of CT imaging; however, owing to the imbalance between voxel intensities in MRI and CT scans, it also has shortcomings. Inspired by random-forest-based PCT estimation, this paper investigates the potential of a 3D group feature, built with a 3D block-matching method around each correlated central voxel, as the input to random forest regression. Four types of features, at the voxel level, the sub-regional level, the whole-cubic level with adaptive weighted conjunction, and a compressed level, are introduced to obtain robust features. Group-based random forest regression is then used to estimate the PCT from the corresponding MRI alone; features are extracted from 3D cubic MRI patches and mapped to 3D cubic CT patches, which decreases the computational difficulty by representing the MR patches in an anatomical feature space. An alternative regression forest is used to solve the regression task, enhancing prediction power compared with a standard random forest. The proposed method efficiently captures the correlation observable between CT and MR images on the basis of the alternative random forest (ARF) with cubic features, and the experimental results show the performance and effectiveness of the proposed method compared with recent learning-based and atlas-based (AB) methods.
Affiliation(s)
- Yongsheng Hu
- School of Electrical and Information Engineering, Tianjin University, Tianjin, P. R. China
- School of Information Engineering, Binzhou University, Shandong, P. R. China
- Liyi Zhang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, P. R. China
- School of Information Engineering, Tianjin University of Commerce, Tianjin, P. R. China
|
143
|
Zhensong Wang, Lifang Wei, Li Wang, Yaozong Gao, Wufan Chen, Dinggang Shen. Hierarchical Vertex Regression-Based Segmentation of Head and Neck CT Images for Radiotherapy Planning. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:923-937. [PMID: 29757737 PMCID: PMC5954838 DOI: 10.1109/tip.2017.2768621] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Segmenting organs at risk from head and neck CT images is a prerequisite for the treatment of head and neck cancer using intensity modulated radiotherapy. However, accurate and automatic segmentation of organs at risk is a challenging task due to the low contrast of soft tissue and image artifacts in CT images. Shape priors have been proven effective in addressing this challenging task. However, conventional methods incorporating shape priors often suffer from sensitivity to shape initialization and to shape variations across individuals. In this paper, we propose a novel approach to incorporate shape priors into a hierarchical learning-based model. The contributions of our proposed approach are as follows: 1) a novel mechanism for critical vertex identification is proposed to identify vertices with distinctive appearance and strong consistency across different subjects; 2) a new strategy of hierarchical vertex regression is used to gradually locate more vertices with the guidance of previously located vertices; and 3) an innovative framework of joint shape and appearance learning is developed to capture salient shape and appearance features simultaneously. Using these innovative strategies, our proposed approach essentially overcomes the drawbacks of conventional shape-based segmentation methods. Experimental results show that our approach achieves much better results than state-of-the-art methods.
Collapse
|
144
|
Lei Y, Tang X, Higgins K, Wang T, Liu T, Dhabaan A, Shim H, Curran WJ, Yang X. Improving Image Quality of Cone-Beam CT Using Alternating Regression Forest. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2018; 10573:1057345. [PMID: 31456600 PMCID: PMC6711599 DOI: 10.1117/12.2292886] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
We propose a CBCT image quality improvement method based on anatomic signatures and an auto-context alternating regression forest. Patient-specific anatomical features are extracted from the aligned training images and serve as signatures for each voxel. The most relevant and informative features are identified to train the regression forest. The well-trained regression forest is used to correct the CBCT of a new patient. The proposed algorithm was evaluated using 10 patients' data with CBCT and CT images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR) and normalized cross-correlation (NCC) indices were used to quantify the correction accuracy of the proposed algorithm. The mean MAE, PSNR and NCC between corrected CBCT and ground-truth CT were 16.66 HU, 37.28 dB and 0.98, which demonstrates the CBCT correction accuracy of the proposed learning-based method. We have developed a learning-based method and demonstrated that it can significantly improve CBCT image quality. The proposed method has great potential to improve CBCT image quality to a level close to that of planning CT, thereby allowing its quantitative use in CBCT-guided adaptive radiotherapy.
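The auto-context element can be sketched as below, assuming per-voxel CBCT patch features are already extracted: each round retrains a regression forest with the previous round's predicted CT numbers appended as a context feature. A plain random forest stands in for the alternating regression forest, and the anatomic-signature feature selection is omitted.

```python
# Sketch: auto-context loop for learning-based CBCT intensity correction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def auto_context_forests(X_patch, y_ct, n_rounds=3):
    """X_patch: (n_voxels, n_feat) CBCT patch features;
    y_ct: (n_voxels,) planning-CT intensities at the same voxels."""
    forests, context = [], np.zeros((len(y_ct), 1))
    for _ in range(n_rounds):
        X = np.hstack([X_patch, context])
        rf = RandomForestRegressor(n_estimators=50, n_jobs=-1).fit(X, y_ct)
        forests.append(rf)
        context = rf.predict(X).reshape(-1, 1)  # prediction becomes a feature
    return forests
```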
Collapse
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
| | - Xiangyang Tang
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
| | - Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
| | - Anees Dhabaan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
| | - Hyunsuk Shim
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
| | - Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
| |
Collapse
|
145
|
Choi J, Song E, Lee S. L-Tree: A Local-Area-Learning-Based Tree Induction Algorithm for Image Classification. SENSORS (BASEL, SWITZERLAND) 2018; 18:E306. [PMID: 29361699 PMCID: PMC5795769 DOI: 10.3390/s18010306] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/13/2017] [Revised: 01/05/2018] [Accepted: 01/18/2018] [Indexed: 12/05/2022]
Abstract
The decision tree is one of the most effective tools for deriving meaningful outcomes from image data acquired by visual sensors. Owing to its reliability, superior generalization ability, and ease of implementation, the tree model has been widely used in various applications. However, in image classification problems, conventional tree methods use only a few sparse attributes as the splitting criterion. Consequently, they suffer from several drawbacks in terms of performance and environmental sensitivity. To overcome these limitations, this paper introduces a new tree induction algorithm that classifies images on the basis of local area learning. To train our predictive model, we extract a random local area within the image and use it as a feature for classification. In addition, the self-organizing map, a clustering technique, is used for node learning. We also adopt a random sampled optimization technique to search for the optimal node. Finally, each trained node stores the weights that represent the training data and class probabilities. Thus, the recursively trained tree classifies data hierarchically based on the local similarity at each node. The proposed tree is a type of predictive model that offers benefits in terms of conserving the image's semantic information compared with conventional tree methods. Consequently, it exhibits improved performance under various conditions, such as noise and illumination changes. Moreover, the proposed algorithm can improve generalization ability owing to its randomness, and it can be easily applied to ensemble techniques. To evaluate the performance of the proposed algorithm, we perform quantitative and qualitative comparisons with various tree-based methods using four image datasets. The results show that our algorithm not only achieves a lower classification error than conventional methods but also exhibits stable performance even under unfavorable conditions such as noise and illumination changes.
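A hypothetical sketch of a single local-area-learning node, assuming 2D grayscale images: a random patch location is fixed, training patches at that location are clustered, and samples are routed to the child of the nearest cluster center while per-child class counts are stored. KMeans stands in for the self-organizing map used in the paper, and all names are illustrative.

```python
# Sketch: one local-area-learning tree node (KMeans in place of a SOM).
import numpy as np
from sklearn.cluster import KMeans

class LTreeNode:
    def __init__(self, patch=5, n_children=2, seed=0):
        self.rng = np.random.default_rng(seed)
        self.patch, self.k = patch, n_children

    def _crop(self, images):
        p = self.patch
        return images[:, self.y0:self.y0 + p,
                      self.x0:self.x0 + p].reshape(len(images), -1)

    def fit(self, images, labels):
        """images: (n, H, W) grayscale array; labels: (n,) int array."""
        h, w = images.shape[1:3]
        self.y0 = self.rng.integers(0, h - self.patch)  # random local area
        self.x0 = self.rng.integers(0, w - self.patch)
        self.km = KMeans(n_clusters=self.k, n_init=10).fit(self._crop(images))
        # per-child class counts, i.e., unnormalized class probabilities
        self.counts = [np.bincount(labels[self.km.labels_ == c],
                                   minlength=labels.max() + 1)
                       for c in range(self.k)]
        return self

    def route(self, images):
        """Send each image to the child with the most similar local area."""
        return self.km.predict(self._crop(images))
```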
Collapse
Affiliation(s)
- Jaesung Choi
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Korea.
| | - Eungyeol Song
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Korea.
| | - Sangyoun Lee
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Korea.
| |
Collapse
|
146
|
Xiang L, Qiao Y, Nie D, An L, Wang Q, Shen D. Deep Auto-context Convolutional Neural Networks for Standard-Dose PET Image Estimation from Low-Dose PET/MRI. Neurocomputing 2017; 267:406-416. [PMID: 29217875 PMCID: PMC5714510 DOI: 10.1016/j.neucom.2017.06.048] [Citation(s) in RCA: 167] [Impact Index Per Article: 20.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Positron emission tomography (PET) is an essential technique in many clinical applications such as tumor detection and brain disorder diagnosis. In order to obtain high-quality PET images, a standard-dose radioactive tracer is needed, which inevitably introduces a risk of radiation exposure damage. To reduce the patient's exposure to radiation while maintaining high PET image quality, in this paper we propose a deep learning architecture to estimate the high-quality standard-dose PET (SPET) image from the combination of the low-quality low-dose PET (LPET) image and the accompanying T1-weighted acquisition from magnetic resonance imaging (MRI). Specifically, we adapt a convolutional neural network (CNN) to the two-channel input of LPET and T1, and directly learn the end-to-end mapping between the inputs and the SPET output. Then, we integrate multiple CNN modules following the auto-context strategy, such that the tentatively estimated SPET of an early CNN can be iteratively refined by subsequent CNNs. Validations on real human brain PET/MRI data show that our proposed method provides estimation quality of the PET images competitive with state-of-the-art methods. Meanwhile, our method is highly efficient at test time, requiring ~2 seconds to estimate an entire SPET image for a new subject, in contrast to ~16 minutes for the state-of-the-art method. These results demonstrate the potential of our method in real clinical applications.
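The auto-context cascade can be sketched compactly in PyTorch, under the assumption that the first module sees the (LPET, T1) pair and every later module additionally receives the previous SPET estimate as a third channel; the layer sizes below are placeholders, not the paper's architecture.

```python
# Sketch: auto-context CNN cascade for SPET estimation (PyTorch).
import torch
import torch.nn as nn

def conv_block(in_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv3d(32, 1, 3, padding=1),
    )

class AutoContextPET(nn.Module):
    def __init__(self, n_stages=3):
        super().__init__()
        # stage 0 sees (LPET, T1); later stages also see the last estimate
        self.stages = nn.ModuleList(
            [conv_block(2)] + [conv_block(3) for _ in range(n_stages - 1)])

    def forward(self, lpet, t1):
        est = self.stages[0](torch.cat([lpet, t1], dim=1))
        for stage in self.stages[1:]:
            est = stage(torch.cat([lpet, t1, est], dim=1))  # refine
        return est
```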
Collapse
Affiliation(s)
- Lei Xiang
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Yu Qiao
- Shenzhen key lab of Comp. Vis. & Pat. Rec., Shenzhen Institutes of Advanced Technology, CAS, Shenzhen, China
| | - Dong Nie
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
| | - Le An
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
| | - Qian Wang
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
| |
Collapse
|
147
|
Cao X, Yang J, Gao Y, Guo Y, Wu G, Shen D. Dual-core steered non-rigid registration for multi-modal images via bi-directional image synthesis. Med Image Anal 2017; 41:18-31. [PMID: 28533050 PMCID: PMC5896773 DOI: 10.1016/j.media.2017.05.004] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2017] [Revised: 05/05/2017] [Accepted: 05/09/2017] [Indexed: 12/20/2022]
Abstract
In prostate cancer radiotherapy, computed tomography (CT) is widely used for dose planning purposes. However, the low soft-tissue contrast of CT makes manual contouring of the major pelvic organs difficult. In contrast, magnetic resonance imaging (MRI) provides high soft-tissue contrast, which makes it ideal for accurate manual contouring. Therefore, contouring accuracy on CT can be significantly improved if the contours in MRI can be mapped to the CT domain by registering the MRI with the CT of the same subject, which would eventually lead to higher treatment efficacy. In this paper, we propose a bi-directional image synthesis based approach for MRI-to-CT pelvic image registration. First, we use a patch-wise random forest with an auto-context model to learn the appearance mapping from the CT to the MRI domain, and then vice versa. Consequently, we can synthesize a pseudo-MRI whose anatomical structures are exactly the same as the CT's but with MRI-like appearance, as well as a pseudo-CT. Our MRI-to-CT registration can then be steered in a dual manner, by simultaneously estimating two deformation pathways: 1) one from the pseudo-CT to the actual CT and 2) another from the actual MRI to the pseudo-MRI. Next, a dual-core deformation fusion framework is developed to iteratively and effectively combine these two registration pathways by using complementary information from both modalities. Experiments on a dataset with real pelvic CT and MRI show improved registration performance of the proposed method compared with conventional registration methods, indicating its high potential for translation to routine radiation therapy.
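The fusion idea can be caricatured as follows: the two estimated displacement fields describe the same MRI-to-CT transformation, so a voxel-wise confidence-weighted average gives a fused field. Real dual-core fusion re-estimates each pathway iteratively; this one-shot average, and all names in it, are illustrative.

```python
# Sketch: one-shot fusion of the two deformation pathways.
import numpy as np

def fuse_deformations(phi_sct, phi_smr, w_sct, w_smr, eps=1e-8):
    """phi_*: (X, Y, Z, 3) displacement fields on a common grid;
    w_*:   (X, Y, Z) voxel-wise confidence maps (e.g., local similarity)."""
    w_sct, w_smr = w_sct[..., None], w_smr[..., None]
    return (w_sct * phi_sct + w_smr * phi_smr) / (w_sct + w_smr + eps)
```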
Collapse
Affiliation(s)
- Xiaohuan Cao
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, China
| | - Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Yanrong Guo
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea.
| |
Collapse
|
148
|
Nie D, Trullo R, Lian J, Petitjean C, Ruan S, Wang Q, Shen D. Medical Image Synthesis with Context-Aware Generative Adversarial Networks. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2017; 10435:417-425. [PMID: 30009283 PMCID: PMC6044459 DOI: 10.1007/978-3-319-66179-7_48] [Citation(s) in RCA: 184] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
Abstract
Computed tomography (CT) is critical for various clinical applications, e.g., radiation treatment planning and PET attenuation correction in MRI/PET scanners. However, CT exposes patients to ionizing radiation during acquisition, which may cause side effects. Compared to CT, magnetic resonance imaging (MRI) is much safer and involves no ionizing radiation. Researchers have therefore recently been motivated to estimate the CT image from the corresponding MR image of the same subject for radiation treatment planning. In this paper, we propose a data-driven approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate CT given the MR image. To better model the nonlinear mapping from MRI to CT and produce more realistic images, we use an adversarial training strategy to train the FCN. Moreover, we propose an image-gradient-difference-based loss function to alleviate the blurriness of the generated CT. We further apply the Auto-Context Model (ACM) to implement a context-aware generative adversarial network. Experimental results show that our method is accurate and robust for predicting CT images from MR images, and outperforms the three state-of-the-art methods under comparison.
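A plausible form of the image-gradient-difference loss is sketched below in PyTorch: it penalizes mismatches between the finite-difference gradient magnitudes of the real and generated volumes along each spatial axis. The exact formulation, weighting, and reduction used in the paper are assumptions here.

```python
# Sketch: image-gradient-difference loss for 3D volumes (PyTorch).
import torch

def gradient_difference_loss(real, fake):
    """real, fake: (N, C, D, H, W) tensors."""
    loss = 0.0
    for dim in (2, 3, 4):  # finite differences along D, H, W
        g_real = real.diff(dim=dim).abs()
        g_fake = fake.diff(dim=dim).abs()
        loss = loss + ((g_real - g_fake) ** 2).mean()
    return loss
```

Such a term would be added to the adversarial and reconstruction losses when training the generator.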
Collapse
Affiliation(s)
- Dong Nie
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, USA
| | - Roger Trullo
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Normandie Univ, INSA Rouen, LITIS, 76000 Rouen, France
| | - Jun Lian
- Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, USA
| | | | - Su Ruan
- Normandie Univ, INSA Rouen, LITIS, 76000 Rouen, France
| | - Qian Wang
- School of Biomedical Engineering, Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
| |
Collapse
|
149
|
Zhao C, Carass A, Lee J, Jog A, Prince JL. A supervoxel based random forest synthesis framework for bidirectional MR/CT synthesis. SIMULATION AND SYNTHESIS IN MEDICAL IMAGING : ... INTERNATIONAL WORKSHOP, SASHIMI ..., HELD IN CONJUNCTION WITH MICCAI ..., PROCEEDINGS. SASHIMI (WORKSHOP) 2017; 10557:33-40. [PMID: 30221260 DOI: 10.1007/978-3-319-68127-6_4] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
Synthesizing magnetic resonance (MR) and computed tomography (CT) images from each other has important implications for clinical neuroimaging. The MR-to-CT direction is critical for MRI-based radiotherapy planning and dose computation, whereas the CT-to-MR direction can provide an economical alternative to real MRI for image processing tasks. Additionally, synthesis in both directions can enhance MR/CT multi-modal image registration. Existing approaches have focused on synthesizing CT from MR. In this paper, we propose a multi-atlas based hybrid method to synthesize T1-weighted MR images from CT and CT images from T1-weighted MR images using a common framework. The task is carried out by: (a) computing a label field based on supervoxels for the subject image using joint label fusion; (b) correcting this result using a random forest classifier (RF-C); (c) spatial smoothing using a Markov random field; and (d) synthesizing intensities using a set of RF regressors, one trained for each label. The algorithm is evaluated using a set of six registered CT and MR image pairs of the whole head.
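Steps (a) and (d) might look like the following, assuming scikit-image's SLIC for the supervoxels and one fitted scikit-learn regressor per tissue label; the label fusion, RF-C correction, and MRF smoothing of steps (a)-(c) are omitted, and all names are illustrative.

```python
# Sketch: supervoxel partition (step a) and per-label synthesis (step d).
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestRegressor

def supervoxels(volume, n_segments=2000):
    # channel_axis=None marks the 3D volume as single-channel
    # (keyword assumed from recent scikit-image versions)
    return slic(volume, n_segments=n_segments, compactness=0.1,
                channel_axis=None)

def synthesize(labels, features, regressors):
    """labels:     (n_voxels,) tissue labels after steps (a)-(c);
    features:   (n_voxels, n_feat) per-voxel features;
    regressors: dict label -> fitted RandomForestRegressor."""
    out = np.zeros(len(labels))
    for lab, rf in regressors.items():
        mask = labels == lab
        if mask.any():
            out[mask] = rf.predict(features[mask])  # one regressor per label
    return out
```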
Collapse
Affiliation(s)
- Can Zhao
- Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218
| | - Aaron Carass
- Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218
| | - Junghoon Lee
- Dept. of Radiation Oncology, The Johns Hopkins School of Medicine, Baltimore, MD 21287
| | - Amod Jog
- Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218
| | - Jerry L Prince
- Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218
| |
Collapse
|
150
|
Learning-based structurally-guided construction of resting-state functional correlation tensors. Magn Reson Imaging 2017; 43:110-121. [PMID: 28729016 DOI: 10.1016/j.mri.2017.07.008] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2017] [Revised: 05/22/2017] [Accepted: 07/13/2017] [Indexed: 12/18/2022]
Abstract
Functional magnetic resonance imaging (fMRI) measures changes in blood-oxygenation-level-dependent (BOLD) signals to detect brain activity. It has recently been reported that the spatial correlation patterns of resting-state BOLD signals in the white matter (WM) also convey WM information of the kind measured by diffusion tensor imaging (DTI). These correlation patterns can be captured using the functional correlation tensor (FCT), which is analogous to the diffusion tensor (DT) obtained from DTI. In this paper, we propose a noise-robust FCT estimation method aimed at further improving FCT quality and making it suitable for further neuroscience studies. The method consists of three major steps. First, we estimate the initial FCT using a patch-based approach to BOLD signal correlation to improve noise robustness. Second, by utilizing the relationship between functional and diffusion data, we employ a regression forest model to learn the mapping between the initial FCTs and the corresponding DTs from training data. The learned forest can then be applied to predict DTI-like tensors given the initial FCTs from testing fMRI data. Third, we re-estimate an enhanced FCT by utilizing the DTI-like tensors as feedback guidance to further improve FCT computation. We demonstrate the utility of the enhanced FCTs in Alzheimer's disease (AD) diagnosis by distinguishing mild cognitive impairment (MCI) patients from normal subjects.
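A minimal sketch of an initial FCT estimate at a single voxel, before the paper's patch-based robustification and forest-based refinement: correlations between the voxel's BOLD time series and its 26 neighbors are spread over the neighbor directions' outer products to form a 3x3 tensor analogous to a diffusion tensor. The exact construction used in the paper may differ.

```python
# Sketch: initial functional correlation tensor at one voxel.
import numpy as np

def correlation_tensor(bold, x, y, z):
    """bold: (X, Y, Z, T) resting-state series; returns a 3x3 tensor."""
    center = bold[x, y, z]
    tensor = np.zeros((3, 3))
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if dx == dy == dz == 0:
                    continue
                d = np.array([dx, dy, dz], float)
                d /= np.linalg.norm(d)
                r = np.corrcoef(center, bold[x + dx, y + dy, z + dz])[0, 1]
                tensor += max(r, 0.0) * np.outer(d, d)  # keep positive corr.
    return tensor / 26.0
```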
Collapse
|