1. Pan S, Abouei E, Peng J, Qian J, Wynne JF, Wang T, Chang CW, Roper J, Nye JA, Mao H, Yang X. Full-dose whole-body PET synthesis from low-dose PET using high-efficiency denoising diffusion probabilistic model: PET consistency model. Med Phys 2024. PMID: 38588512. DOI: 10.1002/mp.17068. Received 09/16/2023; revised 03/26/2024; accepted 03/26/2024.
Abstract
PURPOSE Positron Emission Tomography (PET) has been a commonly used imaging modality in broad clinical applications. One of the most important tradeoffs in PET imaging is between image quality and radiation dose: high image quality comes with high radiation exposure. Improving image quality is desirable for all clinical applications while minimizing radiation exposure is needed to reduce risk to patients. METHODS We introduce PET Consistency Model (PET-CM), an efficient diffusion-based method for generating high-quality full-dose PET images from low-dose PET images. It employs a two-step process, adding Gaussian noise to full-dose PET images in the forward diffusion, and then denoising them using a PET Shifted-window Vision Transformer (PET-VIT) network in the reverse diffusion. The PET-VIT network learns a consistency function that enables direct denoising of Gaussian noise into clean full-dose PET images. PET-CM achieves state-of-the-art image quality while requiring significantly less computation time than other methods. Evaluation with normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), multi-scale structure similarity index (SSIM), normalized cross-correlation (NCC), and clinical evaluation including Human Ranking Score (HRS) and Standardized Uptake Value (SUV) Error analysis shows its superiority in synthesizing full-dose PET images from low-dose inputs. RESULTS In experiments comparing eighth-dose to full-dose images, PET-CM demonstrated impressive performance with NMAE of 1.278 ± 0.122%, PSNR of 33.783 ± 0.824 dB, SSIM of 0.964 ± 0.009, NCC of 0.968 ± 0.011, HRS of 4.543, and SUV Error of 0.255 ± 0.318%, with an average generation time of 62 s per patient. This is a significant improvement compared to the state-of-the-art diffusion-based model with PET-CM reaching this result 12× faster. 
Similarly, in the quarter-dose to full-dose image experiments, PET-CM delivered competitive outcomes, achieving an NMAE of 0.973 ± 0.066%, PSNR of 36.172 ± 0.801 dB, SSIM of 0.984 ± 0.004, NCC of 0.990 ± 0.005, HRS of 4.428, and SUV Error of 0.151 ± 0.192% using the same generation process, underlining its high quantitative and clinical precision in both denoising scenarios. CONCLUSIONS We propose PET-CM, the first efficient diffusion-model-based method for estimating full-dose PET images from low-dose images. PET-CM provides quality comparable to the state-of-the-art diffusion model with higher efficiency. This approach makes it possible to maintain high-quality PET images suitable for clinical use while mitigating the risks associated with radiation. The code is available at https://github.com/shaoyanpan/Full-dose-Whole-body-PET-Synthesis-from-Low-dose-PET-Using-Consistency-Model.
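The "consistency function" named in the abstract is what permits single-step denoising: it maps any noised sample directly back to a clean image while satisfying the boundary condition f(x, ε) = x at the smallest noise level. A minimal scalar sketch of the standard consistency-model parameterization from the general consistency-model literature (not the authors' released code; SIGMA_DATA, EPS, and the stand-in network are illustrative assumptions):

```python
import math

SIGMA_DATA = 0.5   # assumed data standard deviation (a hyperparameter)
EPS = 0.002        # smallest noise level, where f(x, EPS) must equal x

def c_skip(t):
    # Weight on the raw input; equals 1 exactly at t = EPS.
    return SIGMA_DATA**2 / ((t - EPS)**2 + SIGMA_DATA**2)

def c_out(t):
    # Weight on the network output; equals 0 exactly at t = EPS.
    return SIGMA_DATA * (t - EPS) / math.sqrt(SIGMA_DATA**2 + t**2)

def consistency_fn(x, t, network):
    # f(x, t) = c_skip(t) * x + c_out(t) * F(x, t): one network call
    # yields a clean estimate, versus many steps for a plain DDPM.
    return c_skip(t) * x + c_out(t) * network(x, t)
```

Because c_skip and c_out enforce the boundary condition by construction, the network F is free-form; this is what lets the model collapse the reverse diffusion into one (or a few) evaluations.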
Affiliation(s)
- Shaoyan Pan: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
- Elham Abouei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Junbo Peng: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Joshua Qian: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob F Wynne: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Chih-Wei Chang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jonathon A Nye: Radiology and Radiological Science, Medical University of South Carolina, Charleston, South Carolina, USA
- Hui Mao: Department of Radiology and Imaging Science, and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
2. Davis TM, Luca K, Sudmeier LJ, Buchwald ZS, Khan MK, Yang X, Schreibmann E, Zhang J, Roper J. Total scalp irradiation: A study comparing multiple types of bolus and VMAT optimization techniques. J Appl Clin Med Phys 2024; 25:e14260. PMID: 38243628. PMCID: PMC11005987. DOI: 10.1002/acm2.14260. Received 09/11/2023; revised 11/13/2023; accepted 12/19/2023.
Abstract
PURPOSE To investigate bolus design and VMAT optimization settings for total scalp irradiation. METHODS Three silicone bolus designs (flat, hat, and custom) from .decimal were evaluated for adherence to five anthropomorphic head phantoms. The flat bolus was cut from a silicone sheet. The generic hat bolus resembles an elongated swim cap, while the custom bolus is manufactured by injecting silicone into a 3D-printed mold. Bolus placement time was recorded. Air gaps between bolus and scalp were quantified on CT images. The dosimetric effect of air gaps on target coverage was evaluated in a treatment planning study in which the scalp was planned to 60 Gy in 30 fractions. A noncoplanar VMAT technique based on gEUD penalties was investigated, exploring the full range of gEUD alpha values to determine which settings achieve sufficient target coverage while minimizing brain dose. ANOVA and the t-test were used to evaluate statistically significant differences (threshold = 0.05). RESULTS The flat bolus took 32 ± 5.9 min to construct and place, which was significantly longer (p < 0.001) than 0.67 ± 0.2 min for the generic hat bolus or 0.53 ± 0.10 min for the custom bolus. The air gap volumes were 38 ± 9.3 cc, 32 ± 14 cc, and 17 ± 7.0 cc for the flat, hat, and custom boluses, respectively. While the air gap differences between the flat and custom boluses were significant (p = 0.011), there were no significant dosimetric differences in PTV coverage at V57Gy or V60Gy. In the VMAT optimization study, a gEUD alpha of 2 was found to minimize the mean brain dose. CONCLUSIONS Two challenging aspects of total scalp irradiation were investigated: bolus design and plan optimization. Results from this study show opportunities to shorten bolus fabrication time during simulation and to create high-quality treatment plans using a straightforward VMAT template with simple optimization settings.
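The gEUD penalty used in the VMAT optimization reduces a dose distribution to a single scalar, (1/N Σ dᵢᵃ)^(1/a), with the alpha parameter controlling whether hot or cold spots dominate. A minimal sketch (the dose values in the usage comments are illustrative, not from the study):

```python
def geud(doses, a):
    """Generalized equivalent uniform dose: (1/N * sum(d_i^a))^(1/a).

    a = 1 gives the mean dose; large positive a emphasizes hot spots
    (serial organs); large negative a emphasizes cold spots (targets).
    """
    n = len(doses)
    return (sum(d**a for d in doses) / n) ** (1.0 / a)
```

In the study, an alpha of 2 was found to minimize the mean brain dose; sweeping `a` over a range and re-optimizing is exactly the kind of exploration the abstract describes.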
Affiliation(s)
- Tanisha M. Davis: Medical Dosimetry Program, Southern Illinois University, Carbondale, Illinois, USA; Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Kirk Luca: Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Lisa J. Sudmeier: Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Mohammad K. Khan: Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang: Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Jiahan Zhang: Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Justin Roper: Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
3. Pan S, Abouei E, Wynne J, Chang CW, Wang T, Qiu RLJ, Li Y, Peng J, Roper J, Patel P, Yu DS, Mao H, Yang X. Synthetic CT generation from MRI using 3D transformer-based denoising diffusion model. Med Phys 2024; 51:2538-2548. PMID: 38011588. PMCID: PMC10994752. DOI: 10.1002/mp.16847. Received 05/28/2023; revised 11/02/2023; accepted 11/03/2023.
Abstract
BACKGROUND AND PURPOSE Magnetic resonance imaging (MRI)-based synthetic computed tomography (sCT) simplifies radiation therapy treatment planning by eliminating the need for CT simulation and error-prone image registration, ultimately reducing patient radiation dose and setup uncertainty. In this work, we propose an MRI-to-CT transformer-based improved denoising diffusion probabilistic model (MC-IDDPM) to translate MRI into high-quality sCT to facilitate radiation treatment planning. METHODS MC-IDDPM implements diffusion processes with a shifted-window transformer network to generate sCT from MRI. The proposed model consists of two processes: a forward process, which involves adding Gaussian noise to real CT scans to create noisy images, and a reverse process, in which a shifted-window transformer V-net (Swin-Vnet) denoises the noisy CT scans conditioned on the MRI from the same patient to produce noise-free CT scans. With an optimally trained Swin-Vnet, the reverse diffusion process was used to generate noise-free sCT scans matching MRI anatomy. We evaluated the proposed method by generating sCT from MRI on an institutional brain dataset and an institutional prostate dataset. Quantitative evaluations were conducted using several metrics, including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), multi-scale structural similarity index (SSIM), and normalized cross-correlation (NCC). Dosimetry analyses were also performed, including comparisons of mean dose and target dose coverage at the 95% and 99% levels. RESULTS MC-IDDPM generated brain sCTs with state-of-the-art quantitative results: MAE 48.825 ± 21.491 HU, PSNR 26.491 ± 2.814 dB, SSIM 0.947 ± 0.032, and NCC 0.976 ± 0.019. For the prostate dataset: MAE 55.124 ± 9.414 HU, PSNR 28.708 ± 2.112 dB, SSIM 0.878 ± 0.040, and NCC 0.940 ± 0.039. MC-IDDPM demonstrates a statistically significant improvement (p < 0.05) in most metrics compared with competing networks, for both brain and prostate synthetic CT. Dosimetry analyses indicated that target dose coverage differences between CT and sCT were within ±0.34%. CONCLUSIONS We have developed and validated a novel approach for generating CT images from routine MRIs using a transformer-based improved DDPM. This model effectively captures the complex relationship between CT and MRI images, allowing robust, high-quality synthetic CT images to be generated in a matter of minutes. This approach has the potential to greatly simplify the treatment planning process for radiation therapy by eliminating the need for additional CT scans, reducing the amount of time patients spend in treatment planning, and enhancing the accuracy of treatment delivery.
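The forward process described above has a well-known closed form: x_t = sqrt(ᾱ_t)·x₀ + sqrt(1 − ᾱ_t)·ε, so any noise level can be sampled in one step during training. A minimal scalar sketch with a linear beta schedule (the schedule endpoints 1e-4 and 0.02 and T = 1000 are common DDPM defaults, not values stated in the paper):

```python
import math

T = 1000
# Linear variance schedule from 1e-4 up to 0.02 over T steps.
BETAS = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

def alpha_bar(t):
    # Cumulative product of (1 - beta_s) for s = 0..t.
    prod = 1.0
    for s in range(t + 1):
        prod *= 1.0 - BETAS[s]
    return prod

def q_sample(x0, t, eps):
    # Closed-form forward diffusion: jump from clean x0 straight to step t,
    # given a standard-normal draw eps.
    ab = alpha_bar(t)
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps
```

At t = 0 the sample is nearly the clean image; by t = T − 1, ᾱ_t has decayed to almost zero and the sample is essentially pure Gaussian noise, which is what the reverse (Swin-Vnet) network learns to invert.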
Affiliation(s)
- Shaoyan Pan: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
- Elham Abouei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob Wynne: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Chih-Wei Chang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Richard L J Qiu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yuheng Li: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Junbo Peng: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- David S Yu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Hui Mao: Department of Radiology and Imaging Sciences, Winship Cancer Institute, Atlanta, Georgia, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
4. Matkovic L, Lei Y, Fu Y, Wang T, Kesarwala AH, Axente M, Roper J, Higgins K, Bradley JD, Liu T, Yang X. Deformable lung 4DCT image registration via landmark-driven cycle network. Med Phys 2024; 51:1974-1984. PMID: 37708440. PMCID: PMC10937322. DOI: 10.1002/mp.16738. Received 08/16/2022; accepted 09/01/2023.
Abstract
BACKGROUND An automated, accurate, and efficient lung four-dimensional computed tomography (4DCT) image registration method is clinically important to quantify respiratory motion for optimal motion management. PURPOSE The purpose of this work is to develop a weakly supervised deep learning method for 4DCT lung deformable image registration (DIR). METHODS The landmark-driven cycle network is proposed as a deep learning platform that performs DIR of individual phase datasets in a simulation 4DCT. This proposed network comprises a generator and a discriminator. The generator accepts moving and target CTs as input and outputs the deformation vector fields (DVFs) to match the two CTs. It is optimized during both forward and backward paths to enhance the bi-directionality of DVF generation. Further, the landmarks are used to weakly supervise the generator network. Landmark-driven loss is used to guide the generator's training. The discriminator then judges the realism of the deformed CT to provide extra DVF regularization. RESULTS We performed four-fold cross-validation on 10 4DCT datasets from the public DIR-Lab dataset and a hold-out test on our clinic dataset, which included 50 4DCT datasets. The DIR-Lab dataset was used to evaluate the performance of the proposed method against other methods in the literature by calculating the DIR-Lab Target Registration Error (TRE). The proposed method outperformed other deep learning-based methods on the DIR-Lab datasets in terms of TRE. Bi-directional and landmark-driven loss were shown to be effective for obtaining high registration accuracy. The mean and standard deviation of TRE for the DIR-Lab datasets was 1.20 ± 0.72 mm and the mean absolute error (MAE) and structural similarity index (SSIM) for our datasets were 32.1 ± 11.6 HU and 0.979 ± 0.011, respectively. 
CONCLUSION The landmark-driven cycle network has been validated and tested for automatic deformable image registration of patients' lung 4DCTs with results comparable to or better than competing methods.
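The DIR-Lab target registration error reported above reduces to the mean Euclidean distance between landmarks propagated by the predicted DVF and their ground-truth positions in the target phase. A minimal sketch (coordinates in mm; the sample points in the test are illustrative):

```python
import math

def target_registration_error(warped, reference):
    """Mean Euclidean distance (mm) between paired 3D landmarks.

    `warped` holds landmark positions after applying the predicted
    deformation; `reference` holds the ground-truth positions.
    """
    assert len(warped) == len(reference), "landmark lists must be paired"
    dists = [math.dist(p, q) for p, q in zip(warped, reference)]
    return sum(dists) / len(dists)
```

The DIR-Lab benchmark evaluates this over 300 expert-annotated landmark pairs per case, which is how figures like 1.20 ± 0.72 mm are obtained.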
Affiliation(s)
- Luke Matkovic: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yabo Fu: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Tonghe Wang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Aparna H Kesarwala: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Marian Axente: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Kristin Higgins: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
5. Peng J, Qiu RLJ, Wynne JF, Chang CW, Pan S, Wang T, Roper J, Liu T, Patel PR, Yu DS, Yang X. CBCT-based synthetic CT image generation using conditional denoising diffusion probabilistic model. Med Phys 2024; 51:1847-1859. PMID: 37646491. DOI: 10.1002/mp.16704. Received 03/08/2023; revised 07/17/2023; accepted 08/08/2023.
Abstract
BACKGROUND Daily or weekly cone-beam computed tomography (CBCT) scans are commonly used for accurate patient positioning during the image-guided radiotherapy (IGRT) process, making CBCT an ideal option for adaptive radiotherapy (ART) replanning. However, the presence of severe artifacts and inaccurate Hounsfield unit (HU) values prevents its use for quantitative applications such as organ segmentation and dose calculation. To enable the clinical practice of online ART, it is crucial to obtain CBCT scans with a quality comparable to that of a CT scan. PURPOSE This work aims to develop a conditional diffusion model to perform image translation from the CBCT to the CT distribution for the image quality improvement of CBCT. METHODS The proposed method is a conditional denoising diffusion probabilistic model (DDPM) that utilizes a time-embedded U-net architecture with residual and attention blocks to gradually transform a white Gaussian noise sample to the target CT distribution conditioned on the CBCT. The model was trained on deformed planning CT (dpCT) and CBCT image pairs, and its feasibility was verified in a brain patient study and a head-and-neck (H&N) patient study. The performance of the proposed algorithm was evaluated using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) metrics on generated synthetic CT (sCT) samples. The proposed method was also compared to four other diffusion model-based sCT generation methods. RESULTS In the brain patient study, the MAE, PSNR, and NCC of the generated sCT were 25.99 HU, 30.49 dB, and 0.99, respectively, compared to 40.63 HU, 27.87 dB, and 0.98 for the CBCT images. In the H&N patient study, the metrics were 32.56 HU, 27.65 dB, and 0.98 for sCT versus 38.99 HU, 27.00 dB, and 0.98 for CBCT. Compared with the four other diffusion models and a cycle generative adversarial network (CycleGAN), the proposed method showed superior results in both visual quality and quantitative analysis. CONCLUSIONS The proposed conditional DDPM method can generate sCT from CBCT with accurate HU numbers and reduced artifacts, enabling accurate CBCT-based organ segmentation and dose calculation for online ART.
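The MAE, PSNR, and NCC figures quoted throughout these abstracts can all be computed from paired images with a few lines. A minimal sketch on flattened pixel lists (the default peak of 255 is an illustrative assumption; CT/CBCT comparisons would use the dynamic range of the HU images):

```python
import math

def mae(a, b):
    # Mean absolute error between paired pixel values.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio in dB; infinite for identical images.
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak**2 / mse)

def ncc(a, b):
    # Normalized cross-correlation: 1.0 for any positive linear relation.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den
```

Note that NCC is invariant to linear intensity shifts, which is why it stays near 0.98 even for raw CBCT while MAE in HU separates the methods more clearly.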
Affiliation(s)
- Junbo Peng: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Richard L J Qiu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob F Wynne: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Chih-Wei Chang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Shaoyan Pan: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu: Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Pretesh R Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- David S Yu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
6. Chang CW, Peng J, Safari M, Salari E, Pan S, Roper J, Qiu RLJ, Gao Y, Shu HK, Mao H, Yang X. High-resolution MRI synthesis using a data-driven framework with denoising diffusion probabilistic modeling. Phys Med Biol 2024; 69:045001. PMID: 38241726. PMCID: PMC10839468. DOI: 10.1088/1361-6560/ad209c. Received 11/20/2023; revised 01/08/2024; accepted 01/19/2024.
Abstract
Objective. High-resolution magnetic resonance imaging (MRI) can enhance lesion diagnosis, prognosis, and delineation. However, gradient power and hardware limitations prohibit recording thin slices or sub-1 mm resolution, and long scan times are not clinically acceptable. Conventional high-resolution images generated using statistical or analytical methods are limited in capturing complex, high-dimensional image data with intricate patterns and structures. This study aims to harness cutting-edge diffusion probabilistic deep learning techniques to create a framework for generating high-resolution MRI from low-resolution counterparts, improving on the uncertainty of denoising diffusion probabilistic models (DDPM). Approach. DDPM includes two processes. The forward process employs a Markov chain to systematically introduce Gaussian noise to low-resolution MRI images. In the reverse process, a U-Net model is trained to denoise the forward-process images and produce high-resolution images conditioned on the features of their low-resolution counterparts. The proposed framework was demonstrated using T2-weighted MRI images from institutional prostate patients and from brain patients collected in the Brain Tumor Segmentation Challenge 2020 (BraTS2020). Main results. For the prostate dataset, the bicubic interpolation model (Bicubic), conditional generative adversarial network (CGAN), and our proposed DDPM framework improved the noise quality measure from low-resolution images by 4.4%, 5.7%, and 12.8%, respectively. Our method enhanced the signal-to-noise ratio by 11.7%, surpassing Bicubic (9.8%) and CGAN (8.1%). In the BraTS2020 dataset, the proposed framework and Bicubic enhanced peak signal-to-noise ratio from resolution-degraded images by 9.1% and 5.8%, respectively. The multi-scale structural similarity indexes were 0.970 ± 0.019, 0.968 ± 0.022, and 0.967 ± 0.023 for the proposed method, CGAN, and Bicubic, respectively. Significance. This study explores a deep learning-based diffusion probabilistic framework for improving MR image resolution. Such a framework can be used to improve clinical workflow by obtaining high-resolution images without the penalty of long scan time. Future investigation will likely focus on prospectively testing the efficacy of this framework for different clinical indications.
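The reverse process sketched in the Approach section applies the trained network's noise estimate one step at a time. A scalar sketch of the standard DDPM ancestral sampling update (the two-step schedule and the stand-in noise estimate are illustrative, not the paper's trained U-Net):

```python
import math, random

def ddpm_reverse_step(x_t, t, eps_hat, betas, alpha_bars,
                      rng=random.Random(0)):
    """One ancestral sampling step x_t -> x_{t-1}.

    eps_hat is the network's estimate of the noise in x_t; betas and
    alpha_bars are the forward schedule and its cumulative products.
    """
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    ab_t = alpha_bars[t]
    # Posterior mean: subtract the predicted noise component, rescale.
    mean = (x_t - beta_t / math.sqrt(1.0 - ab_t) * eps_hat) / math.sqrt(alpha_t)
    if t == 0:
        return mean  # the final step is deterministic: no noise added
    return mean + math.sqrt(beta_t) * rng.gauss(0.0, 1.0)
```

Conditioning on the low-resolution input, as the paper does, simply means the network producing `eps_hat` also receives the low-resolution image as input; the update rule itself is unchanged.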
Affiliation(s)
- Chih-Wei Chang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Junbo Peng: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Mojtaba Safari: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Elahheh Salari: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Shaoyan Pan: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America; Department of Biomedical Informatics, Emory University, Atlanta, GA 30308, United States of America
- Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Richard L J Qiu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Yuan Gao: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Hui-Kuo Shu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Hui Mao: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America; Department of Biomedical Informatics, Emory University, Atlanta, GA 30308, United States of America
7. Gao Y, Chang CW, Pan S, Peng J, Ma C, Patel P, Roper J, Zhou J, Yang X. Deep learning-based synthetic dose-weighted LET map generation for intensity modulated proton therapy. Phys Med Biol 2024; 69:025004. PMID: 38091613. PMCID: PMC10767225. DOI: 10.1088/1361-6560/ad154b. Received 07/31/2023; revised 12/02/2023; accepted 12/13/2023.
Abstract
The advantage of proton therapy over photon therapy stems from the Bragg peak effect, which allows protons to deposit most of their energy directly at the tumor while sparing healthy tissue. However, even with such benefits, proton therapy presents certain challenges. The biological effectiveness differences between protons and photons are not fully incorporated into clinical treatment planning processes. In current clinical practice, the relative biological effectiveness (RBE) between protons and photons is set to a constant 1.1, yet numerous studies have suggested that proton RBE can exhibit significant variability. Given these findings, there is substantial interest in refining proton therapy treatment planning to better account for the variable RBE. Dose-averaged linear energy transfer (LETd) is a key physical parameter for evaluating the RBE of proton therapy and aids in optimizing proton treatment plans. Calculating precise LETd distributions necessitates intricate physical models and specialized Monte Carlo simulation software, a computationally intensive and time-consuming process. In response to these challenges, we propose a deep learning-based framework designed to predict the LETd distribution map from the dose distribution map. This approach aims to simplify and accelerate LETd map generation in clinical settings. The proposed CycleGAN model demonstrated superior performance over other GAN-based models. The mean absolute error (MAE), peak signal-to-noise ratio, and normalized cross-correlation of the LETd maps generated by the proposed method were 0.096 ± 0.019 keV/μm, 24.203 ± 2.683 dB, and 0.997 ± 0.002, respectively. The MAE of the proposed method in the clinical target volume, bladder, and rectum was 0.193 ± 0.103, 0.277 ± 0.112, and 0.211 ± 0.086 keV/μm, respectively. The proposed framework demonstrates the feasibility of generating synthetic LETd maps from dose maps and has the potential to improve proton therapy planning by providing accurate LETd information.
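CycleGAN's defining constraint is that mapping dose to LETd and back (and vice versa) should recover the input. A toy sketch of the cycle-consistency L1 loss with stand-in linear "generators" (all names and values are illustrative; real generators are convolutional networks and the loss is combined with adversarial terms):

```python
def cycle_consistency_loss(G, F, xs, ys):
    """L1 cycle loss: |F(G(x)) - x| averaged over xs (forward cycle)
    plus |G(F(y)) - y| averaged over ys (backward cycle)."""
    fwd = sum(abs(F(G(x)) - x) for x in xs) / len(xs)
    bwd = sum(abs(G(F(y)) - y) for y in ys) / len(ys)
    return fwd + bwd

# Stand-in generators: exact inverses of each other, so cycle loss is zero.
G = lambda d: 2.0 * d + 1.0    # "dose -> LETd" (toy)
F = lambda l: (l - 1.0) / 2.0  # "LETd -> dose" (toy)
```

The loss is zero exactly when the two generators invert each other, which is the property that keeps unpaired translation anatomically faithful.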
Affiliation(s)
- Yuan Gao: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Chih-Wei Chang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Shaoyan Pan: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America; Department of Biomedical Informatics, Emory University, Atlanta, GA, United States of America
- Junbo Peng: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Chaoqiong Ma: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Pretesh Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Jun Zhou: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America; Department of Biomedical Informatics, Emory University, Atlanta, GA, United States of America; Department of Nuclear & Radiological Engineering and Medical Physics, Georgia Institute of Technology, Atlanta, GA, United States of America
8. Hu M, Yang K, Wang J, Qiu RLJ, Roper J, Kahn S, Shu HK, Yang X. MGMT promoter methylation prediction based on multiparametric MRI via vision graph neural network. J Med Imaging (Bellingham) 2024; 11:014503. PMID: 38370421. PMCID: PMC10869845. DOI: 10.1117/1.jmi.11.1.014503. Received 05/30/2023; revised 12/24/2023; accepted 01/29/2024.
Abstract
Purpose Glioblastoma (GBM) is aggressive and malignant. The methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter in GBM tissue is considered an important biomarker for developing the most effective treatment plan. Although the standard method for assessing MGMT promoter methylation status is bisulfite modification and deoxyribonucleic acid (DNA) sequencing of biopsy or surgical specimens, a secondary automated method based on medical imaging may improve the efficiency and accuracy of those tests. Approach We propose a deep vision graph neural network (ViG) using multiparametric magnetic resonance imaging (MRI) to predict MGMT promoter methylation status noninvasively. Our model was compared to the RSNA radiogenomic classification winners. The dataset includes 583 usable patient cases. Combinations of MRI sequences were compared, and our multi-sequence fusion strategy was compared with strategies using single MR sequences. Results Our best model [fluid-attenuated inversion recovery (FLAIR), T1-weighted pre-contrast (T1w), T2-weighted (T2)] outperformed the winning models with a test area under the curve (AUC) of 0.628, an accuracy of 0.632, a precision of 0.646, a recall of 0.677, a specificity of 0.581, and an F1 score of 0.661. Compared to the winning models using single MR sequences, our ViG utilizing fused MRI showed a statistically significant improvement in AUC: FLAIR (p = 0.042), T1w (p = 0.017), T1wCE (p = 0.001), and T2 (p = 0.018). Conclusions Our model is superior to the challenge champions. A graph representation of the medical images enabled good handling of complexity and irregularity. Our work provides an automatic secondary check pipeline to help ensure the correctness of MGMT methylation status prediction.
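The AUC figures reported above can be computed directly from predicted scores via the pairwise (Mann-Whitney) formulation, without building an explicit ROC curve. A minimal sketch (labels and scores in the test are illustrative):

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise comparison.

    Equals the fraction of (positive, negative) pairs where the
    positive case scores higher; ties count as 0.5.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.628, as reported for the best fused-MRI model, means a randomly chosen methylated case outscores a randomly chosen unmethylated case about 63% of the time.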
Affiliation(s)
- Mingzhe Hu
- Emory University, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, Georgia, United States
- Emory University, Department of Computer Science and Informatics, Atlanta, Georgia, United States
- Kailin Yang
- Cleveland Clinic, Taussig Cancer Center, Department of Radiation Oncology, Cleveland, Ohio, United States
- Jing Wang
- Emory University, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, Georgia, United States
- Richard L. J. Qiu
- Emory University, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, Georgia, United States
- Justin Roper
- Emory University, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, Georgia, United States
- Shannon Kahn
- Emory University, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, Georgia, United States
- Hui-Kuo Shu
- Emory University, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, Georgia, United States
- Xiaofeng Yang
- Emory University, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, Georgia, United States
- Emory University, Department of Computer Science and Informatics, Atlanta, Georgia, United States
- Georgia Institute of Technology and Emory University, Department of Biomedical Engineering, Atlanta, Georgia, United States
9
Lin MH, Olsen L, Kavanaugh JA, Jacqmin D, Lobb E, Yoo S, Berry SL, Pichardo JC, Cardenas CE, Roper J, Kirk M, Cheung JP, Solberg TD, Moore KL, Kim M. Beyond Acceptable: The Vital Role of Medical Physicists in Ensuring High-Quality Treatment Plans. Pract Radiat Oncol 2024; 14:6-9. [PMID: 38182304 DOI: 10.1016/j.prro.2023.08.014]
Affiliation(s)
- Mu-Han Lin
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas.
- Lindsey Olsen
- Department of Radiation Oncology, Memorial Hospital, Colorado Springs, Colorado
- James A Kavanaugh
- Department of Radiation Oncology, Mayo Clinic College of Medicine and Science, Rochester, Minnesota
- Dustin Jacqmin
- Department of Human Oncology, University of Wisconsin, Madison, Wisconsin
- Eric Lobb
- Department of Radiation Oncology, Ascension NE Wisconsin-St. Elizabeth Hospital, Appleton, Wisconsin
- Sua Yoo
- Radiation Oncology, Duke University Medical Center, Durham, North Carolina
- Sean L Berry
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York
- Carlos E Cardenas
- Department of Radiation Oncology, The University of Alabama at Birmingham, Birmingham, Alabama
- Justin Roper
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia
- Maura Kirk
- Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania
- Joey P Cheung
- Radiation Oncology, Sutter Health Mills-Peninsula Medical Center, San Mateo, California
- Timothy D Solberg
- Department of Radiation Oncology, University of Washington, Seattle, Washington
- Kevin L Moore
- Department of Radiation Oncology, UC San Diego, La Jolla, California
- Minsun Kim
- Department of Radiation Oncology, University of Washington, Seattle, Washington
10
Wang T, Lei Y, Schreibmann E, Roper J, Liu T, Schuster DM, Jani AB, Yang X. Lesion segmentation on 18F-fluciclovine PET/CT images using deep learning. Front Oncol 2023; 13:1274803. [PMID: 38156106 PMCID: PMC10753832 DOI: 10.3389/fonc.2023.1274803]
Abstract
Background and purpose A novel radiotracer, 18F-fluciclovine (anti-3-18F-FACBC), has been associated with significantly improved survival when used in PET/CT imaging to guide postprostatectomy salvage radiotherapy for prostate cancer. We aimed to investigate the feasibility of using a deep learning method to automatically detect and segment lesions on 18F-fluciclovine PET/CT images. Materials and methods We retrospectively identified 84 patients who were enrolled in Arm B of the Emory Molecular Prostate Imaging for Radiotherapy Enhancement (EMPIRE-1) trial. All 84 patients had prostate adenocarcinoma and underwent prostatectomy and 18F-fluciclovine PET/CT imaging, with lesions identified and delineated by physicians. Three neural networks with increasing levels of complexity (U-net, cascaded U-net, and a cascaded detection segmentation network) were trained and tested on the 84 patients with a fivefold cross-validation strategy and a hold-out test, using manual contours as the ground truth. We also investigated using both PET and CT, or PET only, as input to the neural network. Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), center-of-mass distance (CMD), and volume difference (VD) were used to quantify the quality of segmentation results against ground-truth contours provided by physicians. Results All three deep learning methods successfully detected 144/155 lesions with PET+CT as input and 153/155 lesions with PET only. Quantitative results demonstrated that the best-performing network segmented lesions with an average DSC of 0.68 ± 0.15 and HD95 of 4 ± 2 mm. The center of mass of the segmented contours deviated from physician contours by approximately 2 mm on average, and the volume difference was less than 1 cc. The novel network we propose achieved the best performance among the compared networks. The addition of CT as input contributed to more cases of failure (DSC = 0), and among cases with DSC > 0, it produced no statistically significant difference relative to PET-only input for our proposed method. Conclusion These quantitative results demonstrate the feasibility of deep learning methods for automatically segmenting lesions on 18F-fluciclovine PET/CT images, indicating the great potential of 18F-fluciclovine PET/CT combined with deep learning to provide a second check in identifying lesions and to save physicians time and effort in contouring.
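Two of the segmentation metrics used in this entry, DSC and volume difference, are straightforward to compute from binary masks. The sketch below is a generic illustration under an assumed isotropic voxel size; the function names and the voxel-volume parameter are not from the paper.

```python
import numpy as np

# Hedged sketch of two segmentation metrics named above: the Dice similarity
# coefficient (DSC) and the absolute volume difference (VD) between a
# predicted and a ground-truth binary mask. Names are illustrative.

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def volume_difference_cc(pred: np.ndarray, truth: np.ndarray,
                         voxel_volume_mm3: float = 1.0) -> float:
    """Absolute volume difference in cubic centimetres (1 cc = 1000 mm^3)."""
    return abs(int(pred.sum()) - int(truth.sum())) * voxel_volume_mm3 / 1000.0
```

A DSC of 1.0 indicates perfect overlap; the reported 0.68 ± 0.15 is typical for small PET-avid lesions.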
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Eduard Schreibmann
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- David M. Schuster
- Department of Radiology and Imaging Science and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Ashesh B. Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
11
Lei Y, Ding Y, Qiu RLJ, Wang T, Roper J, Fu Y, Shu HK, Mao H, Yang X. Hippocampus substructure segmentation using morphological vision transformer learning. Phys Med Biol 2023; 68:235013. [PMID: 37972414 PMCID: PMC10690959 DOI: 10.1088/1361-6560/ad0d45]
Abstract
The hippocampus plays a crucial role in memory and cognition. Because of the toxicity associated with whole-brain radiotherapy, more advanced treatment planning techniques prioritize hippocampal avoidance, which depends on accurate segmentation of the small and complexly shaped hippocampus. To achieve accurate segmentation of the anterior and posterior regions of the hippocampus from T1-weighted (T1w) MR images, we developed a novel model, Hippo-Net, which uses a cascaded strategy. The proposed model consists of two major parts: (1) a localization model is used to detect the volume-of-interest (VOI) of the hippocampus, and (2) an end-to-end morphological vision transformer network (Franchi et al 2020 Pattern Recognit. 102 107246; Ranem et al 2022 IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW) pp 3710-19) is used to perform substructure segmentation within the hippocampus VOI. The substructures include the anterior and posterior regions of the hippocampus, which are defined as the hippocampus proper and parts of the subiculum. The vision transformer incorporates the dominant features extracted from MR images, which are further improved by learning-based morphological operators. Integrating these morphological operators into the vision transformer increases the accuracy and the ability to separate the hippocampus into its two distinct substructures. A total of 260 T1w MRI datasets from the Medical Segmentation Decathlon were used in this study. We conducted a five-fold cross-validation on the first 200 T1w MR images and then performed a hold-out test on the remaining 60 T1w MR images with the model trained on the first 200 images. In five-fold cross-validation, the Dice similarity coefficients were 0.900 ± 0.029 and 0.886 ± 0.031 for the hippocampus proper and parts of the subiculum, respectively. The mean surface distances (MSDs) were 0.426 ± 0.115 mm and 0.401 ± 0.100 mm for the hippocampus proper and parts of the subiculum, respectively. The proposed method shows great promise for automatically delineating hippocampus substructures on T1w MR images, which may facilitate the current clinical workflow and reduce physicians' effort.
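The evaluation protocol above, five-fold cross-validation on the first 200 cases plus a hold-out test on the remaining 60, can be sketched as a simple deterministic split. This is a generic illustration of the stated protocol; the function name and the use of sequential (non-shuffled) folds are assumptions.

```python
# Hedged sketch of the stated evaluation protocol: five-fold cross-validation
# on the first 200 cases and a hold-out test on the remaining 60.
# Sequential fold assignment is an assumption for illustration.

def split_folds(case_ids, n_cv=200, n_folds=5):
    """Return (list of (train, val) folds over the first n_cv cases, hold-out set)."""
    cv_cases, holdout = list(case_ids[:n_cv]), list(case_ids[n_cv:])
    fold_size = n_cv // n_folds
    folds = []
    for k in range(n_folds):
        val = cv_cases[k * fold_size:(k + 1) * fold_size]
        train = cv_cases[:k * fold_size] + cv_cases[(k + 1) * fold_size:]
        folds.append((train, val))
    return folds, holdout
```

Each of the 200 cross-validation cases appears in exactly one validation fold, and the 60 hold-out cases are never seen during cross-validation.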
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States of America
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Yabo Fu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States of America
- Hui-Kuo Shu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308, United States of America
12
Gao Y, Chang CW, Roper J, Axente M, Lei Y, Pan S, Bradley JD, Zhou J, Liu T, Yang X. Single energy CT-based mass density and relative stopping power estimation for proton therapy using deep learning method. Front Oncol 2023; 13:1278180. [PMID: 38074686 PMCID: PMC10702508 DOI: 10.3389/fonc.2023.1278180]
Abstract
Background The number of patients undergoing proton therapy has increased in recent years. Current treatment planning systems (TPS) calculate dose maps using three-dimensional (3D) maps of relative stopping power (RSP) and mass density. Patient-specific maps of RSP and mass density are obtained by translating CT numbers (HU) acquired using single-energy computed tomography (SECT) with appropriate conversions and coefficients. The proton dose calculation uncertainty of this approach is 2.5%-3.5% plus a 1 mm margin. Because SECT is the major clinical modality for proton therapy treatment planning, it is appealing to enhance proton dose calculation accuracy using a deep learning (DL) approach centered on SECT. Objectives The purpose of this work is to develop a deep learning method to generate mass density and RSP maps from clinical SECT data for proton dose calculation in proton therapy treatment. Methods Artificial neural networks (ANN), fully convolutional neural networks (FCNN), and residual neural networks (ResNet) were used to learn the correlation between voxel-specific mass density, RSP, and SECT CT number (HU). A stoichiometric calibration method based on SECT data and an empirical model based on dual-energy CT (DECT) images were chosen as reference models to evaluate the performance of the deep learning networks. SECT images of a CIRS 062M electron density phantom were used as the training dataset, and CIRS anthropomorphic M701 and M702 phantoms were used for testing. Results For M701, the mean absolute percentage errors (MAPE) of the mass density map by FCNN were 0.39%, 0.92%, 0.68%, 0.92%, and 1.57% in the brain, spinal cord, soft tissue, bone, and lung, respectively, whereas with the SECT stoichiometric method they were 0.99%, 2.34%, 1.87%, 2.90%, and 12.96%. For RSP maps, the MAPE of FCNN on M701 were 0.85%, 2.32%, 0.75%, 1.22%, and 1.25%, whereas with the SECT reference model they were 0.95%, 2.61%, 2.08%, 7.74%, and 8.62%. Conclusion The results show that deep learning neural networks have the potential to generate accurate voxel-specific material property information, which can be used to improve the accuracy of proton dose calculation. Advances in knowledge Deep learning-based frameworks are proposed to estimate material mass density and RSP from SECT with improved accuracy compared with conventional methods.
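The MAPE figures quoted above follow the standard definition: the voxel-wise absolute error relative to the reference value, averaged over an ROI. A minimal sketch (function and array names are illustrative, not from the paper):

```python
import numpy as np

# Hedged sketch of the mean absolute percentage error (MAPE) used above to
# compare predicted vs. reference mass-density (or RSP) values within an ROI.

def mape(pred: np.ndarray, ref: np.ndarray) -> float:
    """MAPE in percent, averaged over all voxels passed in."""
    return float(np.mean(np.abs(pred - ref) / np.abs(ref)) * 100.0)
```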
Affiliation(s)
- Yuan Gao
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Marian Axente
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Shaoyan Pan
- Department of Biomedical Informatics, Emory University, Atlanta, GA, United States
- Jeffrey D. Bradley
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Jun Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States
- Department of Biomedical Informatics, Emory University, Atlanta, GA, United States
13
Peng J, Chang CW, Xie H, Qiu RLJ, Roper J, Wang T, Bradshaw B, Tang X, Yang X. Image-Domain Material Decomposition for Dual-energy CT using Unsupervised Learning with Data-fidelity Loss. ArXiv 2023:arXiv:2311.10641v1. [PMID: 38013889 PMCID: PMC10680906]
Abstract
BACKGROUND Dual-energy CT (DECT) and material decomposition play vital roles in quantitative medical imaging. However, the decomposition process may suffer from significant noise amplification, leading to severely degraded image signal-to-noise ratios (SNRs). While existing iterative algorithms perform noise suppression using different image priors, these heuristic priors cannot accurately represent the features of the target image manifold. Although deep learning-based decomposition methods have been reported, they operate in a supervised-learning framework that requires paired training data, which are not readily available in clinical settings. PURPOSE This work aims to develop an unsupervised-learning framework with data-measurement consistency for image-domain material decomposition in DECT.
14
Ankrah NK, Thomas EM, Bredel M, Middlebrooks EH, Walker H, Fiveash JB, Guthrie BL, Popple RA, Roper J, Brinkerhoff S. Frameless LINAC-Based Stereotactic Radiosurgery is Safe and Effective for Essential and Parkinsonian Tremor. Int J Radiat Oncol Biol Phys 2023; 117:S173. [PMID: 37784432 DOI: 10.1016/j.ijrobp.2023.06.640]
Abstract
PURPOSE/OBJECTIVE(S) Stereotactic radiosurgery (SRS) to the thalamus is an ablative technique used to treat refractory tremor of essential or Parkinsonian origin. Because of the high dose, small target, and required precision, framed SRS on the Gamma Knife (GK) has been the historical platform of choice. We tested our recently developed technique to emulate GK dose distributions on a multi-leaf collimator (MLC)-equipped linear accelerator (LINAC), without cumbersome, inefficient cones, in a prospective trial of safety and efficacy. MATERIALS/METHODS We quantified pre-treatment contralateral tremor according to the Fahn-Tolosa-Marin (FTM) scoring system. We obtained MPRAGE, FGATIR, DTI, and RS-fMRI sequences. We identified the VIM via classical stereotactic reference location and connectomically, and then targeted it to 135 Gy dmax in a manner dosimetrically roughly equivalent to a 4-mm GK shot. We adjusted each target such that the 25 Gy isodose line did not overlap the posterior limb of the internal capsule. We immobilized patients in a highly rigid thermoplastic mask and delivered treatment on an Edge™ LINAC with HDMLC. Intrafraction optical surface monitoring (OSMS) ensured patient immobility. We surveilled post-treatment imaging and recorded tremor scores, QOL outcomes, and adverse events. RESULTS We accrued 42 patients (16 female, 26 male; median age 72.5) over 36 months. Of these, 38 had essential tremor and 4 had tremor-dominant Parkinson's disease; 2 withdrew prior to treatment. Ten patients were on therapeutic anticoagulation and were not required to discontinue. At the time of submission, 39 patients had follow-up of at least 6 months; 35/39 (89.7%) exhibited clinically meaningful tremor reduction. Mean limb tremor reduction among responders was 43.5% (range: 9-100%). Time to patient-reported tremor improvement was 0.3 to 15 months. One patient experienced grade 3 toxicity and four patients experienced grade 1-2 toxicity. CONCLUSION MLC-based SRS thalamotomy is safe and effective for refractory tremor. Multidisciplinary management is key for proper patient selection, treatment, and monitoring. Our outcomes appear congruent with historical GK controls as well as more modern MRgFUS outcomes.
Affiliation(s)
- N K Ankrah
- University of Alabama Hospital Birmingham Alabama, Birmingham, AL
- E M Thomas
- Department of Radiation Oncology, The Ohio State University Wexner Medical Center, Columbus, OH
- M Bredel
- University of Alabama at Birmingham, Department of Radiation Oncology, Birmingham, AL
- H Walker
- University of Alabama at Birmingham, Birmingham, AL
- J B Fiveash
- University of Alabama at Birmingham, Department of Radiation Oncology, Birmingham, AL
- B L Guthrie
- University of Alabama Hospital, Birmingham, AL
- R A Popple
- University of Alabama at Birmingham, Birmingham, AL
- J Roper
- University of Auburn, Auburn, AL
- S Brinkerhoff
- University of Alabama Hospital, Birmingham, AL
15
Wynne JF, Lei Y, Pan S, Wang T, Pasha M, Luca K, Roper J, Patel P, Patel SA, Godette K, Jani AB, Yang X. Rapid unpaired CBCT-based synthetic CT for CBCT-guided adaptive radiotherapy. J Appl Clin Med Phys 2023; 24:e14064. [PMID: 37345557 PMCID: PMC10562022 DOI: 10.1002/acm2.14064]
Abstract
In this work, we demonstrate a method for rapid synthesis of high-quality CT images from unpaired, low-quality CBCT images, permitting CBCT-based adaptive radiotherapy. We adapt contrastive unpaired translation (CUT) for use with medical images and evaluate the results on an institutional pelvic CT dataset. We compare the method against cycleGAN using mean absolute error, structural similarity index, root mean squared error, and Fréchet Inception Distance, and show that CUT significantly outperforms cycleGAN while requiring less time and fewer resources. The investigated method improves the feasibility of online adaptive radiotherapy over the present state of the art.
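Of the four metrics listed, MAE and RMSE are one-liners, and SSIM can be illustrated with its single-window (global) form. The published work presumably uses the standard windowed SSIM and a pretrained-network FID; the simplified versions below are illustrative only, and all names are assumptions.

```python
import numpy as np

# Hedged sketch of three image-quality metrics named above. The global SSIM
# here is a single-window simplification of the standard windowed SSIM.

def mae(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean(np.abs(a - b)))

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((a - b) ** 2)))

def global_ssim(a: np.ndarray, b: np.ndarray, data_range: float = 1.0) -> float:
    # stabilizing constants from the original SSIM formulation
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

Identical images yield SSIM = 1 and MAE = RMSE = 0; FID, by contrast, compares feature distributions of whole image sets rather than paired images.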
Affiliation(s)
- Jacob F. Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Shaoyan Pan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Mosa Pasha
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Kirk Luca
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Sagar A. Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Karen Godette
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ashesh B. Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
16
Lei Y, Wang T, Roper J, Tian S, Patel P, Bradley JD, Jani AB, Liu T, Yang X. Automatic segmentation of neurovascular bundle on MRI using deep learning based topological modulated network. Med Phys 2023; 50:5479-5488. [PMID: 36939189 PMCID: PMC10509305 DOI: 10.1002/mp.16378]
Abstract
PURPOSE Radiation damage to the neurovascular bundles (NVBs) may cause sexual dysfunction after radiotherapy for prostate cancer. However, it is challenging to delineate the NVBs as organs-at-risk from planning CTs. Recently, the integration of MR into radiotherapy has made NVB delineation possible. In this study, we aim to develop an MRI-based deep learning method for automatic NVB segmentation. METHODS The proposed method, named topological modulated network, consists of three subnetworks: a focal modulation, a hierarchical block, and a topological fully convolutional network (FCN). The focal modulation is used to derive the locations and bounds of the left and right NVBs, namely the candidate volumes-of-interest (VOIs). The hierarchical block highlights NVB boundary information on the derived feature maps. The topological FCN then segments the NVBs inside the VOIs by considering the topological consistency inherent in vascular delineation. Based on the locations of the candidate VOIs, the NVB segmentations can then be mapped back to the input MRI's coordinate system. RESULTS A five-fold cross-validation study was performed on 60 patient cases to evaluate the performance of the proposed method, with segmented results compared against manual contours. The Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD95) were 0.81 ± 0.10 and 1.49 ± 0.88 mm for the left NVB, and 0.80 ± 0.15 and 1.54 ± 1.22 mm for the right NVB, respectively. CONCLUSION We propose a novel deep learning-based segmentation method for NVBs on pelvic MR images. The good agreement between our method's segmentations and the manually drawn ground-truth contours supports the feasibility of the proposed method, which could be used to spare NVBs during proton and photon radiotherapy and thereby improve quality of life for prostate cancer patients.
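The HD95 metric reported here softens the classical Hausdorff distance by taking the 95th percentile of surface-to-surface distances, making it robust to outlier points. A brute-force sketch over two point clouds (coordinates in mm); the function name and the pooling of both directed distances into one percentile are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the 95th-percentile Hausdorff distance (HD95) between two
# sets of surface points (N x 3 arrays of coordinates in mm). Brute-force
# pairwise distances; fine for small point sets, illustration only.

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    # pairwise Euclidean distances between the two point sets
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    # directed distances: each point to its nearest neighbour in the other set
    a_to_b = d.min(axis=1)
    b_to_a = d.min(axis=0)
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))
```

Taking the maximum instead of the 95th percentile recovers the classical symmetric Hausdorff distance.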
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
17
Lei Y, Tian Z, Wang T, Roper J, Xie H, Kesarwala AH, Higgins K, Bradley JD, Liu T, Yang X. Deep learning-based fast volumetric imaging using kV and MV projection images for lung cancer radiotherapy: A feasibility study. Med Phys 2023; 50:5518-5527. [PMID: 36939395 PMCID: PMC10509310 DOI: 10.1002/mp.16377]
Abstract
PURPOSE The long acquisition time of CBCT discourages repeat verification imaging, thereby increasing treatment uncertainty. In this study, we present a fast volumetric imaging method for lung cancer radiotherapy using an orthogonal 2D kV/MV image pair. METHODS The proposed model combines 2D and 3D networks and consists of five major parts: (1) kV and MV feature extractors extract deep features from the perpendicular kV and MV projections. (2) A feature-matching step re-aligns the feature maps to their projection angles in a Cartesian coordinate system; a residual module lets the feature maps focus on the difference between the estimated and ground-truth images. (3) The feature map is downsized to include more global semantic information for the 3D estimation, which helps reduce inhomogeneity, and convolution-based reweighting further increases the uniformity of the image. (4) To reduce blurry noise in the generated 3D volume, the network is supervised with a Laplacian latent-space loss calculated on feature maps extracted via a specifically learned Gaussian kernel. (5) Finally, the 3D volume is derived from the trained model. We conducted a proof-of-concept study using 50 patients with lung cancer. An orthogonal kV/MV pair was generated by ray tracing through the CT of each phase in a 4D CT scan. Orthogonal kV/MV pairs from nine respiratory phases were used to train this patient-specific model, while the kV/MV pair of the remaining phase was held out for testing. RESULTS The results are based on simulation data and phantom measurements from a real LINAC system. The mean absolute error (MAE) values achieved by our method were 57.5 HU and 77.4 HU within the body and tumor regions-of-interest (ROIs), respectively. The mean peak signal-to-noise ratios (PSNR) were 27.6 dB and 19.2 dB, and the mean normalized cross-correlation (NCC) values were 0.97 and 0.94, within the body and tumor ROIs, respectively. A phantom study demonstrated that the proposed method can accurately re-position the phantom after a shift. The proposed method using both kV and MV was also superior in image quality to current methods using kV or MV alone. CONCLUSION These results demonstrate the feasibility and accuracy of our proposed fast volumetric imaging method from an orthogonal kV/MV pair, which provides a potential solution for daily treatment setup and verification of patients receiving radiation therapy for lung cancer.
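The PSNR and NCC figures above follow standard definitions: PSNR compares the squared data range to the mean squared error on a log scale, and NCC is the Pearson-style correlation of the mean-centered volumes. A generic sketch (function names and the data-range parameter are illustrative):

```python
import numpy as np

# Hedged sketch of the PSNR and normalized cross-correlation (NCC) metrics
# reported above, applied to a reconstructed vs. reference volume (or ROI).

def psnr(pred: np.ndarray, ref: np.ndarray, data_range: float) -> float:
    """PSNR in dB; data_range is the dynamic range of the reference image."""
    mse = np.mean((pred - ref) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ncc(pred: np.ndarray, ref: np.ndarray) -> float:
    """Normalized cross-correlation of the mean-centered volumes, in [-1, 1]."""
    p = pred - pred.mean()
    r = ref - ref.mean()
    return float((p * r).sum() / np.sqrt((p ** 2).sum() * (r ** 2).sum()))
```

A constant intensity offset leaves NCC at 1.0 but degrades PSNR, which is why both metrics are reported together.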
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Zhen Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Radiation and Cellular Oncology, University of Chicago, Chicago, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Huiqiao Xie
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Aparna H Kesarwala
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
18. Wang Q, Xie H, Wang T, Roper J, Gao H, Tian Z, Tang X, Bradley JD, Liu T, Yang X. One-step Iterative Estimation of Effective Atomic Number and Electron Density for Dual Energy CT. ArXiv 2023:arXiv:2308.01290v1. [PMID: 37576122; PMCID: PMC10418524]
Abstract
Dual-energy computed tomography (DECT) is a promising technology that has shown a number of clinical advantages over conventional X-ray CT, such as improved material identification and artifact suppression. For proton therapy treatment planning, besides material-selective images, maps of effective atomic number (Z) and electron density relative to water ($\rho_e$) can also be derived and further employed to improve stopping-power-ratio accuracy and reduce range uncertainty. In this work, we propose a one-step iterative estimation method, which employs multi-domain gradient $L_0$-norm minimization, for reconstructing Z and $\rho_e$ maps. The algorithm was implemented on GPU to accelerate the predictive procedure and to support potential real-time adaptive treatment planning. The performance of the proposed method is demonstrated via both phantom and patient studies.
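For context, one widely used closed-form definition of effective atomic number is the power-law (Mayneord) form; the sketch below is illustrative only and is not the one-step iterative method proposed in the paper. The exponent m ≈ 2.94 is a conventional choice for photoelectric-dominated energies:

```python
def effective_z(electron_fractions, atomic_numbers, m=2.94):
    """Power-law effective atomic number.

    electron_fractions: fraction of electrons contributed by each element
    atomic_numbers: corresponding Z values
    m: empirical exponent (~2.94 is a common textbook value)
    """
    assert abs(sum(electron_fractions) - 1.0) < 1e-6
    return sum(f * z ** m
               for f, z in zip(electron_fractions, atomic_numbers)) ** (1.0 / m)

# water: H and O contribute 2/10 and 8/10 of the electrons
print(round(effective_z([0.2, 0.8], [1, 8]), 2))  # ~7.4, the textbook value for water
```

Iterative DECT methods like the one in the paper instead estimate Z and electron density jointly from the two energy channels; this closed form is only a sanity check for single materials.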
19. Chang CW, Lei Y, Wang T, Tian S, Roper J, Lin L, Bradley J, Liu T, Zhou J, Yang X. Deep learning-based Fast Volumetric Image Generation for Image-guided Proton FLASH Radiotherapy. Res Sq 2023:rs.3.rs-3112632. [PMID: 37546731; PMCID: PMC10402267; DOI: 10.21203/rs.3.rs-3112632/v1]
Abstract
Objective: FLASH radiotherapy leverages ultra-high dose-rate radiation to enhance the sparing of organs at risk without compromising tumor control probability. This may allow dose escalation, toxicity mitigation, or both. To prepare for ultra-high dose-rate delivery, we aim to develop a deep learning (DL)-based image-guidance framework that enables fast volumetric image reconstruction for accurate target localization during proton FLASH beam delivery. Approach: The proposed framework comprises four modules: orthogonal kV x-ray projection acquisition, DL-based volumetric image generation, image quality analysis, and water equivalent thickness (WET) evaluation. We investigated volumetric image reconstruction using kV projection pairs with four different source angles. Thirty patients with lung targets were identified from an institutional database, each with a four-dimensional computed tomography (CT) dataset of ten respiratory phases. Leave-phase-out cross-validation was performed to investigate the DL model's robustness for each patient. Main results: The proposed framework reconstructed patients' volumetric anatomy, including tumors and organs at risk, from orthogonal x-ray projections. Considering all evaluation metrics, the kV projections with source angles of 135° and 225° yielded the optimal volumetric images. The patient-averaged mean absolute error, peak signal-to-noise ratio, structural similarity index measure, and WET error were 75 ± 22 HU, 19 ± 3.7 dB, 0.938 ± 0.044, and -1.3% ± 4.1%, respectively. Significance: The proposed framework was demonstrated to reconstruct volumetric images with a high degree of accuracy from two orthogonal x-ray projections. The embedded WET module can be used to detect potential proton-beam-specific variations in patient anatomy. This framework can rapidly deliver volumetric images to potentially guide proton FLASH therapy treatment delivery systems.
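Water equivalent thickness along a proton beam path is, in essence, the path integral of relative stopping power (RSP). A toy discretized version is sketched below; the RSP values and step size are made-up illustrations, not values from the paper:

```python
def water_equivalent_thickness(rsp_values, step_mm):
    """WET of a ray through tissue: sum of relative stopping power (RSP)
    samples taken at uniform steps along the beam path, times step length."""
    return sum(rsp_values) * step_mm

# 100 mm of water-like tissue (RSP = 1.0) followed by 20 mm of lung
# (RSP ~ 0.25), sampled every 1 mm along the ray
ray = [1.0] * 100 + [0.25] * 20
print(water_equivalent_thickness(ray, 1.0))  # -> 105.0 (mm of water)
```

A WET-based check like this, computed on the synthesized volume versus the planning CT, is one way an anatomy change along the beam could be flagged.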
20. Gebru T, Luca K, Wolf J, Kayode O, Yang X, Roper J, Zhang J. Evaluating Pareto optimal tradeoffs for hippocampal avoidance whole brain radiotherapy with knowledge-based multicriteria optimization. Med Dosim 2023; 48:273-278. [PMID: 37495460; DOI: 10.1016/j.meddos.2023.07.002]
Abstract
The goal of this study is to investigate the Pareto optimal tradeoffs between target coverage and hippocampal sparing using knowledge-based multicriteria optimization (MCO). Ten prior clinical cases treated with hippocampal avoidance whole brain radiotherapy (HA-WBRT) using VMAT were selected. A new, balanced plan was generated for each case using an in-house RapidPlan model in the Eclipse V16.1 treatment planning system. The MCO decision support tool was then used to create four Pareto optimal plans per case, with PTV Dmin and hippocampus Dmax as the tradeoff criteria. The tradeoff plans were generated for each patient by adjusting PTV Dmin from the value achieved by the corresponding balanced plan in fixed intervals: -4 Gy, -2 Gy, +2 Gy, and +4 Gy. All plans were normalized so that 95% of the PTV was covered by the prescription dose. A one-way ANOVA with Geisser-Greenhouse correction was used for statistical analysis. Hippocampal dose decreased as PTV Dmin was lowered, and PTV D98% increased as coverage was raised. Across the tradeoff plans, the p-value for PTV D98% was 0.0026, and the p-values for PTV D2%, PTV Dmin, hippocampus Dmax, Dmin, and Dmean were all less than 0.0001, indicating statistically significant differences among the tradeoff plans. The results also showed that Pareto optimal plans failed to reduce hippocampal dose beyond a certain point, indicating more limited achievability of the MCO-navigated plans than the interface suggested. This study presents valuable planning data for HA-WBRT using MCO. MCO was shown to be largely effective in adjusting the tradeoff between PTV coverage and hippocampal dose.
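A Pareto optimal plan is one that no other plan dominates: no competitor is at least as good on every objective and strictly better on one. As a sketch of that underlying concept only (the objective values below are hypothetical, and this is not the study's planning software):

```python
def dominates(plan_a, plan_b):
    """True if plan_a is at least as good as plan_b on every objective
    and strictly better on at least one (all objectives minimized)."""
    return (all(a <= b for a, b in zip(plan_a, plan_b))
            and any(a < b for a, b in zip(plan_a, plan_b)))

def pareto_front(plans):
    """Return the plans not dominated by any other plan."""
    return [p for p in plans if not any(dominates(q, p) for q in plans)]

# hypothetical objectives: (hippocampus Dmax in Gy, PTV coverage deficit in Gy),
# both to be minimized
plans = [(9.0, 2.0), (10.0, 1.0), (11.0, 3.0)]
print(pareto_front(plans))  # -> [(9.0, 2.0), (10.0, 1.0)]
```

The third plan is worse on both objectives than the first and is discarded; the first two trade one objective against the other, which is exactly the tradeoff the MCO interface lets the planner navigate.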
Affiliation(s)
- Tsegawbizu Gebru
- Medical Dosimetry Program, Southern Illinois University, Carbondale, IL, USA
- Kirk Luca
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Oluwatosin Kayode
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Jiahan Zhang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
21. Adams J, Luca K, Yang X, Patel P, Jani A, Roper J, Zhang J. Plan Quality Analysis of Automated Treatment Planning Workflow With Commercial Auto-Segmentation Tools and Clinical Knowledge-Based Planning Models for Prostate Cancer. Cureus 2023; 15:e41260. [PMID: 37529805; PMCID: PMC10389787; DOI: 10.7759/cureus.41260]
Abstract
This study evaluated the feasibility of using artificial intelligence (AI) segmentation software for volumetric-modulated arc therapy (VMAT) prostate planning in conjunction with knowledge-based planning to facilitate a fully automated workflow. Two commercially available AI software programs, Radformation AutoContour (Radformation, New York, NY) and Siemens AI-Rad Companion (Siemens Healthineers, Malvern, PA), were used to auto-segment the rectum, bladder, femoral heads, and bowel bag on 30 retrospective clinical cases (10 intact prostate, 10 prostate bed, and 10 prostate and lymph node). Physician-segmented target volumes were transferred to the AI structure sets. In-house RapidPlan models were used to generate plans using the original, physician-segmented structure sets as well as the Radformation and Siemens AI-generated structure sets. Thus, there were three plans for each of the 30 cases, totaling 90 plans. Following RapidPlan optimization, planning target volume (PTV) coverage was set to 95%. The plans optimized using AI structures were then recalculated on the physician structure sets with fixed monitor units. In this way, physician contours were used as the gold standard for identifying any clinically relevant differences in dose distributions. One-way analysis of variance (ANOVA) was used for statistical analysis. No statistically significant differences were observed across the three sets of plans for intact prostate, prostate bed, or prostate and lymph node cases. The results indicate that an automated VMAT prostate planning workflow can consistently achieve high plan quality. However, our results also show that small but consistent differences in contouring preferences may lead to subtle differences in planning results. Therefore, the clinical implementation of auto-contouring should be carefully validated.
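The one-way ANOVA used here compares between-group to within-group variance via the F statistic. A minimal stdlib-only sketch with toy dose values follows (illustrative; in practice one would use a statistics package, which also supplies the p-value from the F distribution):

```python
def f_oneway(*groups):
    """One-way ANOVA F statistic for two or more groups of measurements."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    # variance of group means around the grand mean, weighted by group size
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    # variance of samples around their own group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# three hypothetical plan variants, one metric each (arbitrary units)
print(f_oneway([1, 2], [2, 3], [3, 4]))  # -> 4.0
```

A small F (relative to the critical value for the given degrees of freedom) corresponds to the "no statistically significant difference" outcome reported above.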
Affiliation(s)
- Jacob Adams
- Department of Radiation Oncology, Emory University, Atlanta, USA
- Kirk Luca
- Department of Radiation Oncology, Emory University, Atlanta, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, USA
- Pretesh Patel
- Department of Radiation Oncology, Emory University, Atlanta, USA
- Ashesh Jani
- Department of Radiation Oncology, Emory University, Atlanta, USA
- Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, USA
- Jiahan Zhang
- Department of Radiation Oncology, Emory University, Atlanta, USA
22. Lei Y, Ding Y, Qiu RL, Wang T, Roper J, Fu Y, Shu HK, Mao H, Yang X. Hippocampus Substructure Segmentation Using Morphological Vision Transformer Learning. ArXiv 2023:arXiv:2306.08723v1. [PMID: 37396614; PMCID: PMC10312910]
Abstract
Background: The hippocampus plays a crucial role in memory and cognition. Because of the toxicity associated with whole brain radiotherapy, more advanced treatment planning techniques prioritize hippocampal avoidance, which depends on accurate segmentation of the small and complexly shaped hippocampus. Purpose: To achieve accurate segmentation of the anterior and posterior regions of the hippocampus from T1-weighted (T1w) MRI images, we developed a novel model, Hippo-Net, which uses a mutually enhanced strategy. Methods: The proposed model consists of two major parts: (1) a localization model used to detect the volume-of-interest (VOI) of the hippocampus and (2) an end-to-end morphological vision transformer network that performs substructure segmentation within the hippocampus VOI. The substructures are the anterior and posterior regions of the hippocampus, defined as the hippocampus proper and parts of the subiculum. The vision transformer incorporates the dominant features extracted from MRI images, which are further improved by learning-based morphological operators. Integrating these morphological operators into the vision transformer increases the accuracy and the ability to separate the hippocampus into its two distinct substructures. A total of 260 T1w MRI datasets from the Medical Segmentation Decathlon dataset were used in this study. We conducted a five-fold cross-validation on the first 200 T1w MR images and then performed a hold-out test on the remaining 60 T1w MR images with the model trained on the first 200. The segmentations were evaluated with two indicators: (1) multiple metrics, including the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), volume difference (VD), and center-of-mass distance (COMD); and (2) volumetric Pearson correlation analysis.
Results: In five-fold cross-validation, the DSCs were 0.900 ± 0.029 and 0.886 ± 0.031 for the hippocampus proper and parts of the subiculum, respectively. The MSDs were 0.426 ± 0.115 mm and 0.401 ± 0.100 mm, respectively. Conclusions: The proposed method showed great promise in automatically delineating hippocampus substructures on T1w MRI images. It may facilitate the current clinical workflow and reduce physician effort.
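The Dice similarity coefficient reported above measures the overlap between a predicted mask and a reference mask. A minimal sketch over flattened binary masks (illustrative only, not the evaluation code used in the study):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary voxel masks,
    given as equal-length sequences of 0/1 values."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    return 2.0 * intersection / (sum(mask_a) + sum(mask_b))

# 2 of 3 predicted voxels overlap the 2-voxel reference
print(dice([1, 1, 1, 0], [1, 1, 0, 0]))  # -> 0.8
```

A DSC of 1.0 means perfect overlap; values around 0.9, as reported for the hippocampus proper, indicate close agreement on a small structure.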
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Yabo Fu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065
- Hui-Kuo Shu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
23. Pan S, Chang CW, Axente M, Wang T, Shelton J, Liu T, Roper J, Yang X. Data-Driven Volumetric Image Generation from Surface Structures using a Patient-Specific Deep Learning Model. ArXiv 2023:arXiv:2304.14594v2. [PMID: 37163137; PMCID: PMC10168423]
Abstract
The advent of computed tomography has significantly improved patient care through better diagnosis, prognosis, and treatment planning and verification. However, tomographic imaging escalates concomitant radiation doses to patients, potentially increasing the risk of secondary cancer by 4%. We demonstrate the feasibility of a data-driven approach to synthesize volumetric images from patients' surface images, which can be obtained from a zero-dose surface imaging system. This study includes 500 computed tomography (CT) image sets from 50 patients. Compared to the ground-truth CT, the synthetic images achieved a mean absolute error of 26.9 ± 4.1 Hounsfield units, a peak signal-to-noise ratio of 39.1 ± 1.0 dB, and a structural similarity index measure of 0.965 ± 0.011. This approach provides a data-integration solution that can potentially enable real-time imaging, which is free of radiation-induced risk and could be applied to image-guided medical procedures.
Affiliation(s)
- Shaoyan Pan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Department of Biomedical Informatics, Emory University, Atlanta, GA 30308
- These authors contributed equally: Shaoyan Pan, Chih-Wei Chang
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- These authors contributed equally: Shaoyan Pan, Chih-Wei Chang
- Marian Axente
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065
- Joseph Shelton
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Tian Liu
- Department of Radiation Oncology, Mount Sinai Medical Center, New York, NY 10029
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Department of Biomedical Informatics, Emory University, Atlanta, GA 30308
24. Pan S, Wang T, Qiu RLJ, Axente M, Chang CW, Peng J, Patel AB, Shelton J, Patel SA, Roper J, Yang X. 2D medical image synthesis using transformer-based denoising diffusion probabilistic model. Phys Med Biol 2023; 68. [PMID: 37015231; PMCID: PMC10160739; DOI: 10.1088/1361-6560/acca5c]
Abstract
OBJECTIVE Artificial intelligence (AI) methods have gained popularity in medical imaging research. However, training image datasets of the size and scope needed for successful AI model deployment are not always available. In this paper, we introduce a medical image synthesis framework aimed at addressing the challenge of limited training datasets for AI models.

Approach: The proposed 2D image synthesis framework is based on a diffusion model using a Swin-transformer-based network. The model consists of a forward Gaussian noising process and a reverse denoising process performed by the transformer-based diffusion model. Training data include four image datasets: chest X-rays, heart MRI, pelvic CT, and abdomen CT. We evaluated the authenticity, quality, and diversity of the synthetic images using visual Turing assessments conducted by three medical physicists, together with four quantitative metrics: the Inception score (IS), Fréchet inception distance (FID), feature similarity (FDS), and diversity score (DS) between the synthetic and true images. To demonstrate the framework's value for training AI models, we conducted COVID-19 classification tasks using real images, synthetic images, and mixtures of both.
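The forward Gaussian noising process described above corrupts a clean image according to x_t = sqrt(ᾱ_t)·x_0 + sqrt(1−ᾱ_t)·ε with ε ~ N(0, I), and the reverse network is trained to undo it. A toy sketch of a single forward draw on a list of pixel intensities (illustrative only, not the paper's implementation):

```python
import math
import random

def add_gaussian_noise(x0, alpha_bar, rng=random):
    """One forward-diffusion draw: x_t = sqrt(a_bar)*x0 + sqrt(1 - a_bar)*eps,
    where alpha_bar is the cumulative product of the noise schedule at step t."""
    scale = math.sqrt(alpha_bar)
    noise_scale = math.sqrt(1.0 - alpha_bar)
    return [scale * x + noise_scale * rng.gauss(0.0, 1.0) for x in x0]

random.seed(0)
clean = [0.2, 0.5, 0.9]                          # toy pixel intensities
noisy = add_gaussian_noise(clean, alpha_bar=0.5)  # halfway through the schedule
print(len(noisy))  # -> 3
```

At alpha_bar = 1 (t = 0) the image is returned unchanged; as alpha_bar approaches 0 the sample approaches pure Gaussian noise, which is the starting point of the reverse, image-generating pass.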

Main results: Visual Turing assessments showed an average accuracy of 0.64 (accuracy approaching 50% indicates that synthetic images are visually indistinguishable from real ones), sensitivity of 0.79, and specificity of 0.50. The average quantitative scores across all datasets were IS = 2.28, FID = 37.27, FDS = 0.20, and DS = 0.86. For the COVID-19 classification task, the baseline network achieved an accuracy of 0.88 using a purely real dataset, 0.89 using a purely synthetic dataset, and 0.93 using a mixed dataset of real and synthetic data.

Significance: An image synthesis framework was demonstrated that can generate high-quality medical images across different imaging modalities to supplement existing training sets for AI model deployment. This method has potential applications in many areas of data-driven medical imaging research.
Affiliation(s)
- Shaoyan Pan
- Department of Radiation Oncology, Emory University, 1365 Clifton Rd NE, Atlanta, Georgia 30322, USA
- Tonghe Wang
- Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, New York 10065, USA
- Richard L J Qiu
- Department of Radiation Oncology, Emory University, 1365 E Clifton Rd NE Building C, Atlanta, Georgia 30322, USA
- Marian Axente
- Department of Radiation Oncology, Emory University, Clifton Road, Atlanta, Georgia 30322, USA
- Chih-Wei Chang
- Department of Radiation Oncology, Emory University, 1365 Clifton Rd NE, Atlanta, Georgia 30322, USA
- Junbo Peng
- Medical Physics Program, Georgia Institute of Technology, 770 State St NW, Atlanta, Georgia 30332, USA
- Ashish B Patel
- Emory University, 1365 E Clifton Rd NE Building C, Atlanta, Georgia 30322-1007, USA
- Joseph Shelton
- Department of Radiation Oncology, Emory University, 1365 E Clifton Rd NE Building C, Atlanta, Georgia 30322, USA
- Sagar A Patel
- Emory University, 1365 E Clifton Rd NE Building C, Atlanta, Georgia 30322-1007, USA
- Justin Roper
- Department of Radiation Oncology, Emory University, 1365 E Clifton Rd NE Building C, Atlanta, Georgia 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, 1365 E Clifton Rd NE Building C, Atlanta, Georgia 30322, USA
25. Xie H, Lei Y, Fu Y, Wang T, Roper J, Bradley JD, Patel P, Liu T, Yang X. Inter-fraction deformable image registration using unsupervised deep learning for CBCT-guided abdominal radiotherapy. Phys Med Biol 2023; 68. [PMID: 36958049; PMCID: PMC10099091; DOI: 10.1088/1361-6560/acc721]
Abstract
CBCTs in image-guided radiotherapy provide crucial anatomical information for patient setup and plan evaluation. Longitudinal CBCT image registration can quantify inter-fractional anatomic changes, e.g., tumor shrinkage and daily organ-at-risk (OAR) variation, throughout the course of treatment. The purpose of this study is to propose an unsupervised deep learning-based CBCT-CBCT deformable image registration method that enables quantitative analysis of anatomic variation. The proposed deformable registration workflow consists of training and inference stages that share the same feed-forward path through a spatial transformation-based network (STN). The STN consists of a global generative adversarial network (GlobalGAN) and a local GAN (LocalGAN) that predict the coarse- and fine-scale motions, respectively. The network was trained by minimizing the image similarity loss and the deformation vector field (DVF) regularization loss without supervision by ground-truth DVFs. During the inference stage, patches of the local DVF were predicted by the trained LocalGAN and fused to form a whole-image local DVF, which was subsequently combined with the GlobalGAN-generated DVF to obtain the final DVF. The proposed method was evaluated using 100 fractional CBCTs from 20 abdominal cancer patients in the experiments and 105 fractional CBCTs from a cohort of 21 different abdominal cancer patients in a hold-out test. Qualitatively, the registration results show good alignment between the deformed and target CBCT images. Quantitatively, the average target registration error (TRE) calculated on fiducial markers and manually identified landmarks was 1.91 ± 1.18 mm. The average mean absolute error (MAE) and normalized cross-correlation (NCC) between the deformed and target CBCTs were 33.42 ± 7.48 HU and 0.94 ± 0.04, respectively.
In summary, an unsupervised deep learning-based CBCT-CBCT registration method is proposed, and its feasibility and performance in fractionated image-guided radiotherapy are investigated. This promising method could provide fast and accurate longitudinal CBCT alignment to facilitate the analysis and prediction of inter-fractional anatomic changes.
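Target registration error, reported above on fiducial markers, is simply the mean Euclidean distance between corresponding landmark pairs after the registration has been applied. A small sketch with hypothetical landmark coordinates (in mm):

```python
import math

def tre(points_deformed, points_target):
    """Mean target registration error: average Euclidean distance between
    corresponding landmarks after registration (units follow the inputs)."""
    distances = [math.dist(p, q) for p, q in zip(points_deformed, points_target)]
    return sum(distances) / len(distances)

# two hypothetical fiducial markers, deformed position vs. target position
markers_deformed = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
markers_target = [(1.0, 0.0, 0.0), (10.0, 2.0, 0.0)]
print(tre(markers_deformed, markers_target))  # -> 1.5 (mm)
```

Lower TRE means the predicted DVF brings the landmarks closer to their true positions; the 1.91 mm reported above is the average over all markers and patients.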
Affiliation(s)
- Huiqiao Xie
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1365 Clifton Rd NE, Atlanta, Georgia 30322, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, 1365 Clifton Rd NE, Atlanta, Georgia 30322, USA
- Yabo Fu
- Department of Radiation Oncology, Emory University, 1365 Clifton Rd NE, Atlanta, Georgia 30322, USA
- Tonghe Wang
- Department of Radiation Oncology, Emory University, 1365 Clifton Rd NE, Atlanta, Georgia 30322, USA
- Justin Roper
- Department of Radiation Oncology, Emory University, 1365 Clifton Rd NE, Atlanta, Georgia 30322, USA
- Jeffrey D Bradley
- Department of Radiation Oncology, Emory University School of Medicine, 1365 Clifton Rd NE, Atlanta, Georgia 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, 1365 Clifton Rd NE, Atlanta, Georgia 30322, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, 1365 Clifton Rd NE, Atlanta, Georgia 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, 1365 Clifton Rd NE, Atlanta, Georgia 30322, USA
26. Zhang J, Sheng Y, Roper J, Yang X. Editorial: Machine learning-based adaptive radiotherapy treatments: From bench top to bedside. Front Oncol 2023; 13:1188788. [PMID: 37152006; PMCID: PMC10154686; DOI: 10.3389/fonc.2023.1188788]
Affiliation(s)
- Jiahan Zhang
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States
- *Correspondence: Jiahan Zhang
- Yang Sheng
- Department of Radiation Oncology, Duke University, Durham, NC, United States
- Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States
27. Lei Y, Wang T, Jeong JJ, Janopaul-Naylor J, Kesarwala AH, Roper J, Tian S, Bradley JD, Liu T, Higgins K, Yang X. Automated lung tumor delineation on positron emission tomography/computed tomography via a hybrid regional network. Med Phys 2023; 50:274-283. [PMID: 36203393; PMCID: PMC9868056; DOI: 10.1002/mp.16001]
Abstract
BACKGROUND Multimodality positron emission tomography/computed tomography (PET/CT) imaging combines the anatomical information of CT with the functional information of PET. In the diagnosis and treatment of many cancers, such as non-small cell lung cancer (NSCLC), PET/CT imaging allows more accurate delineation of tumors or involved lymph nodes for radiation planning. PURPOSE In this paper, we propose a hybrid regional network method for automatically segmenting lung tumors from PET/CT images. METHODS The hybrid regional network architecture synthesizes the functional and anatomical information from the two image modalities, while a mask regional convolutional neural network (R-CNN) and scoring fine-tune the regional location and quality of the output segmentation. The model consists of five major subnetworks: a dual feature representation network (DFRN), a regional proposal network (RPN), a tumor-specific R-CNN, a mask-Net, and a score head. Given a PET/CT image as input, the DFRN extracts feature maps from the PET and CT images. The RPN and R-CNN then work together to localize lung tumors and reduce the image and feature-map sizes by removing irrelevant regions. The mask-Net segments the tumor within a volume-of-interest (VOI), with the score head evaluating the segmentation produced by the mask-Net. Finally, the segmented tumor within the VOI is mapped back to the volumetric coordinate system based on the location information derived via the RPN and R-CNN. We trained, validated, and tested the proposed neural network using 100 PET/CT images of patients with NSCLC. A five-fold cross-validation study was performed. The segmentation was evaluated with two indicators: (1) multiple metrics, including the Dice similarity coefficient, Jaccard index, 95th percentile Hausdorff distance, mean surface distance (MSD), residual mean square distance, and center-of-mass distance; and (2) Bland-Altman analysis and volumetric Pearson correlation analysis.
RESULTS In five-fold cross-validation, this method achieved Dice and MSD values of 0.84 ± 0.15 and 1.38 ± 2.2 mm, respectively. A new PET/CT can be segmented in 1 s by this model. External validation on The Cancer Imaging Archive dataset (63 PET/CT images) indicates that the proposed model has superior performance compared to other methods. CONCLUSION The proposed method shows great promise for automatically delineating NSCLC tumors on PET/CT images, thereby allowing for a more streamlined clinical workflow that is faster and reduces physician effort.
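Mean surface distance, one of the surface metrics above, averages nearest-neighbor distances between two contour surfaces, usually symmetrized over both directions. A brute-force sketch over small point clouds (illustrative; real evaluations typically use meshes or distance transforms for speed):

```python
import math

def mean_surface_distance(surf_a, surf_b):
    """Symmetric mean surface distance between two surfaces represented
    as point clouds (brute-force nearest neighbor)."""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(surf_a, surf_b) + one_way(surf_b, surf_a))

# two hypothetical 2D contours, offset by 1 unit
contour_a = [(0.0, 0.0), (1.0, 0.0)]
contour_b = [(0.0, 1.0), (1.0, 1.0)]
print(mean_surface_distance(contour_a, contour_b))  # -> 1.0
```

Unlike the 95th percentile Hausdorff distance, which reports a near-worst-case deviation, MSD reflects the typical gap between the predicted and reference contour surfaces.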
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Jiwoong J Jeong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- James Janopaul-Naylor
- Department of Radiation Oncology and Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Aparna H Kesarwala
- Department of Radiation Oncology and Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
Collapse
|
28
|
Roper J, Lin M, Rong Y. Extensive upfront validation and testing are needed prior to the clinical implementation of AI-based auto-segmentation tools. J Appl Clin Med Phys 2022; 24:e13873. [PMID: 36545883 PMCID: PMC9859989 DOI: 10.1002/acm2.13873] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 11/30/2022] [Accepted: 12/01/2022] [Indexed: 12/24/2022] Open
Affiliation(s)
- Justin Roper: Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia, USA
- Mu-Han Lin: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Yi Rong: Department of Radiation Oncology, Mayo Clinic Hospitals, Phoenix, Arizona, USA

29
Pan S, Chang CW, Wang T, Wynne J, Hu M, Lei Y, Liu T, Patel P, Roper J, Yang X. Abdomen CT multi-organ segmentation using token-based MLP-Mixer. Med Phys 2022; 50:3027-3038. [PMID: 36463516 PMCID: PMC10175083 DOI: 10.1002/mp.16135] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 11/11/2022] [Accepted: 11/15/2022] [Indexed: 12/05/2022] Open
Abstract
BACKGROUND Manual contouring is very labor-intensive, time-consuming, and subject to intra- and inter-observer variability. An automated deep learning approach to fast and accurate contouring and segmentation is desirable during radiotherapy treatment planning. PURPOSE This work investigates an efficient deep-learning-based segmentation algorithm in abdomen computed tomography (CT) to facilitate radiation treatment planning. METHODS In this work, we propose a novel deep-learning model utilizing U-shaped multi-layer perceptron mixer (MLP-Mixer) and convolutional neural network (CNN) for multi-organ segmentation in abdomen CT images. The proposed model has a similar structure to V-net, while a proposed MLP-Convolutional block replaces each convolutional block. The MLP-Convolutional block consists of three components: an early convolutional block for local features extraction and feature resampling, a token-based MLP-Mixer layer for capturing global features with high efficiency, and a token projector for pixel-level detail recovery. We evaluate our proposed network using: (1) an institutional dataset with 60 patient cases and (2) a public dataset (BCTV) with 30 patient cases. The network performance was quantitatively evaluated in three domains: (1) volume similarity between the ground truth contours and the network predictions using the Dice score coefficient (DSC), sensitivity, and precision; (2) surface similarity using Hausdorff distance (HD), mean surface distance (MSD) and residual mean square distance (RMS); and (3) the computational complexity reported by the number of network parameters, training time, and inference time. The performance of the proposed network is compared with other state-of-the-art networks. 
RESULTS In the institutional dataset, the proposed network achieved the following volume similarity measures when averaged over all organs: DSC = 0.912, sensitivity = 0.917, precision = 0.917; average surface similarities were HD = 11.95 mm, MSD = 1.90 mm, RMS = 3.86 mm. The proposed network achieved DSC = 0.786 and HD = 9.04 mm on the public dataset. The network also showed statistically significant improvement, as evaluated by a two-tailed Wilcoxon Mann-Whitney U test, on the right lung (MSD, maximum p-value 0.001), spinal cord (sensitivity, precision, HD, and RMS, with p-values ranging from 0.001 to 0.039), and stomach (DSC, maximum p-value 0.01) over all other competing networks. On the public dataset, the network showed statistically significant improvement, by the same Wilcoxon Mann-Whitney test, on the pancreas (HD, maximum p-value 0.006), left adrenal gland (HD, maximum p-value 0.022), and right adrenal gland (DSC, maximum p-value 0.026). In both datasets, the proposed method can generate contours in less than 5 s. Overall, the proposed MLP-Vnet demonstrates comparable or better performance than competing methods with much lower memory complexity and higher speed. CONCLUSIONS The proposed MLP-Vnet demonstrates superior segmentation performance, in terms of accuracy and efficiency, relative to state-of-the-art methods. This reliable and efficient method demonstrates potential to streamline clinical workflows in abdominal radiotherapy, which may be especially important for online adaptive treatments.
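The surface metrics used above (Hausdorff distance and mean surface distance) can be sketched on contour point sets as follows; this is an illustrative stand-alone version on hypothetical 2D points, not the evaluation code used in the study:

```python
import math

def _dists(a, b):
    # For each point in a, distance to the nearest point in b.
    return [min(math.dist(p, q) for q in b) for p in a]

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two contours (point lists)."""
    return max(max(_dists(a, b)), max(_dists(b, a)))

def mean_surface_distance(a, b):
    """Average nearest-neighbour distance, symmetrized over both contours."""
    d = _dists(a, b) + _dists(b, a)
    return sum(d) / len(d)

# Hypothetical 2D contour points (mm):
c1 = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
c2 = [(0.0, 1.0), (1.0, 1.0), (2.0, 3.0)]
print(hausdorff(c1, c2))                      # worst-case mismatch: 3.0 mm
print(round(mean_surface_distance(c1, c2), 2))
```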
Affiliation(s)
- Shaoyan Pan: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
- Chih-Wei Chang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob Wynne: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Mingzhe Hu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu: Department of Radiation Oncology, Mount Sinai Medical Center, New York, New York, USA
- Pretesh Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA

30
Momin S, Wolf J, Roper J, Lei Y, Liu T, Bradley JD, Higgins K, Yang X, Zhang J. Enhanced cardiac substructure sparing through knowledge-based treatment planning for non-small cell lung cancer radiotherapy. Front Oncol 2022; 12:1055428. [DOI: 10.3389/fonc.2022.1055428] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 11/10/2022] [Indexed: 12/03/2022] Open
Abstract
Radiotherapy (RT) doses to cardiac substructures from the definitive treatment of locally advanced non-small cell lung cancers (NSCLC) have been linked to post-RT cardiac toxicities. With modern treatment delivery techniques, it is possible to focus radiation doses to the planning target volume while reducing cardiac substructure doses. However, it is often challenging to design such treatment plans due to complex tradeoffs involving numerous cardiac substructures. Here, we built a cardiac-substructure-based knowledge-based planning (CS-KBP) model and retrospectively evaluated its performance against a cardiac-based KBP (C-KBP) model and manually optimized patient treatment plans. The CS-KBP and C-KBP models were built with 27 previously treated plans that preferentially spare the heart. While the C-KBP training plans were created with whole-heart structures, the CS-KBP model training plans each have 15 cardiac substructures (coronary arteries, valves, great vessels, and chambers of the heart). CS-KBP training plans reflect cardiac-substructure sparing preferences. We evaluated both models on 28 additional patients. Three sets of treatment plans were compared: (1) manually optimized, (2) C-KBP model-generated, and (3) CS-KBP model-generated. Plans were normalized to receive the prescribed dose to at least 95% of the PTV. A two-tailed paired-sample t-test was performed for clinically relevant dose-volume metrics to evaluate the performance of the CS-KBP model against the C-KBP model and the clinical plans. Overall results show significantly improved cardiac substructure sparing by CS-KBP in comparison to C-KBP and the clinical plans. For instance, the average left anterior descending artery volume receiving 15 Gy (V15 Gy) was significantly lower (p < 0.01) for CS-KBP (0.69 ± 1.57 cc) compared to the clinical plans (1.23 ± 1.76 cc) and C-KBP plans (1.05 ± 1.68 cc).
In conclusion, the CS-KBP model significantly improved cardiac-substructure sparing without exceeding the tolerances of other OARs or compromising PTV coverage.
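The two-tailed paired-sample t-test used to compare dose-volume metrics between plan sets reduces to the following sketch; the V15 Gy values are hypothetical toy numbers, not the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-sample t statistic for two matched dose-metric lists.

    t = mean(d) / (stdev(d) / sqrt(n)), where d holds the per-patient
    differences between the two plans.
    """
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Hypothetical LAD V15Gy values (cc) for 5 patients under two plan types:
clinical = [1.9, 1.2, 0.8, 2.4, 1.6]
cs_kbp   = [0.9, 0.5, 0.6, 1.1, 0.7]
t = paired_t(clinical, cs_kbp)
print(round(t, 2))  # a large positive t suggests CS-KBP doses are lower
```

In practice the t statistic would be converted to a two-tailed p-value against a t distribution with n - 1 degrees of freedom (e.g. via `scipy.stats.ttest_rel`).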
31
Lei Y, Fu Y, Tian Z, Wang T, Dai X, Roper J, Yu DS, McDonald M, Bradley JD, Liu T, Zhou J, Yang X. Deformable CT image registration via a dual feasible neural network. Med Phys 2022; 49:7545-7554. [PMID: 35869866 PMCID: PMC9792435 DOI: 10.1002/mp.15875] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Revised: 05/23/2022] [Accepted: 07/15/2022] [Indexed: 12/30/2022] Open
Abstract
PURPOSE Quality assurance (QA) CT scans are usually acquired during cancer radiotherapy to assess for any anatomical changes, which may cause an unacceptable dose deviation and therefore warrant a replan. Accurate and rapid deformable image registration (DIR) is needed to support contour propagation from the planning CT (pCT) to the QA CT to facilitate dose volume histogram (DVH) review. Further, the generated deformation maps are used to track the anatomical variations throughout the treatment course and calculate the corresponding accumulated dose from one or more treatment plans. METHODS In this study, we aim to develop a deep learning (DL)-based method for automatic deformable registration to align the pCT and the QA CT. Our proposed method, named the dual-feasible framework, was implemented by a mutual network that functions as both a forward module and a backward module. The mutual network was trained to predict two deformation vector fields (DVFs) simultaneously, which were then used to register the pCT and QA CT in both directions. A novel dual-feasible loss was proposed to train the mutual network. The dual-feasible framework provides additional DVF regularization during network training, which preserves topology and reduces folding problems. We conducted experiments on 65 head-and-neck cancer patients (228 CTs in total), each with 1 pCT and 2-6 QA CTs. For evaluation, we calculated the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and target registration error (TRE) between the deformed and target images, as well as the Jacobian determinant of the predicted DVFs. RESULTS Within the body contour, the MAE, PSNR, SSIM, and TRE are 122.7 HU, 21.8 dB, 0.62, and 4.1 mm before registration and 40.6 HU, 30.8 dB, 0.94, and 2.0 mm after registration using the proposed method. These results demonstrate the feasibility and efficacy of our proposed method for pCT and QA CT DIR.
CONCLUSION In summary, we proposed a DL-based method for automatic DIR to match the pCT to the QA CT. Such DIR method would not only benefit current workflow of evaluating DVHs on QA CTs but may also facilitate studies of treatment response assessment and radiomics that depend heavily on the accurate localization of tissues across longitudinal images.
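Folding in a predicted DVF is what the Jacobian-determinant evaluation above detects: wherever det J ≤ 0, the deformation locally inverts and the topology is violated. A minimal 2D finite-difference sketch, assuming a dense displacement field stored as nested lists (an illustration, not the paper's implementation):

```python
def jacobian_determinants(dvf):
    """Jacobian determinants of a dense 2D deformation vector field.

    dvf[y][x] = (uy, ux) displacement; the transform is phi(p) = p + u(p),
    so J = I + grad(u). det J <= 0 at a voxel indicates local folding.
    Border voxels are left at 1.0 (identity) for simplicity.
    """
    h, w = len(dvf), len(dvf[0])
    dets = [[1.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Central finite differences of each displacement component.
            duy_dy = (dvf[y + 1][x][0] - dvf[y - 1][x][0]) / 2.0
            duy_dx = (dvf[y][x + 1][0] - dvf[y][x - 1][0]) / 2.0
            dux_dy = (dvf[y + 1][x][1] - dvf[y - 1][x][1]) / 2.0
            dux_dx = (dvf[y][x + 1][1] - dvf[y][x - 1][1]) / 2.0
            dets[y][x] = (1 + duy_dy) * (1 + dux_dx) - duy_dx * dux_dy
    return dets

# A zero DVF (identity transform) has det J = 1 everywhere:
zero = [[(0.0, 0.0)] * 4 for _ in range(4)]
assert all(d == 1.0 for row in jacobian_determinants(zero) for d in row)
```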
Affiliation(s)
- Yang Lei, Yabo Fu, Zhen Tian, Tonghe Wang, Xianjin Dai, Justin Roper, David S Yu, Mark McDonald, Jeffrey D Bradley, Tian Liu, Jun Zhou, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA

32
Luca K, Roper J, Wolf J, Kayode O, Bradley J, Stokes WA, Zhang J. Evaluating the plan quality of a general head-and-neck knowledge-based planning model versus separate unilateral/bilateral models. Med Dosim 2022; 48:44-50. [PMID: 36400649 DOI: 10.1016/j.meddos.2022.10.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 09/22/2022] [Accepted: 10/16/2022] [Indexed: 11/18/2022]
Abstract
The implementation of knowledge-based planning (KBP) continues to grow in radiotherapy clinics. KBP guides radiation treatment design by generating clinically acceptable plans in a timely and resource-efficient manner. The role of multiple KBP models tailored for variations within a disease site remains undefined in part because of the substantial effort and number of training cases required to create a high-quality KBP model. In this study, our aim was to explore whether site-specific KBP models lead to clinically meaningful differences in plan quality for head-and-neck (HN) patients when compared to a general model. One KBP model was created from prior volumetric-modulated arc therapy (VMAT) cases that treated unilateral HN lymph nodes while another model was created from VMAT cases that treated bilateral HN nodes. Thirty cases from each model (60 cases total) were randomly selected to create a third, general model. These models were applied to 60 HN test cases - 30 unilateral and 30 bilateral - to generate 180 VMAT plans in Eclipse. Clinically relevant dose metrics were compared between models. Paired-sample t-tests were used for statistical analysis, with the threshold for statistical significance set a priori at 0.007, taking into consideration multiple hypothesis testing to avoid type I error. For unilateral test cases, the unilateral model-generated plans had significantly lower spinal cord maximum doses (12.1 Gy vs 19.3 Gy, p < 0.001) and oral cavity mean doses (20.8 Gy vs 23.0 Gy, p < 0.001), compared with the bilateral model-generated plans. The unilateral and general models generated comparable plans for unilateral HN test cases. For bilateral test cases, the bilateral model created plans had significantly lower brainstem maximum doses (10.8 Gy vs 12.2 Gy, p < 0.001) and parotid mean doses (24.0 Gy vs 25.5 Gy, p < 0.001) when compared to the unilateral model. 
Right parotid mean doses were lower for bilateral model plans than for general model plans (23.8 Gy vs 24.4 Gy). The general model created plans with significantly lower brainstem maximum doses (10.3 Gy vs 10.8 Gy) and oral cavity mean doses (35.3 Gy vs 36.7 Gy) when compared with bilateral model-generated plans. The general model outperformed the bilateral model on several dose metrics, but the differences were not deemed clinically significant. For both case sets, the unilateral and general model-generated plans had higher monitor units than the bilateral model plans, likely due to more stringent constraint settings. All other dose metrics were comparable. This study demonstrates that a balanced general HN model created from carefully curated treatment plans can produce high-quality plans comparable to dedicated unilateral and bilateral models.
Affiliation(s)
- Kirk Luca, Justin Roper, Jonathan Wolf, Jiahan Zhang: Emory Department of Radiation Oncology, Atlanta, GA, USA

33
Pan S, Lei Y, Wang T, Wynne J, Chang CW, Roper J, Jani AB, Patel P, Bradley JD, Liu T, Yang X. Male pelvic multi-organ segmentation using token-based transformer Vnet. Phys Med Biol 2022; 67:10.1088/1361-6560/ac95f7. [PMID: 36170872 PMCID: PMC9671083 DOI: 10.1088/1361-6560/ac95f7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 09/28/2022] [Indexed: 11/12/2022]
Abstract
Objective. This work aims to develop an automated segmentation method for the prostate and its surrounding organs-at-risk in pelvic computed tomography to facilitate prostate radiation treatment planning. Approach. In this work, we propose a novel deep learning algorithm combining a U-shaped convolutional neural network (CNN) and vision transformer (VIT) for multi-organ (i.e. bladder, prostate, rectum, left and right femoral heads) segmentation in male pelvic CT images. The U-shaped model consists of three components: a CNN-based encoder for local feature extraction, a token-based VIT for capturing global dependencies from the CNN features, and a CNN-based decoder for predicting the segmentation outcome from the VIT's output. The novelty of our network is a token-based multi-head self-attention mechanism used in the transformer, which encourages long-range dependencies and forwards informative high-resolution feature maps from the encoder to the decoder. In addition, a knowledge distillation strategy is deployed to further enhance the learning capability of the proposed network. Main results. We evaluated the network using: (1) a dataset collected from 94 patients with prostate cancer and (2) a public dataset, CT-ORG. The network's performance was quantitatively evaluated for each organ based on: (1) volume similarity between the segmented contours and ground truth using the Dice score, segmentation sensitivity, and precision; (2) surface similarity evaluated by Hausdorff distance (HD), mean surface distance (MSD), and residual mean square distance (RMS); and (3) percentage volume difference (PVD). The performance was then compared against other state-of-the-art methods. Averaged over all organs on the first dataset, the volume similarity measures were Dice score = 0.91, sensitivity = 0.90, precision = 0.92; average surface similarities were HD = 3.78 mm, MSD = 1.24 mm, RMS = 2.03 mm; and the average percentage volume difference was PVD = 9.9%.
On the CT-ORG dataset, the network obtained Dice score = 0.93, sensitivity = 0.93, precision = 0.93; average surface similarities were HD = 5.82 mm, MSD = 1.16 mm, RMS = 1.24 mm; and the average percentage volume difference was PVD = 6.6%. Significance. In summary, we propose a token-based transformer network with knowledge distillation for multi-organ segmentation using CT images. This method provides accurate and reliable segmentation results for each organ using CT imaging, facilitating the prostate radiation clinical workflow.
Affiliation(s)
- Shaoyan Pan, Yang Lei, Tonghe Wang, Jacob Wynne, Chih-Wei Chang, Justin Roper, Ashesh B Jani, Pretesh Patel, Jeffrey D Bradley, Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America; Department of Biomedical Informatics, Emory University, Atlanta, GA 30322, United States of America

34
Momin S, Lei Y, McCall NS, Zhang J, Roper J, Harms J, Tian S, Lloyd MS, Liu T, Bradley JD, Higgins K, Yang X. Mutual enhancing learning-based automatic segmentation of CT cardiac substructure. Phys Med Biol 2022; 67. [PMID: 35447610 PMCID: PMC9148580 DOI: 10.1088/1361-6560/ac692d] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Accepted: 04/21/2022] [Indexed: 11/12/2022]
Abstract
Objective. Current segmentation practice for thoracic cancer RT considers the whole heart as a single organ despite increased risks of cardiac toxicities from irradiation of specific cardiac substructures. Segmenting up to 15 different cardiac substructures can be a very time-intensive process, especially due to their different volume sizes and anatomical variations amongst different patients. In this work, a new deep learning (DL)-based mutual enhancing strategy is introduced for accurate and automatic segmentation, especially of smaller substructures such as coronary arteries. Approach. Our proposed method consists of three subnetworks: retina U-net, a classification module, and a segmentation module. Retina U-net is used as a backbone network architecture that aims to learn deep features from the whole heart. Whole-heart feature maps from retina U-net are then transferred to four different sets of classification modules to generate classification localization maps of coronary arteries, great vessels, chambers of the heart, and valves of the heart. Each classification module is in sync with its corresponding subsequent segmentation module in a bootstrapping manner, allowing them to share their encoding paths to generate a mutual enhancing strategy. We evaluated our method on three different datasets: (1) institutional CT datasets (55 subjects), (2) publicly available Multi-Modality Whole Heart Segmentation (MM-WHS) challenge datasets (120 subjects), and (3) Automated Cardiac Diagnosis Challenge (ACDC) datasets (100 subjects). For the institutional datasets, we performed five-fold cross-validation on training data (45 subjects) and performed inference on separate hold-out data (10 subjects). For each subject, 15 cardiac substructures were manually contoured by a resident physician and evaluated by an attending radiation oncologist.
For the MM-WHS dataset, we trained the network on 100 datasets and performed inference on a separate hold-out dataset of 20 subjects, each with 7 cardiac substructures. For the ACDC datasets, we performed five-fold cross-validation on 100 datasets, each with 3 cardiac substructures. We compared the proposed method against four alternatives: 3D U-net, mask R-CNN, mask scoring R-CNN, and the proposed network without the classification module. Segmentation accuracies were statistically compared through the Dice similarity coefficient, Jaccard index, 95% Hausdorff distance, mean surface distance, root mean square distance, center-of-mass distance, and volume difference. Main results. The proposed method generated cardiac substructure segmentations with significantly higher accuracy (P < 0.05) for small substructures, especially coronary arteries such as the left anterior descending artery (CA-LADA) and right coronary artery (CA-RCA), in comparison to the four competing methods. For large substructures (i.e. chambers of the heart), our method yielded comparable results to the mask scoring R-CNN method, with significantly (P < 0.05) improved segmentation accuracy in comparison to 3D U-net and mask R-CNN. Significance. A new DL-based mutual enhancing strategy was introduced for automatic segmentation of cardiac substructures. Overall results of this work demonstrate the ability of the proposed method to improve segmentation accuracies of smaller substructures such as coronary arteries without largely compromising the segmentation accuracies of larger substructures. Fast and accurate segmentations of up to 15 substructures can possibly be used as a tool to rapidly generate substructure segmentations followed by physicians' reviews to improve clinical workflow.
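Among the evaluation metrics listed above, the center-of-mass distance is the simplest to state precisely; a short illustrative sketch on hypothetical voxel coordinates (not the study's evaluation code):

```python
import math

def center_of_mass_distance(a, b):
    """Euclidean distance between the centroids of two voxel sets (mm)."""
    def com(pts):
        n = len(pts)
        # Average each coordinate axis independently.
        return tuple(sum(c) / n for c in zip(*pts))
    return math.dist(com(a), com(b))

# Hypothetical substructure voxels (x, y, z) in mm:
gt   = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (2, 2, 0)]   # centroid (1, 1, 0)
pred = [(3, 0, 0), (5, 0, 0), (3, 2, 0), (5, 2, 0)]   # centroid (4, 1, 0)
print(center_of_mass_distance(gt, pred))  # 3.0
```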
35
Eidex Z, Wang T, Lei Y, Axente M, Akin-Akintayo OO, Ojo OAA, Akintayo AA, Roper J, Bradley JD, Liu T, Schuster DM, Yang X. MRI-based prostate and dominant lesion segmentation using cascaded scoring convolutional neural network. Med Phys 2022; 49:5216-5224. [PMID: 35533237 PMCID: PMC9388615 DOI: 10.1002/mp.15687] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 03/18/2022] [Accepted: 04/16/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Dose escalation to dominant intraprostatic lesions (DILs) is a novel treatment strategy to improve the treatment outcome of prostate radiation therapy. Treatment planning requires accurate and fast delineation of the prostate and DILs. In this study, a 3D cascaded scoring convolutional neural network is proposed to automatically segment the prostate and DILs from MRI. METHODS AND MATERIALS The proposed cascaded scoring convolutional neural network performs end-to-end segmentation by locating a region-of-interest (ROI), identifying the object within the ROI, and defining the target. A scoring strategy, which is learned to judge the segmentation quality of the DIL, is integrated into the cascaded convolutional neural network to address the challenge of segmenting the irregular shapes of the DIL. To evaluate the proposed method, 77 patients who underwent MRI and PET/CT were retrospectively investigated. The prostate and DIL ground truth contours were delineated by experienced radiologists. The proposed method was evaluated with five-fold cross-validation and holdout testing. RESULTS The average centroid distance, volume difference, and Dice similarity coefficient (DSC) values for the prostate/DIL were 4.3 ± 7.5 mm/3.73 ± 3.78 mm, 4.5 ± 7.9 cc/0.41 ± 0.59 cc, and 89.6 ± 8.9%/84.3 ± 11.9%, respectively. Comparable results were obtained in the holdout test. Similar or superior segmentation outcomes were seen when comparing the results of the proposed method to those of competing segmentation approaches. CONCLUSIONS The proposed automatic segmentation method can accurately and simultaneously segment both the prostate and DILs. The intended future use for this algorithm is focal boost prostate radiation therapy.
Affiliation(s)
- Zach Eidex: Department of Radiation Oncology, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Tonghe Wang: Department of Radiation Oncology, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Yang Lei: Department of Radiation Oncology, Emory University, Atlanta, GA
- Marian Axente: Department of Radiation Oncology, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Justin Roper: Department of Radiation Oncology, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Jeffery D Bradley: Department of Radiation Oncology, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu: Department of Radiation Oncology, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- David M Schuster: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA
- Xiaofeng Yang: Department of Radiation Oncology, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA; Winship Cancer Institute, Emory University, Atlanta, GA

36
Xie H, Lei Y, Wang T, Roper J, Axente M, Bradley JD, Liu T, Yang X. Magnetic resonance imaging contrast enhancement synthesis using cascade networks with local supervision. Med Phys 2022; 49:3278-3287. [PMID: 35229344 DOI: 10.1002/mp.15578] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Revised: 12/03/2021] [Accepted: 02/22/2022] [Indexed: 12/22/2022] Open
Abstract
PURPOSE Gadolinium-based contrast agents (GBCAs) are widely administrated in MR imaging for diagnostic studies and treatment planning. Although GBCAs are generally thought to be safe, various health and environmental concerns have been raised recently about their use in MR imaging. The purpose of this work is to derive synthetic contrast enhance MR images from unenhanced counterpart images, thereby eliminating the need for GBCAs, using a cascade deep learning workflow that incorporates contour information into the network. METHODS AND MATERIALS The proposed workflow consists of two sequential networks: (1) a retina U-Net, which is first trained to derive semantic features from the non-contrast MR images in representing the tumor regions; and (2) a synthesis module, which is trained after the retina U-Net to take the concatenation of the semantic feature maps and non-contrast MR image as input and to generate the synthetic contrast enhanced MR images. After network training, only the non-contrast enhanced MR images are required for the input in the proposed workflow. The MR images of 369 patients from the multimodal brain tumor segmentation challenge 2020 (BraTS2020) dataset were used in this study to evaluate the proposed workflow for synthesizing contrast enhanced MR images (200 patients for five-fold cross-validation and 169 patients for hold-out test). Quantitative evaluations were conducted by calculating the normalized mean absolute error (NMAE), structural similarity index measurement (SSIM), and Pearson correlation coefficient (PCC). The original contrast enhanced MR images were considered as the ground truth in this analysis. RESULTS The proposed cascade deep learning workflow synthesized contrast enhanced MR images that are not visually differentiable from the ground truth with and without supervision of the tumor contours during the network training. 
Difference images and profiles of the synthetic contrast-enhanced MR images revealed that intensity differences could be observed in the tumor region if the contour information was not incorporated in network training. Among the hold-out test patients, mean values and standard deviations of the NMAE, SSIM, and PCC were 0.063 ± 0.022, 0.991 ± 0.007, and 0.995 ± 0.006, respectively, for the whole brain; and were 0.050 ± 0.025, 0.993 ± 0.008, and 0.999 ± 0.003, respectively, for the tumor contour regions. Quantitative evaluations with five-fold cross-validation and the hold-out test showed that the calculated metrics were significantly enhanced (p-values ≤ 0.002) with tumor contour supervision in network training. CONCLUSION The proposed workflow was able to generate synthetic contrast-enhanced MR images that closely resemble the ground truth from non-contrast MR images when the network training included tumor contours. These results suggest that it may be possible to minimize the use of GBCAs in cranial MR imaging studies.
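The three image-similarity metrics reported above (NMAE, SSIM, PCC) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: the normalization in `nmae` and the single-window `global_ssim` are simplifying assumptions (published SSIM values are typically computed with a sliding window).

```python
import numpy as np

def nmae(pred, gt):
    """Normalized mean absolute error: MAE scaled by the ground-truth
    intensity range (one common normalization choice)."""
    return np.mean(np.abs(pred - gt)) / (gt.max() - gt.min())

def pcc(pred, gt):
    """Pearson correlation coefficient between flattened images."""
    return np.corrcoef(pred.ravel(), gt.ravel())[0, 1]

def global_ssim(pred, gt, data_range=None):
    """Single-window (global) SSIM; papers usually report the windowed variant."""
    if data_range is None:
        data_range = gt.max() - gt.min()
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = pred.mean(), gt.mean()
    var_x, var_y = pred.var(), gt.var()
    cov = np.mean((pred - mu_x) * (gt - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

For a perfect synthesis (prediction equal to ground truth) these return 0, 1, and 1, respectively.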
Affiliation(s)
- All authors: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
37
Momin S, Lei Y, Tian Z, Roper J, Lin J, Kahn S, Shu HK, Bradley J, Liu T, Yang X. Cascaded mutual enhancing networks for brain tumor subregion segmentation in multiparametric MRI. Phys Med Biol 2022;67. [PMID: 35299156] [PMCID: PMC9066378] [DOI: 10.1088/1361-6560/ac5ed8]
Abstract
Accurate segmentation of glioma and its subregions plays an important role in radiotherapy treatment planning. Because multiparametric magnetic resonance imaging produces a large number of images, manual segmentation is time-consuming, meticulous, and prone to subjective error. Here, we propose a novel deep learning framework based on mutual enhancing networks to automatically segment brain tumor subregions. The proposed framework is well suited to this task owing to the contribution of a Retina U-Net followed by a mutual enhancing strategy between the classification localization map (CLM) module and the segmentation module. The Retina U-Net is trained to accurately identify the view-of-interest and feature maps of the whole tumor (WT), which are then transferred to the CLM module and the segmentation module. Subsequently, the CLM generated by the CLM module is integrated with the segmentation module to bring forth a mutual enhancing strategy. In this way, our framework first focuses on the WT through the Retina U-Net, and since the WT consists of subregions, the mutual enhancing strategy then further classifies and segments the subregions embedded within the WT. We implemented and evaluated the proposed framework on the BraTS 2020 dataset consisting of 369 cases. We performed a five-fold cross-validation on 200 cases and a hold-out test on the remaining 169 cases. To demonstrate the effectiveness of our network design, we compared our method against the network without the Retina U-Net, the network without the mutual enhancing strategy, and a recently published Cascaded U-Net architecture. Results of all four methods were compared to the ground truth for segmentation and localization accuracy.
Our method yielded significantly (P < 0.01) better values of the Dice similarity coefficient, center-of-mass distance, and volume difference compared to all three competing methods across all tumor labels (necrosis and non-enhancing, edema, enhancing tumor, WT, tumor core) on both the validation and hold-out datasets. Overall, the quantitative and statistical results of this work demonstrate the ability of our method to accurately and automatically segment brain tumor subregions.
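The three segmentation metrics above have compact definitions on binary masks. A minimal NumPy sketch (illustrative, not the authors' implementation; `spacing` and `voxel_volume` are assumed scalar parameters for anisotropic data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def center_of_mass_distance(a, b, spacing=1.0):
    """Euclidean distance between the mask centroids, in spacing units."""
    ca = np.mean(np.argwhere(a), axis=0)
    cb = np.mean(np.argwhere(b), axis=0)
    return float(np.linalg.norm((ca - cb) * spacing))

def volume_difference(a, b, voxel_volume=1.0):
    """Absolute volume difference between the two masks."""
    return abs(int(a.sum()) - int(b.sum())) * voxel_volume
```

Dice rewards overlap, center-of-mass distance penalizes mislocalization, and volume difference catches systematic over- or under-segmentation even when overlap is high.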
38
Zhang J, Sheng Y, Wolf J, Kayode O, Bradley J, Ge Y, Wu QJ, Yang X, Liu T, Roper J. Technical Note: Determining the applicability of a clinical knowledge-based learning model via prospective outlier detection. Med Phys 2022;49:2193-2202. [DOI: 10.1002/mp.15516]
Affiliation(s)
- Yaorong Ge: The University of North Carolina at Charlotte, Charlotte, NC 28223
- Tian Liu: Emory University, Atlanta, GA 30322
39
Xie H, Lei Y, Wang T, Roper J, Dhabaan AH, Bradley JD, Liu T, Mao H, Yang X. Synthesizing high-resolution magnetic resonance imaging using parallel cycle-consistent generative adversarial networks for fast magnetic resonance imaging. Med Phys 2022;49:357-369. [PMID: 34821395] [DOI: 10.1002/mp.15380]
Abstract
PURPOSE Common practice in acquiring magnetic resonance (MR) images is to obtain two-dimensional (2D) slices at coarse locations while keeping high in-plane resolution, in order to ensure sufficient body coverage while shortening the MR scan time. The aim of this study is to propose a novel method to generate high-resolution (HR) MR images from low-resolution MR images along the longitudinal direction. To address the difficulty of collecting paired low- and high-resolution MR images in clinical settings, and to gain the advantage of parallel cycle-consistent generative adversarial networks (CycleGANs) in synthesizing realistic medical images, we developed a parallel-CycleGAN-based method using a self-supervised strategy. METHODS AND MATERIALS The proposed workflow consists of two CycleGANs, trained in parallel, that independently predict the HR MR images in the two planes orthogonal to the longitudinal MR scan direction. The final synthetic HR MR images are then generated by fusing the two predicted images. MR images, including T1-weighted (T1), contrast-enhanced T1-weighted (T1CE), T2-weighted (T2), and T2 Fluid Attenuated Inversion Recovery (FLAIR), of the multimodal brain tumor segmentation challenge 2020 (BraTS2020) dataset were processed to evaluate the proposed workflow along the cranial-caudal (CC), lateral, and anterior-posterior directions. Institutionally collected MR images were also processed for evaluation of the proposed method. The performance of the proposed method was investigated via both qualitative and quantitative evaluations. Metrics of normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), edge keeping index (EKI), structural similarity index measurement (SSIM), information fidelity criterion (IFC), and visual information fidelity in pixel domain (VIFP) were calculated.
RESULTS The proposed method generated HR MR images visually indistinguishable from the ground truth in the investigations on the BraTS2020 dataset. In addition, the intensity profiles, difference images, and SSIM maps confirm the feasibility of the proposed method for synthesizing HR MR images. Quantitative evaluations on the BraTS2020 dataset show that the calculated metrics of the synthetic HR MR images are all enhanced for the T1, T1CE, T2, and FLAIR images. The enhancements in the numerical metrics over the low-resolution and bi-cubic interpolated MR images, as well as those generated with a comparative deep learning method, are statistically significant. Qualitative evaluation of the synthetic HR MR images of the clinically collected dataset also confirms the feasibility of the proposed method. CONCLUSIONS It is feasible to synthesize HR MR images using self-supervised parallel CycleGANs, which can be expected to shorten MR acquisition time in clinical practice.
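Two pieces of the pipeline above are easy to state concretely: the PSNR metric (standard definition) and the fusion of the two orthogonal-plane predictions. The weighted-average fusion shown here is an illustrative assumption; the paper's exact fusion rule may differ.

```python
import numpy as np

def psnr(pred, gt, data_range=None):
    """Peak signal-to-noise ratio in dB over the ground-truth intensity range."""
    if data_range is None:
        data_range = float(gt.max() - gt.min())
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def fuse_orthogonal(vol_a, vol_b, w=0.5):
    """Fuse the HR volumes predicted by the two parallel CycleGANs
    (simple weighted average as a stand-in for the paper's fusion step)."""
    assert vol_a.shape == vol_b.shape
    return w * vol_a + (1.0 - w) * vol_b
```

With `w=0.5` the fusion treats both orthogonal-plane predictions as equally reliable; an unequal `w` would favor whichever plane resolves the anatomy better.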
Affiliation(s)
- Huiqiao Xie: Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Yang Lei: Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Anees H Dhabaan: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Hui Mao: Winship Cancer Institute and Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
40
Matkovic LA, Wang T, Lei Y, Akin-Akintayo OO, Ojo OAA, Akintayo AA, Roper J, Bradley JD, Liu T, Schuster DM, Yang X. Prostate and dominant intraprostatic lesion segmentation on PET/CT using cascaded regional-net. Phys Med Biol 2021;66. [PMID: 34808603] [PMCID: PMC8725511] [DOI: 10.1088/1361-6560/ac3c13]
Abstract
Focal boost to dominant intraprostatic lesions (DILs) has recently been proposed for prostate radiation therapy. Accurate and fast delineation of the prostate and DILs is thus required during treatment planning. In this paper, we develop a learning-based method using positron emission tomography (PET)/computed tomography (CT) images to automatically segment the prostate and its DILs. To enable end-to-end segmentation, a deep learning-based method, called cascaded regional-Net, is utilized. The first network, referred to as the dual attention network, is used to segment the prostate by extracting comprehensive features from both PET and CT images. A second network, referred to as the mask scoring regional convolutional neural network (MSR-CNN), is used to segment the DILs from the PET and CT within the prostate region. A scoring strategy is used to diminish misclassification of the DILs. For DIL segmentation, the proposed cascaded regional-Net uses two steps to remove normal tissue regions: the first step crops images based on the prostate segmentation, and the second step uses MSR-CNN to further locate the DILs. The binary masks of the DILs and prostates of testing patients are generated on the PET/CT images by the trained model. For evaluation, we retrospectively investigated 49 prostate cancer patients with acquired PET/CT images. The prostate and DILs of each patient were contoured by radiation oncologists and set as the ground truth. We used five-fold cross-validation and a hold-out test to train and evaluate our method. The mean surface distance (MSD) and Dice similarity coefficient (DSC) values were 0.666 ± 0.696 mm and 0.932 ± 0.059 for the prostate and 0.814 ± 1.002 mm and 0.801 ± 0.178 for the DILs among all 49 patients. The proposed method has shown promise for facilitating prostate and DIL delineation for DIL focal boost prostate radiation therapy.
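The mean surface distance reported above averages, symmetrically, the distance from each contour's surface to the nearest point on the other. A brute-force 2D NumPy sketch (illustrative only; real 3D evaluation would use a distance transform or KD-tree for speed):

```python
import numpy as np

def boundary(mask):
    """Boundary pixels: foreground with at least one background 4-neighbor."""
    m = mask.astype(bool)
    p = np.pad(m, 1, constant_values=False)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return m & ~interior

def mean_surface_distance(a, b, spacing=1.0):
    """Symmetric mean surface distance between two 2D binary masks."""
    pa = np.argwhere(boundary(a)).astype(float) * spacing
    pb = np.argwhere(boundary(b)).astype(float) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Averaging both directions matters: a small contour inside a large one can have a tiny one-way distance while the reverse direction exposes the mismatch.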
Affiliation(s)
- Luke A. Matkovic: Department of Radiation Oncology, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Yang Lei: Department of Radiation Oncology, Emory University, Atlanta, GA
- Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Jeffery D. Bradley: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- David M. Schuster: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
41
Dai X, Lei Y, Roper J, Chen Y, Bradley JD, Curran WJ, Liu T, Yang X. Deep learning-based motion tracking using ultrasound images. Med Phys 2021;48:7747-7756. [PMID: 34724712] [DOI: 10.1002/mp.15321]
Abstract
PURPOSE Ultrasound (US) imaging is an established imaging modality capable of offering video-rate volumetric images without ionizing radiation. It has the potential for intra-fraction motion tracking in radiation therapy. In this study, a deep learning-based method has been developed to tackle the challenges of motion tracking using US imaging. METHODS We present a Markov-like network, implemented via generative adversarial networks, to extract features from sequential US frames (one tracked frame followed by untracked frames) and thereby estimate a set of deformation vector fields (DVFs) through the registration of the tracked frame and the untracked frames. The positions of the landmarks in the untracked frames are finally determined by shifting the landmarks in the tracked frame according to the estimated DVFs. The performance of the proposed method was evaluated on the testing dataset by calculating the tracking error (TE) between the predicted and ground truth landmarks on each frame. RESULTS The proposed method was evaluated using the MICCAI CLUST 2015 dataset, which was collected using seven US scanners with eight types of transducers, and the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, which was acquired using GE Vivid E95 ultrasound scanners. The CLUST dataset contains 63 2D and 22 3D US image sequences from 42 and 18 subjects, respectively, and the CAMUS dataset includes 2D US images from 450 patients. On the CLUST dataset, our proposed method achieved a mean tracking error of 0.70 ± 0.38 mm for the 2D sequences and 1.71 ± 0.84 mm for the 3D sequences on the publicly available annotations. On the CAMUS dataset, a mean tracking error of 0.54 ± 1.24 mm was achieved for the landmarks in the left atrium. CONCLUSIONS A novel motion tracking algorithm using US images based on modern deep learning techniques has been demonstrated in this study.
The proposed method can offer millimeter-level tumor motion prediction in real time, which has the potential to be adopted into routine tumor motion management in radiation therapy.
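The landmark-shifting step and the TE metric described above reduce to two small operations once a DVF is in hand. A minimal sketch (illustrative assumptions: integer pixel landmarks, nearest-pixel DVF sampling rather than interpolation, and a scalar `spacing` converting pixels to mm):

```python
import numpy as np

def track_landmarks(landmarks, dvf):
    """Shift tracked-frame landmarks by the DVF sampled at each landmark.
    landmarks: (N, 2) integer pixel coords; dvf: (H, W, 2) displacement field."""
    disp = dvf[landmarks[:, 0], landmarks[:, 1]]
    return landmarks + disp

def tracking_error(pred, gt, spacing=1.0):
    """Mean Euclidean distance (e.g., in mm) between predicted and reference landmarks."""
    return float(np.mean(np.linalg.norm((pred - gt) * spacing, axis=1)))
```

In the paper's setting the DVF comes from the GAN-based registration network; here any (H, W, 2) array stands in for it.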
Affiliation(s)
- Xianjin Dai, Yang Lei, Justin Roper, Jeffrey D Bradley, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yue Chen, Xiaofeng Yang: The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University School of Medicine, Atlanta, Georgia, USA
42
Dai X, Lei Y, Wynne J, Janopaul-Naylor J, Wang T, Roper J, Curran WJ, Liu T, Patel P, Yang X. Synthetic CT-aided multiorgan segmentation for CBCT-guided adaptive pancreatic radiotherapy. Med Phys 2021;48:7063-7073. [PMID: 34609745] [PMCID: PMC8595847] [DOI: 10.1002/mp.15264]
Abstract
PURPOSE The delineation of organs at risk (OARs) is fundamental to cone-beam CT (CBCT)-based adaptive radiotherapy treatment planning, but is time consuming, labor intensive, and subject to interoperator variability. We investigated a deep learning-based rapid multiorgan delineation method for use in CBCT-guided adaptive pancreatic radiotherapy. METHODS To improve the accuracy of OAR delineation, two innovative solutions are proposed in this study. First, instead of directly segmenting organs on CBCT images, a pretrained cycle-consistent generative adversarial network (cycleGAN) was applied to generate synthetic CT images from the CBCT images. Second, an advanced deep learning model called mask-scoring regional convolutional neural network (MS R-CNN) was applied to the synthetic CT images to detect the positions and shapes of multiple organs simultaneously for final segmentation. The OAR contours delineated by the proposed method were validated and compared with expert-drawn contours for geometric agreement using the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS). RESULTS Across eight abdominal OARs, including the duodenum, large bowel, small bowel, left and right kidneys, liver, spinal cord, and stomach, the geometric comparisons between automated and expert contours were as follows: 0.92 (0.89-0.97) mean DSC, 2.90 mm (1.63-4.19 mm) mean HD95, 0.89 mm (0.61-1.36 mm) mean MSD, and 1.43 mm (0.90-2.10 mm) mean RMS. Compared to the competing methods, our proposed method had significant improvements (p < 0.05) in all metrics for all eight organs. Once the model was trained, the contours of the eight OARs could be obtained on the order of seconds.
The proposed method could be implemented in the setting of pancreatic adaptive radiotherapy to rapidly contour OARs with high accuracy.
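The HD95 metric used above is the 95th percentile of surface-to-surface distances, taken symmetrically, which makes it robust to a few outlier points on either contour. A brute-force sketch over surface point sets (illustrative; in practice the points come from extracted mask surfaces and spacing-aware distance transforms):

```python
import numpy as np

def hd95(pa, pb):
    """95th percentile Hausdorff distance between two surface point sets,
    shapes (N, 3) and (M, 3); symmetric over both directions."""
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

Using the 95th percentile instead of the maximum (the classic Hausdorff distance) keeps a single stray voxel from dominating the reported value.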
Affiliation(s)
- All authors: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
43
Momin S, Lei Y, Tian Z, Wang T, Roper J, Kesarwala AH, Higgins K, Bradley JD, Liu T, Yang X. Lung tumor segmentation in 4D CT images using motion convolutional neural networks. Med Phys 2021;48:7141-7153. [PMID: 34469001] [DOI: 10.1002/mp.15204]
Abstract
PURPOSE Manual delineation on all breathing phases of lung cancer 4D CT image datasets can be challenging, exhaustive, and prone to subjective errors because of both the large number of images in the datasets and variations in the spatial location of tumors secondary to respiratory motion. The purpose of this work is to present a new deep learning-based framework for fast and accurate segmentation of lung tumors on 4D CT image sets. METHODS The proposed DL framework leverages a motion region convolutional neural network (R-CNN). Through integration of global and local motion estimation network architectures, the network can learn both major and minor changes caused by tumor motion. Our network design first extracts tumor motion information by feeding 4D CT images with consecutive phases into an integrated backbone network architecture, locating volumes-of-interest (VOIs) via a region proposal network and removing irrelevant information via a regional convolutional neural network. The extracted motion information is then passed to the subsequent global and local motion head network architecture to predict the corresponding deformation vector fields (DVFs) and further adjust the tumor VOIs. Binary masks of the tumors are then segmented within the adjusted VOIs via a mask head. A self-attention strategy is incorporated in the mask head network to remove noisy features that might impact segmentation performance. We performed two sets of experiments. In the first experiment, we performed a five-fold cross-validation on 20 4D CT datasets, each consisting of 10 breathing phases (i.e., 200 3D image volumes in total). The network performance was also evaluated on an additional 200 unseen 3D image volumes from 20 hold-out 4D CT datasets. In the second experiment, we trained another model with the 40 patients' 4D CT datasets from experiment 1 and evaluated it on nine additional unseen patients' 4D CT datasets.
The Dice similarity coefficient (DSC), center of mass distance (CMD), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and volume difference (VD) between the manual and segmented tumor contours were computed to evaluate tumor detection and segmentation accuracy. The performance of our method was quantitatively evaluated against four different methods (VoxelMorph, U-Net, the network without the global and local networks, and the network without the attention gate strategy) across all evaluation metrics through a paired t-test. RESULTS The proposed fully automated DL method yielded good overall agreement with the ground truth for contoured tumor volume and segmentation accuracy. Our model yielded significantly better values of the evaluation metrics (p < 0.05) than all four competing methods in both experiments. On the hold-out datasets of experiments 1 and 2, our method yielded DSCs of 0.86 and 0.90, compared to 0.82 and 0.87, 0.75 and 0.83, 0.81 and 0.89, and 0.81 and 0.89 yielded by VoxelMorph, U-Net, the network without the global and local networks, and the network without the attention gate strategy, respectively. The tumor VD between the ground truth and our method was the smallest, with a value of 0.50, compared to 0.99, 1.01, 0.92, and 0.93 between the ground truth and VoxelMorph, U-Net, the network without the global and local networks, and the network without the attention gate strategy, respectively. CONCLUSIONS Our proposed DL framework for tumor segmentation on lung cancer 4D CT datasets demonstrates significant promise for fully automated delineation. The promising results of this work provide impetus for its integration into the 4D CT treatment planning workflow to improve the accuracy and efficiency of lung radiotherapy.
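The paired t-test used above compares two methods on the same patients, so the statistic is computed on per-patient differences. A NumPy sketch of the t statistic (illustrative; in practice the p-value comes from the t distribution with n-1 degrees of freedom, e.g., via `scipy.stats.ttest_rel`):

```python
import numpy as np

def paired_t_statistic(x, y):
    """Paired-sample t statistic over per-patient metric values of two methods."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(n)))
```

Pairing by patient removes inter-patient variability from the comparison, which is why a consistent small improvement across patients can still reach p < 0.05.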
Affiliation(s)
- All authors: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
44
Lei Y, Wang T, Fu Y, Roper J, Jani AB, Liu T, Patel P, Yang X. Catheter position prediction using deep-learning-based multi-atlas registration for high-dose rate prostate brachytherapy. Med Phys 2021;48:7261-7270. [PMID: 34480801] [DOI: 10.1002/mp.15206]
Abstract
PURPOSE High-dose-rate (HDR) prostate brachytherapy involves treatment catheter placement, which is currently empirical and physician dependent. The lack of proper catheter placement guidance during the procedure has left physicians to rely on a heuristic thinking-while-doing technique, which may cause large catheter placement variation and increased plan quality uncertainty. Therefore, the achievable dose distribution cannot be quantified prior to catheter placement. To overcome this challenge, we propose a learning-based method to provide HDR catheter placement guidance for prostate cancer patients undergoing HDR brachytherapy. METHODS The proposed framework consists of deformable registration via a registration network (Reg-Net), multi-atlas ranking, and catheter regression. To model the global spatial relationship among multiple organs, binary masks of the prostate and organs-at-risk are transformed into distance maps, which describe the distance of each local voxel to the organ surfaces. For a new patient, the generated distance map is used as the fixed image. Reg-Net is utilized to deformably register the distance maps from the multi-atlas set to match this patient's distance map and then bring the catheter maps from the multi-atlas set to this patient via spatial transformation. Several criteria, namely prostate volume similarity, multi-organ semantic image similarity, and catheter position criteria (far from the urethra and within the partial prostate), are used for multi-atlas ranking. The top-ranked atlas's deformed catheter positions are selected as the predicted catheter positions for this patient. Finally, catheter regression is used to refine the final catheter positions. A retrospective study on 90 patients with a five-fold cross-validation scheme was used to evaluate the feasibility of the proposed method.
To investigate the impact on plan quality from the predicted catheter pattern, we optimized the source dwell positions and times for both the clinical catheter pattern and the predicted catheter pattern with the same optimization settings. Comparisons of clinically relevant dose volume histogram (DVH) metrics were completed. RESULTS For all patients, on average, both the clinical plan dose and the predicted plan dose meet the common dose constraints when prostate dose coverage is kept at V100 = 95%. The plans from the predicted catheter pattern have slightly higher hotspots, with V150 higher by 5.0% and V200 higher by 2.9% on average. For bladder V75, rectum V75, and urethra V125, the average difference is close to zero, and the range for most patients is within ±1 cc. CONCLUSION We developed a new catheter placement prediction method for HDR prostate brachytherapy based on a deep-learning-based multi-atlas registration algorithm. It has great clinical potential since it can provide catheter location estimates prior to catheter placement, which could reduce the dependence on physicians' experience in catheter implantation and improve the quality of prostate HDR treatment plans. This approach merits further clinical evaluation and validation as a method of quality control for HDR prostate brachytherapy.
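Two ingredients of the abstract above have compact forms: the distance-map representation fed to Reg-Net and the DVH V_x metrics used for evaluation. The sketch below is a brute-force 2D illustration, not the authors' implementation (`scipy.ndimage.distance_transform_edt` would be the practical tool for distance maps, and dose/threshold units must match, e.g., percent of prescription for V100/V150/V200):

```python
import numpy as np

def distance_map(mask):
    """Distance of every pixel to the organ surface (unsigned, brute force, 2D)."""
    m = mask.astype(bool)
    p = np.pad(m, 1, constant_values=False)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    surf = np.argwhere(m & ~interior).astype(float)   # surface pixel coordinates
    grid = np.argwhere(np.ones_like(m)).astype(float)  # every pixel coordinate
    d = np.linalg.norm(grid[:, None, :] - surf[None, :, :], axis=-1).min(axis=1)
    return d.reshape(m.shape)

def dvh_v(dose, mask, threshold, voxel_volume_cc=None):
    """DVH V_x: percentage of the structure receiving at least `threshold`,
    or the absolute volume in cc if voxel_volume_cc is given (e.g., bladder V75)."""
    hits = dose[mask.astype(bool)] >= threshold
    return float(hits.sum() * voxel_volume_cc) if voxel_volume_cc else 100.0 * float(hits.mean())
```

Distance maps give the registration network a smooth, organ-aware target instead of hard binary edges, while `dvh_v` reproduces the V100/V150/V200 style of constraint checking.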
Collapse
Affiliation(s)
- Yang Lei, Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang, Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yabo Fu, Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper, Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ashesh B Jani, Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu, Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel, Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang, Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
45
Dai X, Lei Y, Wang T, Zhou J, Roper J, McDonald M, Beitler JJ, Curran WJ, Liu T, Yang X. Automated delineation of head and neck organs at risk using synthetic MRI-aided mask scoring regional convolutional neural network. Med Phys 2021; 48:5862-5873. [PMID: 34342878] [DOI: 10.1002/mp.15146]
Abstract
PURPOSE Auto-segmentation algorithms offer a potential solution to eliminate the labor-intensive, time-consuming, and observer-dependent manual delineation of organs-at-risk (OARs) in radiotherapy treatment planning. This study aimed to develop a deep learning-based automated OAR delineation method to address the challenges that remain in achieving reliable, expert-level performance with state-of-the-art auto-delineation algorithms. METHODS The accuracy of OAR delineation is expected to improve by utilizing the complementary contrasts provided by computed tomography (CT; bony-structure contrast) and magnetic resonance imaging (MRI; soft-tissue contrast). Given CT images, synthetic MR images were first generated by a pre-trained cycle-consistent generative adversarial network. The features of CT and synthetic MRI were then extracted and combined for the final delineation of organs using a mask scoring regional convolutional neural network. Both in-house and public datasets containing CT scans from head-and-neck (HN) cancer patients were adopted to quantitatively evaluate the performance of the proposed method against current state-of-the-art algorithms using metrics including the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS). RESULTS Across all 18 OARs in our in-house dataset, the proposed method achieved an average DSC, HD95, MSD, and RMS of 0.77 (0.58-0.90), 2.90 mm (1.32-7.63 mm), 0.89 mm (0.42-1.85 mm), and 1.44 mm (0.71-3.15 mm), respectively, outperforming the current state-of-the-art algorithms by 6%, 16%, 25%, and 36%, respectively. On the public datasets, an average DSC of 0.86 (0.73-0.97) was achieved across all nine OARs, 6% better than the competing methods. CONCLUSION We demonstrated the feasibility of a synthetic MRI-aided deep learning framework for automated delineation of OARs in HN radiotherapy treatment planning.
The proposed method could be adopted into routine HN cancer radiotherapy treatment planning to rapidly contour OARs with high accuracy.
Affiliation(s)
- Xianjin Dai, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yang Lei, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jun Zhou, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Mark McDonald, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jonathan J Beitler, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Walter J Curran, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
46
Yang X, Lei Y, Roper J, Patel P, Jani A, Bradley J, Liu T. SP-0476 The use of deep-learning based CBCT segmentation in adaptive radiotherapy. Radiother Oncol 2021. [DOI: 10.1016/s0167-8140(21)08602-3]
47
Momin S, Fu Y, Lei Y, Roper J, Bradley JD, Curran WJ, Liu T, Yang X. Knowledge-based radiation treatment planning: A data-driven method survey. J Appl Clin Med Phys 2021; 22:16-44. [PMID: 34231970] [PMCID: PMC8364264] [DOI: 10.1002/acm2.13337]
Abstract
This paper surveys the data-driven dose prediction methods investigated for knowledge-based planning (KBP) in the last decade. These methods were classified into two major categories, traditional KBP methods and deep-learning (DL) methods, according to how they utilize prior knowledge. Traditional KBP methods include studies that require geometric or anatomical features either to find the best-matched case(s) from a repository of prior treatment plans or to build dose prediction models. DL methods include studies that train neural networks to make dose predictions. A comprehensive review of each category is presented, highlighting key features, methods, and their advancements over the years. We separated the cited works according to the framework and cancer site in each category. Finally, we briefly discuss the performance of both traditional KBP and DL methods, and the future trends of data-driven KBP methods for dose prediction.
Affiliation(s)
- Shadab Momin, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yabo Fu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Justin Roper, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Jeffrey D. Bradley, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J. Curran, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
48
Xie H, Lei Y, Wang T, Tian Z, Roper J, Bradley JD, Curran WJ, Tang X, Liu T, Yang X. High through-plane resolution CT imaging with self-supervised deep learning. Phys Med Biol 2021; 66. [PMID: 34049297] [DOI: 10.1088/1361-6560/ac0684]
Abstract
CT images for radiotherapy planning are usually acquired in thick slices to reduce the imaging dose, especially for pediatric patients, and to lessen the need for contouring and treatment planning on more slices. However, low through-plane resolution may degrade the accuracy of dose calculations. In this paper, a self-supervised deep learning workflow is proposed to synthesize high through-plane resolution CT images by learning from their high in-plane resolution features. The proposed workflow was designed to facilitate neural networks to learn the mapping from low-resolution (LR) to high-resolution (HR) images in the axial plane. During the inference step, the HR sagittal and coronal images were generated by feeding the respective LR sagittal and coronal images to two parallel-trained neural networks. The CT simulation images of a cohort of 75 patients with head and neck cancer (1 mm slice thickness) and 200 CT images of a cohort of 20 lung cancer patients (3 mm slice thickness) were retrospectively investigated in a cross-validation manner. The HR images generated with the proposed method were qualitatively (visual quality, image intensity profiles, and a preliminary observer study) and quantitatively (mean absolute error, edge keeping index, structural similarity index measurement, information fidelity criterion, and visual information fidelity in pixel domain) inspected, taking the original CT images of the head and neck and lung cancer patients as the reference. The qualitative results showed the capability of the proposed method for generating high through-plane resolution CT images with data from both groups of cancer patients. All the improvements in the measured metrics were confirmed to be statistically significant with paired two-sample t-test analysis.
The key innovation of this work is that the proposed deep learning workflow for generating high through-plane resolution CT images in radiotherapy is self-supervised: it does not rely on ground truth CT images to train the network. In addition, the assumption that in-plane HR information can supervise through-plane HR generation is confirmed. We hope this will inspire further research on this topic to improve the through-plane resolution of medical images.
Affiliation(s)
- Huiqiao Xie, Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Yang Lei, Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Tonghe Wang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Zhen Tian, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Justin Roper, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Jeffrey D Bradley, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Walter J Curran, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Xiangyang Tang, Winship Cancer Institute and Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, United States of America
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
49
Momin S, Lei Y, Wang T, Zhang J, Roper J, Bradley JD, Curran WJ, Patel P, Liu T, Yang X. Learning-based dose prediction for pancreatic stereotactic body radiation therapy using dual pyramid adversarial network. Phys Med Biol 2021; 66:125019. [PMID: 34087807] [DOI: 10.1088/1361-6560/ac0856]
Abstract
Treatment planning for pancreatic cancer stereotactic body radiation therapy (SBRT) is very challenging owing to vast spatial variations and the close proximity of many organs-at-risk. Recently, deep learning (DL) based methods have been applied in dose prediction tasks at various treatment sites with the aim of relieving planning challenges. However, their effectiveness for pancreatic cancer SBRT has yet to be fully explored due to limited investigations in the literature. This study aims to further current knowledge of DL-based dose prediction by implementing and demonstrating the feasibility of a new dual pyramid network (DPN)-integrated DL model for predicting dose distributions of pancreatic SBRT. The proposed framework is composed of four parts: a CT-only feature pyramid network (FPN), a contour-only FPN, a late fusion network, and an adversarial network. During each phase of the network, a combination of mean absolute error, gradient difference error, histogram matching, and adversarial loss is used for supervision. The performance of the proposed model was demonstrated on pancreatic cancer SBRT plans with doses prescribed between 33 and 50 Gy across up to three planning target volumes (PTVs) in five fractions. Five-fold cross validation was performed on 30 patients, and another 20 patients were used as a holdout test set for the trained model. Predicted plans were compared with clinically approved plans using dose volume parameters and a paired t-test. For the same sets, our results were compared with three different DL architectures: 3D U-Net, 3D U-Net with adversarial learning, and DPN without adversarial learning. The proposed framework was able to predict 87% and 91% of clinically relevant dose parameters for the cross validation and holdout sets, respectively, without any significant differences (P > 0.05). The dose distributions predicted by our framework also reproduced the intentional hotspots characteristic of SBRT plans.
Our method achieved higher correlation coefficients with the ground truth in 22/26, 24/26, and 20/26 dose volume parameters compared to the DPN without adversarial learning, 3D U-Net, and 3D U-Net with adversarial learning, respectively. Overall, the proposed model was able to predict doses for cases with both single and multiple PTVs. In conclusion, the DPN-integrated DL model was successfully implemented and demonstrated good dose prediction accuracy and dosimetric characteristics for pancreatic cancer SBRT.
Affiliation(s)
- Shadab Momin, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Yang Lei, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Tonghe Wang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Jiahan Zhang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Justin Roper, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Jeffrey D Bradley, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Walter J Curran, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Pretesh Patel, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
50
Wang T, Lei Y, Roper J, Ghavidel B, Beitler JJ, McDonald M, Curran WJ, Liu T, Yang X. Head and neck multi-organ segmentation on dual-energy CT using dual pyramid convolutional neural networks. Phys Med Biol 2021; 66. [PMID: 33915524] [DOI: 10.1088/1361-6560/abfce2]
Abstract
Organ delineation is crucial to diagnosis and therapy, yet it is labor-intensive and observer-dependent. Dual-energy CT (DECT) provides additional image contrast beyond conventional single-energy CT (SECT), which may facilitate automatic organ segmentation. This work aims to develop an automatic multi-organ segmentation approach using deep learning for the head-and-neck region on DECT. We proposed a mask scoring regional convolutional neural network (R-CNN) in which comprehensive features are first learned from two independent pyramid networks and then combined via a deep attention strategy to highlight the informative features extracted from both the low- and high-energy CT channels. To perform multi-organ segmentation and avoid misclassification, a mask scoring subnetwork was integrated into the Mask R-CNN framework to build the correlation between the class of a detected organ's region-of-interest (ROI) and the shape of that organ's segmentation within the ROI. We evaluated our model on DECT images from 127 head-and-neck cancer patients (66 training, 61 testing) with manual contours of 19 organs as the training target and ground truth. For large- and mid-sized organs such as the brain and parotid, the proposed method achieved an average Dice similarity coefficient (DSC) larger than 0.8. For small organs with very low contrast, such as the chiasm, cochlea, lens, and optic nerves, the DSCs ranged between approximately 0.5 and 0.8. With the proposed method, using DECT images outperformed using SECT images in almost all 19 organs, with statistically significant differences in DSC (p < 0.05). Meanwhile, on DECT, the proposed method was also significantly superior to a recently developed FCN-based method in most organs in terms of DSC and the 95th percentile Hausdorff distance. These quantitative results demonstrated the feasibility of the proposed method, the superiority of DECT over SECT, and the advantage of the proposed R-CNN over the FCN in this head-and-neck patient study.
The proposed method has the potential to facilitate the current head-and-neck cancer radiation therapy workflow in treatment planning.
Affiliation(s)
- Tonghe Wang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Yang Lei, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Justin Roper, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Beth Ghavidel, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Jonathan J Beitler, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Mark McDonald, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Walter J Curran, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America