1.
Rossi M, Belotti G, Mainardi L, Baroni G, Cerveri P. Feasibility of proton dosimetry overriding planning CT with daily CBCT elaborated through generative artificial intelligence tools. Comput Assist Surg (Abingdon) 2024;29:2327981. PMID: 38468391. DOI: 10.1080/24699322.2024.2327981.
Abstract
Radiotherapy commonly utilizes cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is considered safe for patients, making it suitable for the delivery of fractional doses. However, limitations such as a narrow field of view (FOV), beam hardening, scattered-radiation artifacts, and variability in pixel intensity prevent the direct use of raw CBCT for dose recalculation during treatment. Reliable correction techniques are therefore necessary to remove artifacts and remap pixel intensities to Hounsfield unit (HU) values. This study proposes a deep-learning framework for calibrating CBCT images acquired with narrow-FOV systems and demonstrates its potential use in proton treatment planning updates. A cycle-consistent generative adversarial network (CycleGAN) processes raw CBCT to reduce scatter and remap HU values. Monte Carlo simulation is used to generate CBCT scans, making it possible to focus solely on the algorithm's ability to reduce artifacts and cupping effects, without intra-patient longitudinal variability, and to produce a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To show the viability of the approach on real-world data, experiments were also conducted on real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. Calibration of the simulated CBCT led to a difference in proton dosimetry of less than 2% compared with the planning CT. The potential toxicity effect on the organs at risk decreased from about 50% (uncalibrated) to about 2% (calibrated). The gamma pass rate at 3%/2 mm improved by about 37 percentage points after calibration (53.78% vs. 90.26%). Real data confirmed this with slightly lower performance for the same criteria (65.36% vs. 87.20%). These results suggest that generative artificial intelligence brings the use of narrow-FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.
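For reference, the 3%/2 mm gamma criterion quoted above combines a dose-difference tolerance with a distance-to-agreement search. The following is a minimal 1-D sketch under simplifying assumptions (global normalization, profiles on a shared grid); clinical tools perform the search in 3-D with sub-voxel interpolation:

```python
import numpy as np

def gamma_pass_rate(ref, evl, coords, dose_tol=0.03, dist_tol=2.0):
    """Global 1-D gamma analysis (3%/2 mm by default).

    ref, evl : reference and evaluated dose profiles (Gy), same grid
    coords   : sample positions (mm)
    Returns the percentage of evaluated points with gamma <= 1.
    """
    dmax = ref.max()  # global normalization dose
    passes = 0
    for i in range(len(evl)):
        # dose-difference term against every reference point
        dd = (evl[i] - ref) / (dose_tol * dmax)
        # distance-to-agreement term against every reference position
        dta = (coords[i] - coords) / dist_tol
        # gamma index: best combined agreement over the reference profile
        gamma = np.sqrt(dd ** 2 + dta ** 2).min()
        passes += gamma <= 1.0
    return 100.0 * passes / len(evl)
```

Identical profiles yield 100%; a systematic dose error degrades the rate in proportion to how far it exceeds the combined tolerances.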
Affiliation(s)
- Matteo Rossi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Laboratory of Innovation in Sleep Medicine, Istituto Auxologico Italiano, Milan, Italy
- Gabriele Belotti
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Luca Mainardi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Guido Baroni
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Bioengineering Unit, Clinical Department, National Center for Oncological Hadrontherapy (CNAO), Pavia, Italy
- Pietro Cerveri
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Laboratory of Innovation in Sleep Medicine, Istituto Auxologico Italiano, Milan, Italy
2.
Sun P, Yang J, Tian X, Yuan G. Image fusion-based low-dose CBCT enhancement method for visualizing miniscrew insertion in the infrazygomatic crest. BMC Med Imaging 2024;24:114. PMID: 38760689. PMCID: PMC11100247. DOI: 10.1186/s12880-024-01289-2.
Abstract
Digital dental technology covers oral cone-beam computed tomography (CBCT) image processing and low-dose CBCT dental applications. A low-dose CBCT image enhancement method based on image fusion is proposed to address the needs of miniscrew insertion in the infrazygomatic crest. First, a sharpening correction module is proposed, in which the CBCT image is sharpened to compensate for the loss of detail in under-/over-exposed regions. Second, a visibility restoration module based on type-II fuzzy sets and a contrast enhancement module using curve transformation are designed. In addition, a perceptual fusion module is proposed that fuses the visibility and contrast of oral CBCT images. As a result, the problems of over-/under-exposure, low visibility, and low contrast in oral CBCT images can be effectively addressed with consistent interpretability. The proposed algorithm was compared with a variety of algorithms and analyzed in ablation experiments. Compared with advanced enhancement algorithms, it achieved excellent results in low-dose CBCT enhancement and effective observation of miniscrew insertion in the infrazygomatic crest. Compared with the best-performing method, its evaluation metrics are 0.07-2 higher on both datasets. The project can be found at: https://github.com/sunpeipei2024/low-dose-CBCT
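As an illustration of the curve-transformation and weighted-fusion ideas above (not the paper's actual modules; the power-law curve and deviation-based weights below are hypothetical stand-ins):

```python
import numpy as np

def curve_enhance(img, gamma=0.6):
    """Contrast enhancement by a power-law curve: a hypothetical stand-in
    for a curve-transformation module (gamma < 1 brightens dark regions)."""
    img = np.asarray(img, dtype=np.float64)
    norm = (img - img.min()) / (np.ptp(img) + 1e-8)  # rescale to [0, 1]
    return norm ** gamma

def perceptual_fuse(a, b):
    """Pixel-wise weighted fusion of two enhanced versions; the weights,
    from deviation about each image's mean, are a crude stand-in for a
    perceptual-fusion module."""
    wa = np.abs(a - a.mean()) + 1e-8
    wb = np.abs(b - b.mean()) + 1e-8
    return (wa * a + wb * b) / (wa + wb)
```

Because the fusion is a pixel-wise convex combination, the output stays in the [0, 1] range of its enhanced inputs.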
Affiliation(s)
- Peipei Sun
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Department of Pediatric Dentistry, School and Hospital of Stomatology, Wuhan University, Wuhan, China
- Jinghui Yang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Department of Pediatric Dentistry, School and Hospital of Stomatology, Wuhan University, Wuhan, China
- Xue Tian
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Department of Pediatric Dentistry, School and Hospital of Stomatology, Wuhan University, Wuhan, China
- Guohua Yuan
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Department of Pediatric Dentistry, School and Hospital of Stomatology, Wuhan University, Wuhan, China
- Frontier Science Center for Immunology and Metabolism, Wuhan University, Wuhan, China
3.
Liu H, Zhou Y, Gou S, Luo Z. Tumor conspicuity enhancement-based segmentation model for liver tumor segmentation and RECIST diameter measurement in non-contrast CT images. Comput Biol Med 2024;174:108420. PMID: 38613896. DOI: 10.1016/j.compbiomed.2024.108420.
Abstract
BACKGROUND AND OBJECTIVE Liver tumor segmentation (LiTS) accuracy on contrast-enhanced computed tomography (CECT) images is higher than on non-contrast computed tomography (NCCT) images. However, CECT requires contrast medium and repeated scans to obtain multiphase enhanced CT images, which is time-consuming and costly. Despite its lower accuracy, LiTS on NCCT images therefore still plays an irreplaceable role in some clinical settings, such as guided brachytherapy, ablation, or the evaluation of patients with impaired renal function. In this study, we aimed to generate enhanced high-contrast pseudo-color CT (PCCT) images to improve the accuracy of LiTS and RECIST diameter measurement on NCCT images. METHODS To generate high-contrast images of CT liver tumor regions, an intensity-based tumor conspicuity enhancement (ITCE) model was first developed. In the ITCE model, a pseudo-color conversion function was established from the intensity distribution of the tumor and applied to NCCT to generate enhanced PCCT images. Additionally, we designed a tumor conspicuity enhancement-based liver tumor segmentation (TCELiTS) model to improve the segmentation of liver tumors on NCCT images. The TCELiTS model consists of three components: an image enhancement module based on the ITCE model, a segmentation module based on a deep convolutional neural network, and an attention loss module based on restricted activation. Segmentation performance was analyzed using the Dice similarity coefficient (DSC), sensitivity, specificity, and RECIST diameter error. RESULTS To develop the deep learning model, 100 patients with histopathologically confirmed liver tumors (hepatocellular carcinoma, 64 patients; hepatic hemangioma, 36 patients) were randomly divided into a training set (75 patients) and an independent test set (25 patients). Compared with existing automatic tumor segmentation networks trained on CECT images (U-Net, nnU-Net, DeepLab-V3, Modified U-Net), the DSCs achieved on the enhanced PCCT images improved over those on NCCT images: from 0.696 to 0.713 (U-Net), 0.715 to 0.776 (nnU-Net), 0.748 to 0.788 (DeepLab-V3), and 0.733 to 0.799 (Modified U-Net). In addition, an observer study including five doctors compared segmentation performance on enhanced PCCT images with that on NCCT images and showed that enhanced PCCT images are more advantageous for doctors segmenting tumor regions: accuracy improved by approximately 3%-6%, while the time required to segment a single CT image decreased by approximately 50%. CONCLUSIONS Experimental results show that the ITCE model can generate high-contrast enhanced PCCT images, especially in liver regions, and the TCELiTS model can improve LiTS accuracy on NCCT images.
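The DSC figures above follow the standard overlap definition, 2|A∩B| / (|A| + |B|); a minimal sketch on binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks (1 = perfect
    overlap, 0 = disjoint)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    # small epsilon guards against two empty masks
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)
```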
Affiliation(s)
- Haofeng Liu
- School of Artificial Intelligence, Xidian University, Xi'an, 710071, China
- Yanyan Zhou
- Department of Interventional Radiology, Tangdu Hospital, Airforce Medical University, Xi'an, 710038, China
- Shuiping Gou
- School of Artificial Intelligence, Xidian University, Xi'an, 710071, China
- Zhonghua Luo
- Department of Interventional Radiology, Tangdu Hospital, Airforce Medical University, Xi'an, 710038, China
4.
Du Y, Li D, Hu Z, Liu S, Xia Q, Zhu J, Xu J, Yu T, Zhu D. Dual-Channel in Spatial-Frequency Domain CycleGAN for perceptual enhancement of transcranial cortical vascular structure and function. Comput Biol Med 2024;173:108377. PMID: 38569233. DOI: 10.1016/j.compbiomed.2024.108377.
Abstract
Observing cortical vascular structure and function at high resolution using laser speckle contrast imaging (LSCI) plays a crucial role in understanding cerebral pathologies. Open-skull window techniques are usually applied to reduce scattering by the skull and enhance image quality, but craniotomy inevitably induces inflammation, which may obstruct observation in certain scenarios. Image enhancement algorithms, in contrast, provide popular tools for improving the signal-to-noise ratio (SNR) of LSCI; however, current methods are unsatisfactory through the intact skull because transcranial cortical images are of poor quality, and existing algorithms do not guarantee the accuracy of dynamic blood flow mappings. In this study, we develop an unsupervised deep learning method, named Dual-Channel in Spatial-Frequency Domain CycleGAN (SF-CycleGAN), to enhance the perceptual quality of cortical blood flow imaging by LSCI. SF-CycleGAN enables convenient, non-invasive, and effective observation of cortical vascular structure and accurate dynamic blood flow mapping without craniotomy, visualizing biodynamics in an undisturbed biological environment. Our experimental results showed that SF-CycleGAN achieved an SNR at least 4.13 dB higher than that of other unsupervised methods, imaged the complete vascular morphology, and enabled the functional observation of small cortical vessels. The proposed method also showed remarkable robustness and generalized to various imaging configurations and image modalities, including fluorescence images, without retraining.
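The dB-scale SNR gain quoted above can be grounded with a simple estimate; the signal/background partition below is a hypothetical simplification (real LSCI pipelines estimate noise differently):

```python
import numpy as np

def snr_db(img, background_mask):
    """SNR in decibels: mean of the signal region over the standard
    deviation of a background (noise-only) region, on a 20*log10 scale."""
    signal = img[~background_mask].mean()
    noise = img[background_mask].std()
    return 20.0 * np.log10(signal / noise)
```

On this scale, the reported 4.13 dB improvement corresponds to roughly a 1.6x gain in the signal-to-noise amplitude ratio.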
Affiliation(s)
- Yuwei Du
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
- Dongyu Li
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
- School of Optical and Electronic Information-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Zhengwu Hu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
- Shaojun Liu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
- Qing Xia
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
- Jingtan Zhu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
- Jianyi Xu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
- Tingting Yu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
- Dan Zhu
- Britton Chance Center for Biomedical Photonics - MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics - Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, 430074, China
5.
Fukuda M, Kotaki S, Nozawa M, Kuwada C, Kise Y, Ariji E, Ariji Y. A cycle generative adversarial network for generating synthetic contrast-enhanced computed tomographic images from non-contrast images in the internal jugular lymph node-bearing area. Odontology 2024. PMID: 38607582. DOI: 10.1007/s10266-024-00933-1.
Abstract
The objective of this study was to create a mutual conversion system between contrast-enhanced computed tomography (CECT) and non-CECT images using a cycle generative adversarial network (CycleGAN) for the internal jugular region. Image patches were cropped from CT images of 25 patients who underwent both CECT and non-CECT imaging. Using the CycleGAN, synthetic CECT and non-CECT images were generated from original non-CECT and CECT images, respectively. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were calculated. Visual Turing tests were used to determine whether oral and maxillofacial radiologists could distinguish synthetic from original images, and receiver operating characteristic (ROC) analyses were used to assess the radiologists' performance in discriminating lymph nodes from blood vessels. The PSNR of non-CECT images was higher than that of CECT images, while the SSIM was higher for CECT images. The Visual Turing test showed a higher perceptual quality for CECT images. The area under the ROC curve showed almost perfect performance for synthetic as well as original CECT images. In conclusion, synthetic CECT images created by the CycleGAN appear to have the potential to provide effective information for patients who cannot receive contrast enhancement.
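The PSNR comparison above uses the standard closed form; a minimal sketch (the default `data_range` of 255 is an assumption for 8-bit images):

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image sharing the same dynamic range."""
    mse = np.mean((np.asarray(ref, dtype=np.float64) - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```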
Affiliation(s)
- Motoki Fukuda
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, 1-5-17 Otemae, Chuo-Ku, Osaka, Japan
- Shinya Kotaki
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, 1-5-17 Otemae, Chuo-Ku, Osaka, Japan
- Michihito Nozawa
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, 1-5-17 Otemae, Chuo-Ku, Osaka, Japan
- Chiaki Kuwada
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Yoshiko Ariji
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, 1-5-17 Otemae, Chuo-Ku, Osaka, Japan
6.
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024;4:1385742. PMID: 38601888. PMCID: PMC11004271. DOI: 10.3389/fradi.2024.1385742.
Abstract
The aim of this systematic review is to determine whether deep learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic computed tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on cone beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodologies, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis, contrasting the presented methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, revealing that DL-based sCT has achieved considerable popularity while also showing the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
7.
Sun H, Yang Z, Zhu J, Li J, Gong J, Chen L, Wang Z, Yin Y, Ren G, Cai J, Zhao L. Pseudo-medical image-guided technology based on 'CBCT-only' mode in esophageal cancer radiotherapy. Comput Methods Programs Biomed 2024;245:108007. PMID: 38241802. DOI: 10.1016/j.cmpb.2024.108007.
Abstract
PURPOSE To minimize the various errors introduced by image-guided radiotherapy (IGRT) in esophageal cancer treatment, this study proposes a novel pseudo-medical image guidance technique based on a 'CBCT-only' mode. METHODS The framework consists of two pseudo-medical image synthesis models, one in the CBCT→CT direction and one in the CT→PET direction. The former uses a dual-domain parallel deep learning model, called AWM-PNet, that incorporates attention waning mechanisms; it effectively suppresses artifacts in CBCT images in both the sinogram and spatial domains while efficiently capturing important image features and contextual information. The latter leverages tumor location and shape information provided by clinical experts and introduces a PRAM-GAN model based on a prior region aware mechanism to establish a non-linear mapping between the CT and PET image domains, enabling the generation of pseudo-PET images that meet clinical requirements for radiotherapy. RESULTS The NRMSE and multi-scale SSIM (MS-SSIM) were used to evaluate the test set; results are presented as medians with lower- and upper-quartile ranges. For the AWM-PNet model, the NRMSE and MS-SSIM values were 0.0218 (0.0143, 0.0255) and 0.9325 (0.9141, 0.9410), respectively. The PRAM-GAN model produced NRMSE and MS-SSIM values of 0.0404 (0.0356, 0.0476) and 0.9154 (0.8971, 0.9294), respectively. Statistical analysis revealed significant differences (p < 0.05) between these models and the others compared. Dose metrics, including D98%, Dmean, and D2%, validated the accuracy of the HU values in the pseudo-CT images synthesized by AWM-PNet, and Dice coefficient results confirmed statistically significant differences (p < 0.05) in GTV delineation between pseudo-PET images synthesized by PRAM-GAN and those from the compared methods. CONCLUSION The AWM-PNet and PRAM-GAN models can generate accurate pseudo-CT and pseudo-PET images, respectively. The pseudo-image-guided technique based on the 'CBCT-only' mode shows promising prospects for application in esophageal cancer radiotherapy.
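The NRMSE values and the median (lower quartile, upper quartile) reporting above can be sketched with standard definitions (range-normalized RMSE is one common convention; the authors' exact normalization is not specified here):

```python
import numpy as np

def nrmse(ref, test):
    """Root mean square error normalized by the reference dynamic range."""
    ref = np.asarray(ref, dtype=np.float64)
    rmse = np.sqrt(np.mean((ref - test) ** 2))
    return rmse / (ref.max() - ref.min())

def median_iqr(values):
    """Median with lower and upper quartiles, as used for test-set reporting."""
    q1, med, q3 = np.percentile(values, [25, 50, 75])
    return med, q1, q3
```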
Affiliation(s)
- Hongfei Sun
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhi Yang
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jiarui Zhu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jie Li
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jie Gong
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Liting Chen
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhongfei Wang
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Yutian Yin
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Lina Zhao
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
8.
Curcuru AN, Yang D, An H, Cuculich PS, Robinson CG, Gach HM. Technical note: Minimizing CIED artifacts on a 0.35 T MRI-Linac using deep learning. J Appl Clin Med Phys 2024;25:e14304. PMID: 38368615. DOI: 10.1002/acm2.14304.
Abstract
BACKGROUND Artifacts from implantable cardioverter defibrillators (ICDs) are a challenge to magnetic resonance imaging (MRI)-guided radiotherapy (MRgRT). PURPOSE This study tested an unsupervised generative adversarial network to mitigate ICD artifacts in balanced steady-state free precession (bSSFP) cine MRIs and to improve image quality and tracking performance for MRgRT. METHODS Fourteen healthy volunteers (Group A) were scanned on a 0.35 T MRI-Linac with and without an MR-conditional ICD taped to the left pectoral region to simulate an implanted ICD. bSSFP MRI data from 12 of the volunteers were used to train a CycleGAN model to reduce ICD artifacts; data from the remaining two volunteers were used for testing. In addition, the dataset was reorganized three times using a leave-one-out scheme. Tracking metrics [Dice similarity coefficient (DSC), target registration error (TRE), and 95th-percentile Hausdorff distance (95% HD)] were evaluated for whole-heart contours, and image quality metrics [normalized root mean square error (nRMSE), peak signal-to-noise ratio (PSNR), and multiscale structural similarity (MS-SSIM) scores] were evaluated. The technique was also tested qualitatively on three additional ICD datasets (Group B), including one from a patient with an implanted ICD. RESULTS For the whole-heart contour with CycleGAN reconstruction: (1) mean DSC rose from 0.910 to 0.935; (2) mean TRE dropped from 4.488 to 2.877 mm; and (3) mean 95% HD dropped from 10.236 to 7.700 mm. For the whole-body slice with CycleGAN reconstruction: (1) mean nRMSE dropped from 0.644 to 0.420; (2) mean MS-SSIM rose from 0.779 to 0.819; and (3) mean PSNR rose from 18.744 to 22.368. The three Group B datasets evaluated qualitatively displayed a reduction in ICD artifacts in the heart. CONCLUSION CycleGAN-generated reconstructions significantly improved both tracking and image quality metrics when used to mitigate artifacts from ICDs.
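The 95% HD used above softens the classic Hausdorff maximum to its 95th percentile, making the metric robust to a few outlier contour points; a minimal point-set sketch:

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    of shape (n, d) and (m, d)."""
    # pairwise distance matrix via broadcasting
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # each point in A to its nearest point in B
    b_to_a = d.min(axis=0)  # and vice versa
    return np.percentile(np.concatenate([a_to_b, b_to_a]), 95)
```

The brute-force distance matrix is fine for contour-sized point sets; large volumes would use a KD-tree or distance transform instead.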
Affiliation(s)
- Austen N Curcuru
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri, USA
- Deshan Yang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Hongyu An
- Departments of Radiology, Biomedical Engineering and Neurology, Washington University in St. Louis, St. Louis, Missouri, USA
- Phillip S Cuculich
- Departments of Cardiovascular Medicine and Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri, USA
- Clifford G Robinson
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri, USA
- H Michael Gach
- Departments of Radiation Oncology, Radiology and Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
9.
Peng J, Qiu RLJ, Wynne JF, Chang CW, Pan S, Wang T, Roper J, Liu T, Patel PR, Yu DS, Yang X. CBCT-based synthetic CT image generation using conditional denoising diffusion probabilistic model. Med Phys 2024;51:1847-1859. PMID: 37646491. DOI: 10.1002/mp.16704.
Abstract
BACKGROUND Daily or weekly cone-beam computed tomography (CBCT) scans are commonly used for accurate patient positioning during the image-guided radiotherapy (IGRT) process, making them an ideal option for adaptive radiotherapy (ART) replanning. However, the presence of severe artifacts and inaccurate Hounsfield unit (HU) values prevents their use for quantitative applications such as organ segmentation and dose calculation. To enable the clinical practice of online ART, it is crucial to obtain CBCT scans with a quality comparable to that of a CT scan. PURPOSE This work aims to develop a conditional diffusion model to perform image translation from the CBCT to the CT distribution to improve CBCT image quality. METHODS The proposed method is a conditional denoising diffusion probabilistic model (DDPM) that utilizes a time-embedded U-Net architecture with residual and attention blocks to gradually transform a white Gaussian noise sample to the target CT distribution, conditioned on the CBCT. The model was trained on deformed planning CT (dpCT) and CBCT image pairs, and its feasibility was verified in a brain patient study and a head-and-neck (H&N) patient study. The performance of the proposed algorithm was evaluated using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) metrics on generated synthetic CT (sCT) samples. The proposed method was also compared with four other diffusion model-based sCT generation methods. RESULTS In the brain patient study, the MAE, PSNR, and NCC of the generated sCT were 25.99 HU, 30.49 dB, and 0.99, respectively, compared with 40.63 HU, 27.87 dB, and 0.98 for the CBCT images. In the H&N patient study, the corresponding metrics were 32.56 HU, 27.65 dB, and 0.98 for sCT and 38.99 HU, 27.00 dB, and 0.98 for CBCT. Compared with the other four diffusion models and one cycle generative adversarial network (CycleGAN), the proposed method showed superior results in both visual quality and quantitative analysis. CONCLUSIONS The proposed conditional DDPM method can generate sCT from CBCT with accurate HU numbers and reduced artifacts, enabling accurate CBCT-based organ segmentation and dose calculation for online ART.
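The forward (noising) process that a conditional DDPM learns to invert has a closed form, x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps with a_bar_t the cumulative product of (1 - beta). A minimal sketch of that forward step (the schedule values are illustrative, and the learned conditional reverse process is omitted):

```python
import numpy as np

def ddpm_forward_sample(x0, t, betas, rng):
    """Sample x_t from q(x_t | x_0) in closed form:
    x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps, eps ~ N(0, I)."""
    alphas_bar = np.cumprod(1.0 - betas)  # cumulative signal retention
    a_bar = alphas_bar[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * eps
```

In CBCT-to-CT translation, the reverse network would denoise step by step toward the CT distribution while conditioned on the CBCT input.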
Affiliation(s)
- Junbo Peng
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob F Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Shaoyan Pan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Pretesh R Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- David S Yu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
10
Zhuang T, Parsons D, Desai N, Gibbard G, Keilty D, Lin MH, Cai B, Nguyen D, Chiu T, Godley A, Pompos A, Jiang S. Simulation and pre-planning omitted radiotherapy (SPORT): a feasibility study for prostate cancer. Biomed Phys Eng Express 2024; 10:025019. [PMID: 38241733 DOI: 10.1088/2057-1976/ad20aa] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2023] [Accepted: 01/19/2024] [Indexed: 01/21/2024]
Abstract
This study explored the feasibility of on-couch intensity-modulated radiotherapy (IMRT) planning for prostate cancer (PCa) on a cone-beam CT (CBCT)-based online adaptive RT platform without an individualized pre-treatment plan or contours. Ten patients with PCa previously treated with image-guided IMRT (60 Gy/20 fractions) were selected. In contrast to the routine online adaptive RT workflow, a novel approach was employed in which the same preplan, optimized on one reference patient, was adapted to generate individual on-couch/initial plans for the other nine test patients using the Ethos emulator. Simulation CTs of the test patients were used as simulated online CBCT (sCBCT) for emulation. Quality assessments were conducted on the synthetic CTs (sCT). Dosimetric comparisons were performed between on-couch plans, on-couch plans recomputed on the sCBCT, and individually optimized plans for the test patients. The median mean absolute difference between sCT and sCBCT was 74.7 HU (range 69.5-91.5 HU). The average CTV/PTV coverage by the prescription dose was 100.0%/94.7%, and normal tissue constraints were met for all nine test patients in the on-couch plans on sCT. Recalculating the on-couch plans on the sCBCT showed about a 0.7% reduction in PTV coverage and a 0.6% increase in hotspot, and the dose differences for the OARs were negligible (<0.5 Gy). Hence, initial IMRT plans for new patients can be generated by adapting a reference patient's preplan with online contours, yielding quality similar to the conventional approach of an individually optimized plan on the simulation CT. Further study is needed to identify selection criteria for the patient anatomy most amenable to this workflow.
Affiliation(s)
- Tingliang Zhuang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- David Parsons
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Neil Desai
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Grant Gibbard
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Dana Keilty
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Mu-Han Lin
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Bin Cai
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Dan Nguyen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Tsuicheng Chiu
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Andrew Godley
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Arnold Pompos
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Steve Jiang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
11
Kawamura M, Kamomae T, Yanagawa M, Kamagata K, Fujita S, Ueda D, Matsui Y, Fushimi Y, Fujioka T, Nozaki T, Yamada A, Hirata K, Ito R, Fujima N, Tatsugami F, Nakaura T, Tsuboyama T, Naganawa S. Revolutionizing radiation therapy: the role of AI in clinical practice. J Radiat Res 2024; 65:1-9. [PMID: 37996085 PMCID: PMC10803173 DOI: 10.1093/jrr/rrad090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/24/2023] [Revised: 09/25/2023] [Accepted: 10/16/2023] [Indexed: 11/25/2023]
Abstract
This review provides an overview of the application of artificial intelligence (AI) in radiation therapy (RT) from a radiation oncologist's perspective. Over the years, advances in diagnostic imaging have significantly improved the efficiency and effectiveness of radiotherapy. The introduction of AI has further optimized the segmentation of tumors and organs at risk, thereby saving considerable time for radiation oncologists. AI has also been utilized in treatment planning and optimization, reducing the planning time from several days to minutes or even seconds. Knowledge-based treatment planning and deep learning techniques have been employed to produce treatment plans comparable to those generated by humans. Additionally, AI has potential applications in quality control and assurance of treatment plans, optimization of image-guided RT and monitoring of mobile tumors during treatment. Prognostic evaluation and prediction using AI have been increasingly explored, with radiomics being a prominent area of research. The future of AI in radiation oncology offers the potential to establish treatment standardization by minimizing inter-observer differences in segmentation and improving dose adequacy evaluation. RT standardization through AI may have global implications, providing world-standard treatment even in resource-limited settings. However, there are challenges in accumulating big data, including patient background information and correlating treatment plans with disease outcomes. Although challenges remain, ongoing research and the integration of AI technology hold promise for further advancements in radiation oncology.
Affiliation(s)
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Takeshi Kamomae
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, 565-0871, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1 Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
- Shohei Fujita
- Department of Radiology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kitaku, Okayama, 700-8558, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyo-ku, Kyoto, 606-8507, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8510, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-8621, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Faculty of Medicine, Hokkaido University, Kita15, Nishi7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Kita15, Nishi7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, 565-0871, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
12
Liu X, Yang R, Xiong T, Yang X, Li W, Song L, Zhu J, Wang M, Cai J, Geng L. CBCT-to-CT Synthesis for Cervical Cancer Adaptive Radiotherapy via U-Net-Based Model Hierarchically Trained with Hybrid Dataset. Cancers (Basel) 2023; 15:5479. [PMID: 38001738 PMCID: PMC10670900 DOI: 10.3390/cancers15225479] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2023] [Revised: 11/11/2023] [Accepted: 11/14/2023] [Indexed: 11/26/2023] Open
Abstract
PURPOSE To develop a deep learning framework based on a hybrid dataset to enhance the quality of CBCT images and obtain accurate HU values. MATERIALS AND METHODS A total of 228 cervical cancer patients treated on different linacs were enrolled. We developed an encoder-decoder architecture with residual learning and skip connections. The model was hierarchically trained and validated on 5279 paired CBCT/planning CT images and tested on 1302 paired images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were used to assess the quality of the synthetic CT images generated by our model. RESULTS The MAE between synthetic CT images generated by our model and planning CT was 10.93 HU, compared to 50.02 HU for the CBCT images. The PSNR increased from 27.79 dB to 33.91 dB, and the SSIM increased from 0.76 to 0.90. Compared with synthetic CT images generated by convolutional neural networks with residual blocks, our model had superior performance in both qualitative and quantitative aspects. CONCLUSIONS Our model can synthesize CT images with enhanced image quality and accurate HU values. The synthetic CT images preserved tissue edges well, which is important for downstream tasks in adaptive radiotherapy.
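The MAE and PSNR figures quoted in these abstracts follow standard definitions and can be computed directly. A minimal numpy sketch (the 4000 HU `data_range` is an assumed CT intensity window, not a value from the paper; SSIM is usually taken from a library such as scikit-image):

```python
import numpy as np

def mae_hu(sct, ct):
    """Mean absolute error in HU between synthetic CT and reference CT."""
    return float(np.mean(np.abs(sct - ct)))

def psnr_db(sct, ct, data_range=4000.0):
    """Peak signal-to-noise ratio in dB; data_range spans the HU window."""
    mse = float(np.mean((sct - ct) ** 2))
    return float(10.0 * np.log10(data_range ** 2 / mse))

# toy example: a synthetic CT offset by a constant 10 HU
ct = np.zeros((4, 4))
sct = ct + 10.0
```

With this toy input, `mae_hu` returns exactly 10 HU, and PSNR scales with the chosen `data_range`, which is why papers are only comparable when they report the same intensity window.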
Affiliation(s)
- Xi Liu
- School of Physics, Beihang University, Beijing 102206, China; (X.L.); (X.Y.)
- Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China; (R.Y.)
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China; (T.X.)
- Ruijie Yang
- Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China; (R.Y.)
- Tianyu Xiong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China; (T.X.)
- Xueying Yang
- School of Physics, Beihang University, Beijing 102206, China; (X.L.); (X.Y.)
- Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China; (R.Y.)
- Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China; (T.X.)
- Liming Song
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China; (T.X.)
- Jiarui Zhu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China; (T.X.)
- Mingqing Wang
- Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China; (R.Y.)
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China; (T.X.)
- Lisheng Geng
- School of Physics, Beihang University, Beijing 102206, China; (X.L.); (X.Y.)
- Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing 102206, China
- Peng Huanwu Collaborative Center for Research and Education, Beihang University, Beijing 100191, China
13
Pang B, Si H, Liu M, Fu W, Zeng Y, Liu H, Cao T, Chang Y, Quan H, Yang Z. Comparison and evaluation of different deep learning models of synthetic CT generation from CBCT for nasopharynx cancer adaptive proton therapy. Med Phys 2023; 50:6920-6930. [PMID: 37800874 DOI: 10.1002/mp.16777] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Revised: 08/09/2023] [Accepted: 09/17/2023] [Indexed: 10/07/2023] Open
Abstract
BACKGROUND Cone-beam computed tomography (CBCT) scanning is used for patient setup in image-guided radiotherapy. However, its inaccurate CT numbers limit its applicability to dose calculation and treatment planning. PURPOSE This study compares four deep learning methods for generating synthetic CT (sCT) to determine which is more appropriate and offers potential for further clinical exploration in adaptive proton therapy for nasopharynx cancer. METHODS CBCTs and deformed planning CTs (dCT) from 75 patients (60/5/10 for training, validation, and testing) were used to compare a cycle-consistent generative adversarial network (cycleGAN), Unet, Unet+cycleGAN, and a conditional generative adversarial network (cGAN) for sCT generation. The sCT images generated by each method were evaluated against dCT images using mean absolute error (MAE), structural similarity (SSIM), peak signal-to-noise ratio (PSNR), spatial non-uniformity (SNU), and radial averaging in the frequency domain. In addition, dosimetric accuracy was assessed through gamma analysis, differences in water equivalent thickness (WET), and dose-volume histogram metrics. RESULTS The cGAN model demonstrated the best performance of the four models across all indicators. In terms of image quality under the global condition, the average MAE was reduced to 16.39 HU, SSIM increased to 95.24%, and PSNR increased to 28.98 dB. Regarding dosimetric accuracy, the gamma passing rate (2%/2 mm) reached 99.02%, and the WET difference was only 1.28 mm. The D95 values of CTV coverage and the Dmax values of the spinal cord and brainstem showed no significant differences between dCT and the sCT generated by the cGAN model. CONCLUSIONS The cGAN model is a more suitable approach for generating sCT from CBCT, considering its characteristics and concepts. The resulting sCT has potential for application in adaptive proton therapy.
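The gamma analysis used in these dosimetric comparisons can be illustrated with a brute-force global implementation. This is a simplified 2D sketch under assumed criteria; clinical tools interpolate the dose grid and operate in 3D:

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm, dd=0.02, dta_mm=2.0, cutoff=0.1):
    """Brute-force global 2D gamma analysis (illustrative sketch only).
    dd: dose-difference criterion as a fraction of the global max dose;
    dta_mm: distance-to-agreement criterion in mm;
    cutoff: low-dose threshold (fraction of max) below which points are skipped."""
    dmax = ref.max()
    ys, xs = np.meshgrid(np.arange(ref.shape[0]) * spacing_mm,
                         np.arange(ref.shape[1]) * spacing_mm, indexing="ij")
    passed, total = 0, 0
    for i in range(ref.shape[0]):
        for j in range(ref.shape[1]):
            if ref[i, j] < cutoff * dmax:
                continue  # skip the low-dose region
            dist2 = ((ys - i * spacing_mm) ** 2 + (xs - j * spacing_mm) ** 2) / dta_mm ** 2
            dose2 = ((eval_ - ref[i, j]) / (dd * dmax)) ** 2
            gamma = np.sqrt((dist2 + dose2).min())  # search over all eval points
            total += 1
            passed += gamma <= 1.0
    return passed / total

# identical distributions must pass everywhere
ref = np.outer(np.linspace(0.2, 1.0, 6), np.ones(6))
rate = gamma_pass_rate(ref, ref, spacing_mm=2.0)
```

The criterion pair (here 2%/2 mm) and the global-vs-local normalization choice strongly affect the reported pass rate, which is why abstracts always state them explicitly.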
Affiliation(s)
- Bo Pang
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Hang Si
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Muyu Liu
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Wensheng Fu
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yiling Zeng
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Hongyuan Liu
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ting Cao
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yu Chang
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hong Quan
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Zhiyong Yang
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
14
Gao L, Xie K, Sun J, Lin T, Sui J, Yang G, Ni X. A transformer-based dual-domain network for reconstructing FOV extended cone-beam CT images from truncated sinograms in radiation therapy. Comput Methods Programs Biomed 2023; 241:107767. [PMID: 37633083 DOI: 10.1016/j.cmpb.2023.107767] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 08/15/2023] [Accepted: 08/15/2023] [Indexed: 08/28/2023]
Abstract
BACKGROUND AND OBJECTIVE Cone-beam computed tomography (CBCT) is widely used in clinical radiotherapy, but its small field of view (sFOV) limits its application potential. In this study, a transformer-based dual-domain network (dual_swin), which combines image-domain restoration and sinogram-domain restoration, was proposed for reconstructing complete CBCT images with an extended FOV from truncated sinograms. METHODS Planning CT images with a large FOV (LFOV) from 330 patients who received radiation therapy were collected. Synthetic CBCT (sCBCT) images with LFOV were generated from the CT images by a trained cycleGAN network, and CBCT images with sFOV were obtained through forward projection, projection truncation, and filtered back projection (FBP), comprising the training and test data. The proposed dual_swin comprises sinogram-domain restoration, image-domain restoration, and an FBP layer; swin transformer blocks were used as the basic feature-extraction module to improve the global feature-extraction ability. The proposed dual_swin was compared with an image-domain method, a sinogram-domain method, a U-Net-based dual-domain network (dual_Unet), and a traditional iterative reconstruction method based on a prior image and conjugate gradient least squares (CGLS) on tests of sCBCT images and clinical CBCT images. The HU accuracy and body-contour accuracy of the images predicted by each method were evaluated. RESULTS The images generated by the CGLS method were blurry and obtained the lowest structural similarity (SSIM) of all methods on both sCBCT and clinical CBCT images. The images predicted by the image-domain methods differed substantially from the ground truth and had low HU and body-contour accuracy. Compared with the image-domain methods, the sinogram-domain methods improved HU and body-contour accuracy but introduced secondary artifacts and distorted bone tissue.
The proposed dual_swin achieved the highest HU and contour accuracy, with a mean absolute error (MAE) of 23.0 HU, SSIM of 95.7%, dice similarity coefficient (DSC) of 99.6%, and Hausdorff distance (HD) of 4.1 mm on the sCBCT test. On the clinical patient test, images predicted by dual_swin yielded MAE, SSIM, DSC, and HD of 38.2 HU, 91.7%, 99.0%, and 5.4 mm, respectively. The images predicted by the proposed dual_swin had significantly higher accuracy than the other methods (P < 0.05). CONCLUSIONS The proposed dual_swin can accurately reconstruct FOV-extended CBCT images from truncated sinograms, improving the application potential of CBCT images in radiotherapy.
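Why truncated sinograms need dedicated restoration can be seen in a minimal numpy experiment (an assumed, highly simplified setup, not the paper's pipeline): ramp-filtering a parallel-beam projection of a uniform disk shows that zeroing the detector channels outside a small FOV perturbs the filtered values even well inside the FOV, because the FBP ramp filter is non-local.

```python
import numpy as np

def disk_projection(n=256, radius=0.4):
    """Parallel-beam projection of a uniform unit-density disk on [-1, 1]."""
    s = np.linspace(-1.0, 1.0, n)
    p = np.zeros(n)
    inside = np.abs(s) < radius
    p[inside] = 2.0 * np.sqrt(radius**2 - s[inside]**2)
    return p

def ramp_filter(proj):
    """FFT-domain ramp (Ram-Lak) filtering, the core step of FBP."""
    freqs = np.fft.fftfreq(proj.size)
    return np.real(np.fft.ifft(np.fft.fft(proj) * np.abs(freqs)))

full = disk_projection()
truncated = full.copy()
truncated[:96] = 0.0  # small-FOV detector: outer channels never measured

f_full = ramp_filter(full)
f_trunc = ramp_filter(truncated)
```

Even at the center of the (untruncated) FOV, `f_full` and `f_trunc` differ, which is the 1D analogue of the cupping and bright-rim artifacts seen in truncated CBCT reconstructions.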
Affiliation(s)
- Liugang Gao
- School of Computer Science and Engineering, Southeast University, Nanjing, China; The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Kai Xie
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Jiawei Sun
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Tao Lin
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Jianfeng Sui
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Guanyu Yang
- School of Computer Science and Engineering, Southeast University, Nanjing, China.
- Xinye Ni
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China.
15
Yoganathan S, Aouadi S, Ahmed S, Paloor S, Torfeh T, Al-Hammadi N, Hammoud R. Generating synthetic images from cone beam computed tomography using self-attention residual UNet for head and neck radiotherapy. Phys Imaging Radiat Oncol 2023; 28:100512. [PMID: 38111501 PMCID: PMC10726231 DOI: 10.1016/j.phro.2023.100512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Revised: 11/09/2023] [Accepted: 11/09/2023] [Indexed: 12/20/2023] Open
Abstract
Background and purpose Accurate CT numbers in Cone Beam CT (CBCT) are crucial for precise dose calculations in adaptive radiotherapy (ART). This study aimed to generate synthetic CT (sCT) from CBCT using deep learning (DL) models in head and neck (HN) radiotherapy. Materials and methods A novel DL model, the 'self-attention-residual-UNet' (ResUNet), was developed for accurate sCT generation. ResUNet incorporates a self-attention mechanism in its long skip connections to enhance information transfer between the encoder and decoder. Data from 93 HN patients, each with planning CT (pCT) and first-day CBCT images were used. Model performance was evaluated using two DL approaches (non-adversarial and adversarial training) and two model types (2D axial only vs. 2.5D axial, sagittal, and coronal). ResUNet was compared with the traditional UNet through image quality assessment (Mean Absolute Error (MAE), Peak-Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM)) and dose calculation accuracy evaluation (DVH deviation and gamma evaluation (1 %/1mm)). Results Image similarity evaluation results for the 2.5D-ResUNet and 2.5D-UNet models were: MAE: 46±7 HU vs. 51±9 HU, PSNR: 66.6±2.0 dB vs. 65.8±1.8 dB, and SSIM: 0.81±0.04 vs. 0.79±0.05. There were no significant differences in dose calculation accuracy between DL models. Both models demonstrated DVH deviation below 0.5 % and a gamma-pass-rate (1 %/1mm) exceeding 97 %. Conclusions ResUNet enhanced CT number accuracy and image quality of sCT and outperformed UNet in sCT generation from CBCT. This method holds promise for generating precise sCT for HN ART.
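An attention mechanism on a long skip connection can be sketched as an additive attention gate, in the style of Attention U-Net. This is an assumed formulation for illustration; the paper's exact design may differ:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(skip, gate, W_s, W_g, psi):
    """Additive attention gate on a long skip connection (sketch):
    encoder features `skip` are reweighted by a mask computed jointly
    from `skip` and the decoder signal `gate` before being passed on."""
    q = np.tanh(skip @ W_s + gate @ W_g)  # joint feature, shape (positions, F)
    alpha = sigmoid(q @ psi)              # per-position coefficients in (0, 1)
    return skip * alpha                   # reweighted skip features

rng = np.random.default_rng(1)
skip = rng.standard_normal((64, 8))   # 64 spatial positions, 8 channels
gate = rng.standard_normal((64, 8))
W_s, W_g = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
psi = rng.standard_normal((8, 1))
out = attention_gate(skip, gate, W_s, W_g, psi)
```

Because `alpha` lies in (0, 1), the gate can only suppress encoder features, letting the decoder decide which spatial regions of the skip path carry useful detail.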
Affiliation(s)
- S.A. Yoganathan
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Souha Aouadi
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Sharib Ahmed
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Satheesh Paloor
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Tarraf Torfeh
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Noora Al-Hammadi
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Rabih Hammoud
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
16
Wynne JF, Lei Y, Pan S, Wang T, Pasha M, Luca K, Roper J, Patel P, Patel SA, Godette K, Jani AB, Yang X. Rapid unpaired CBCT-based synthetic CT for CBCT-guided adaptive radiotherapy. J Appl Clin Med Phys 2023; 24:e14064. [PMID: 37345557 PMCID: PMC10562022 DOI: 10.1002/acm2.14064] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Revised: 04/18/2023] [Accepted: 05/15/2023] [Indexed: 06/23/2023] Open
Abstract
In this work, we demonstrate a method for rapid synthesis of high-quality CT images from unpaired, low-quality CBCT images, permitting CBCT-based adaptive radiotherapy. We adapt contrastive unpaired translation (CUT) for use with medical images and evaluate the results on an institutional pelvic CT dataset. We compare the method against cycleGAN using mean absolute error, structural similarity index, root mean squared error, and Fréchet Inception Distance, and show that CUT significantly outperforms cycleGAN while requiring less time and fewer resources. The investigated method improves the feasibility of online adaptive radiotherapy over the present state of the art.
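The core ingredient of CUT is a patchwise contrastive (InfoNCE-style) loss that ties each output-image patch embedding to the input patch at the same location (the positive) while pushing it away from patches at other locations (the negatives). A minimal numpy sketch of that loss, illustrative rather than the authors' code (the temperature `tau` is an assumed value):

```python
import numpy as np

def patch_nce_loss(feat_src, feat_tgt, tau=0.07):
    """PatchNCE-style loss (sketch). Rows of feat_src/feat_tgt are patch
    embeddings from the same spatial locations of input and output images;
    matching rows are positives, all other rows are negatives."""
    s = feat_src / np.linalg.norm(feat_src, axis=1, keepdims=True)
    t = feat_tgt / np.linalg.norm(feat_tgt, axis=1, keepdims=True)
    logits = (t @ s.T) / tau                       # (N, N) cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))     # positives on the diagonal

# orthogonal toy embeddings: matched pairs give near-zero loss,
# mismatched (shuffled) pairs give a large loss
feat = np.eye(8)
l_match = patch_nce_loss(feat, feat)
l_mismatch = patch_nce_loss(feat, feat[::-1])
```

Unlike cycleGAN, this loss needs no second generator/discriminator pair for a backward cycle, which is the source of the reported speed and memory savings.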
Affiliation(s)
- Jacob F. Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Shaoyan Pan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Mosa Pasha
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Kirk Luca
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Sagar A. Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Karen Godette
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ashesh B. Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
17
Aouadi S, Yoganathan SA, Torfeh T, Paloor S, Caparrotti P, Hammoud R, Al-Hammadi N. Generation of synthetic CT from CBCT using deep learning approaches for head and neck cancer patients. Biomed Phys Eng Express 2023; 9:055020. [PMID: 37489854 DOI: 10.1088/2057-1976/acea27] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2023] [Accepted: 07/25/2023] [Indexed: 07/26/2023]
Abstract
Purpose. To create a synthetic CT (sCT) from daily CBCT using either a deep residual U-Net (DRUnet) or a conditional generative adversarial network (cGAN) for adaptive radiotherapy planning (ART). Methods. First-fraction CBCT and planning CT (pCT) were collected from 93 head and neck patients who underwent external beam radiotherapy. The dataset was divided into training, validation, and test sets of 58, 10, and 25 patients, respectively. Three methods were used to generate sCT: (1) the nonlocal means patch-based method was modified to include multiscale patches, defining the multiscale patch-based method (MPBM); (2) an encoder-decoder 2D Unet with imbricated deep residual units was implemented; (3) DRUnet was integrated as the generator of a cGAN, with a convolutional PatchGAN classifier as the discriminator. The accuracy of sCT was evaluated geometrically using the mean absolute error (MAE). Clinical volumetric modulated arc therapy (VMAT) plans were copied from pCT to registered CBCT and sCT, and dosimetric analysis was performed by comparing dose-volume histogram (DVH) parameters of planning target volumes (PTVs) and organs at risk (OARs). Furthermore, 3D gamma analysis (2%/2 mm, global) between the dose on the sCT or CBCT and that on the pCT was performed. Results. The average MAE between pCT and CBCT was 180.82 ± 27.37 HU. Overall, all approaches significantly reduced the uncertainties in CBCT. Deep learning approaches outperformed patch-based methods, with MAE = 67.88 ± 8.39 HU (DRUnet) and MAE = 72.52 ± 8.43 HU (cGAN) compared to MAE = 90.69 ± 14.3 HU (MPBM). The DVH metric deviations were below 0.55% for PTVs and 1.17% for OARs using DRUnet. The average gamma pass rate was 99.45 ± 1.86% for sCT generated using DRUnet. Conclusion. DL approaches outperformed MPBM. Specifically, DRUnet can be used to generate sCT with accurate intensities and a realistic description of patient anatomy, which could be beneficial for CBCT-based ART.
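The DVH parameters compared across these studies (e.g., D95, Dmax) can be computed directly from the voxel doses inside a contoured structure. An illustrative sketch (clinical systems additionally interpolate dose and account for partial-voxel volumes):

```python
import numpy as np

def dvh_metrics(dose_in_structure):
    """D95 (minimum dose received by 95% of the structure volume) and Dmax,
    computed from the voxel doses inside a contoured structure."""
    d = np.sort(np.asarray(dose_in_structure, dtype=float))[::-1]  # descending
    n = d.size
    d95 = d[int(np.ceil(0.95 * n)) - 1]  # dose covering >= 95% of voxels
    return {"D95": float(d95), "Dmax": float(d.max())}

# toy structure: voxel doses 1..100 Gy -> 95% of voxels receive >= 6 Gy
metrics = dvh_metrics(np.arange(1, 101))
```

Because D95 sits on the steep shoulder of a typical target DVH, even small HU errors in the sCT propagate into visible D95 shifts, which is why it is a standard check for synthetic-CT dosimetry.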
Affiliation(s)
- Souha Aouadi
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- S A Yoganathan
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Tarraf Torfeh
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Satheesh Paloor
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Palmira Caparrotti
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Rabih Hammoud
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Noora Al-Hammadi
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
18
Uh J, Wang C, Jordan JA, Pirlepesov F, Becksfort JB, Ates O, Krasin MJ, Hua CH. A hybrid method of correcting CBCT for proton range estimation with deep learning and deformable image registration. Phys Med Biol 2023; 68:10.1088/1361-6560/ace754. [PMID: 37442128 PMCID: PMC10846632 DOI: 10.1088/1361-6560/ace754] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Accepted: 07/13/2023] [Indexed: 07/15/2023]
Abstract
Objective. This study aimed to develop a novel method for generating synthetic CT (sCT) from cone-beam CT (CBCT) of the abdomen/pelvis with bowel gas pockets to facilitate estimation of proton ranges. Approach. CBCT, the same-day repeat CT, and the planning CT (pCT) of 81 pediatric patients were used for training (n = 60), validation (n = 6), and testing (n = 15) of the method. The proposed method hybridizes unsupervised deep learning (CycleGAN) and deformable image registration (DIR) of the pCT to CBCT. The CycleGAN and DIR are respectively applied to generate the geometry-weighted (high spatial-frequency) and intensity-weighted (low spatial-frequency) components of the sCT, so that each process deals with only the component weighted toward its strength. The resultant sCT is further improved in bowel gas regions and other tissues by iteratively feeding back the sCT to adjust incorrect DIR and by increasing the contribution of the deformed pCT in regions of accurate DIR. Main results. The hybrid sCT was more accurate than the deformed pCT and the CycleGAN-only sCT, as indicated by the smaller mean absolute error in CT numbers (28.7 ± 7.1 HU versus 38.8 ± 19.9 HU/53.2 ± 5.5 HU; P ≤ 0.012) and the higher Dice similarity of the internal gas regions (0.722 ± 0.088 versus 0.180 ± 0.098/0.659 ± 0.129; P ≤ 0.002). Accordingly, the hybrid method resulted in more accurate proton range for the beams intersecting gas pockets (11 fields in 6 patients) than the individual methods (the 90th percentile error in 80% distal fall-off, 1.8 ± 0.6 mm versus 6.5 ± 7.8 mm/3.7 ± 1.5 mm; P ≤ 0.013). The gamma passing rates also showed a significant dosimetric advantage for the hybrid method (99.7 ± 0.8% versus 98.4 ± 3.1%/98.3 ± 1.8%; P ≤ 0.007). Significance. The hybrid method significantly improved the accuracy of sCT and showed promise in CBCT-based proton range verification and adaptive replanning of abdominal/pelvic proton therapy, even when gas pockets are present in the beam path.
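The low/high spatial-frequency split described above can be sketched in a few lines. The following is a hypothetical NumPy illustration of the general idea, using a box blur as a stand-in low-pass filter; the paper's actual filtering and iterative feedback scheme are not reproduced here:

```python
import numpy as np

def box_lowpass(img, k=5):
    """Separable box blur used here as a stand-in low-pass filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def hybrid_sct(deformed_pct, cyclegan_sct, k=5):
    """Low spatial frequencies (intensities) from the DIR-deformed pCT,
    high spatial frequencies (geometry) from the CycleGAN output."""
    low = box_lowpass(deformed_pct, k)
    high = cyclegan_sct - box_lowpass(cyclegan_sct, k)
    return low + high

pct = np.full((8, 8), 100.0)   # deformed pCT: trustworthy intensities
gan = np.full((8, 8), 40.0)    # CycleGAN sCT: trustworthy geometry
sct = hybrid_sct(pct, gan)
```

With constant toy inputs, all intensity comes from the deformed pCT and the (flat) CycleGAN output contributes no high-frequency detail, which is exactly the intended division of labor.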
Affiliation(s)
- Jinsoo Uh
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Chuang Wang
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Jacob A Jordan
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- College of Medicine, The University of Tennessee Health Science Center, Memphis, TN, United States of America
- Fakhriddin Pirlepesov
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Jared B Becksfort
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Ozgur Ates
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Matthew J Krasin
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Chia-Ho Hua
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
19
Jihong C, Kerun Q, Kaiqiang C, Xiuchun Z, Yimin Z, Penggang B. CBCT-based synthetic CT generated using CycleGAN with HU correction for adaptive radiotherapy of nasopharyngeal carcinoma. Sci Rep 2023; 13:6624. [PMID: 37095147 PMCID: PMC10125979 DOI: 10.1038/s41598-023-33472-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Accepted: 04/13/2023] [Indexed: 04/26/2023] Open
Abstract
This study aims to utilize a hybrid approach of phantom correction and deep learning to generate synthesized CT (sCT) images from cone-beam CT (CBCT) images for nasopharyngeal carcinoma (NPC). 52 CBCT/CT paired images of NPC patients were used for model training (41) and validation (11). Hounsfield units (HU) of the CBCT images were calibrated with a commercially available CIRS phantom. Then the original CBCT and the corrected CBCT (CBCT_cor) were trained separately with the same cycle generative adversarial network (CycleGAN) to generate SCT1 and SCT2. The mean error and mean absolute error (MAE) were used to quantify image quality. For validation, the contours and treatment plans on the CT images were transferred to the original CBCT, CBCT_cor, SCT1, and SCT2 for dosimetric comparison. Dose distribution, dosimetric parameters, and the 3D gamma passing rate were analyzed. Compared with rigidly registered CT (RCT), the MAE of CBCT, CBCT_cor, SCT1, and SCT2 were 346.11 ± 13.58 HU, 145.95 ± 17.64 HU, 105.62 ± 16.08 HU, and 83.51 ± 7.71 HU, respectively. Moreover, the average dosimetric parameter differences for CBCT_cor, SCT1, and SCT2 were 2.7% ± 1.4%, 1.2% ± 1.0%, and 0.6% ± 0.6%, respectively. Using the dose distribution of the RCT images as reference, the 3D gamma passing rate of the hybrid method was significantly better than that of the other methods. These results confirm the effectiveness of CBCT-based sCT generated using CycleGAN with HU correction for adaptive radiotherapy of nasopharyngeal carcinoma. The image quality and dose accuracy of SCT2 outperformed those of the simple CycleGAN method. This finding has great significance for the clinical application of adaptive radiotherapy for NPC.
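Phantom-based HU calibration of this kind typically amounts to fitting a linear map from measured CBCT intensities of inserts with known density to their nominal HU values. A minimal sketch with made-up phantom numbers (the insert values here are illustrative, not the study's CIRS data):

```python
import numpy as np

# Hypothetical phantom inserts: nominal HU values and the intensities
# "measured" for those inserts on the CBCT scan. For illustration the
# measured values are exactly linear: measured = 0.8 * HU + 20.
known_hu = np.array([-1000.0, -100.0, 0.0, 300.0, 1000.0])
measured = 0.8 * known_hu + 20.0

# Least-squares linear map from measured intensity back to HU.
slope, intercept = np.polyfit(measured, known_hu, deg=1)

def calibrate(cbct):
    """Apply the phantom-derived linear HU correction voxel-wise."""
    return slope * np.asarray(cbct, dtype=np.float64) + intercept

corrected = calibrate(measured)
```

A single global linear map cannot remove spatially varying scatter or cupping, which is why the study still feeds the corrected images into a CycleGAN.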
Affiliation(s)
- Chen Jihong
- Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, 350014, Fujian, China
- Quan Kerun
- Department of Radiation Oncology, Xiangtan City Central Hospital, Xiangtan, 411100, Hunan, China
- Chen Kaiqiang
- Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, 350014, Fujian, China
- Zhang Xiuchun
- Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, 350014, Fujian, China
- Zhou Yimin
- School of Nuclear Science and Technology, University of South China, Hengyang, 421001, China
- Bai Penggang
- Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, 350014, Fujian, China
20
Xie K, Gao L, Xi Q, Zhang H, Zhang S, Zhang F, Sun J, Lin T, Sui J, Ni X. New technique and application of truncated CBCT processing in adaptive radiotherapy for breast cancer. Comput Methods Programs Biomed 2023; 231:107393. [PMID: 36739623 DOI: 10.1016/j.cmpb.2023.107393] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 01/26/2023] [Accepted: 01/31/2023] [Indexed: 06/18/2023]
Abstract
OBJECTIVE A generative adversarial network (TCBCTNet) was proposed to generate synthetic computed tomography (sCT) from truncated low-dose cone-beam computed tomography (CBCT) and planning CT (pCT). The sCT was applied to the dose calculation of radiotherapy for patients with breast cancer. METHODS The low-dose CBCT and pCT images of 80 female thoracic patients were used for training. The CBCT, pCT, and replanning CT (rCT) images of 20 thoracic patients and 20 patients with breast cancer were used for testing. All patients were fixed in the same posture with a vacuum pad. The CBCT images were scanned under the Fast Chest M20 protocol with a 50% reduction in projection frames compared with the standard Chest M20 protocol. Rigid registration was performed between pCT and CBCT, and deformable registration was performed between rCT and CBCT. In the training stage of the TCBCTNet, truncated CBCT images obtained from complete CBCT images by simulation were used. The input of the CBCT→CT generator was truncated CBCT and pCT, and TCBCTNet was applied to patients with breast cancer after training. The accuracy of the sCT was evaluated by anatomy and dosimetry and compared with generative adversarial networks with UNet and ResNet as the generators (named UnetGAN and ResGAN). RESULTS All three models could improve the image quality of CBCT and reduce the scattering artifacts while preserving the anatomical geometry of CBCT. For the chest test set, TCBCTNet achieved the best mean absolute error (MAE, 21.18±3.76 HU), better than 23.06±3.90 HU for UnetGAN and 22.47±3.57 HU for ResGAN. When applied to patients with breast cancer, TCBCTNet performance decreased, and the MAE was 25.34±6.09 HU. Compared with rCT, sCT by TCBCTNet showed consistent dose distribution and subtle absolute dose differences between the target and the organs at risk. The 3D gamma pass rates were 98.98%±0.64% and 99.69%±0.22% at 2 mm/2% and 3 mm/3%, respectively. Ablation experiments confirmed that pCT and content loss played important roles in TCBCTNet. CONCLUSIONS High-quality sCT images could be synthesized from truncated low-dose CBCT and pCT by using the proposed TCBCTNet model. In addition, sCT could be used to accurately calculate the dose distribution for patients with breast cancer.
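The 3D gamma analysis quoted throughout these abstracts combines a dose-difference criterion with a distance-to-agreement criterion at every point. A brute-force 2D sketch of a global gamma pass rate (toy doses; clinical tools use optimized search and interpolation):

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, spacing=1.0,
                    dd=0.02, dta=2.0, cutoff=0.10):
    """Brute-force global 2D gamma pass rate: dose difference as a
    fraction of the reference maximum (dd), distance-to-agreement in
    mm (dta); reference points below cutoff * max are ignored."""
    ref = np.asarray(ref_dose, float)
    ev = np.asarray(eval_dose, float).ravel()
    dmax = ref.max()
    ys, xs = np.indices(ref.shape)
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1) * spacing
    passed = total = 0
    for i, d_ref in enumerate(ref.ravel()):
        if d_ref < cutoff * dmax:
            continue
        total += 1
        dist_term = ((pos - pos[i]) ** 2).sum(axis=1) / dta ** 2
        dose_term = ((ev - d_ref) / (dd * dmax)) ** 2
        if np.sqrt((dist_term + dose_term).min()) <= 1.0:
            passed += 1
    return 100.0 * passed / total

plan = np.ones((6, 6))              # toy reference dose
rate = gamma_pass_rate(plan, plan)  # identical doses pass everywhere
```

A point passes when some nearby evaluated point agrees within both tolerances simultaneously, so criteria like 2 mm/2% or 3 mm/3% correspond to different (dta, dd) pairs.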
Affiliation(s)
- Kai Xie
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Liugang Gao
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Qianyi Xi
- Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Heng Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Sai Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Fan Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Jiawei Sun
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Tao Lin
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Jianfeng Sui
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Xinye Ni
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
21
Suwanraksa C, Bridhikitti J, Liamsuwan T, Chaichulee S. CBCT-to-CT Translation Using Registration-Based Generative Adversarial Networks in Patients with Head and Neck Cancer. Cancers (Basel) 2023; 15:cancers15072017. [PMID: 37046678 PMCID: PMC10093508 DOI: 10.3390/cancers15072017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Accepted: 03/27/2023] [Indexed: 03/31/2023] Open
Abstract
Recently, deep learning with generative adversarial networks (GANs) has been applied in multi-domain image-to-image translation. This study aims to improve the image quality of cone-beam computed tomography (CBCT) by generating synthetic CT (sCT) that maintains the patient's anatomy as in CBCT while having the image quality of CT. As CBCT and CT are acquired at different time points, it is challenging to obtain paired images with aligned anatomy for supervised training. To address this limitation, the study incorporated a registration network (RegNet) into the GAN during training. RegNet can dynamically estimate the correct labels, allowing supervised learning with noisy labels. The study developed and evaluated the approach using imaging data from 146 patients with head and neck cancer. The results showed that GANs trained with RegNet performed better than those trained without RegNet. Specifically, in the UNIT model trained with RegNet, the mean absolute error (MAE) was reduced from 40.46 to 37.21, the root mean-square error (RMSE) was reduced from 119.45 to 108.86, the peak signal-to-noise ratio (PSNR) was increased from 28.67 to 29.55, and the structural similarity index (SSIM) was increased from 0.8630 to 0.8791. The sCT generated from the model had fewer artifacts and retained the anatomical information as in CBCT.
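The MAE/RMSE/PSNR figures above are standard full-reference image-quality metrics. A minimal NumPy sketch (SSIM is omitted because it requires local windowed statistics):

```python
import numpy as np

def image_metrics(pred, ref, data_range=None):
    """MAE, RMSE, and PSNR between a generated image and its reference.
    data_range defaults to the reference intensity span."""
    pred = np.asarray(pred, float)
    ref = np.asarray(ref, float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    err = pred - ref
    mae = float(np.abs(err).mean())
    rmse = float(np.sqrt((err ** 2).mean()))
    psnr = (float(20.0 * np.log10(data_range / rmse))
            if rmse > 0 else float("inf"))
    return mae, rmse, psnr

ref = np.arange(16.0).reshape(4, 4)
mae, rmse, psnr = image_metrics(ref + 3.0, ref)
```

Note that PSNR depends on the chosen data range, so comparisons between papers are only meaningful when the normalization is the same.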
22
Joseph J, Biji I, Babu N, Pournami PN, Jayaraj PB, Puzhakkal N, Sabu C, Patel V. Fan beam CT image synthesis from cone beam CT image using nested residual UNet based conditional generative adversarial network. Phys Eng Sci Med 2023; 46:703-717. [PMID: 36943626 DOI: 10.1007/s13246-023-01244-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 03/09/2023] [Indexed: 03/23/2023]
Abstract
A radiotherapy technique called image-guided radiation therapy adopts frequent imaging throughout a treatment session. Fan-beam computed tomography (FBCT)-based planning followed by cone-beam computed tomography (CBCT)-based radiation delivery drastically improved treatment accuracy. Further gains in radiation exposure and cost could be achieved if FBCT were replaced with CBCT. This paper proposes a conditional generative adversarial network (CGAN) for CBCT-to-FBCT synthesis. Specifically, a new architecture called Nested Residual UNet (NR-UNet) is introduced as the generator of the CGAN. A composite loss function, which comprises adversarial loss, mean squared error (MSE), and gradient difference loss (GDL), is used with the generator. The CGAN utilises the inter-slice dependency in the input by taking three consecutive CBCT slices to generate an FBCT slice. The model is trained using head-and-neck (H&N) FBCT-CBCT images of 53 cancer patients. The synthetic images exhibited a peak signal-to-noise ratio of 34.04±0.93 dB, a structural similarity index measure of 0.9751±0.001, and a mean absolute error of 14.81±4.70 HU. On average, the proposed model achieves a contrast-to-noise ratio four times better than that of the input CBCT images. The model also minimised the MSE and alleviated blurriness. Compared to the CBCT-based plan, the synthetic image results in a treatment plan closer to the FBCT-based plan. The three-slice to single-slice translation captures the three-dimensional contextual information in the input. Besides, it withstands the computational complexity associated with a three-dimensional image synthesis model. Furthermore, the results demonstrate that the proposed model is superior to the state-of-the-art methods.
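The gradient difference loss in the composite objective penalizes mismatched edge maps between the synthetic and reference slices. A hypothetical NumPy sketch of GDL and the weighted composite (the weights and the adversarial term are placeholders, not the paper's values):

```python
import numpy as np

def gradient_difference_loss(pred, target):
    """GDL: penalizes mismatched image gradients so the generator keeps
    edges sharp (first-order finite differences along each axis)."""
    pred = np.asarray(pred, float)
    target = np.asarray(target, float)
    dy = (np.abs(np.diff(pred, axis=0)) - np.abs(np.diff(target, axis=0))) ** 2
    dx = (np.abs(np.diff(pred, axis=1)) - np.abs(np.diff(target, axis=1))) ** 2
    return float(dy.mean() + dx.mean())

def composite_loss(pred, target, adv_loss, w_mse=1.0, w_gdl=1.0):
    """Adversarial term plus weighted MSE and GDL (illustrative weights)."""
    mse = float(((np.asarray(pred, float) - np.asarray(target, float)) ** 2).mean())
    return adv_loss + w_mse * mse + w_gdl * gradient_difference_loss(pred, target)

img = np.arange(12.0).reshape(3, 4)
```

Because GDL compares gradient magnitudes, a uniform intensity offset incurs no GDL penalty; that is handled by the MSE term, so the two losses are complementary.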
Affiliation(s)
- Jiffy Joseph
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India.
- Ivan Biji
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Naveen Babu
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- P N Pournami
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- P B Jayaraj
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Niyas Puzhakkal
- Department of Medical Physics, MVR Cancer Centre & Research Institute, Poolacode, Calicut, Kerala, 673601, India
- Christy Sabu
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Vedkumar Patel
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
23
Gao L, Xie K, Sun J, Lin T, Sui J, Yang G, Ni X. Streaking artifact reduction for CBCT-based synthetic CT generation in adaptive radiotherapy. Med Phys 2023; 50:879-893. [PMID: 36183234 DOI: 10.1002/mp.16017] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 09/02/2022] [Accepted: 09/25/2022] [Indexed: 11/07/2022] Open
Abstract
BACKGROUND Cone-beam computed tomography (CBCT) is widely used for daily image guidance in radiation therapy, enhancing the reproducibility of patient setup. However, its application in adaptive radiotherapy (ART) is limited by many imaging artifacts and inaccurate Hounsfield units (HUs). Correction of CBCT images is necessary and of great value for CBCT-based ART. PURPOSE To explore synthetic CT (sCT) generation from CBCT images of thorax and abdomen patients, which usually suffer from serious artifacts due to organ state changes. In this study, a streaking artifact reduction network (SARN) is proposed to reduce artifacts and is combined with cycleGAN to generate high-quality sCT images from CBCT and achieve accurate dose calculation. METHODS The proposed SARN was trained in a self-supervised manner. Artifact-CT images were generated from planning CT by random deformation and projection replacement, and SARN was trained on paired artifact-CT and CT images. The planning CT and CBCT images of 260 patients with cancer, including 120 thoracic and 140 abdominal CT scans, were used to train and evaluate the neural networks. The CBCT images of another 12 patients in late treatment fractions, which contained large anatomy changes, were also tested with the trained models. The trained models included the commonly used U-Net, cycleGAN, attention-gated cycleGAN (cycAT), and cascade models combining SARN with cycleGAN or cycAT. The generated sCT images were compared in terms of image quality and dose calculation accuracy. RESULTS The sCT images generated by SARN combined with cycleGAN and cycAT showed the best image quality, removed the most artifacts, and retained the normal anatomical structure. SARN+cycleGAN performed best in streaking artifact removal, with the maximum percent integrity uniformity (PIUm) of 91.0% and the minimum standard deviation (SD) of 35.4 HU for delineated artifact regions among all models. The mean absolute error (MAE) of CBCT images in the thorax and abdomen was 71.6 and 55.2 HU, respectively, using planning CT images after deformable registration as ground truth. Compared with CBCT, the thoracic and abdominal sCT images generated by each model had significantly improved image quality with smaller MAE (p < 0.05). SARN+cycAT obtained the minimum MAE of 42.5 HU in the thorax, while SARN+cycleGAN achieved the minimum MAE of 32.0 HU in the abdomen. The sCT generated by U-Net had remarkably lower anatomical structure accuracy compared with the other models. The thoracic and abdominal sCT images generated by SARN+cycleGAN showed optimal dose calculation accuracy, with gamma passing rates (2 mm/2%) of 98.2% and 96.9%, respectively. CONCLUSIONS The proposed SARN can reduce serious streaking artifacts in CBCT images. SARN combined with cycleGAN can generate high-quality sCT images with fewer artifacts, high-accuracy HU values, and accurate anatomical structures, thus providing reliable dose calculation in ART.
Affiliation(s)
- Liugang Gao
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Kai Xie
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Jiawei Sun
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Tao Lin
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Jianfeng Sui
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Guanyu Yang
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- Xinye Ni
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
24
Pai S, Hadzic I, Rao C, Zhovannik I, Dekker A, Traverso A, Asteriadis S, Hortal E. Frequency-Domain-Based Structure Losses for CycleGAN-Based Cone-Beam Computed Tomography Translation. Sensors (Basel) 2023; 23:1089. [PMID: 36772129 PMCID: PMC9920313 DOI: 10.3390/s23031089] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Revised: 12/28/2022] [Accepted: 01/05/2023] [Indexed: 06/18/2023]
Abstract
Research exploring CycleGAN-based synthetic image generation has recently accelerated in the medical community due to its ability to leverage unpaired images effectively. However, a commonly established drawback of the CycleGAN, the introduction of artifacts in generated images, makes it unreliable for medical imaging use cases. In an attempt to address this, we explore the effect of structure losses on the CycleGAN and propose a generalized frequency-based loss that aims at preserving the content in the frequency domain. We apply this loss to the use-case of cone-beam computed tomography (CBCT) translation to computed tomography (CT)-like quality. Synthetic CT (sCT) images generated from our methods are compared against baseline CycleGAN along with other existing structure losses proposed in the literature. Our methods (MAE: 85.5, MSE: 20433, NMSE: 0.026, PSNR: 30.02, SSIM: 0.935) quantitatively and qualitatively improve over the baseline CycleGAN (MAE: 88.8, MSE: 24244, NMSE: 0.03, PSNR: 29.37, SSIM: 0.935) across all investigated metrics and are more robust than existing methods. Furthermore, no observable artifacts or loss in image quality were observed. Finally, we demonstrated that sCTs generated using our methods have superior performance compared to the original CBCT images on selected downstream tasks.
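A frequency-based content loss of the general kind described here can be written as a distance between FFT magnitude spectra. A minimal NumPy sketch (the paper's exact loss formulation may differ, e.g. in weighting or normalization):

```python
import numpy as np

def frequency_loss(pred, target):
    """L1 distance between 2D FFT magnitude spectra; constrains the
    generator to preserve image content in the frequency domain."""
    mag_p = np.abs(np.fft.fft2(np.asarray(pred, float)))
    mag_t = np.abs(np.fft.fft2(np.asarray(target, float)))
    return float(np.abs(mag_p - mag_t).mean())

a = np.arange(16.0).reshape(4, 4)
```

Because CycleGAN artifacts tend to show up as spurious high-frequency content, a spectral penalty of this form discourages them without requiring pixel-aligned (paired) training data.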
Affiliation(s)
- Suraj Pai
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Ibrahim Hadzic
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Chinmay Rao
- Division of Image Processing, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Ivan Zhovannik
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Andre Dekker
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Alberto Traverso
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Stylianos Asteriadis
- Department of Advanced Computing Sciences, Maastricht University, 6229 EN Maastricht, The Netherlands
- Enrique Hortal
- Department of Advanced Computing Sciences, Maastricht University, 6229 EN Maastricht, The Netherlands
25
Abbani N, Baudier T, Rit S, Franco FD, Okoli F, Jaouen V, Tilquin F, Barateau A, Simon A, de Crevoisier R, Bert J, Sarrut D. Deep learning-based segmentation in prostate radiation therapy using Monte Carlo simulated cone-beam computed tomography. Med Phys 2022; 49:6930-6944. [PMID: 36000762 DOI: 10.1002/mp.15946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 07/28/2022] [Accepted: 08/05/2022] [Indexed: 12/13/2022] Open
Abstract
PURPOSE Segmenting organs in cone-beam CT (CBCT) images would allow the radiotherapy to be adapted to the organ deformations that may occur between treatment fractions. However, this is a difficult task because of the relative lack of contrast in CBCT images, leading to high inter-observer variability. Deformable image registration (DIR) and deep learning-based automatic segmentation approaches have shown interesting results for this task in recent years. However, they are either sensitive to large organ deformations, or require training a convolutional neural network (CNN) from a database of delineated CBCT images, which is difficult to do without improvement of image quality. In this work, we propose an alternative approach: to train a CNN (using a deep learning-based segmentation tool called nnU-Net) from a database of artificial CBCT images simulated from planning CT, for which it is easier to obtain the organ contours. METHODS Pseudo-CBCT (pCBCT) images were simulated from readily available segmented planning CT images, using the GATE Monte Carlo simulation. CT reference delineations were copied onto the pCBCT, resulting in a database of segmented images used to train the neural network. The studied contours were the bladder, rectum, and prostate. We trained multiple nnU-Net models using different training data: (1) segmented real CBCT; (2) pCBCT; (3) segmented real CT, tested on pseudo-CT (pCT) generated from CBCT with cycleGAN; and (4) a combination of (2) and (3). The evaluation was performed on different datasets of segmented CBCT or pCT by comparing predicted segmentations with reference ones using the Dice similarity score and Hausdorff distance. A qualitative evaluation was also performed to compare DIR-based and nnU-Net-based segmentations. RESULTS Training with pCBCT was found to lead to results comparable to using real CBCT images. When evaluated on CBCT obtained from the same hospital as the CT images used in the simulation of the pCBCT, the model trained with pCBCT scored mean DSCs of 0.92 ± 0.05, 0.87 ± 0.02, and 0.85 ± 0.04 and mean Hausdorff distances of 4.67 ± 3.01, 3.91 ± 0.98, and 5.00 ± 1.32 for the bladder, rectum, and prostate contours, respectively, while the model trained with real CBCT scored mean DSCs of 0.91 ± 0.06, 0.83 ± 0.07, and 0.81 ± 0.05 and mean Hausdorff distances of 5.62 ± 3.24, 6.43 ± 5.11, and 6.19 ± 1.14, respectively. It also outperformed models using pCT or a combination of both, except for the prostate contour when tested on a dataset from a different hospital. Moreover, the resulting segmentations demonstrated clinical acceptability: 78% of bladder segmentations, 98% of rectum segmentations, and 93% of prostate segmentations required minor or no corrections, and for 76% of the patients, all structures required minor or no corrections. CONCLUSION We proposed to use simulated CBCT images to train a nnU-Net segmentation model, avoiding the need to gather complex and time-consuming reference delineations on CBCT images.
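The Dice similarity score and Hausdorff distance used for evaluation can be computed directly from binary masks. A brute-force NumPy sketch (adequate for small masks; production code uses distance transforms):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between two binary masks, via a
    brute-force scan over all foreground-voxel pairs."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(axis=-1))
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

mask = np.zeros((6, 6), dtype=bool)
mask[1:4, 1:4] = True
shifted = np.roll(mask, 1, axis=1)  # same blob, shifted one voxel right
```

Dice measures volumetric overlap while Hausdorff is driven by the single worst boundary point, which is why papers typically report both.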
Affiliation(s)
- Nelly Abbani
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
- Thomas Baudier
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
- Simon Rit
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
- Francesca di Franco
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
- Franklin Okoli
- LaTIM, Université de Bretagne Occidentale, Inserm, Brest, France
- Vincent Jaouen
- LaTIM, Université de Bretagne Occidentale, Inserm, Brest, France
- Anaïs Barateau
- Univ Rennes, CLCC Eugène Marquis, Inserm, Rennes, France
- Antoine Simon
- Univ Rennes, CLCC Eugène Marquis, Inserm, Rennes, France
- Julien Bert
- LaTIM, Université de Bretagne Occidentale, Inserm, Brest, France
- David Sarrut
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
26
O'Hara CJ, Bird D, Al-Qaisieh B, Speight R. Assessment of CBCT-based synthetic CT generation accuracy for adaptive radiotherapy planning. J Appl Clin Med Phys 2022; 23:e13737. [PMID: 36200179 DOI: 10.1002/acm2.13737] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Revised: 05/26/2022] [Accepted: 07/04/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Cone-beam CT (CBCT)-based synthetic CT (sCT) dose calculation has the potential to make the adaptive radiotherapy (ART) pathway more efficient while removing subjectivity. This study assessed four sCT generation methods using 15 rescanned head-and-neck ART patients. Each patient's planning CT (pCT), rescan CT (rCT), and post-rCT CBCT were acquired, with the CBCT deformably registered to the rCT (dCBCT). METHODS The four methods investigated were as follows: method 1, deformably registering the pCT to the dCBCT; method 2, assigning six mass density values to the dCBCT; method 3, iteratively removing artifacts and correcting the dCBCT Hounsfield units (HU); method 4, using a cycle generative adversarial network machine learning model (trained with 45 paired pCT and CBCT). Treatment plans were created on the rCT and recalculated on each sCT. Planning target volume (PTV) and organ-at-risk (OAR) structures were contoured by clinicians on the rCT (high-dose PTV, low-dose PTV, spinal canal, larynx, brainstem, and parotids) to allow the assessment of dose-volume histogram statistics at clinically relevant points. RESULTS The HU mean absolute error (MAE) and minimum dose gamma index pass rate (2%/2 mm) were calculated, and the generation time was measured for 15 patients using the rCT as the comparator. For methods 1-4 the MAE, gamma index pass rate, and generation time were as follows: 59.7 HU, 100.0%, and 143 s; 164.2 HU, 95.2%, and 232 s; 75.7 HU, 99.9%, and 153 s; and 79.4 HU, 99.8%, and 112 s, respectively. Dose differences for PTVs and OARs were all <0.3 Gy except for method 2 (<0.5 Gy). CONCLUSION All methods were considered clinically viable. The machine learning method was found to be most suitable for clinical implementation due to its high dosimetric accuracy and short generation time. Further investigation is required for larger anatomical changes between the CBCT and pCT and for other anatomical sites.
27
Chen X, Liu Y, Yang B, Zhu J, Yuan S, Xie X, Liu Y, Dai J, Men K. A more effective CT synthesizer using transformers for cone-beam CT-guided adaptive radiotherapy. Front Oncol 2022; 12:988800. [PMID: 36091131 PMCID: PMC9454309 DOI: 10.3389/fonc.2022.988800] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Accepted: 07/27/2022] [Indexed: 11/13/2022] Open
Abstract
PURPOSE The challenge of cone-beam computed tomography (CBCT) is its low image quality, which limits its application for adaptive radiotherapy (ART). Despite recent substantial improvement in CBCT imaging using deep learning methods, the image quality still needs to be improved for effective ART application. Spurred by the advantages of transformers, which employ multi-head attention mechanisms to capture long-range contextual relations between image pixels, we proposed a novel transformer-based network (called TransCBCT) to generate synthetic CT (sCT) from CBCT. This study aimed to further improve the accuracy and efficiency of ART. MATERIALS AND METHODS In this study, 91 patients diagnosed with prostate cancer were enrolled. We constructed a transformer-based hierarchical encoder-decoder structure with skip connections, called TransCBCT. The network also employed several convolutional layers to capture local context. The proposed TransCBCT was trained and validated on 6,144 paired CBCT/deformed CT images from 76 patients and tested on 1,026 paired images from 15 patients. The performance of the proposed TransCBCT was compared with a widely recognized style-transfer deep learning method, the cycle-consistent adversarial network (CycleGAN). We evaluated the image quality and clinical value (application in auto-segmentation and dose calculation) for ART needs. RESULTS TransCBCT had superior performance in generating sCT from CBCT. The mean absolute error of TransCBCT was 28.8 ± 16.7 HU, compared to 66.5 ± 13.2 for raw CBCT and 34.3 ± 17.3 for CycleGAN. It can preserve the structure of raw CBCT and reduce artifacts. When applied in auto-segmentation, the Dice similarity coefficients of bladder and rectum between auto-segmentation and oncologist manual contours were 0.92 and 0.84 for TransCBCT, respectively, compared to 0.90 and 0.83 for CycleGAN. When applied in dose calculation, the gamma passing rate (1%/1 mm criterion) was 97.5% ± 1.1% for TransCBCT, compared to 96.9% ± 1.8% for CycleGAN. CONCLUSIONS The proposed TransCBCT can effectively generate sCT from CBCT. It has the potential to improve radiotherapy accuracy.
28
Deng L, Zhang M, Wang J, Huang S, Yang X. Improving cone-beam CT quality using a cycle-residual connection with a dilated convolution-consistent generative adversarial network. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7b0a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 06/21/2022] [Indexed: 11/11/2022]
Abstract
Objective. Cone-beam CT (CBCT) often results in severe image artifacts and inaccurate HU values, meaning poor-quality CBCT images cannot be directly applied to dose calculation in radiotherapy. To overcome this, we propose a cycle-residual connection with a dilated convolution-consistent generative adversarial network (Cycle-RCDC-GAN). Approach. The cycle-consistent generative adversarial network (Cycle-GAN) was modified using dilated convolutions with different dilation rates to extract richer semantic features from input images. Thirty pelvic patients were used to investigate the effect of synthetic CT (sCT) from CBCT, and 55 head and neck patients were used to explore the generalizability of the model. Three generalizability experiments were performed and compared: the pelvis-trained model was applied to the head and neck; the head-and-neck-trained model was applied to the pelvis; and the two datasets were trained together. Main results. The mean absolute error (MAE), the root mean square error (RMSE), peak signal to noise ratio (PSNR), the structural similarity index (SSIM), and spatial nonuniformity (SNU) assessed the quality of the sCT generated from CBCT. Compared with CBCT images, the MAE improved from 28.81 to 18.48, RMSE from 85.66 to 69.50, SNU from 0.34 to 0.30, and PSNR from 31.61 to 33.07, while SSIM improved from 0.981 to 0.989. The sCT objective indicators of Cycle-RCDC-GAN were better than Cycle-GAN’s. The objective metrics for generalizability were also better than Cycle-GAN’s. Significance. Cycle-RCDC-GAN enhances CBCT image quality and has better generalizability than Cycle-GAN, which further promotes the application of CBCT in radiotherapy.
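Dilated convolutions of the kind used in Cycle-RCDC-GAN widen a network's receptive field without adding parameters. A back-of-the-envelope sketch of that effect (the 3-tap kernel and the dilation rates 1, 2, 4 are illustrative assumptions, not the paper's architecture):

```python
# Receptive-field growth of a stack of stride-1 convolutions, showing
# why dilation lets the same number of weights "see" more context.

def receptive_field(kernel_size, dilations):
    """1-D receptive field of a convolution stack (stride 1 throughout)."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d  # each layer widens the field by (k-1)*d
    return rf

print(receptive_field(3, [1, 1, 1]))  # plain 3-layer stack -> 7
print(receptive_field(3, [1, 2, 4]))  # dilated stack, same weights -> 15
```

The dilated stack covers more than twice the spatial extent of the plain one, which is the "richer semantic features" intuition in the abstract.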
29
Rusanov B, Hassan GM, Reynolds M, Sabet M, Kendrick J, Farzad PR, Ebert M. Deep learning methods for enhancing cone-beam CT image quality towards adaptive radiation therapy: A systematic review. Med Phys 2022; 49:6019-6054. [PMID: 35789489 PMCID: PMC9543319 DOI: 10.1002/mp.15840] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 05/21/2022] [Accepted: 06/16/2022] [Indexed: 11/11/2022] Open
Abstract
The use of deep learning (DL) to improve cone-beam CT (CBCT) image quality has gained popularity as computational resources and algorithmic sophistication have advanced in tandem. CBCT imaging has the potential to facilitate online adaptive radiation therapy (ART) by utilizing up-to-date patient anatomy to modify treatment parameters before irradiation. Poor CBCT image quality has been an impediment to realizing ART due to the increased scatter conditions inherent to cone-beam acquisitions. Given the recent interest in DL applications in radiation oncology, and specifically DL for CBCT correction, we provide a systematic theoretical and literature review for future stakeholders. The review encompasses DL approaches for synthetic CT generation, as well as projection domain methods employed in the CBCT correction literature. We review trends pertaining to publications from January 2018 to April 2022 and condense their major findings, with emphasis on study design and deep learning techniques. Clinically relevant endpoints relating to image quality and dosimetric accuracy are summarised, highlighting gaps in the literature. Finally, we make recommendations for both clinicians and DL practitioners based on literature trends and the current state-of-the-art DL methods utilized in radiation oncology.
30
Sun H, Xi Q, Sun J, Fan R, Xie K, Ni X, Yang J. Research on new treatment mode of radiotherapy based on pseudo-medical images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106932. [PMID: 35671601 DOI: 10.1016/j.cmpb.2022.106932] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/02/2022] [Revised: 04/20/2022] [Accepted: 06/01/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Multi-modal medical images with multiple feature information are beneficial for radiotherapy. A new radiotherapy treatment mode based on a triangle generative adversarial network (TGAN) model was proposed to synthesize pseudo-medical images between multi-modal datasets. METHODS CBCT, MRI and CT images of 80 patients with nasopharyngeal carcinoma were selected. The TGAN model, based on a multi-scale discriminant network, was used for data training between different image domains. The generator of the TGAN model refers to cGAN and CycleGAN, and a single generation network can establish the non-linear mapping relationship between multiple image domains. The discriminator used a multi-scale discrimination network to guide the generator to synthesize pseudo-medical images that are similar to real images from both shallow and deep aspects. The accuracy of the pseudo-medical images was verified in anatomy and dosimetry. RESULTS In the three synthesis directions, namely CBCT → CT, CBCT → MRI, and MRI → CT, there were significant differences (p < 0.05) in the three-fold cross-validation results on PSNR and SSIM metrics between the pseudo-medical images obtained based on TGAN and the real images. In the testing stage, for TGAN, the MAE metric results in the three synthesis directions (CBCT → CT, CBCT → MRI, and MRI → CT), presented as mean (standard deviation), were 68.67 (5.83), 83.14 (8.48), and 79.96 (7.59), and the NMI metric results were 0.8643 (0.0253), 0.8051 (0.0268), and 0.8146 (0.0267), respectively. In terms of dose verification, the differences in dose distribution between the pseudo-CT obtained by TGAN and the real CT were minimal. The H values of the measurement results of dose uncertainty in PGTV, PGTVnd, PTV1, and PTV2 were 42.510, 43.121, 17.054, and 7.795, respectively (p < 0.05). The differences were statistically significant.
The gamma pass rate (2%/2 mm) of pseudo-CT obtained by the new model was 94.94% (0.73%), and the numerical results were better than those of the three other comparison models. CONCLUSIONS The pseudo-medical images acquired based on TGAN were close to the real images in anatomy and dosimetry. The pseudo-medical images synthesized by the TGAN model have good application prospects in clinical adaptive radiotherapy.
31
Lemus OMD, Wang Y, Li F, Jambawalikar S, Horowitz DP, Xu Y, Wuu C. Dosimetric assessment of patient dose calculation on a deep learning-based synthesized computed tomography image for adaptive radiotherapy. J Appl Clin Med Phys 2022; 23:e13595. [PMID: 35332646 PMCID: PMC9278692 DOI: 10.1002/acm2.13595] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Revised: 02/07/2022] [Accepted: 03/01/2022] [Indexed: 11/24/2022] Open
Abstract
Purpose Dose computation using cone beam computed tomography (CBCT) images is inaccurate for the purpose of adaptive treatment planning. The main goal of this study is to assess the dosimetric accuracy of synthetic computed tomography (CT)-based calculation for adaptive planning in the upper abdominal region. We hypothesized that deep-learning-based synthetically generated CT images would produce comparable results to a deformed CT (CTdef) in terms of dose calculation, while displaying a more accurate representation of the daily anatomy and therefore superior dosimetric accuracy. Methods We implemented a cycle-consistent generative adversarial network (CycleGAN) architecture to synthesize CT images from the daily acquired CBCT image with minimal error. CBCT and CT images from 17 liver stereotactic body radiation therapy (SBRT) patients were used to train, test, and validate the algorithm. Results The synthetically generated images showed increased signal-to-noise ratio and contrast resolution, and reduced root mean square error, mean absolute error, noise, and artifact severity. Superior edge matching, sharpness, and preservation of anatomical structures from the CBCT images were observed for the synthetic images when compared to the CTdef registration method. Three verification plans (CBCT, CTdef, and synthetic) were created from the original treatment plan and dose volume histogram (DVH) statistics were calculated. The synthetic-based calculation shows comparatively similar results to the CTdef-based calculation with a maximum mean deviation of 1.5%. Conclusions Our findings show that CycleGAN can produce reliable synthetic images for the adaptive delivery framework. Dose calculations can be performed on synthetic images with minimal error. Additionally, enhanced image quality should translate into better daily alignment, increasing treatment delivery accuracy.
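Cycle consistency is the constraint that lets CycleGAN-style models train on unpaired CBCT/CT data: translating CBCT to synthetic CT and back should reproduce the input. A toy sketch with stand-in affine "generators" (G, F, and the voxel values are invented for illustration; real generators are convolutional networks):

```python
# Cycle-consistency loss ||F(G(x)) - x||_1 at the heart of CycleGAN
# training. G maps CBCT -> sCT, F maps sCT -> CBCT; here they are
# exact affine inverses, so the loss is zero.

def l1(xs, ys):
    """Mean absolute difference between two equal-length sequences."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

G = lambda img: [2.0 * v + 10.0 for v in img]    # toy CBCT -> sCT generator
F = lambda img: [(v - 10.0) / 2.0 for v in img]  # toy sCT -> CBCT generator

cbct = [0.0, 50.0, 100.0]          # toy voxel intensities
cycle_loss = l1(F(G(cbct)), cbct)  # round trip should recover the input
print(cycle_loss)                  # exact inverse pair -> 0.0
```

During training this term is added to the adversarial losses of both generators, penalizing any translation that discards anatomical content.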
32
Ozaki S, Kaji S, Nawa K, Imae T, Aoki A, Nakamoto T, Ohta T, Nozawa Y, Yamashita H, Haga A, Nakagawa K. Training of deep cross-modality conversion models with a small dataset, and their application in megavoltage CT to kilovoltage CT conversion. Med Phys 2022; 49:3769-3782. [PMID: 35315529 DOI: 10.1002/mp.15626] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Revised: 02/21/2022] [Accepted: 03/14/2022] [Indexed: 11/06/2022] Open
Abstract
PURPOSE In recent years, deep-learning-based image processing has emerged as a valuable tool for medical imaging owing to its high performance. However, the quality of deep-learning-based methods heavily relies on the amount of training data; the high cost of acquiring a large dataset is a limitation to their utilization in medical fields. Herein, based on deep learning, we developed a computed tomography (CT) modality conversion method requiring only a small number of unpaired images. METHODS The proposed method is based on CycleGAN with several extensions tailored for CT images, which aim at preserving the structure in the processed images and reducing the amount of training data. This method was applied to realize the conversion of megavoltage computed tomography (MVCT) to kilovoltage computed tomography (kVCT) images. Training was conducted using several datasets acquired from patients with head and neck cancer. The size of the datasets ranged from 16 slices (two patients) to 2745 slices (137 patients) for MVCT and 2824 slices (98 patients) for kVCT. RESULTS The required size of the training data was found to be as small as a few hundred slices. By statistical and visual evaluations, the quality improvement and structure preservation of the MVCT images converted by the proposed model were investigated. As a clinical benefit, it was observed by medical doctors that the converted images enhanced the precision of contouring. CONCLUSIONS We developed an MVCT to kVCT conversion model based on deep learning, which can be trained using only a few hundred unpaired images. The stability of the model against changes in data size was demonstrated. This study promotes the reliable use of deep learning in clinical medicine by partially answering commonly asked questions, such as "Is our data sufficient?" and "How much data should we acquire?"
33
Wu W, Qu J, Cai J, Yang R. Multi-resolution residual deep neural network for improving pelvic CBCT image quality. Med Phys 2022; 49:1522-1534. [PMID: 35034367 DOI: 10.1002/mp.15460] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Revised: 11/16/2021] [Accepted: 12/20/2021] [Indexed: 11/08/2022] Open
Abstract
PURPOSE Cone-beam computed tomography (CBCT) is frequently used for accurate image guided radiation therapy (IGRT). However, the poor CBCT image quality prevents its further clinical use. Thus, it is important to improve the HU accuracy and structure preservation of CBCT images. METHODS In this study, we proposed a novel method to generate synthetic CT (sCT) images from CBCT images. A multi-resolution residual deep neural network (RDNN) was adopted for image regression from CBCT images to planning CT (pCT) images. At the coarse level, RDNN was first trained with a large amount of lower resolution images, which makes the network focus on coarse information and prevents overfitting. Finer information was obtained gradually by fine-tuning the coarse model using a smaller number of higher resolution images. Our model was optimized using aligned pCT and CBCT image pairs of a particular body region of 153 prostate cancer patients treated in our hospital (120 for training, 33 for testing). Five-fold cross-validation was used to tune the hyperparameters, and the testing data were used to evaluate the performance of the final models. RESULTS The mean absolute error (MAE) between CBCT and pCT on the testing data was 352.56 HU, while the MAE between the sCT and pCT images was 52.18 HU for our proposed multi-resolution RDNN model, which reduced the MAE by 85.20% (p < 0.01). In addition, the average structural similarity index measure (SSIM) between the sCT and CBCT was 19.64% (p = 0.01) higher than that of pCT and CBCT. CONCLUSIONS The sCT images generated using our proposed multi-resolution RDNN have higher HU accuracy and structural fidelity, which may promote the further applications of CBCT images in the clinic for structure segmentation, dose calculation and adaptive radiotherapy planning.
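The coarse-to-fine scheme above first trains on lower-resolution images and then fine-tunes at full resolution. A minimal sketch of producing the coarse-level training data by 2 × 2 average pooling (the 4 × 4 "image" is invented; the study's actual downsampling method is not specified here):

```python
# Generate a lower-resolution copy of an image for coarse-level training,
# as in multi-resolution schemes: train on downsampled images first,
# then fine-tune on full-resolution ones.

def downsample2x(img):
    """Average-pool a 2-D list-of-lists image by a factor of 2."""
    out = []
    for r in range(0, len(img), 2):
        row = []
        for c in range(0, len(img[0]), 2):
            block = (img[r][c] + img[r][c + 1]
                     + img[r + 1][c] + img[r + 1][c + 1])
            row.append(block / 4.0)  # mean of each 2x2 block
        out.append(row)
    return out

full = [[0, 0, 100, 100],
        [0, 0, 100, 100],
        [50, 50, 200, 200],
        [50, 50, 200, 200]]
print(downsample2x(full))  # [[0.0, 100.0], [50.0, 200.0]]
```

Training on the pooled images exposes the network to global intensity structure before it ever sees fine detail, which is the stated rationale for the coarse stage.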
34
Liu J, Yan H, Cheng H, Liu J, Sun P, Wang B, Mao R, Du C, Luo S. CBCT-based synthetic CT generation using generative adversarial networks with disentangled representation. Quant Imaging Med Surg 2021; 11:4820-4834. [PMID: 34888192 DOI: 10.21037/qims-20-1056] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Accepted: 06/02/2021] [Indexed: 11/06/2022]
Abstract
Background Cone-beam computed tomography (CBCT) plays a key role in image-guided radiotherapy (IGRT); however, its poor image quality has limited its clinical application. In this study, we developed a deep-learning-based approach to translate CBCT images to synthetic CT (sCT) images that preserve both CT image quality and CBCT anatomical structures. Methods A novel synthetic CT generative adversarial network (sCTGAN) was proposed for CBCT-to-CT translation via disentangled representation. The approach of disentangled representation was employed to extract the anatomical information shared by the CBCT and CT image domains. Both on-board CBCT and planning CT of 40 patients were used for network learning, and those of another 12 patients were used for testing. The accuracy of our network was quantitatively evaluated using a series of statistical metrics, including the peak signal-to-noise ratio (PSNR), mean structural similarity index (SSIM), mean absolute error (MAE), and root-mean-square error (RMSE). The effectiveness of our network was compared against three state-of-the-art CycleGAN-based methods. Results The PSNR, SSIM, MAE, and RMSE between the sCT generated by sCTGAN and the deformed planning CT (dpCT) were 34.12 dB, 0.86, 32.70 HU, and 60.53 HU, while the corresponding values between the original CBCT and dpCT were 28.67 dB, 0.64, 70.56 HU, and 112.13 HU. The RMSE (60.53±14.38 HU) of the sCT generated by sCTGAN was less than that of the sCT generated by all three comparison methods (72.40±16.03 HU by CycleGAN, 71.60±15.09 HU by CycleGAN-Unet512, 64.93±14.33 HU by CycleGAN-AG). Conclusions The sCT generated by our sCTGAN network was closer to the ground truth (dpCT) than that of the three comparison CycleGAN-based methods. It provides an effective way to generate high-quality sCT, which has wide application in IGRT and adaptive radiotherapy.
35
Hase T, Nakao M, Imanishi K, Nakamura M, Matsuda T. Improvement of Image Quality of Cone-beam CT Images by Three-dimensional Generative Adversarial Network. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2843-2846. [PMID: 34891840 DOI: 10.1109/embc46164.2021.9629952] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Artifacts and defects in cone-beam computed tomography (CBCT) images are a problem in radiotherapy and surgical procedures. Unsupervised learning-based image translation techniques have been studied to improve the image quality of head and neck CBCT images, but there have been few studies on improving the image quality of abdominal CBCT images, which are strongly affected by organ deformation due to posture and breathing. In this study, we propose a method for improving the image quality of abdominal CBCT images by translating their voxel values to those of corresponding paired CT images using an unsupervised CycleGAN framework. This method preserves anatomical structure through adversarial learning that translates voxel values according to corresponding regions between CBCT and CT images of the same case. The image translation model was trained on 68 CT-CBCT datasets and then applied to 8 test datasets, and the effectiveness of the proposed method in improving the image quality of CBCT images was confirmed.
36
Gao L, Xie K, Wu X, Lu Z, Li C, Sun J, Lin T, Sui J, Ni X. Generating synthetic CT from low-dose cone-beam CT by using generative adversarial networks for adaptive radiotherapy. Radiat Oncol 2021; 16:202. [PMID: 34649572 PMCID: PMC8515667 DOI: 10.1186/s13014-021-01928-w] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Accepted: 06/17/2021] [Indexed: 11/10/2022] Open
Abstract
OBJECTIVE To develop a high-quality synthetic CT (sCT) generation method from low-dose cone-beam CT (CBCT) images by using attention-guided generative adversarial networks (AGGAN), and to apply these images to dose calculations in radiotherapy. METHODS The CBCT/planning CT images of 170 patients undergoing thoracic radiotherapy were used for training and testing. The CBCT images were scanned under a fast protocol with 50% fewer clinical projection frames compared with the standard chest M20 protocol. Training with aligned paired images was performed using conditional adversarial networks (so-called pix2pix), and training with unpaired images was carried out with cycle-consistent adversarial networks (cycleGAN) and AGGAN, through which sCT images were generated. The image quality and Hounsfield unit (HU) values of the sCT images generated by the three neural networks were compared. The treatment plan was designed on CT and copied to the sCT images to calculate the dose distribution. RESULTS The image quality of the sCT images from all three methods is significantly improved compared with the original CBCT images. The AGGAN achieves the best image quality in the testing patients, with the smallest mean absolute error (MAE, 43.5 ± 6.69), largest structural similarity (SSIM, 93.7 ± 3.88) and highest peak signal-to-noise ratio (PSNR, 29.5 ± 2.36). The sCT images generated by all three methods showed superior dose calculation accuracy, with higher gamma passing rates compared with the original CBCT image. The AGGAN offered the highest gamma passing rate (91.4 ± 3.26) under the strictest criterion of 1 mm/1% compared with the other methods. In the phantom study, the sCT images generated by AGGAN demonstrated the best image quality and the highest dose calculation accuracy. CONCLUSIONS High-quality sCT images were generated from low-dose thoracic CBCT images by using the proposed AGGAN through unpaired CBCT and CT images. The dose distribution could be calculated accurately based on sCT images in radiotherapy.
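Gamma passing rates like the 1 mm/1% figure above combine a dose-difference tolerance and a distance-to-agreement tolerance per evaluation point. A deliberately simplified 1-D, globally normalized sketch (real gamma tools interpolate between points and work in 3-D; the dose profiles here are invented):

```python
import math

# Simplified 1-D gamma analysis: a point passes if some reference point
# is simultaneously close in position (scaled by the DTA criterion) and
# close in dose (scaled by the dose-difference criterion).

def gamma_pass_rate(eval_dose, ref_dose, spacing_mm, dta_mm, dd_frac):
    """Percentage of evaluated points with gamma index <= 1."""
    dd_abs = dd_frac * max(ref_dose)  # global dose normalization
    passed = 0
    for i, de in enumerate(eval_dose):
        gamma = min(
            math.sqrt(((i - j) * spacing_mm / dta_mm) ** 2
                      + ((de - dr) / dd_abs) ** 2)
            for j, dr in enumerate(ref_dose)
        )
        passed += gamma <= 1.0
    return 100.0 * passed / len(eval_dose)

ref   = [0.0, 1.0, 2.0, 2.0, 1.0, 0.0]   # reference dose profile (1 mm grid)
eval_ = [0.0, 1.0, 2.0, 2.0, 1.0, 0.0]   # evaluated (recalculated) profile
print(gamma_pass_rate(eval_, ref, 1.0, 1.0, 0.01))  # identical -> 100.0
```

Tightening either tolerance (smaller `dta_mm` or `dd_frac`) makes the criterion stricter, which is why the 1 mm/1% rates quoted above are lower than typical 3%/2 mm rates.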
Affiliation(s)
- Liugang Gao, Kai Xie, Chunying Li, Jiawei Sun, Tao Lin, Jianfeng Sui, Xinye Ni: Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
- Xiaojin Wu: Oncology Department, Xuzhou No.1 People's Hospital, Xuzhou, 221000, China
- Zhengda Lu: Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China; School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing, 213000, China
37. Lee D, Jeong SW, Kim SJ, Cho H, Park W, Han Y. Improvement of megavoltage computed tomography image quality for adaptive helical tomotherapy using cycleGAN-based image synthesis with small datasets. Med Phys 2021; 48:5593-5610. [PMID: 34418109] [DOI: 10.1002/mp.15182]
Abstract
PURPOSE Megavoltage computed tomography (MVCT) offers an opportunity for adaptive helical tomotherapy. However, high noise and reduced contrast in MVCT images, due to a decrease in the imaging dose to patients, limit its usability. Therefore, we propose an algorithm to improve the image quality of MVCT. METHODS The proposed algorithm generates kilovoltage CT (kVCT)-like images from MVCT images using a cycle-consistent generative adversarial network (cycleGAN)-based image synthesis model. Data augmentation using affine transformations was applied to the training data to overcome the lack of data diversity in the network training. The mean absolute error (MAE), root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) were used to quantify the correction accuracy of the images generated by the proposed algorithm. The proposed method was validated by comparing its images with those obtained from conventional and deep-learning-based image processing methods trained on non-augmented datasets. RESULTS The average MAE, RMSE, PSNR, and SSIM values were 18.91 HU, 69.35 HU, 32.73 dB, and 95.48, respectively, for the proposed method, whereas cycleGAN with non-augmented data showed inferior results (19.88 HU, 70.55 HU, 32.62 dB, and 95.19, respectively). The voxel values of the images obtained by the proposed method also showed distributions similar to those of the kVCT images. The dose-volume histogram of the proposed method was likewise similar to that of electron-density-corrected MVCT. CONCLUSIONS The proposed algorithm generates synthetic kVCT images from MVCT images using cycleGAN with small patient datasets. The image quality achieved by the proposed method was improved to the level of a kVCT image while maintaining the anatomical structure of the MVCT image. The evaluation of the dosimetric effectiveness of the proposed method indicates its applicability to accurate treatment planning in adaptive radiation therapy.
Affiliation(s)
- Dongyeon Lee: Department of Radiation Convergence Engineering, Yonsei University, Wonju, Republic of Korea; Department of Radiation Oncology, Samsung Medical Center, Seoul, Republic of Korea
- Sang Woon Jeong, Won Park, Youngyih Han: Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea; Department of Radiation Oncology, Samsung Medical Center, Seoul, Republic of Korea
- Sung Jin Kim: Department of Radiation Oncology, Samsung Medical Center, Seoul, Republic of Korea
- Hyosung Cho: Department of Radiation Convergence Engineering, Yonsei University, Wonju, Republic of Korea
38. Spadea MF, Maspero M, Zaffino P, Seco J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med Phys 2021; 48:6537-6566. [PMID: 34407209] [DOI: 10.1002/mp.15150]
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical methods. We present here a systematic review of these methods, grouping them into three categories according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography-based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The key characteristics of the DL methods were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity, future trends, and potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.
Affiliation(s)
- Maria Francesca Spadea, Paolo Zaffino: Department of Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Matteo Maspero: Division of Imaging & Oncology, Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
- Joao Seco: Division of Biomedical Physics in Radiation Oncology, DKFZ German Cancer Research Center, Heidelberg, Germany; Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
39. Taniguchi T, Hara T, Shimozato T, Hyodo F, Ono K, Nakaya S, Noda Y, Kato H, Tanaka O, Matsuo M. Effect of computed tomography value error on dose calculation in adaptive radiotherapy with Elekta X-ray volume imaging cone beam computed tomography. J Appl Clin Med Phys 2021; 22:271-279. [PMID: 34375008] [PMCID: PMC8425939] [DOI: 10.1002/acm2.13384]
Abstract
Purpose We evaluated the effect of changing the scan mode of the Elekta X-ray volume imaging cone beam computed tomography (CBCT) system on the accuracy of dose calculation, which may be affected by computed tomography (CT) value errors in three dimensions. Methods We used an electron density phantom and measured the CT values in three dimensions. CT values were compared with planning computed tomography (pCT) values for various materials. The evaluated scan modes were for the head and neck (S-scan), chest (M-scan), and pelvis (L-scan), with various collimators and filter systems. To evaluate the effects of the CT value error of the CBCT on dose error, Monte Carlo dose calculations were performed using pCT and CBCT images. Results The L-scan had a CT value error of approximately 800 HU at the isocenter compared with the pCT. Furthermore, inhomogeneity in the longitudinal CT value profile was observed in the bone material. The dose error for a ±100 HU difference in CT values for the S-scan and M-scan was within ±2%. The center of the L-scan had a CT error of approximately 800 HU and a dose error of approximately 6%. The dose error of the L-scan occurred in the beam path for both a single field and two parallel opposed fields, and the maximum error occurred at the center of the phantom for both the 4-field box and single-arc techniques. Conclusions We demonstrated the three-dimensional CT value characteristics of the CBCT by evaluating the CT value error obtained under various imaging conditions. The L-scan is considerably affected by not having a dedicated bowtie filter, and the S-scan without the bowtie filter causes CT value errors in the longitudinal direction. Moreover, the CBCT dose errors for the 4-field box and single-arc irradiation techniques converge at the isocenter.
Affiliation(s)
- Takuya Taniguchi: Department of Radiation Oncology, Asahi University Hospital, Gifu, Japan; Department of Radiology, Gifu University, Gifu, Japan
- Takanori Hara: Department of Medical Technology, Nakatsugawa Municipal General Hospital, Gifu, Japan
- Tomohiro Shimozato: Faculty of Radiological Technology, School of Health Sciences, Gifu University of Medical Science, Seki, Japan
- Fuminori Hyodo: Department of Radiology Frontier Science for Imaging, School of Medicine, Gifu University, Gifu, Japan
- Kose Ono, Shuto Nakaya, Osamu Tanaka: Department of Radiation Oncology, Asahi University Hospital, Gifu, Japan
- Hiroki Kato: Department of Radiology, Gifu University, Gifu, Japan
40. Rossi M, Cerveri P. Comparison of Supervised and Unsupervised Approaches for the Generation of Synthetic CT from Cone-Beam CT. Diagnostics (Basel) 2021; 11:1435. [PMID: 34441369] [PMCID: PMC8395013] [DOI: 10.3390/diagnostics11081435]
Abstract
Due to major artifacts and uncalibrated Hounsfield units (HU), cone-beam computed tomography (CBCT) cannot be readily used for diagnostic and therapy planning purposes. This study addresses image-to-image translation by convolutional neural networks (CNNs) to convert CBCT to CT-like scans, comparing supervised and unsupervised training techniques on a publicly available pelvic CT/CBCT dataset. Interestingly, quantitative results favored the supervised approach over the unsupervised one, showing improvements in HU accuracy (62% vs. 50%), structural similarity index (2.5% vs. 1.1%), and peak signal-to-noise ratio (15% vs. 8%). Qualitative results, conversely, showcased more anatomical artifacts in the synthetic CT generated by the supervised techniques. This was attributed to the higher sensitivity of the supervised training technique to the pixel-wise correspondence contained in the loss function. The unsupervised technique does not require correspondence and mitigates this drawback, as it combines adversarial, cycle consistency, and identity loss functions. Overall, two main impacts qualify the paper: (a) the feasibility of CNNs to generate accurate synthetic CT from CBCT images, which is fast and easy to use compared to traditional techniques applied in clinics; (b) the proposal of guidelines to drive the selection of the better training technique, which can be transferred to more general image-to-image translation tasks.
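The combination of adversarial, cycle consistency, and identity losses mentioned above can be sketched as a single generator objective. This is a schematic, not the paper's exact formulation: the least-squares adversarial form and the weights lam_cyc and lam_id are assumptions borrowed from common CycleGAN practice.

```python
import numpy as np

def cyclegan_generator_loss(g, f, d_x, d_y, x, y, lam_cyc=10.0, lam_id=5.0):
    """Composite CycleGAN generator objective.
    g: X->Y generator, f: Y->X generator; d_x, d_y: discriminators
    returning per-pixel realness scores; x, y: image batches."""
    # Adversarial term (least-squares GAN): try to fool both discriminators.
    adv = np.mean((d_y(g(x)) - 1.0) ** 2) + np.mean((d_x(f(y)) - 1.0) ** 2)
    # Cycle consistency: x -> g -> f should reconstruct x, and vice versa.
    cyc = np.mean(np.abs(f(g(x)) - x)) + np.mean(np.abs(g(f(y)) - y))
    # Identity: a generator fed a target-domain image should change it little.
    idt = np.mean(np.abs(g(y) - y)) + np.mean(np.abs(f(x) - x))
    return float(adv + lam_cyc * cyc + lam_id * idt)
```

In practice g and f are CNN generators and d_x, d_y are discriminators trained alternately against this objective; plain callables suffice here to show the structure, and with perfect generators and fully fooled discriminators the loss is zero.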
41. Chen L, Liang X, Shen C, Nguyen D, Jiang S, Wang J. Synthetic CT generation from CBCT images via unsupervised deep learning. Phys Med Biol 2021; 66. [PMID: 34061043] [DOI: 10.1088/1361-6560/ac01b6]
Abstract
Adaptive radiation therapy (ART) is applied to account for anatomical variations observed over the treatment course. Daily or weekly cone-beam computed tomography (CBCT) is commonly used in the clinic for patient positioning, but CBCT's inaccuracy in Hounsfield units (HU) prevents its application to dose calculation and treatment planning. Adaptive re-planning can be performed by deformably registering the planning CT (pCT) to the CBCT. However, scattering artifacts and noise in CBCT decrease the accuracy of deformable registration and induce uncertainty in the treatment plan. Hence, generating from CBCT a synthetic CT (sCT) that has the same anatomical structure as the CBCT but accurate HU values is desirable for ART. We proposed an unsupervised style-transfer-based approach to generate sCT from CBCT and pCT. Unsupervised learning was desired because exactly matched CBCT and CT are rarely available, even when they are taken a few minutes apart. In the proposed model, CBCT and pCT are two inputs that provide anatomical structure and accurate HU information, respectively. The training objective function is designed to simultaneously minimize (1) a contextual loss between sCT and CBCT, to maintain the content and structure of the CBCT in the sCT, and (2) a style loss between sCT and pCT, to achieve pCT-like image quality in the sCT. We used CBCT and pCT images of 114 patients to train and validate the designed model, and another 29 independent patient cases to test the model's effectiveness. We quantitatively compared the resulting sCT with the original CBCT, using the deformed same-day pCT as reference. The structural similarity index, peak signal-to-noise ratio, and mean absolute error in HU of the sCT were 0.9723, 33.68, and 28.52, respectively, while those of the CBCT were 0.9182, 29.67, and 49.90, respectively. We have demonstrated the effectiveness of the proposed model in using CBCT and pCT to synthesize CT-quality images. This model may permit using CBCT for advanced applications such as adaptive treatment planning.
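The "style loss" above is, in style-transfer work, conventionally a distance between Gram matrices of feature maps. The sketch below shows that standard Gram-matrix form on raw feature arrays; it is illustrative only, since the paper's actual style and contextual losses are computed on deep-network features, and the contextual loss has its own, more involved definition.

```python
import numpy as np

def gram_matrix(feat):
    """feat: (C, N) array of C feature channels over N spatial positions.
    Returns the C x C Gram matrix of channel correlations, normalized by N."""
    c, n = feat.shape
    return feat @ feat.T / n

def style_loss(feat_a, feat_b):
    """Mean squared difference between Gram matrices (Gatys-style)."""
    ga, gb = gram_matrix(feat_a), gram_matrix(feat_b)
    return float(np.mean((ga - gb) ** 2))
```

Matching Gram matrices constrains channel co-activation statistics (texture and intensity distribution) without demanding pixel-wise correspondence, which is why a style term can transfer pCT-like image quality onto the CBCT-derived content.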
Affiliation(s)
- Liyuan Chen, Xiao Liang, Chenyang Shen, Dan Nguyen, Steve Jiang, Jing Wang: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
42. Zhao J, Chen Z, Wang J, Xia F, Peng J, Hu Y, Hu W, Zhang Z. MV CBCT-Based Synthetic CT Generation Using a Deep Learning Method for Rectal Cancer Adaptive Radiotherapy. Front Oncol 2021; 11:655325. [PMID: 34136391] [PMCID: PMC8201514] [DOI: 10.3389/fonc.2021.655325]
Abstract
Due to image quality limitations, online megavoltage cone beam CT (MV CBCT), which represents the real online patient anatomy, cannot be used directly to perform adaptive radiotherapy (ART). In this study, we used a deep learning method, the cycle-consistent adversarial network (CycleGAN), to improve the MV CBCT image quality and Hounsfield unit (HU) accuracy for rectal cancer patients, to make the generated synthetic CT (sCT) eligible for ART. Forty rectal cancer patients treated with intensity modulated radiotherapy (IMRT) were involved in this study. The CT and MV CBCT images of 30 patients were used for model training, and the images of the remaining 10 patients were used for evaluation. The image quality, autosegmentation capability, and dose calculation capability (using the autoplanning technique) of the generated sCT were evaluated. The mean absolute error (MAE) was reduced from 135.84 ± 41.59 HU for the CT-CBCT comparison to 52.99 ± 12.09 HU for the CT-sCT comparison. The structural similarity (SSIM) index for the CT-sCT comparison was 0.81 ± 0.03, a great improvement over the 0.44 ± 0.07 for the CT-CBCT comparison. The autosegmentation model performance on sCT for femoral heads was accurate and required almost no manual modification. For the CTV and bladder, although modification was needed for autocontouring, the Dice similarity coefficient (DSC) indices were high, at 0.93 and 0.94, respectively. For dose evaluation, the sCT-based plan had a much smaller dose deviation from the CT-based plan than the CBCT-based plan did. The proposed method solved a key problem for realizing rectal cancer ART based on MV CBCT. The generated sCT enables ART based on the actual patient anatomy at the treatment position.
Affiliation(s)
- Jun Zhao, Jiazhou Wang, Fan Xia, Jiayuan Peng, Yiwen Hu, Weigang Hu, Zhen Zhang: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Zhi Chen: Department of Medical Physics, Shanghai Proton and Heavy Ion Center, Shanghai, China
43. Training a deep neural network coping with diversities in abdominal and pelvic images of children and young adults for CBCT-based adaptive proton therapy. Radiother Oncol 2021; 160:250-258. [DOI: 10.1016/j.radonc.2021.05.006]
Abstract
PURPOSE To train a deep neural network for correcting abdominal and pelvic cone-beam computed tomography (CBCT) of children and young adults in the presence of diverse patient sizes, anatomic extents, and scan parameters. MATERIALS AND METHODS Pretreatment CBCT and planning/repeat CT image pairs from 64 children and young adults treated with proton therapy (aged 1-23 years) were analyzed. To evaluate the impact of anatomic extent and dataset size in the training data, we compared the performance of three cycle-consistent generative adversarial network models that were separately trained on three datasets comprising abdominal (n = 21), pelvic (n = 29), and combined abdominal-pelvic (n = 50) image pairs, respectively. The maximum body width of each patient was normalized to a fixed width before training and model application, to reduce the impact of variations in body size. The CBCT images corrected by the three models were comparatively evaluated against the repeat CT closest in time to the CBCT (median gap, 0 days; range, 0-6 days) in HU accuracy, estimated dose distribution, and proton range. RESULTS The network model trained on the combined dataset significantly outperformed the abdomen and pelvis models in the mean absolute HU error of the corrected CBCT from 14 testing patients (47 ± 7 HU versus 51 ± 8 HU; paired Wilcoxon signed-rank test, P < 0.01). The larger error (60 ± 7 HU) without body-size normalization confirmed the efficacy of the preprocessing. The model trained on the combined dataset yielded gamma passing rates of 98.5 ± 1.9% (2%/2 mm criterion), and the range (80% distal fall-off) differences from the reference were within ±3 mm for 91.2 ± 11.5% of beamlets. CONCLUSION Combining data from adjacent anatomic sites and normalizing age-dependent body sizes in children and young adults were beneficial for training a neural network to accurately estimate proton dose from CBCT despite the limited training data size and anatomic diversity.
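Gamma passing rates such as the 98.5% (2%/2 mm) above combine a dose-difference criterion with a distance-to-agreement criterion. A brute-force 2D sketch under common assumptions (global normalization to the reference maximum and a 10% low-dose threshold; published implementations differ in interpolation and search details):

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dose_crit=0.02, dist_mm=2.0,
                    low_dose_cut=0.10):
    """Global 2D gamma analysis. ref, ev: dose grids on the same lattice;
    spacing_mm: pixel spacing. Returns the percentage of evaluated
    reference points with gamma <= 1."""
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    d_max = ref.max()
    dose_tol = dose_crit * d_max          # global dose criterion
    passed = total = 0
    for i, j in zip(*np.nonzero(ref >= low_dose_cut * d_max)):
        dist2 = (((yy - i) * spacing_mm) ** 2 +
                 ((xx - j) * spacing_mm) ** 2) / dist_mm ** 2
        dd2 = ((ev - ref[i, j]) / dose_tol) ** 2
        gamma = np.sqrt((dist2 + dd2).min())  # best match over the grid
        passed += gamma <= 1.0
        total += 1
    return 100.0 * passed / total
```

Clinical tools additionally interpolate between grid points and restrict the search radius; this exhaustive loop is only meant to make the gamma definition concrete.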
44. Sun H, Fan R, Li C, Lu Z, Xie K, Ni X, Yang J. Imaging Study of Pseudo-CT Synthesized From Cone-Beam CT Based on 3D CycleGAN in Radiotherapy. Front Oncol 2021; 11:603844. [PMID: 33777746] [PMCID: PMC7994515] [DOI: 10.3389/fonc.2021.603844]
Abstract
Purpose To propose a synthesis method for pseudo-CT (CTCycleGAN) images based on an improved 3D cycle-consistent generative adversarial network (CycleGAN), addressing the limitations of cone-beam CT (CBCT), which cannot be directly applied to the correction of radiotherapy plans. Methods An improved U-Net with residual connections and attention gates was used as the generator, and the discriminator was a fully convolutional network (FCN). The imaging quality of the pseudo-CT images was improved by adding a 3D gradient loss function. Fivefold cross-validation was performed to validate our model. Each generated pseudo-CT was compared against the real CT image (ground-truth CT, CTgt) of the same patient based on the mean absolute error (MAE) and structural similarity index (SSIM). The Dice similarity coefficient (DSC) was used to evaluate the segmentation results of the pseudo-CT and real CT. The 3D CycleGAN performance was compared to that of a 2D CycleGAN based on normalized mutual information (NMI) and peak signal-to-noise ratio (PSNR) metrics between the pseudo-CT and CTgt images. The dosimetric accuracy of the pseudo-CT images was evaluated by gamma analysis. Results The MAE values between CTCycleGAN and the real CT in fivefold cross-validation were 52.03 ± 4.26 HU, 50.69 ± 5.25 HU, 52.48 ± 4.42 HU, 51.27 ± 4.56 HU, and 51.65 ± 3.97 HU, respectively, and the SSIM values were 0.87 ± 0.02, 0.86 ± 0.03, 0.85 ± 0.02, 0.85 ± 0.03, and 0.87 ± 0.03, respectively. The DSC values for the segmentation of the bladder, cervix, rectum, and bone between CTCycleGAN and the real CT images were 91.58 ± 0.45, 88.14 ± 1.26, 87.23 ± 2.01, and 92.59 ± 0.33, respectively. Compared with the 2D CycleGAN, the 3D CycleGAN-based pseudo-CT image is closer to the real image, with NMI values of 0.90 ± 0.01 and PSNR values of 30.70 ± 0.78. The gamma pass rate of the dose distribution between CTCycleGAN and CTgt was 97.0% (2%/2 mm). Conclusion The pseudo-CT images obtained with the improved 3D CycleGAN have more accurate electron density and anatomical structure.
Affiliation(s)
- Hongfei Sun, Rongbo Fan, Jianhua Yang: School of Automation, Northwestern Polytechnical University, Xi'an, China
- Chunying Li, Zhengda Lu, Kai Xie, Xinye Ni: Department of Radiotherapy, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; The Center of Medical Physics with Nanjing Medical University, Changzhou, China; The Key Laboratory of Medical Physics in Changzhou, Changzhou, China
45. Yan C, Lin J, Li H, Xu J, Zhang T, Chen H, Woodruff HC, Wu G, Zhang S, Xu Y, Lambin P. Cycle-Consistent Generative Adversarial Network: Effect on Radiation Dose Reduction and Image Quality Improvement in Ultralow-Dose CT for Evaluation of Pulmonary Tuberculosis. Korean J Radiol 2021; 22:983-993. [PMID: 33739634] [PMCID: PMC8154783] [DOI: 10.3348/kjr.2020.0988]
Abstract
Objective To investigate the image quality of ultralow-dose CT (ULDCT) of the chest reconstructed using a cycle-consistent generative adversarial network (CycleGAN)-based deep learning method in the evaluation of pulmonary tuberculosis. Materials and Methods Between June 2019 and November 2019, 103 patients (mean age, 40.8 ± 13.6 years; 61 men and 42 women) with pulmonary tuberculosis were prospectively enrolled to undergo standard-dose CT (120 kVp with automated exposure control), followed immediately by ULDCT (80 kVp and 10 mAs). The images of the two successive scans were used to train the CycleGAN framework for image-to-image translation. The denoising efficacy of the CycleGAN algorithm was compared with that of hybrid and model-based iterative reconstruction. Repeated-measures analysis of variance and the Wilcoxon signed-rank test were performed to compare the objective measurements and the subjective image quality scores, respectively. Results With the optimized CycleGAN denoising model, using the ULDCT images as input, the peak signal-to-noise ratio and structural similarity index improved by 2.0 dB and 0.21, respectively. The CycleGAN-generated denoised ULDCT images typically provided satisfactory image quality for optimal visibility of anatomic structures and pathological findings, with a lower level of image noise (mean ± standard deviation [SD], 19.5 ± 3.0 Hounsfield units [HU]) than that of the hybrid reconstruction (66.3 ± 10.5 HU, p < 0.001) and a noise level similar to that of model-based iterative reconstruction (19.6 ± 2.6 HU, p > 0.908). The CycleGAN-generated images showed the highest contrast-to-noise ratios for the pulmonary lesions, followed by the model-based and hybrid iterative reconstructions. The mean effective radiation dose of ULDCT was 0.12 mSv, a mean reduction of 93.9% compared to standard-dose CT. Conclusion The optimized CycleGAN technique may allow the synthesis of diagnostically acceptable images from ULDCT of the chest for the evaluation of pulmonary tuberculosis.
Affiliation(s)
- Chenggong Yan: Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China; The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
- Jie Lin, Siqi Zhang, Yikai Xu: Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Haixia Li, Tianjing Zhang: Clinical and Technical Solution, Philips Healthcare, Guangzhou, China
- Jun Xu: Department of Hematology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Hao Chen: Jiangsu JITRI Sioux Technologies Co., Ltd., Suzhou, China
- Henry C Woodruff, Philippe Lambin: The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Imaging, GROW-School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Guangyao Wu: The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
46
Vinas L, Scholey J, Descovich M, Kearney V, Sudhyadhom A. Improved contrast and noise of megavoltage computed tomography (MVCT) through cycle-consistent generative machine learning. Med Phys 2021; 48:676-690. [PMID: 33232526 PMCID: PMC8743188 DOI: 10.1002/mp.14616] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Revised: 09/15/2020] [Accepted: 11/12/2020] [Indexed: 01/11/2023] Open
Abstract
PURPOSE Megavoltage computed tomography (MVCT) has been implemented on many radiation therapy treatment machines as a tomographic imaging modality that allows for three-dimensional visualization and localization of patient anatomy. Yet MVCT images exhibit lower contrast and greater noise than their kilovoltage CT (kVCT) counterparts. In this work, we sought to address these disadvantages of MVCT images through an image-to-image machine learning transformation between MVCT and kVCT images. We demonstrated that by learning the style of kVCT images, MVCT images can be converted into high-quality synthetic kVCT (skVCT) images with higher contrast and lower noise, when compared to the original MVCT. METHODS Kilovoltage CT and MVCT images of 120 head and neck (H&N) cancer patients treated on an Accuray TomoHD system were retrospectively analyzed in this study. A cycle-consistent generative adversarial network (CycleGAN), a variant of the generative adversarial network (GAN), was used to learn Hounsfield Unit (HU) transformations from MVCT to kVCT images, creating skVCT images. A formal mathematical proof is given describing the interplay between function sensitivity and input noise and how it applies to the error variance of a high-capacity function trained with noisy input data. Finally, we show how skVCT shares distributional similarity with kVCT for various macro-structures found in the body. RESULTS Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were improved in skVCT images relative to the original MVCT images and were consistent with kVCT images. Specifically, skVCT CNR for muscle-fat, bone-fat, and bone-muscle improved to 14.8 ± 0.4, 122.7 ± 22.6, and 107.9 ± 22.4 compared with 1.6 ± 0.3, 7.6 ± 1.9, and 6.0 ± 1.7, respectively, in the original MVCT images and was more consistent with kVCT CNR values of 15.2 ± 0.8, 124.9 ± 27.0, and 109.7 ± 26.5, respectively.
Noise was significantly reduced in skVCT images with SNR values improving by roughly an order of magnitude and consistent with kVCT SNR values. Axial slice mean (S-ME) and mean absolute error (S-MAE) agreement between kVCT and MVCT/skVCT improved, on average, from -16.0 and 109.1 HU to 8.4 and 76.9 HU, respectively. CONCLUSIONS A kVCT-like qualitative aid was generated from input MVCT data through a CycleGAN instance. This qualitative aid, skVCT, was robust toward embedded metallic material, dramatically improves HU alignment from MVCT, and appears perceptually similar to kVCT with SNR and CNR values equivalent to that of kVCT images.
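The SNR and CNR figures reported above are typically computed from region-of-interest (ROI) statistics in HU. The following is a hedged sketch of the standard definitions, not the paper's analysis code; the pooled-SD fallback for the noise term is one common convention among several.

```python
import numpy as np

def roi_stats(image, mask):
    """Mean and standard deviation of the voxels selected by a boolean mask."""
    vals = image[mask]
    return vals.mean(), vals.std()

def snr(image, roi_mask):
    """Signal-to-noise ratio of a single ROI: mean signal over its SD."""
    mean, sd = roi_stats(image, roi_mask)
    return mean / sd

def cnr(image, roi_a, roi_b, noise_mask=None):
    """Contrast-to-noise ratio between two tissue ROIs.

    Noise is taken from a homogeneous background ROI if one is given,
    otherwise from the pooled SD of the two tissue ROIs.
    """
    mean_a, sd_a = roi_stats(image, roi_a)
    mean_b, sd_b = roi_stats(image, roi_b)
    if noise_mask is not None:
        _, noise = roi_stats(image, noise_mask)
    else:
        noise = np.sqrt((sd_a ** 2 + sd_b ** 2) / 2.0)
    return abs(mean_a - mean_b) / noise
```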
Affiliation(s)
- Luciano Vinas
- Department of Physics, University of California Berkeley, Berkeley, California 94720
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
- Jessica Scholey
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
- Martina Descovich
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
- Vasant Kearney
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
- Atchar Sudhyadhom
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
47
Cone-beam CT image quality improvement using Cycle-Deblur consistent adversarial networks (Cycle-Deblur GAN) for chest CT imaging in breast cancer patients. Sci Rep 2021; 11:1133. [PMID: 33441936 PMCID: PMC7807016 DOI: 10.1038/s41598-020-80803-2] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Accepted: 12/23/2020] [Indexed: 01/26/2023] Open
Abstract
Cone-beam computed tomography (CBCT) integrated with a linear accelerator is widely used to increase the accuracy of radiotherapy and plays an important role in image-guided radiotherapy (IGRT). Compared with fan-beam computed tomography (FBCT), however, CBCT image quality is degraded by X-ray scattering, noise, and artefacts. We proposed a deep learning model, “Cycle-Deblur GAN”, combining the CycleGAN and Deblur-GAN models to improve the image quality of chest CBCT images. A total of 8706 CBCT-FBCT image pairs were used for training, and 1150 image pairs were used for testing. The generated CBCT images from the Cycle-Deblur GAN model demonstrated CT values closer to FBCT in the lung, breast, mediastinum, and sternum than the CycleGAN and RED-CNN models. The quantitative evaluations of MAE, PSNR, and SSIM for CBCT generated from the Cycle-Deblur GAN model demonstrated better results than the CycleGAN and RED-CNN models. The Cycle-Deblur GAN model improved image quality and CT-value accuracy and preserved structural details for chest CBCT images.
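The cycle-consistency idea shared by the CycleGAN-based methods in these entries can be illustrated with a toy example. The affine maps `G` and `F` below are placeholders standing in for the two generator networks (e.g. CBCT→FBCT and back); they are chosen so the loss can be checked by hand, and are not meant to resemble the actual CNN generators.

```python
import numpy as np

def G(x):
    """Stand-in forward generator (e.g. CBCT -> synthetic FBCT)."""
    return 2.0 * x + 10.0

def F(y):
    """Stand-in inverse generator (e.g. FBCT -> synthetic CBCT); exact inverse of G."""
    return (y - 10.0) / 2.0

def cycle_consistency_loss(x, y):
    """L1 cycle term ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 from the CycleGAN objective."""
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

x = np.array([0.0, 50.0, 100.0])   # toy "CBCT" intensities
y = np.array([10.0, 110.0, 210.0]) # toy "FBCT" intensities
loss = cycle_consistency_loss(x, y)  # 0.0 here, because F inverts G exactly
```

During training this term is added to the adversarial losses of both generators; driving it toward zero is what lets CycleGAN learn from unpaired or imperfectly paired scans.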
48
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538 PMCID: PMC7856512 DOI: 10.1002/acm2.13121] [Citation(s) in RCA: 100] [Impact Index Per Article: 33.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 11/12/2020] [Accepted: 11/21/2020] [Indexed: 02/06/2023] Open
Abstract
This paper reviewed deep learning-based studies of medical image synthesis and their clinical applications. Specifically, it summarized recent developments in deep learning-based methods for inter- and intra-modality image synthesis, listing and highlighting the proposed methods, study designs, and reported performances of representative studies together with their clinical applications. The challenges identified across the reviewed studies were then summarized and discussed.
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Jacob F. Wynne
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J. Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
49
Eckl M, Hoppen L, Sarria GR, Boda-Heggemann J, Simeonova-Chergou A, Steil V, Giordano FA, Fleckenstein J. Evaluation of a cycle-generative adversarial network-based cone-beam CT to synthetic CT conversion algorithm for adaptive radiation therapy. Phys Med 2020; 80:308-316. [PMID: 33246190 DOI: 10.1016/j.ejmp.2020.11.007] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/10/2020] [Revised: 10/29/2020] [Accepted: 11/05/2020] [Indexed: 12/17/2022] Open
Abstract
PURPOSE Image-guided radiation therapy could benefit from implementing adaptive radiation therapy (ART) techniques. A cycle-generative adversarial network (cycle-GAN)-based cone-beam computed tomography (CBCT)-to-synthetic CT (sCT) conversion algorithm was evaluated regarding image quality, image segmentation and dosimetric accuracy for head and neck (H&N), thoracic and pelvic body regions. METHODS Using a cycle-GAN, three body site-specific models were first trained with independent paired CT and CBCT datasets from a kV imaging system (XVI, Elekta). sCT images were generated from the first-fraction CBCT for 15 patients of each body region. Mean errors (ME) and mean absolute errors (MAE) were analyzed for the sCT. On the sCT, manually delineated structures were compared to deformed structures from the planning CT (pCT) and evaluated with standard segmentation metrics. Treatment plans were recalculated on the sCT. A comparison of clinically relevant dose-volume parameters (D98, D50 and D2 of the target volume) and a 3D-gamma (3%/3mm) analysis were performed. RESULTS The mean ME and MAE were 1.4, 29.6, 5.4 Hounsfield units (HU) and 77.2, 94.2, 41.8 HU for the H&N, thoracic and pelvic regions, respectively. Dice similarity coefficients varied between 66.7 ± 8.3% (seminal vesicles) and 94.9 ± 2.0% (lungs). Maximum mean surface distances were 6.3 mm (heart), followed by 3.5 mm (brainstem). The mean dosimetric differences of the target volumes did not exceed 1.7%. Mean 3D gamma pass rates greater than 97.8% were achieved in all cases. CONCLUSIONS The presented method generates sCT images with a quality close to pCT and yielded clinically acceptable dosimetric deviations. Thus, an important prerequisite towards clinical implementation of CBCT-based ART is fulfilled.
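The HU-agreement and segmentation metrics this study reports (ME, MAE, Dice similarity coefficient) reduce to a few NumPy operations. The sketch below illustrates the standard definitions only; it is not the study's evaluation pipeline.

```python
import numpy as np

def me_mae_hu(sct, pct):
    """Mean error and mean absolute error in HU between synthetic and planning CT."""
    diff = sct.astype(np.float64) - pct.astype(np.float64)
    return diff.mean(), np.abs(diff).mean()

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

For example, two masks that overlap on half their voxels yield a Dice of 0.5, and an sCT whose errors cancel symmetrically can have an ME of 0 HU while its MAE stays large, which is why both are reported.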
Affiliation(s)
- Miriam Eckl
- Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
- Lea Hoppen
- Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
- Gustavo R Sarria
- Department of Radiology and Radiation Oncology, University Hospital Bonn, Germany
- Judit Boda-Heggemann
- Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
- Anna Simeonova-Chergou
- Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
- Volker Steil
- Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
- Frank A Giordano
- Department of Radiology and Radiation Oncology, University Hospital Bonn, Germany
- Jens Fleckenstein
- Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
50
Xie S, Liang Y, Yang T, Song Z. Contextual loss based artifact removal method on CBCT image. J Appl Clin Med Phys 2020; 21:166-177. [PMID: 33136307 PMCID: PMC7769412 DOI: 10.1002/acm2.13084] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2020] [Revised: 09/09/2020] [Accepted: 10/02/2020] [Indexed: 12/28/2022] Open
Abstract
Purpose Cone beam computed tomography (CBCT) offers advantages such as a high ray utilization rate, the same spatial resolution within and between slices, and high precision. It is one of the most actively studied topics in computed tomography (CT) research. However, its application is hindered by scatter artifacts. This paper proposes a novel scatter artifact removal algorithm based on a convolutional neural network (CNN), with contextual loss as the loss function. Methods In the proposed method, contextual loss is added to a simple CNN to correct CBCT artifacts in the pelvic region. The algorithm learns the mapping from CBCT images to planning CT images. A total of 627 CBCT-CT pairs from 11 patients were used to train the network, and the proposed algorithm was evaluated in terms of the mean absolute error (MAE), the average peak signal-to-noise ratio (PSNR), and other metrics. The proposed method was compared with other methods to illustrate its effectiveness. Results The proposed method can remove artifacts (including streaking, shadowing, and cupping) from the CBCT image. Furthermore, key details such as the internal contours and texture information of the pelvic region are well preserved. Analysis of the average CT number, average MAE, and average PSNR indicated that the proposed method improved the image quality. Test results obtained with chest data also indicated that the proposed method could be applied to other anatomies. Conclusions Although the CBCT-CT image pairs are not completely matched at the pixel level, the proposed method can effectively correct the artifacts in the CBCT slices and improve the image quality. The average CT number of the regions of interest (including bones and skin) also exhibited a significant improvement. Furthermore, the proposed method can enhance the performance of downstream applications such as dose estimation and segmentation.
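Contextual loss (Mechrez et al., 2018) compares sets of feature vectors without requiring pixel-level alignment, which is why it suits the imperfectly matched CBCT-CT pairs described above. The following is a simplified NumPy sketch of the published formulation, not this paper's implementation; in practice the feature vectors would come from a pretrained CNN rather than being passed in directly, and the bandwidth `h` is a tunable hyperparameter.

```python
import numpy as np

def contextual_loss(x_feats, y_feats, h=0.5, eps=1e-5):
    """Simplified contextual loss between two sets of feature vectors.

    x_feats: (N, C) source features; y_feats: (M, C) target features.
    Follows the cosine-distance variant: relative distances -> softmax-like
    affinities -> best-match similarity -> negative log.
    """
    mu = y_feats.mean(axis=0)                       # centre both sets on the target mean
    xn = (x_feats - mu)
    yn = (y_feats - mu)
    xn = xn / (np.linalg.norm(xn, axis=1, keepdims=True) + eps)
    yn = yn / (np.linalg.norm(yn, axis=1, keepdims=True) + eps)
    d = 1.0 - xn @ yn.T                             # (N, M) cosine distances
    d_rel = d / (d.min(axis=1, keepdims=True) + eps)  # distances relative to each row's best match
    w = np.exp((1.0 - d_rel) / h)                   # affinities
    cx = w / w.sum(axis=1, keepdims=True)           # row-normalised contextual similarity
    score = np.mean(cx.max(axis=0))                 # average best match per target feature
    return -np.log(score + eps)
```

Identical feature sets give a score near 1 and hence a loss near 0; mismatched sets push the best-match affinities down and the loss up.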
Affiliation(s)
- Shipeng Xie
- College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China
- Yingjuan Liang
- College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China
- Tao Yang
- College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China
- Zhenrong Song
- College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China