1
Hu Y, Zhou H, Cao N, Li C, Hu C. Synthetic CT generation based on CBCT using improved vision transformer CycleGAN. Sci Rep 2024; 14:11455. PMID: 38769329; PMCID: PMC11106312; DOI: 10.1038/s41598-024-61492-7.
Abstract
Cone-beam computed tomography (CBCT) is a crucial component of adaptive radiation therapy; however, it frequently suffers from artifacts and noise, which significantly constrain its clinical utility. While CycleGAN is a widely employed method for CT image synthesis, it has notable limitations in capturing global features. To tackle these challenges, we introduce a refined unsupervised learning model called improved vision transformer CycleGAN (IViT-CycleGAN). First, we integrate a U-net framework that builds upon ViT. Next, we augment the feed-forward neural network by incorporating deep convolutional networks. Finally, we enhance the stability of model training by introducing a gradient penalty and integrating an additional loss term into the generator loss. Experiments demonstrate from multiple perspectives that the synthetic CT (sCT) generated by our model has significant advantages over other unsupervised learning models, validating the clinical applicability and robustness of our model. In future clinical practice, our model has the potential to assist clinicians in formulating precise radiotherapy plans.
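The abstract mentions a gradient penalty for training stability but does not spell out its form. A common realization is the WGAN-GP-style term, sketched below in PyTorch; the toy critic and random patches standing in for CBCT/CT data are illustrative assumptions, not the authors' implementation.

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP style penalty: push the critic's gradient norm toward 1
    at points interpolated between real and fake samples."""
    alpha = torch.rand(real.size(0), 1, 1, 1)              # per-sample mixing weight
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.reshape(grads.size(0), -1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()

# Toy critic over 1-channel 8x8 patches (shapes are illustrative)
torch.manual_seed(0)
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 1))
real = torch.randn(4, 1, 8, 8)   # stand-in for real CT patches
fake = torch.randn(4, 1, 8, 8)   # stand-in for generator output
gp = gradient_penalty(critic, real, fake)
```

The penalty is typically added to the discriminator loss with a weight (often 10); `create_graph=True` keeps the gradient differentiable so the penalty itself can be backpropagated.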
Affiliation(s)
- Yuxin Hu
- School of Computer and Software, Hohai University, Nanjing, 211100, China
- Han Zhou
- School of Electronic Science and Engineering, Nanjing University, Nanjing, 210046, China
- Department of Radiation Oncology, The Fourth Affiliated Hospital of Nanjing Medical University, Nanjing, 210013, China
- Ning Cao
- School of Computer and Software, Hohai University, Nanjing, 211100, China
- Can Li
- Engineering Research Center of TCM Intelligence Health Service, School of Artificial Intelligence and Information Technology, Nanjing University of Chinese Medicine, Nanjing, 210023, China
- Can Hu
- School of Computer and Software, Hohai University, Nanjing, 211100, China
2
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. PMID: 38601888; PMCID: PMC11004271; DOI: 10.3389/fradi.2024.1385742.
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT) generation. The following categories are presented in this study: MR-based treatment planning and synthetic CT generation techniques; generation of synthetic CT images from Cone Beam CT images; low-dose CT to high-dose CT generation; and attenuation correction for PET images. To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep-learning-based approaches to inter-modality and intra-modality image synthesis, contrasting the presented methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. Finally, the statistics of all cited works were analyzed from various aspects, revealing that DL-based sCT has achieved considerable popularity and demonstrating the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
3
Sun H, Yang Z, Zhu J, Li J, Gong J, Chen L, Wang Z, Yin Y, Ren G, Cai J, Zhao L. Pseudo-medical image-guided technology based on 'CBCT-only' mode in esophageal cancer radiotherapy. Comput Methods Programs Biomed 2024; 245:108007. PMID: 38241802; DOI: 10.1016/j.cmpb.2024.108007.
Abstract
Purpose: To minimize the various errors introduced by image-guided radiotherapy (IGRT) in esophageal cancer treatment, this study proposes a novel pseudo-medical image-guided technique based on a 'CBCT-only' mode.
Methods: The framework of this technology consists of two pseudo-medical image synthesis models, one in the CBCT→CT direction and one in the CT→PET direction. The former uses a dual-domain parallel deep learning model called AWM-PNet, which incorporates attention waning mechanisms; it effectively suppresses artifacts in CBCT images in both the sinogram and spatial domains while efficiently capturing important image features and contextual information. The latter leverages tumor location and shape information provided by clinical experts and introduces a PRAM-GAN model based on a prior region aware mechanism to establish a non-linear mapping between the CT and PET image domains, enabling the generation of pseudo-PET images that meet the clinical requirements for radiotherapy.
Results: The NRMSE and multi-scale SSIM (MS-SSIM) were used to evaluate the test set; results are presented as median values with lower and upper quartiles. For the AWM-PNet model, the NRMSE and MS-SSIM values were 0.0218 (0.0143, 0.0255) and 0.9325 (0.9141, 0.9410), respectively. The PRAM-GAN model produced NRMSE and MS-SSIM values of 0.0404 (0.0356, 0.0476) and 0.9154 (0.8971, 0.9294), respectively. Statistical analysis revealed significant differences (p < 0.05) between these models and the others. The dose metrics, including D98%, Dmean, and D2%, validated the accuracy of HU values in the pseudo-CT images synthesized by AWM-PNet. Furthermore, the Dice coefficient results confirmed statistically significant differences (p < 0.05) in GTV delineation between the pseudo-PET images synthesized by the PRAM-GAN model and those of the compared methods.
Conclusion: The AWM-PNet and PRAM-GAN models can generate accurate pseudo-CT and pseudo-PET images, respectively. The pseudo-image-guided technique based on the 'CBCT-only' mode shows promising prospects for application in esophageal cancer radiotherapy.
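The study reports NRMSE without stating its normalization; one common convention divides the RMSE by the reference intensity range, as in this minimal numpy sketch (the function name and normalization choice are assumptions, not the paper's definition):

```python
import numpy as np

def nrmse(reference, prediction):
    """Root-mean-square error normalized by the reference intensity range
    (one common convention; other normalizations use the mean or std)."""
    rmse = np.sqrt(np.mean((reference - prediction) ** 2))
    return rmse / (reference.max() - reference.min())

# Tiny illustrative patch: a constant 3-unit offset over a 0-300 range
ref = np.array([[0.0, 100.0], [200.0, 300.0]])
pred = ref + 3.0
err = nrmse(ref, pred)  # rmse = 3, range = 300 -> 0.01
```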
Affiliation(s)
- Hongfei Sun
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhi Yang
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jiarui Zhu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jie Li
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jie Gong
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Liting Chen
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhongfei Wang
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Yutian Yin
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Lina Zhao
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
4
Rusanov B, Hassan GM, Reynolds M, Sabet M, Rowshanfarzad P, Bucknell N, Gill S, Dass J, Ebert M. Transformer CycleGAN with uncertainty estimation for CBCT based synthetic CT in adaptive radiotherapy. Phys Med Biol 2024; 69:035014. PMID: 38198726; DOI: 10.1088/1361-6560/ad1cfc.
Abstract
Objective. Clinical implementation of synthetic CT (sCT) from cone-beam CT (CBCT) for adaptive radiotherapy necessitates a high degree of anatomical integrity, Hounsfield unit (HU) accuracy, and image quality. To achieve these goals, a vision transformer and anatomically sensitive loss functions are described. Better quantification of image quality is achieved using the alignment-invariant Fréchet inception distance (FID), and uncertainty estimation for sCT risk prediction is implemented in a scalable plug-and-play manner.
Approach. Baseline U-Net, generative adversarial network (GAN), and CycleGAN models were trained to identify shortcomings in each approach. The proposed CycleGAN-Best model was empirically optimized based on a large ablation study and evaluated using classical image quality metrics, FID, the gamma index, and a segmentation analysis. Two uncertainty estimation methods, Monte-Carlo Dropout (MCD) and test-time augmentation (TTA), were introduced to model epistemic and aleatoric uncertainty.
Main results. FID was correlated with blind observer image quality scores with a correlation coefficient of -0.83, validating the metric as an accurate quantifier of perceived image quality. The FID and mean absolute error (MAE) of CycleGAN-Best were 42.11 ± 5.99 and 25.00 ± 1.97 HU, compared to 63.42 ± 15.45 and 31.80 HU for CycleGAN-Baseline, and 144.32 ± 20.91 and 68.00 ± 5.06 HU for the CBCT, respectively. Gamma 1%/1 mm pass rates were 98.66 ± 0.54% for CycleGAN-Best, compared to 86.72 ± 2.55% for the CBCT. TTA- and MCD-based uncertainty maps were well spatially correlated with poor synthesis outputs.
Significance. Anatomical accuracy was achieved by suppressing CycleGAN-related artefacts. FID better discriminated image quality, whereas alignment-based metrics such as MAE erroneously suggest that poorer outputs perform better. Uncertainty estimation for sCT was shown to correlate with poor outputs and has clinical relevance for model risk assessment and quality assurance. The proposed model and accompanying evaluation and risk assessment tools are necessary additions for achieving clinically robust sCT generation models.
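Monte-Carlo Dropout, one of the two uncertainty estimators above, can be sketched in a few lines: keep dropout active at inference, run several stochastic forward passes, and treat the per-element standard deviation as an epistemic-uncertainty map. The toy model and shapes below are illustrative assumptions; a real sCT network would be convolutional.

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    """Monte-Carlo Dropout: average several stochastic forward passes;
    the per-element std of the predictions acts as an uncertainty map."""
    model.eval()
    for m in model.modules():                    # re-enable only the dropout
        if isinstance(m, torch.nn.Dropout):      # layers, leaving e.g. batch
            m.train()                            # norm in eval mode
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

# Toy stand-in for an sCT generator: 16 features -> 1 output value
torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(16, 16), torch.nn.ReLU(),
    torch.nn.Dropout(0.5), torch.nn.Linear(16, 1))
x = torch.randn(8, 16)
mean, uncertainty = mc_dropout_predict(model, x)
```

Because only the dropout modules are switched back to training mode, other stochastic or state-dependent layers stay deterministic, which is the usual "plug-and-play" way to retrofit MCD onto an existing model.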
Affiliation(s)
- Branimir Rusanov
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Center for Advanced Technologies in Cancer Research, Perth, Western Australia, Australia
- Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Mark Reynolds
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Mahsheed Sabet
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Center for Advanced Technologies in Cancer Research, Perth, Western Australia, Australia
- Pejman Rowshanfarzad
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Center for Advanced Technologies in Cancer Research, Perth, Western Australia, Australia
- Nicholas Bucknell
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Suki Gill
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Joshua Dass
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Martin Ebert
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Center for Advanced Technologies in Cancer Research, Perth, Western Australia, Australia
- Australian Centre for Quantitative Imaging, University of Western Australia, Perth, Western Australia, Australia
- School of Medicine and Public Health, University of Wisconsin, Madison, WI, United States of America
5
Boldrini L, D'Aviero A, De Felice F, Desideri I, Grassi R, Greco C, Iorio GC, Nardone V, Piras A, Salvestrini V. Artificial intelligence applied to image-guided radiation therapy (IGRT): a systematic review by the Young Group of the Italian Association of Radiotherapy and Clinical Oncology (yAIRO). Radiol Med 2024; 129:133-151. PMID: 37740838; DOI: 10.1007/s11547-023-01708-4.
Abstract
INTRODUCTION The advent of image-guided radiation therapy (IGRT) has changed the workflow of radiation treatments by ensuring highly collimated treatments. Artificial intelligence (AI) and radiomics are tools that have shown promising results for diagnosis, treatment optimization, and outcome prediction. This review aims to assess the impact of AI and radiomics on modern IGRT modalities in RT. METHODS A PubMed/MEDLINE and Embase systematic review was conducted to investigate the impact of radiomics and AI on modern IGRT modalities. The search strategy was "Radiomics" AND "Cone Beam Computed Tomography"; "Radiomics" AND "Magnetic Resonance guided Radiotherapy"; "Radiomics" AND "on board Magnetic Resonance Radiotherapy"; "Artificial Intelligence" AND "Cone Beam Computed Tomography"; "Artificial Intelligence" AND "Magnetic Resonance guided Radiotherapy"; "Artificial Intelligence" AND "on board Magnetic Resonance Radiotherapy", and only original articles up to 01.11.2022 were considered. RESULTS A total of 402 studies were retrieved using this search strategy on PubMed and Embase. The analysis was performed on the 84 papers obtained after the complete selection process. The application of radiomics to IGRT was analyzed in 23 papers, while a total of 61 papers focused on the impact of AI on IGRT techniques. DISCUSSION AI and radiomics appear to significantly impact IGRT in all phases of the RT workflow, even though the evidence in the literature is based on retrospective data. Further studies are needed to confirm these tools' potential and provide a stronger correlation with clinical outcomes and gold-standard treatment strategies.
Affiliation(s)
- Luca Boldrini
- UOC Radioterapia Oncologica, Fondazione Policlinico Universitario IRCCS "A. Gemelli", Rome, Italy
- Università Cattolica del Sacro Cuore, Rome, Italy
- Andrea D'Aviero
- Radiation Oncology, Mater Olbia Hospital, Olbia, Sassari, Italy
- Francesca De Felice
- Radiation Oncology, Department of Radiological, Oncological and Pathological Sciences, "Sapienza" University of Rome, Policlinico Umberto I, Rome, Italy
- Isacco Desideri
- Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Roberta Grassi
- Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Carlo Greco
- Department of Radiation Oncology, Università Campus Bio-Medico di Roma, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Valerio Nardone
- Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Antonio Piras
- UO Radioterapia Oncologica, Villa Santa Teresa, Bagheria, Palermo, Italy
- Viola Salvestrini
- Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Cyberknife Center, Istituto Fiorentino di Cura e Assistenza (IFCA), 50139, Florence, Italy
6
Pang B, Si H, Liu M, Fu W, Zeng Y, Liu H, Cao T, Chang Y, Quan H, Yang Z. Comparison and evaluation of different deep learning models of synthetic CT generation from CBCT for nasopharynx cancer adaptive proton therapy. Med Phys 2023; 50:6920-6930. PMID: 37800874; DOI: 10.1002/mp.16777.
Abstract
BACKGROUND Cone-beam computed tomography (CBCT) scanning is used for patient setup in image-guided radiotherapy. However, its inaccurate CT numbers limit its applicability in dose calculation and treatment planning. PURPOSE This study compares four deep learning methods for generating synthetic CT (sCT) to determine which method is most appropriate and offers potential for further clinical exploration in adaptive proton therapy for nasopharynx cancer. METHODS CBCTs and deformed planning CTs (dCT) from 75 patients (60/5/10 for training, validation, and testing) were used to compare the cycle-consistent Generative Adversarial Network (cycleGAN), Unet, Unet+cycleGAN, and conditional Generative Adversarial Network (cGAN) for sCT generation. The sCT images generated by each method were evaluated against dCT images using mean absolute error (MAE), structural similarity (SSIM), peak signal-to-noise ratio (PSNR), spatial non-uniformity (SNU), and radial averaging in the frequency domain. In addition, dosimetric accuracy was assessed through gamma analysis, differences in water equivalent thickness (WET), and dose-volume histogram metrics. RESULTS The cGAN model demonstrated the best performance among the four models across all indicators. In terms of global image quality, the average MAE was reduced to 16.39 HU, SSIM increased to 95.24%, and PSNR increased to 28.98. Regarding dosimetric accuracy, the gamma passing rate (2%/2 mm) reached 99.02%, and the WET difference was only 1.28 mm. The D95 values of CTV coverage and the Dmax values of the spinal cord and brainstem showed no significant differences between dCT and the sCT generated by the cGAN model. CONCLUSIONS The cGAN model is a more suitable approach for generating sCT from CBCT, considering its characteristics and concepts. The resulting sCT has potential for application in adaptive proton therapy.
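The image-quality metrics used above (MAE in HU and PSNR) are straightforward to compute; this numpy sketch shows the standard definitions on a toy patch, with the dCT as reference (array contents and the chosen data range are illustrative assumptions):

```python
import numpy as np

def mae_hu(ref, pred):
    """Mean absolute error in Hounsfield units."""
    return np.abs(ref - pred).mean()

def psnr(ref, pred, data_range):
    """Peak signal-to-noise ratio in dB for the given dynamic range."""
    mse = np.mean((ref - pred) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

dct = np.array([[0.0, 1000.0], [2000.0, 3000.0]])  # stand-in dCT patch
sct = dct + 10.0                                   # sCT off by 10 HU everywhere
m = mae_hu(dct, sct)                   # 10.0 HU
p = psnr(dct, sct, data_range=3000.0)  # 10*log10(3000^2 / 100) ≈ 49.54 dB
```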
Affiliation(s)
- Bo Pang
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Hang Si
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Muyu Liu
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Wensheng Fu
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yiling Zeng
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Hongyuan Liu
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ting Cao
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yu Chang
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hong Quan
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Zhiyong Yang
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
7
Gao L, Xie K, Sun J, Lin T, Sui J, Yang G, Ni X. A transformer-based dual-domain network for reconstructing FOV extended cone-beam CT images from truncated sinograms in radiation therapy. Comput Methods Programs Biomed 2023; 241:107767. PMID: 37633083; DOI: 10.1016/j.cmpb.2023.107767.
Abstract
BACKGROUND AND OBJECTIVE Cone-beam computed tomography (CBCT) is widely used in clinical radiotherapy, but its small field of view (sFOV) limits its application potential. In this study, a transformer-based dual-domain network (dual_swin), which combines image-domain restoration and sinogram-domain restoration, is proposed to reconstruct complete CBCT images with an extended FOV from truncated sinograms. METHODS Planning CT images with a large FOV (LFOV) from 330 patients who received radiation therapy were collected. Synthetic CBCT (sCBCT) images with LFOV were generated from the CT images by a trained cycleGAN network, and CBCT images with sFOV were obtained through forward projection, projection truncation, and filtered back projection (FBP), comprising the training and test data. The proposed dual_swin includes sinogram-domain restoration, image-domain restoration, and an FBP layer; swin transformer blocks were used as the basic feature extraction module to improve the global feature extraction ability. The proposed dual_swin was compared with an image-domain method, a sinogram-domain method, a U-Net based dual-domain network (dual_Unet), and a traditional iterative reconstruction method based on a prior image and conjugate gradient least-squares (CGLS) on sCBCT images and clinical CBCT images. The HU accuracy and body contour accuracy of the images predicted by each method were evaluated. RESULTS The images generated by the CGLS method were blurry and had the lowest structural similarity (SSIM) among all methods on both sCBCT and clinical CBCT images. The images predicted by the image-domain methods differed considerably from the ground truth and had low accuracy in HU value and body contour. Compared with the image-domain methods, the sinogram-domain methods improved the accuracy of HU value and body contour but introduced secondary artifacts and distorted bone tissue. The proposed dual_swin achieved the highest HU and contour accuracy, with a mean absolute error (MAE) of 23.0 HU, SSIM of 95.7%, Dice similarity coefficient (DSC) of 99.6%, and Hausdorff distance (HD) of 4.1 mm on the sCBCT images. On clinical patients, images predicted by dual_swin yielded MAE, SSIM, DSC, and HD of 38.2 HU, 91.7%, 99.0%, and 5.4 mm, respectively. The images predicted by the proposed dual_swin had significantly higher accuracy than those of the other methods (P < 0.05). CONCLUSIONS The proposed dual_swin can accurately reconstruct FOV-extended CBCT images from truncated sinograms, improving the application potential of CBCT images in radiotherapy.
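The body-contour metric reported above, the Dice similarity coefficient, compares two binary masks; a minimal numpy sketch (the toy masks are illustrative, not patient contours):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Two 4x4 squares on an 8x8 grid, shifted by one row relative to each other
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16 pixels
b = np.zeros((8, 8), dtype=bool); b[3:7, 2:6] = True   # 16 pixels
d = dice(a, b)  # overlap is 12 pixels -> 2*12/(16+16) = 0.75
```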
Affiliation(s)
- Liugang Gao
- School of Computer Science and Engineering, Southeast University, Nanjing, China; The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Kai Xie
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Jiawei Sun
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Tao Lin
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Jianfeng Sui
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Guanyu Yang
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- Xinye Ni
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
8
Liu Y, Chen A, Li Y, Lai H, Huang S, Yang X. CT synthesis from CBCT using a sequence-aware contrastive generative network. Comput Med Imaging Graph 2023; 109:102300. PMID: 37776676; DOI: 10.1016/j.compmedimag.2023.102300.
Abstract
Computed tomography (CT) synthesis from cone-beam computed tomography (CBCT) is a key step in adaptive radiotherapy: a synthetic CT is used to calculate the dose so that the radiotherapy plan can be corrected and adjusted in a timely manner. The cycle-consistent adversarial network (CycleGAN) is commonly used in CT synthesis tasks, but it has some defects: (a) the cycle consistency loss presumes that the conversion between domains is bijective, yet the CBCT-to-CT conversion does not fully satisfy a bijective relationship, and (b) it does not exploit the complementary information among multiple sets of CBCTs for the same patient. To address these problems, we propose a novel framework named the sequence-aware contrastive generative network (SCGN), which introduces an attention sequence fusion module to improve CBCT quality. In addition, it not only applies contrastive learning to the generative adversarial networks (GANs) to pay more attention to the anatomical structure of CBCT during feature extraction but also uses a new generator to improve the accuracy of anatomical details. Experimental results on our datasets show that our method significantly outperforms existing unsupervised CT synthesis methods.
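The contrastive term used in image-synthesis GANs of this kind is typically an InfoNCE loss over patch features (as in CUT/PatchNCE): each source patch should match the target patch at the same location against the other locations. The SCGN paper's exact formulation is not given in the abstract, so the sketch below is a generic PatchNCE-style loss with illustrative feature shapes and temperature:

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_tgt, tau=0.07):
    """InfoNCE over patch embeddings: the positive for source patch i is
    target patch i; all other target patches act as negatives."""
    f_s = F.normalize(feat_src, dim=1)      # (N, C) unit-norm embeddings
    f_t = F.normalize(feat_tgt, dim=1)
    logits = f_s @ f_t.t() / tau            # (N, N) cosine similarities
    labels = torch.arange(f_s.size(0))      # positives on the diagonal
    return F.cross_entropy(logits, labels)

torch.manual_seed(0)
src = torch.randn(32, 64)                   # 32 patch embeddings, 64-dim
loss = patch_nce_loss(src, src.clone())     # identical features -> low loss
```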
Affiliation(s)
- Yanxia Liu
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Anni Chen
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Yuhong Li
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Haoyu Lai
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Sijuan Huang
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Esophageal Cancer Institute, Guangzhou, Guangdong 510060, China
- Xin Yang
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Esophageal Cancer Institute, Guangzhou, Guangdong 510060, China
9
Yoganathan S, Aouadi S, Ahmed S, Paloor S, Torfeh T, Al-Hammadi N, Hammoud R. Generating synthetic images from cone beam computed tomography using self-attention residual UNet for head and neck radiotherapy. Phys Imaging Radiat Oncol 2023; 28:100512. PMID: 38111501; PMCID: PMC10726231; DOI: 10.1016/j.phro.2023.100512.
Abstract
Background and purpose: Accurate CT numbers in cone-beam CT (CBCT) are crucial for precise dose calculations in adaptive radiotherapy (ART). This study aimed to generate synthetic CT (sCT) from CBCT using deep learning (DL) models in head and neck (HN) radiotherapy. Materials and methods: A novel DL model, the self-attention residual UNet (ResUNet), was developed for accurate sCT generation. ResUNet incorporates a self-attention mechanism in its long skip connections to enhance information transfer between the encoder and decoder. Data from 93 HN patients, each with planning CT (pCT) and first-day CBCT images, were used. Model performance was evaluated using two DL approaches (non-adversarial and adversarial training) and two model types (2D axial only vs. 2.5D axial, sagittal, and coronal). ResUNet was compared with the traditional UNet through image quality assessment (mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM)) and dose calculation accuracy evaluation (DVH deviation and gamma evaluation (1%/1 mm)). Results: Image similarity results for the 2.5D-ResUNet and 2.5D-UNet models were: MAE 46±7 HU vs. 51±9 HU, PSNR 66.6±2.0 dB vs. 65.8±1.8 dB, and SSIM 0.81±0.04 vs. 0.79±0.05. There were no significant differences in dose calculation accuracy between the DL models; both demonstrated DVH deviation below 0.5% and a gamma pass rate (1%/1 mm) exceeding 97%. Conclusions: ResUNet improved the CT number accuracy and image quality of sCT and outperformed UNet in sCT generation from CBCT. This method holds promise for generating precise sCT for HN ART.
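The gamma evaluation used here (and in several entries above) combines a dose-difference tolerance with a distance-to-agreement tolerance. The brute-force 2D sketch below illustrates the idea for a 1%/1 mm global criterion; it omits interpolation and the low-dose threshold that clinical tools apply, so it is a conceptual sketch rather than a clinical implementation:

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dd=0.01, dta_mm=1.0):
    """Global gamma: a reference point passes if some nearby evaluated
    point satisfies sqrt((Δdose/ΔD)^2 + (dist/Δd)^2) <= 1."""
    tol_d = dd * ref.max()                      # global dose tolerance
    ny, nx = ref.shape
    search = int(np.ceil(dta_mm / spacing_mm))  # pixel search radius
    passed = 0
    for i in range(ny):
        for j in range(nx):
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        dist2 = (di ** 2 + dj ** 2) * spacing_mm ** 2
                        dd2 = (evl[ii, jj] - ref[i, j]) ** 2
                        best = min(best, dd2 / tol_d ** 2 + dist2 / dta_mm ** 2)
            passed += best <= 1.0
    return passed / ref.size

ref = np.full((10, 10), 100.0)   # toy uniform dose plane
evl = ref.copy()
rate = gamma_pass_rate(ref, evl, spacing_mm=1.0)  # identical doses -> 1.0
```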
Affiliation(s)
- S.A. Yoganathan
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Souha Aouadi
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Sharib Ahmed
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Satheesh Paloor
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Tarraf Torfeh
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Noora Al-Hammadi
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Rabih Hammoud
- Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
10
|
Li Z, Zhang Q, Li H, Kong L, Wang H, Liang B, Chen M, Qin X, Yin Y, Li Z. Using RegGAN to generate synthetic CT images from CBCT images acquired with different linear accelerators. BMC Cancer 2023; 23:828. [PMID: 37670252] [PMCID: PMC10478281] [DOI: 10.1186/s12885-023-11274-7]
Abstract
BACKGROUND The goal was to investigate the feasibility of the registration generative adversarial network (RegGAN) model for image conversion in adaptive radiation therapy of the head and neck, and its stability across different cone beam computed tomography (CBCT) models. METHODS A total of 100 paired CBCT and CT images of patients diagnosed with head and neck tumors were utilized for the training phase, whereas the testing phase involved 40 distinct patients imaged on four different linear accelerators. The RegGAN model was trained and tested to evaluate its performance. The generated synthetic CT (sCT) image quality was compared to that of planning CT (pCT) images using metrics such as the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). Moreover, the radiation therapy plan was uniformly applied to both the sCT and pCT images to analyze planning target volume (PTV) dose statistics and calculate the dose difference rate, verifying the model's accuracy. RESULTS The generated sCT images had good image quality, and no significant differences were observed among the different CBCT modes. The conversion effect was best for Synergy: the MAE decreased from 231.3 ± 55.48 to 45.63 ± 10.78, the PSNR increased from 19.40 ± 1.46 to 26.75 ± 1.32, and the SSIM increased from 0.82 ± 0.02 to 0.85 ± 0.04. The quality improvement achieved by RegGAN-based sCT synthesis was clear, and no significant synthesis differences were observed among the different accelerators. CONCLUSION The sCT images generated by the RegGAN model had high image quality, and the model exhibited a strong generalization ability across different accelerators, enabling its outputs to be used as reference images for performing adaptive radiation therapy on the head and neck.
Affiliation(s)
- Zhenkai Li: Chengdu University of Technology, Chengdu, China; Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Haodong Li, Huadong Wang, Benzhe Liang, Mingming Chen, Xiaohang Qin, Yong Yin, Zhenjiang Li: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Lingke Kong: Manteia Technologies Co., Ltd., Xiamen, China

11
Aouadi S, Yoganathan SA, Torfeh T, Paloor S, Caparrotti P, Hammoud R, Al-Hammadi N. Generation of synthetic CT from CBCT using deep learning approaches for head and neck cancer patients. Biomed Phys Eng Express 2023; 9:055020. [PMID: 37489854] [DOI: 10.1088/2057-1976/acea27]
Abstract
Purpose. To create a synthetic CT (sCT) from daily CBCT using either a deep residual U-Net (DRUnet) or a conditional generative adversarial network (cGAN) for adaptive radiotherapy planning (ART). Methods. First-fraction CBCT and planning CT (pCT) were collected from 93 head and neck patients who underwent external beam radiotherapy. The dataset was divided into training, validation, and test sets of 58, 10 and 25 patients, respectively. Three methods were used to generate sCT: (1) a nonlocal means patch-based method modified to include multiscale patches, defining the multiscale patch-based method (MPBM); (2) an encoder-decoder 2D Unet with imbricated deep residual units; (3) DRUnet integrated as the generator of a cGAN, with a convolutional PatchGAN classifier as the discriminator. The accuracy of sCT was evaluated geometrically using the mean absolute error (MAE). Clinical volumetric modulated arc therapy (VMAT) plans were copied from pCT to registered CBCT and sCT, and dosimetric analysis was performed by comparing dose-volume histogram (DVH) parameters of planning target volumes (PTVs) and organs at risk (OARs). Furthermore, 3D gamma analysis (2%/2 mm, global) between the dose on the sCT or CBCT and that on the pCT was performed. Results. The average MAE between pCT and CBCT was 180.82 ± 27.37 HU. Overall, all approaches significantly reduced the uncertainties in CBCT. Deep learning approaches outperformed patch-based methods, with MAE = 67.88 ± 8.39 HU (DRUnet) and MAE = 72.52 ± 8.43 HU (cGAN) compared with MAE = 90.69 ± 14.3 HU (MPBM). DVH metric deviations were below 0.55% for PTVs and 1.17% for OARs using DRUnet. The average gamma pass rate was 99.45 ± 1.86% for sCT generated using DRUnet. Conclusion. DL approaches outperformed MPBM. Specifically, DRUnet could be used to generate sCT with accurate intensities and a realistic description of patient anatomy, which could be beneficial for CBCT-based ART.
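The gamma analysis used for dosimetric validation in these studies combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. Below is a minimal 2D global-gamma sketch using a brute-force neighbourhood search; clinical tools performing the 2%/2 mm analysis above add dose interpolation and a full 3D search, so this is illustrative only.

```python
import numpy as np

def gamma_pass_rate(ref, eva, spacing=1.0, dd_pct=2.0, dta_mm=2.0,
                    cutoff_pct=10.0):
    """Simplified 2D global gamma: for every reference pixel above the
    low-dose cutoff, search a small neighbourhood of the evaluated dose
    for the minimum combined dose-difference/DTA metric."""
    dd = dd_pct / 100.0 * ref.max()             # global dose criterion
    reach = int(np.ceil(dta_mm / spacing)) + 1  # search radius in pixels
    ny, nx = ref.shape
    gammas = []
    for iy in range(ny):
        for ix in range(nx):
            if ref[iy, ix] < cutoff_pct / 100.0 * ref.max():
                continue                        # skip low-dose region
            best = np.inf
            for dy in range(-reach, reach + 1):
                for dx in range(-reach, reach + 1):
                    jy, jx = iy + dy, ix + dx
                    if not (0 <= jy < ny and 0 <= jx < nx):
                        continue
                    dist2 = ((dy * spacing) ** 2 + (dx * spacing) ** 2) / dta_mm ** 2
                    dose2 = (eva[jy, jx] - ref[iy, ix]) ** 2 / dd ** 2
                    best = min(best, dist2 + dose2)
            gammas.append(np.sqrt(best))
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)
```

A point passes when its gamma value is at most 1; the pass rate is the percentage of evaluated points that pass.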
Affiliation(s)
- Souha Aouadi, S A Yoganathan, Tarraf Torfeh, Satheesh Paloor, Palmira Caparrotti, Rabih Hammoud, Noora Al-Hammadi: Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar

12
Jassim H, Nedaei HA, Geraily G, Banaee N, Kazemian A. The geometric and dosimetric accuracy of kilovoltage cone beam computed tomography images for adaptive treatment: a systematic review. BJR Open 2023; 5:20220062. [PMID: 37389008] [PMCID: PMC10301728] [DOI: 10.1259/bjro.20220062]
Abstract
Objectives To provide an overview and meta-analysis of different techniques adopted to make kVCBCT suitable for dose calculation and automated segmentation. Methods A systematic review and meta-analysis were performed on eligible studies demonstrating kVCBCT-based dose calculation and automated contouring of different tumor features. The meta-analysis was performed on the reported γ analysis and dice similarity coefficient (DSC) scores, grouped into three subgroups (head and neck, chest, and abdomen). Results After screening the literature (n = 1008), 52 papers were included in the systematic review. Nine dosimetric studies and eleven geometric-analysis studies were suitable for inclusion in the meta-analysis. The suitability of kVCBCT for treatment replanning depends on the method used. Deformable image registration (DIR) methods yielded small dosimetric errors (≤2%), high γ pass rates (≥90%) and high DSC (≥0.8). Hounsfield unit (HU) override and calibration curve-based methods also yielded small dosimetric errors (≤2%) and high γ pass rates (≥90%), but they are prone to error owing to their sensitivity to vendor-specific variation in kVCBCT image quality. Conclusions Studies with large patient cohorts are needed to validate methods achieving low levels of dosimetric and geometric error. Quality guidelines should be established when reporting on kVCBCT, including agreed metrics for reporting the quality of corrected kVCBCT and protocols for new site-specific standardized imaging used when obtaining kVCBCT images for adaptive radiotherapy. Advances in knowledge This review gives useful knowledge about methods making kVCBCT feasible for kVCBCT-based adaptive radiotherapy, simplifying the patient pathway and reducing concomitant imaging dose to the patient.
Affiliation(s)
- Nooshin Banaee: Medical Radiation Research Center, Islamic Azad University, Tehran, Iran

13
Szmul A, Taylor S, Lim P, Cantwell J, Moreira I, Zhang Y, D'Souza D, Moinuddin S, Gaze MN, Gains J, Veiga C. Deep learning based synthetic CT from cone beam CT generation for abdominal paediatric radiotherapy. Phys Med Biol 2023; 68:105006. [PMID: 36996837] [PMCID: PMC10160738] [DOI: 10.1088/1361-6560/acc921]
Abstract
Objective. Adaptive radiotherapy workflows require images with the quality of computed tomography (CT) for re-calculation and re-optimisation of radiation doses. In this work we aim to improve the quality of on-board cone beam CT (CBCT) images for dose calculation using deep learning. Approach. We propose a novel framework for CBCT-to-CT synthesis using cycle-consistent Generative Adversarial Networks (cycleGANs). The framework was tailored for paediatric abdominal patients, a challenging application due to the inter-fractional variability in bowel filling and small patient numbers. We introduced to the networks the concept of global residuals only learning and modified the cycleGAN loss function to explicitly promote structural consistency between source and synthetic images. Finally, to compensate for the anatomical variability and address the difficulties in collecting large datasets in the paediatric population, we applied a smart 2D slice selection based on the common field-of-view (abdomen) to our imaging dataset. This acted as a weakly paired data approach that allowed us to take advantage of scans from patients treated for a variety of malignancies (thoracic-abdominal-pelvic) for training purposes. We first optimised the proposed framework and benchmarked its performance on a development dataset. Later, a comprehensive quantitative evaluation was performed on an unseen dataset, which included calculating global image similarity metrics, segmentation-based measures and proton therapy-specific metrics. Main results. We found improved performance for our proposed method, compared to a baseline cycleGAN implementation, on image-similarity metrics such as Mean Absolute Error calculated for a matched virtual CT (55.0 ± 16.6 HU proposed versus 58.9 ± 16.8 HU baseline). There was also a higher level of structural agreement for gastrointestinal gas between source and synthetic images, measured using the dice similarity coefficient (0.872 ± 0.053 proposed versus 0.846 ± 0.052 baseline). Differences found in water-equivalent thickness metrics were also smaller for our method (3.3 ± 2.4% proposed versus 3.7 ± 2.8% baseline). Significance. Our findings indicate that our innovations to the cycleGAN framework improved the quality and structure consistency of the synthetic CTs generated.
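A cycleGAN objective extended with global-residual-only generators and a structural-consistency term can be written schematically as below. The λ weights and the exact form of the structure term are assumptions for illustration, not the paper's precise formulation:

```latex
\begin{aligned}
G(x) &= x + R_G(x), \qquad F(y) = y + R_F(y) \quad \text{(global residual learning)}\\
\mathcal{L}_{\mathrm{cyc}} &= \mathbb{E}_x \left\| F(G(x)) - x \right\|_1
  + \mathbb{E}_y \left\| G(F(y)) - y \right\|_1\\
\mathcal{L}_{\mathrm{total}} &= \mathcal{L}_{\mathrm{GAN}}(G, D_{\mathrm{CT}})
  + \mathcal{L}_{\mathrm{GAN}}(F, D_{\mathrm{CBCT}})
  + \lambda_{\mathrm{cyc}}\,\mathcal{L}_{\mathrm{cyc}}
  + \lambda_{\mathrm{struct}}\,\mathcal{L}_{\mathrm{struct}}\!\left(x, G(x)\right)
\end{aligned}
```

Here x is a source CBCT slice, y a CT slice, and the generators learn only the residual correction on top of the input, which biases the synthesis toward preserving the source anatomy.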
Affiliation(s)
- Adam Szmul: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Sabrina Taylor, Isabel Moreira: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Pei Lim, Mark N. Gaze, Jennifer Gains: Department of Oncology, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Jessica Cantwell, Syed Moinuddin: Radiotherapy, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Ying Zhang: Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Derek D'Souza: Radiotherapy Physics Services, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Catarina Veiga: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom

14
Yang X, Wu J, Chen X. Application of Artificial Intelligence to the Diagnosis and Therapy of Nasopharyngeal Carcinoma. J Clin Med 2023; 12:jcm12093077. [PMID: 37176518] [PMCID: PMC10178972] [DOI: 10.3390/jcm12093077]
Abstract
Artificial intelligence (AI) is an interdisciplinary field that encompasses a wide range of computer science disciplines, including image recognition, machine learning, human-computer interaction and robotics. Recently, AI, especially deep learning algorithms, has shown excellent performance in image recognition, automatically performing quantitative evaluation of complex medical image features to improve diagnostic accuracy and efficiency. AI is applied ever more widely and deeply in medical diagnosis, treatment and prognosis. Nasopharyngeal carcinoma (NPC) occurs frequently in southern China and Southeast Asian countries and is the most common head and neck cancer in the region. Detecting and treating NPC early is crucial for a good prognosis. This paper describes the basic concepts of AI, including traditional machine learning and deep learning algorithms, and their clinical applications in detecting and assessing NPC lesions, facilitating treatment and predicting prognosis. The main limitations of current AI technologies are briefly described, including interpretability issues, privacy and security, and the need for large amounts of annotated data. Finally, we discuss the remaining challenges and the promising future of using AI to diagnose and treat NPC.
Affiliation(s)
- Xinggang Yang: Division of Biotherapy, Cancer Center, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
- Juan Wu: Out-Patient Department, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
- Xiyang Chen: Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China

15
Jihong C, Kerun Q, Kaiqiang C, Xiuchun Z, Yimin Z, Penggang B. CBCT-based synthetic CT generated using CycleGAN with HU correction for adaptive radiotherapy of nasopharyngeal carcinoma. Sci Rep 2023; 13:6624. [PMID: 37095147] [PMCID: PMC10125979] [DOI: 10.1038/s41598-023-33472-w]
Abstract
This study aims to utilize a hybrid approach of phantom correction and deep learning to generate synthesized CT (sCT) images from cone-beam CT (CBCT) images for nasopharyngeal carcinoma (NPC). 52 paired CBCT/CT images of NPC patients were used for model training (41) and validation (11). Hounsfield units (HU) of the CBCT images were calibrated using a commercially available CIRS phantom. Then the original CBCT and the corrected CBCT (CBCT_cor) were trained separately with the same cycle generative adversarial network (CycleGAN) to generate sCT1 and sCT2. The mean error and mean absolute error (MAE) were used to quantify image quality. For validation, the contours and treatment plans in the CT images were transferred to the original CBCT, CBCT_cor, sCT1 and sCT2 for dosimetric comparison. Dose distribution, dosimetric parameters and the 3D gamma passing rate were analyzed. Compared with rigidly registered CT (RCT), the MAE of CBCT, CBCT_cor, sCT1 and sCT2 was 346.11 ± 13.58 HU, 145.95 ± 17.64 HU, 105.62 ± 16.08 HU and 83.51 ± 7.71 HU, respectively. Moreover, the average dosimetric parameter differences for CBCT_cor, sCT1 and sCT2 were 2.7% ± 1.4%, 1.2% ± 1.0% and 0.6% ± 0.6%, respectively. Using the dose distribution of the RCT images as reference, the 3D gamma passing rate of the hybrid method was significantly better than that of the other methods. The effectiveness of CBCT-based sCT generated using CycleGAN with HU correction for adaptive radiotherapy of nasopharyngeal carcinoma was confirmed. The image quality and dose accuracy of sCT2 outperformed those of the simple CycleGAN method. This finding has great significance for the clinical application of adaptive radiotherapy for NPC.
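The phantom-based HU correction step can be sketched as a piecewise-linear lookup built from phantom inserts of known density. The insert values below are illustrative placeholders, not the actual specification of a CIRS phantom:

```python
import numpy as np

def hu_calibration(cbct, phantom_cbct_vals, phantom_ct_vals):
    """Map raw CBCT intensities to calibrated HU using a piecewise-linear
    curve: each phantom insert contributes one (measured CBCT value,
    reference CT HU) pair, and np.interp interpolates between them."""
    order = np.argsort(phantom_cbct_vals)
    measured = np.asarray(phantom_cbct_vals, dtype=float)[order]
    reference = np.asarray(phantom_ct_vals, dtype=float)[order]
    return np.interp(cbct, measured, reference)

# Illustrative insert readings: mean CBCT intensity vs. reference HU.
cbct_vals = [-850.0, -120.0, 0.0, 90.0, 280.0, 850.0]
ct_vals = [-1000.0, -200.0, 0.0, 120.0, 400.0, 1200.0]
```

Applying `hu_calibration(cbct_image, cbct_vals, ct_vals)` voxel-wise yields a CBCT_cor volume whose intensities better approximate true HU before it is passed to the CycleGAN.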
Affiliation(s)
- Chen Jihong, Chen Kaiqiang, Zhang Xiuchun, Bai Penggang: Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, 350014, Fujian, China
- Quan Kerun: Department of Radiation Oncology, Xiangtan City Central Hospital, Xiangtan, 411100, Hunan, China
- Zhou Yimin: School of Nuclear Science and Technology, University of South China, Hengyang, 421001, China

16
Deng L, Ji Y, Huang S, Yang X, Wang J. Synthetic CT generation from CBCT using double-chain-CycleGAN. Comput Biol Med 2023; 161:106889. [DOI: 10.1016/j.compbiomed.2023.106889]
17
Xie K, Gao L, Xi Q, Zhang H, Zhang S, Zhang F, Sun J, Lin T, Sui J, Ni X. New technique and application of truncated CBCT processing in adaptive radiotherapy for breast cancer. Comput Methods Programs Biomed 2023; 231:107393. [PMID: 36739623] [DOI: 10.1016/j.cmpb.2023.107393]
Abstract
OBJECTIVE A generative adversarial network (TCBCTNet) was proposed to generate synthetic computed tomography (sCT) from truncated low-dose cone-beam computed tomography (CBCT) and planning CT (pCT). The sCT was applied to the dose calculation of radiotherapy for patients with breast cancer. METHODS The low-dose CBCT and pCT images of 80 female thoracic patients were used for training. The CBCT, pCT, and replanning CT (rCT) images of 20 thoracic patients and 20 patients with breast cancer were used for testing. All patients were fixed in the same posture with a vacuum pad. The CBCT images were scanned under the Fast Chest M20 protocol with a 50% reduction in projection frames compared with the standard Chest M20 protocol. Rigid registration was performed between pCT and CBCT, and deformable registration was performed between rCT and CBCT. In the training stage of TCBCTNet, truncated CBCT images obtained by simulation from complete CBCT images were used. The input of the CBCT→CT generator was truncated CBCT and pCT, and TCBCTNet was applied to patients with breast cancer after training. The accuracy of the sCT was evaluated anatomically and dosimetrically, and compared with generative adversarial networks using UNet and ResNet as the generators (named UnetGAN and ResGAN). RESULTS All three models could improve the image quality of CBCT and reduce scattering artifacts while preserving the anatomical geometry of CBCT. For the chest test set, TCBCTNet achieved the best mean absolute error (MAE, 21.18±3.76 HU), better than 23.06±3.90 HU for UnetGAN and 22.47±3.57 HU for ResGAN. When applied to patients with breast cancer, TCBCTNet performance decreased, with an MAE of 25.34±6.09 HU. Compared with rCT, sCT from TCBCTNet showed consistent dose distributions and subtle absolute dose differences between the target and the organs at risk. The 3D gamma pass rates were 98.98%±0.64% and 99.69%±0.22% at 2 mm/2% and 3 mm/3%, respectively. Ablation experiments confirmed that pCT and the content loss played important roles in TCBCTNet. CONCLUSIONS High-quality sCT images could be synthesized from truncated low-dose CBCT and pCT using the proposed TCBCTNet model. In addition, sCT could be used to accurately calculate the dose distribution for patients with breast cancer.
Affiliation(s)
- Kai Xie, Liugang Gao, Jiawei Sun, Tao Lin, Jianfeng Sui: Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Qianyi Xi, Heng Zhang, Sai Zhang, Fan Zhang: Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Xinye Ni: Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China

18
Hasan Z, Key S, Habib AR, Wong E, Aweidah L, Kumar A, Sacks R, Singh N. Convolutional Neural Networks in ENT Radiology: Systematic Review of the Literature. Ann Otol Rhinol Laryngol 2023; 132:417-430. [PMID: 35651308] [DOI: 10.1177/00034894221095899]
Abstract
INTRODUCTION Convolutional neural networks (CNNs) represent a state-of-the-art methodological technique in AI and deep learning, and were specifically created for image classification and computer vision tasks. CNNs have been applied in radiology in a number of different disciplines, mostly outside otolaryngology, potentially due to a lack of familiarity with this technology within the otolaryngology community. CNNs have the potential to revolutionize clinical practice by reducing the time required to perform manual tasks. This literature search aims to present a comprehensive systematic review of the published literature with regard to CNNs and their utility to date in ENT radiology. METHODS Data were extracted from a variety of databases including PubMed, ProQuest, MEDLINE, Open Knowledge Maps, and Gale OneFile Computer Science. Medical subject headings (MeSH) terms and keywords were used to extract related literature from each database's inception to October 2020. Inclusion criteria were studies in which CNNs were the main intervention and the radiology focus was relevant to ENT. Titles and abstracts were reviewed, followed by their contents. Once the final list of articles was obtained, their reference lists were also searched to identify further articles. RESULTS Thirty articles were identified for inclusion in this study. Studies utilizing CNNs in most ENT subspecialties were identified. Studies utilized CNNs for a number of tasks including identification of structures, presence of pathology, and segmentation of tumors for radiotherapy planning. All studies reported a high degree of accuracy of CNNs in performing the chosen task. CONCLUSION This study provides a better understanding of CNN methodology used in ENT radiology, demonstrating a myriad of potential uses for this exciting technology, including nodule and tumor identification, identification of anatomical variation, and segmentation of tumors. It is anticipated that this field will continue to evolve and that these technologies and methodologies will become more entrenched in everyday practice.
Affiliation(s)
- Zubair Hasan: Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
- Seraphina Key: Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, VIC, Australia
- Al-Rahim Habib: Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Princess Alexandra Hospital, Woolloongabba, QLD, Australia
- Eugene Wong: Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
- Layal Aweidah: Faculty of Medicine, University of Notre Dame, Darlinghurst, NSW, Australia
- Ashnil Kumar: School of Biomedical Engineering, Faculty of Engineering, University of Sydney, Darlington, NSW, Australia
- Raymond Sacks: Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Concord Hospital, Concord, NSW, Australia
- Narinder Singh: Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia

19
Joseph J, Biji I, Babu N, Pournami PN, Jayaraj PB, Puzhakkal N, Sabu C, Patel V. Fan beam CT image synthesis from cone beam CT image using nested residual UNet based conditional generative adversarial network. Phys Eng Sci Med 2023; 46:703-717. [PMID: 36943626] [DOI: 10.1007/s13246-023-01244-5]
Abstract
A radiotherapy technique called Image-Guided Radiation Therapy adopts frequent imaging throughout a treatment session. Fan Beam Computed Tomography (FBCT) based planning followed by Cone Beam Computed Tomography (CBCT) based radiation delivery drastically improved treatment accuracy. Further reductions in radiation exposure and cost could be achieved if FBCT were replaced with CBCT. This paper proposes a Conditional Generative Adversarial Network (CGAN) for CBCT-to-FBCT synthesis. Specifically, a new architecture called Nested Residual UNet (NR-UNet) is introduced as the generator of the CGAN. A composite loss function, which comprises adversarial loss, Mean Squared Error (MSE), and Gradient Difference Loss (GDL), is used with the generator. The CGAN utilises the inter-slice dependency in the input by taking three consecutive CBCT slices to generate an FBCT slice. The model is trained using Head-and-Neck (H&N) FBCT-CBCT images of 53 cancer patients. The synthetic images exhibited a Peak Signal-to-Noise Ratio of 34.04±0.93 dB, a Structural Similarity Index Measure of 0.9751±0.001 and a Mean Absolute Error of 14.81±4.70 HU. On average, the proposed model guarantees a Contrast-to-Noise Ratio four times better than that of the input CBCT images. The model also minimised the MSE and alleviated blurriness. Compared to the CBCT-based plan, the synthetic image results in a treatment plan closer to the FBCT-based plan. The three-slice to single-slice translation captures the three-dimensional contextual information in the input while withstanding the computational complexity associated with a three-dimensional image synthesis model. Furthermore, the results demonstrate that the proposed model is superior to the state-of-the-art methods.
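A composite generator loss of the kind described above (adversarial + MSE + Gradient Difference Loss) can be sketched as follows. The GDL formulation and the loss weights are common choices assumed for illustration and may differ from the paper's exact definitions:

```python
import numpy as np

def gradient_difference_loss(pred, target):
    """Gradient Difference Loss: penalises mismatch between the magnitudes
    of adjacent-pixel differences in each direction, encouraging the
    prediction to reproduce the target's edges (reduces blurring)."""
    gx_p = np.abs(np.diff(pred, axis=1))
    gx_t = np.abs(np.diff(target, axis=1))
    gy_p = np.abs(np.diff(pred, axis=0))
    gy_t = np.abs(np.diff(target, axis=0))
    return np.mean((gx_p - gx_t) ** 2) + np.mean((gy_p - gy_t) ** 2)

def composite_loss(pred, target, adv, w_mse=1.0, w_gdl=1.0, w_adv=0.01):
    """Composite generator loss: weighted sum of adversarial term,
    pixel-wise MSE, and GDL (weights are illustrative assumptions)."""
    mse = np.mean((pred - target) ** 2)
    return w_adv * adv + w_mse * mse + w_gdl * gradient_difference_loss(pred, target)
```

MSE alone tends to produce over-smoothed images; the GDL term explicitly compares image gradients, which is why such composite losses are reported to alleviate blurriness.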
Collapse
Affiliation(s)
- Jiffy Joseph
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India.
| | - Ivan Biji
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - Naveen Babu
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - P N Pournami
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - P B Jayaraj
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - Niyas Puzhakkal
- Department of Medical Physics, MVR Cancer Centre & Research Institute, Poolacode, Calicut, Kerala, 673601, India
| | - Christy Sabu
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| | - Vedkumar Patel
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
| |
Collapse
|
20
|
Gao L, Xie K, Sun J, Lin T, Sui J, Yang G, Ni X. Streaking artifact reduction for CBCT-based synthetic CT generation in adaptive radiotherapy. Med Phys 2023; 50:879-893. [PMID: 36183234 DOI: 10.1002/mp.16017] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 09/02/2022] [Accepted: 09/25/2022] [Indexed: 11/07/2022] Open
Abstract
BACKGROUND Cone-beam computed tomography (CBCT) is widely used for daily image guidance in radiation therapy, enhancing the reproducibility of patient setup. However, its application in adaptive radiotherapy (ART) is limited by many imaging artifacts and inaccurate Hounsfield units (HUs). Correction of CBCT images is therefore necessary and of great value for CBCT-based ART. PURPOSE To explore synthetic CT (sCT) generation from CBCT images of thorax and abdomen patients, which usually suffer from serious artifacts due to organ state changes. In this study, a streaking artifact reduction network (SARN) is proposed to reduce artifacts and is combined with cycleGAN to generate high-quality sCT images from CBCT and achieve accurate dose calculation. METHODS The proposed SARN was trained in a self-supervised manner. Artifact-CT images were generated from planning CT by random deformation and projection replacement, and SARN was trained on paired artifact-CT and CT images. The planning CT and CBCT images of 260 patients with cancer, including 120 thoracic and 140 abdominal CT scans, were used to train and evaluate the neural networks. The CBCT images of another 12 patients in late treatment fractions, which contained large anatomy changes, were also tested with the trained models. The trained models included the commonly used U-Net, cycleGAN, attention-gated cycleGAN (cycAT), and cascade models combining SARN with cycleGAN or cycAT. The generated sCT images were compared in terms of image quality and dose calculation accuracy. RESULTS The sCT images generated by SARN combined with cycleGAN or cycAT showed the best image quality, removed the most artifacts, and retained the normal anatomical structure. SARN+cycleGAN performed best in streaking artifact removal, with the maximum percent integrity uniformity (PIUm) of 91.0% and minimum standard deviation (SD) of 35.4 HU in delineated artifact regions among all models.
The mean absolute errors (MAE) of the CBCT images in the thorax and abdomen were 71.6 and 55.2 HU, respectively, using planning CT images after deformable registration as ground truth. Compared with CBCT, the thoracic and abdominal sCT images generated by each model had significantly improved image quality with smaller MAE (p < 0.05). SARN+cycAT obtained the minimum MAE of 42.5 HU in the thorax, while SARN+cycleGAN obtained the minimum MAE of 32.0 HU in the abdomen. The sCT generated by U-Net had remarkably lower anatomical structure accuracy than the other models. The thoracic and abdominal sCT images generated by SARN+cycleGAN showed optimal dose calculation accuracy, with gamma passing rates (2 mm/2%) of 98.2% and 96.9%, respectively. CONCLUSIONS The proposed SARN can reduce serious streaking artifacts in CBCT images. SARN combined with cycleGAN can generate high-quality sCT images with fewer artifacts, high-accuracy HU values, and accurate anatomical structures, thus providing reliable dose calculation in ART.
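Gamma passing rates such as those quoted above combine a dose-difference tolerance with a distance-to-agreement tolerance. A brute-force sketch on a 1-D dose profile (the ramp profile, global normalisation to the reference maximum, and 10% low-dose threshold are illustrative assumptions; clinical gamma tools work on 3-D grids with interpolation):

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm, dta_mm=2.0, dd_frac=0.02,
                    low_dose_cutoff=0.1):
    """Global gamma analysis on a 1-D dose profile.

    ref, eval_ : reference and evaluated dose arrays on the same grid
    spacing_mm : grid spacing in mm
    dd_frac    : dose-difference criterion relative to the reference maximum
    """
    dd_tol = dd_frac * ref.max()
    positions = np.arange(len(ref)) * spacing_mm
    passed, evaluated = 0, 0
    for i, d_ref in enumerate(ref):
        if d_ref < low_dose_cutoff * ref.max():
            continue  # skip points below the low-dose threshold
        evaluated += 1
        # gamma = min over all evaluated points of the combined metric
        dist = (positions - positions[i]) / dta_mm
        dose = (eval_ - d_ref) / dd_tol
        if np.sqrt(dist ** 2 + dose ** 2).min() <= 1.0:
            passed += 1
    return 100.0 * passed / evaluated

ref = np.linspace(0.0, 2.0, 101)             # reference dose ramp, 1 mm grid
shifted = np.roll(ref, 1); shifted[0] = 0.0  # ~1 mm spatial shift
print(gamma_pass_rate(ref, shifted, spacing_mm=1.0))  # 100.0
```

A 1 mm shift passes a 2 mm/2% test everywhere, while a 50% global overdose would not; the same search over neighbouring points generalises directly to 2-D and 3-D dose grids.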
Collapse
Affiliation(s)
- Liugang Gao
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
| | - Kai Xie
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
| | - Jiawei Sun
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
| | - Tao Lin
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
| | - Jianfeng Sui
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
| | - Guanyu Yang
- School of Computer Science and Engineering, Southeast University, Nanjing, China
| | - Xinye Ni
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
| |
Collapse
|
21
|
Gong H, Liu B, Zhang G, Dai X, Qu B, Cai B, Xie C, Xu S. Evaluation of Dose Calculation Based on Cone-Beam CT Using Different Measuring Correction Methods for Head and Neck Cancer Patients. Technol Cancer Res Treat 2023; 22:15330338221148317. [PMID: 36638542 PMCID: PMC9841465 DOI: 10.1177/15330338221148317] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023] Open
Abstract
Purpose: To investigate and compare 2 cone-beam computed tomography (CBCT) correction methods for CBCT-based dose calculation. Materials and Methods: Routine CBCT image sets of 12 head and neck cancer patients who received volumetric modulated arc therapy (VMAT) were retrospectively analyzed. The CBCT images obtained with an on-board imager (OBI) at the first treatment fraction were first deformably registered to and padded with the kVCT images to provide enough anatomical information about the tissues for dose calculation. Two CBCT correction methods were then developed and applied to correct the CBCT Hounsfield unit (HU) values: one method (the HD method) is based on a protocol-specific CBCT HU to physical density (HD) curve, and the other (the HM method) is based on histogram matching (HM) of HU values. The corrected CBCT images (CBCTHD and CBCTHM for the HD and HM methods) were imported into the original planning system for dose calculation based on the HD curve of the kVCT (the planning CT). The dose computation results were analyzed to compare the 2 CBCT correction methods. Results: Dosimetric parameters such as the Dmean, Dmax and D5% of the target volume in the CBCT plan doses were higher than those in the kVCT plan doses; however, the deviations were less than 2%. For D2% in parallel organs such as the parotid glands, the deviations of the CBCTHM plan dose were smaller than those of the CBCTHD plan dose, and the differences were statistically significant (P < .05). Meanwhile, the V30 value based on the HM method was better than that based on the HD method in the oral cavity region (P = .016). In addition, we also compared the γ passing rates of the kVCT plan doses with the 2 CBCT plan doses, and negligible differences were found. Conclusion: The HM method was more suitable for head and neck cancer patients than the HD method; with the CBCTHM-based method, the dose calculation result better matches the kVCT-based dose calculation.
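The HD method above rests on a protocol-specific HU-to-physical-density calibration curve. A minimal sketch of such a lookup via piecewise-linear interpolation (the calibration points below are hypothetical placeholders, not the study's curve):

```python
import numpy as np

# Hypothetical HU -> physical density (g/cm^3) calibration points,
# of the kind measured on a density phantom; NOT the values from the study.
HU_POINTS      = np.array([-1000.0, -700.0, 0.0, 200.0, 1000.0, 3000.0])
DENSITY_POINTS = np.array([0.001,    0.3,   1.0, 1.1,   1.6,    2.8])

def hu_to_density(hu):
    """Piecewise-linear lookup of physical density from HU,
    clamped to the ends of the calibration table."""
    return np.interp(hu, HU_POINTS, DENSITY_POINTS)

print(hu_to_density(0.0))    # 1.0 (water)
print(hu_to_density(100.0))  # ≈ 1.05, midway along the 0->200 HU segment
```

A treatment planning system applies exactly this kind of table voxel-by-voxel; the correction methods compared in the entry differ in how they make CBCT HU values trustworthy inputs to it.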
Collapse
Affiliation(s)
- Hanshun Gong
- Department of Radiation Oncology, The First Medical Center of PLA General Hospital, Beijing, China
| | - Bo Liu
- School of Astronautics, Beihang University, Beijing, China
| | - Gaolong Zhang
- School of Physics, Beihang University, Beijing, China
| | - Xiangkun Dai
- Department of Radiation Oncology, The First Medical Center of PLA General Hospital, Beijing, China
| | - Baolin Qu
- Department of Radiation Oncology, The First Medical Center of PLA General Hospital, Beijing, China
| | - Boning Cai
- Department of Radiation Oncology, The First Medical Center of PLA General Hospital, Beijing, China
| | - Chuanbin Xie
- Department of Radiation Oncology, The First Medical Center of PLA General Hospital, Beijing, China
| | - Shouping Xu
- Department of Radiation Oncology, National Cancer Center/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- National Cancer Center/National Clinical Research Center for Cancer/Hebei Cancer Hospital, Chinese Academy of Medical Sciences, Langfang, China
| |
Collapse
|
22
|
Chang Y, Liang Y, Yang B, Qiu J, Pei X, Xu XG. Dosimetric comparison of deformable image registration and synthetic CT generation based on CBCT images for organs at risk in cervical cancer radiotherapy. Radiat Oncol 2023; 18:3. [PMID: 36604687 PMCID: PMC9817400 DOI: 10.1186/s13014-022-02191-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Accepted: 12/27/2022] [Indexed: 01/07/2023] Open
Abstract
OBJECTIVE Anatomical variations in cervical cancer radiotherapy can be monitored with cone-beam computed tomography (CBCT) images. Deformable image registration (DIR) from planning CT (pCT) to CBCT images and synthetic CT (sCT) generation based on CBCT are two methods for improving the quality of CBCT images. This study aims to compare the accuracy of these two approaches geometrically and dosimetrically in cervical cancer radiotherapy. METHODS In this study, 40 paired pCT-CBCT image sets were collected to evaluate the accuracy of DIR and sCT generation. The DIR method was based on a 3D multistage registration network trained with 150 paired pCT-CBCT images, and the sCT generation method was based on a 2D cycle-consistent adversarial network (CycleGAN) trained with 6000 paired pCT-CBCT slices. The doses were then recalculated on the CBCT, pCT, deformed pCT (dpCT) and sCT images with a GPU-based Monte Carlo dose code, ArcherQA, to obtain DoseCBCT, DosepCT, DosedpCT and DosesCT. Organs at risk (OARs) included the small intestine, rectum, bladder, spinal cord, femoral heads and bone marrow; CBCT and pCT contours were delineated manually, dpCT contours were propagated through deformation vector fields, and sCT contours were auto-segmented and corrected manually. RESULTS The global gamma pass rate of DosesCT against DosedpCT was 99.66% ± 0.34%, while that of DoseCBCT against DosedpCT was 85.92% ± 7.56% at the 1%/1 mm criterion with a low-dose threshold of 10%. Taking DosedpCT as the common dose distribution, the dpCT and sCT contours had comparable errors in the femoral heads and bone marrow relative to the CBCT contours, while the sCT contours had lower errors in the small intestine, rectum, bladder and spinal cord, especially for organs with a large volume difference between pCT and CBCT.
CONCLUSIONS For cervical cancer radiotherapy, the DIR method and sCT generation could produce similar precise dose distributions, but sCT contours had higher accuracy when the difference in planning CT and CBCT was large.
Collapse
Affiliation(s)
- Yankui Chang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
| | - Yongguang Liang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
| | - Bo Yang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
| | - Jie Qiu
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
| | - Xi Pei
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Technology Development Department, Anhui Wisdom Technology Co., Ltd., Hefei, China
| | - Xie George Xu
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Department of Radiation Oncology, First Affiliated Hospital of University of Science and Technology of China, Hefei, China
| |
Collapse
|
23
|
Hamming VC, Andersson S, Maduro JH, Langendijk JA, Both S, Sijtsema NM. Daily dose evaluation based on corrected CBCTs for breast cancer patients: accuracy of dose and complication risk assessment. Radiat Oncol 2022; 17:205. [PMID: 36510254 PMCID: PMC9746176 DOI: 10.1186/s13014-022-02174-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Accepted: 11/30/2022] [Indexed: 12/14/2022] Open
Abstract
OBJECTIVES The goal of this study is to validate different CBCT correction methods and select the superior method for dose evaluation in breast cancer patients with large anatomical changes treated with photon irradiation. MATERIALS AND METHODS Seventy-six breast cancer patients treated with a partial VMAT photon technique (70% conformal, 30% VMAT) were included in this study. All patients showed at least a 5 mm variation (swelling or shrinkage) of the breast on the CBCT compared to the planning-CT (pCT) and had a repeat-CT (rCT) for dose evaluation acquired within 3 days of this CBCT. The original CBCT was corrected using four methods: (1) HU-override correction (CBCTHU), (2) analytical correction and conversion (CBCTCC), (3) deep learning (DL) correction (CTDL) and (4) virtual correction (CTv). Image quality evaluation consisted of calculating the mean absolute error (MAE) and mean error (ME) within the whole breast clinical target volume (CTV) and the field of view of the CBCT minus 2 cm (CBCT-ROI) with respect to the rCT. The dose was calculated on all image sets using the clinical treatment plan for dose and gamma passing rate analysis. RESULTS The MAE of the CBCT-ROI was below 66 HU for all corrected CBCTs, except for the CBCTHU with a MAE of 142 HU. No significant dose differences were observed in the CTV regions for the CBCTCC, CTDL and CTv. Only the CBCTHU deviated significantly (p < 0.01), resulting in an average dose deviation of 1.7% (± 1.1%). Gamma passing rates were > 95% at 2%/2 mm for all corrected CBCTs. CONCLUSION The analytical correction and conversion, deep learning correction and virtual correction methods can be applied for accurate CBCT correction suitable for dose evaluation during the course of photon radiotherapy of breast cancer patients.
Collapse
Affiliation(s)
- Vincent C. Hamming
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | | | - John H. Maduro
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Johannes A. Langendijk
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Stefan Both
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Nanna M. Sijtsema
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| |
Collapse
|
24
|
Cao Z, Gao X, Chang Y, Liu G, Pei Y. A novel approach for eliminating metal artifacts based on MVCBCT and CycleGAN. Front Oncol 2022; 12:1024160. [PMID: 36439465 PMCID: PMC9686009 DOI: 10.3389/fonc.2022.1024160] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2022] [Accepted: 10/27/2022] [Indexed: 08/15/2023] Open
Abstract
PURPOSE To develop a metal artifact reduction (MAR) algorithm and eliminate the adverse effects of metal artifacts on imaging diagnosis and radiotherapy dose calculations. METHODS A cycle-consistent adversarial network (CycleGAN) was used to generate synthetic CT (sCT) images from megavoltage cone beam CT (MVCBCT) images. In this study, there were 140 head cases with paired CT and MVCBCT images, of which 97 metal-free cases were used for training. Based on the trained model, metal-free sCT (sCT_MF) images and metal-containing sCT (sCT_M) images were generated from the MVCBCT images of 29 metal-free cases and 14 metal cases, respectively. The sCT_MF and sCT_M images were then quantitatively evaluated for imaging and dosimetric accuracy. RESULTS The structural similarity (SSIM) index between the sCT_MF and metal-free CT (CT_MF) images was 0.9484, and the peak signal-to-noise ratio (PSNR) was 31.4 dB. Compared with the CT images, the sCT_MF images had similar relative electron density (RED) and dose distributions, and their gamma pass rate (1 mm/1%) reached 97.99% ± 1.14%. The sCT_M images had high tissue resolution with no metal artifacts, and the RED distribution accuracy in the range of 1.003 to 1.056 was improved significantly. The RED and dose corrections were most significant for the planning target volume (PTV), mandible and oral cavity. The maximum correction of Dmean and D50 for the oral cavity reached 90 cGy. CONCLUSIONS Accurate sCT_M images were generated from MVCBCT images based on CycleGAN, which completely eliminated the metal artifacts in clinical images and accurately corrected the RED and dose distributions for clinical application.
Collapse
Affiliation(s)
- Zheng Cao
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
- Hematology and Oncology Department, Hefei First People’s Hospital, Hefei, China
| | - Xiang Gao
- Hematology and Oncology Department, Hefei First People’s Hospital, Hefei, China
| | - Yankui Chang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
| | - Gongfa Liu
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
| | - Yuanji Pei
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
| |
Collapse
|
25
|
O'Hara CJ, Bird D, Al-Qaisieh B, Speight R. Assessment of CBCT-based synthetic CT generation accuracy for adaptive radiotherapy planning. J Appl Clin Med Phys 2022; 23:e13737. [PMID: 36200179 DOI: 10.1002/acm2.13737] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Revised: 05/26/2022] [Accepted: 07/04/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Cone-beam CT (CBCT)-based synthetic CT (sCT) dose calculation has the potential to make the adaptive radiotherapy (ART) pathway more efficient while removing subjectivity. This study assessed four sCT generation methods using 15 head-and-neck rescanned ART patients. Each patient's planning CT (pCT), rescan CT (rCT), and post-rCT CBCT were acquired, with the CBCT deformably registered to the rCT (dCBCT). METHODS The four methods investigated were as follows: method 1, deformably registering the pCT to the dCBCT; method 2, assigning six mass density values to the dCBCT; method 3, iteratively removing artifacts and correcting the dCBCT Hounsfield units (HU); method 4, using a cycle-consistent generative adversarial network machine learning model (trained with 45 paired pCT and CBCT scans). Treatment plans were created on the rCT and recalculated on each sCT. Planning target volume (PTV) and organ-at-risk (OAR) structures were contoured by clinicians on the rCT (high-dose PTV, low-dose PTV, spinal canal, larynx, brainstem, and parotids) to allow the assessment of dose-volume histogram statistics at clinically relevant points. RESULTS The HU mean absolute error (MAE) and minimum dose gamma index pass rate (2%/2 mm) were calculated, and the generation time was measured for 15 patients using the rCT as the comparator. For methods 1-4 the MAE, gamma index pass rate, and generation time were as follows: 59.7 HU, 100.0%, and 143 s; 164.2 HU, 95.2%, and 232 s; 75.7 HU, 99.9%, and 153 s; and 79.4 HU, 99.8%, and 112 s, respectively. Dose differences for PTVs and OARs were all <0.3 Gy except for method 2 (<0.5 Gy). CONCLUSION All methods were considered clinically viable. The machine learning method was found to be most suitable for clinical implementation due to its high dosimetric accuracy and short generation time. Further investigation is required for larger anatomical changes between the CBCT and pCT and for other anatomical sites.
Collapse
Affiliation(s)
| | - David Bird
- Leeds Cancer Centre, Leeds Teaching Hospitals NHS Trust, Leeds, UK
| | | | - Richard Speight
- Leeds Cancer Centre, Leeds Teaching Hospitals NHS Trust, Leeds, UK
| |
Collapse
|
26
|
Chen X, Liu Y, Yang B, Zhu J, Yuan S, Xie X, Liu Y, Dai J, Men K. A more effective CT synthesizer using transformers for cone-beam CT-guided adaptive radiotherapy. Front Oncol 2022; 12:988800. [PMID: 36091131 PMCID: PMC9454309 DOI: 10.3389/fonc.2022.988800] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Accepted: 07/27/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose: The challenge of cone-beam computed tomography (CBCT) is its low image quality, which limits its application for adaptive radiotherapy (ART). Despite recent substantial improvement in CBCT imaging using deep learning methods, the image quality still needs to be improved for effective ART application. Spurred by the advantages of transformers, which employ multi-head attention mechanisms to capture long-range contextual relations between image pixels, we proposed a novel transformer-based network (called TransCBCT) to generate synthetic CT (sCT) from CBCT. This study aimed to further improve the accuracy and efficiency of ART. Materials and methods: In this study, 91 patients diagnosed with prostate cancer were enrolled. We constructed a transformer-based hierarchical encoder-decoder structure with skip connections, called TransCBCT. The network also employed several convolutional layers to capture local context. The proposed TransCBCT was trained and validated on 6,144 paired CBCT/deformed CT images from 76 patients and tested on 1,026 paired images from 15 patients. The performance of the proposed TransCBCT was compared with a widely recognized style-transfer deep learning method, the cycle-consistent adversarial network (CycleGAN). We evaluated the image quality and clinical value (application in auto-segmentation and dose calculation) for ART needs. Results: TransCBCT had superior performance in generating sCT from CBCT. The mean absolute error of TransCBCT was 28.8 ± 16.7 HU, compared to 66.5 ± 13.2 for raw CBCT and 34.3 ± 17.3 for CycleGAN. It can preserve the structure of raw CBCT and reduce artifacts. When applied in auto-segmentation, the Dice similarity coefficients of the bladder and rectum between auto-segmentation and oncologist manual contours were 0.92 and 0.84 for TransCBCT, respectively, compared to 0.90 and 0.83 for CycleGAN. When applied in dose calculation, the gamma passing rate (1%/1 mm criterion) was 97.5% ± 1.1% for TransCBCT, compared to 96.9% ± 1.8% for CycleGAN. Conclusions: The proposed TransCBCT can effectively generate sCT from CBCT. It has the potential to improve radiotherapy accuracy.
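The Dice similarity coefficients used above to compare auto-segmented and manual contours reduce to an overlap ratio on binary masks, 2|A∩B| / (|A| + |B|). A minimal sketch on toy masks (illustrative data, not from the study):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|); defined as 1.0 for two empty masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

# Toy masks: two partially overlapping 4x4 squares on a 10x10 grid.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True  # 16 voxels
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True  # 16 voxels
print(dice(a, b))  # 2*4 / (16+16) = 0.25
```

In practice the masks are 3-D voxel volumes rasterised from the contours, but the coefficient is computed exactly this way.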
Collapse
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- National Cancer Center/National Clinical Research Center for Cancer/Hebei Cancer Hospital, Chinese Academy of Medical Sciences, Langfang, China
| | - Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- School of Physics and Technology, Wuhan University, Wuhan, China
| | - Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Siqi Yuan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Xuejie Xie
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Yueping Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- *Correspondence: Kuo Men,
| |
Collapse
|
27
|
Deng L, Zhang M, Wang J, Huang S, Yang X. Improving cone-beam CT quality using a cycle-residual connection with a dilated convolution-consistent generative adversarial network. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7b0a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 06/21/2022] [Indexed: 11/11/2022]
Abstract
Objective. Cone-beam CT (CBCT) often results in severe image artifacts and inaccurate HU values, meaning that poor-quality CBCT images cannot be directly applied to dose calculation in radiotherapy. To overcome this, we propose a cycle-residual connection with a dilated convolution-consistent generative adversarial network (Cycle-RCDC-GAN). Approach. The cycle-consistent generative adversarial network (Cycle-GAN) was modified using dilated convolutions with different expansion rates to extract richer semantic features from input images. Thirty pelvic patients were used to investigate synthetic CT (sCT) generation from CBCT, and 55 head and neck patients were used to explore the generalizability of the model. Three generalizability experiments were performed and compared: the pelvis-trained model applied to the head and neck, the head-and-neck-trained model applied to the pelvis, and the two datasets trained together. Main results. The mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and spatial nonuniformity (SNU) assessed the quality of the sCT generated from CBCT. Compared with CBCT images, the MAE improved from 28.81 to 18.48, RMSE from 85.66 to 69.50, SNU from 0.34 to 0.30, and PSNR from 31.61 to 33.07, while SSIM improved from 0.981 to 0.989. The sCT objective indicators of Cycle-RCDC-GAN were better than Cycle-GAN's, and the objective metrics for generalizability were also better than Cycle-GAN's. Significance. Cycle-RCDC-GAN enhances CBCT image quality and has better generalizability than Cycle-GAN, which further promotes the application of CBCT in radiotherapy.
Collapse
|
28
|
Rusanov B, Hassan GM, Reynolds M, Sabet M, Kendrick J, Farzad PR, Ebert M. Deep learning methods for enhancing cone-beam CT image quality towards adaptive radiation therapy: A systematic review. Med Phys 2022; 49:6019-6054. [PMID: 35789489 PMCID: PMC9543319 DOI: 10.1002/mp.15840] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 05/21/2022] [Accepted: 06/16/2022] [Indexed: 11/11/2022] Open
Abstract
The use of deep learning (DL) to improve cone-beam CT (CBCT) image quality has gained popularity as computational resources and algorithmic sophistication have advanced in tandem. CBCT imaging has the potential to facilitate online adaptive radiation therapy (ART) by utilizing up-to-date patient anatomy to modify treatment parameters before irradiation. Poor CBCT image quality has been an impediment to realizing ART due to the increased scatter conditions inherent to cone-beam acquisitions. Given the recent interest in DL applications in radiation oncology, and specifically DL for CBCT correction, we provide a systematic theoretical and literature review for future stakeholders. The review encompasses DL approaches for synthetic CT generation, as well as projection-domain methods employed in the CBCT correction literature. We review trends pertaining to publications from January 2018 to April 2022 and condense their major findings, with emphasis on study design and deep learning techniques. Clinically relevant endpoints relating to image quality and dosimetric accuracy are summarised, highlighting gaps in the literature. Finally, we make recommendations for both clinicians and DL practitioners based on literature trends and the current state-of-the-art DL methods utilized in radiation oncology.
Collapse
Affiliation(s)
- Branimir Rusanov
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia.,Department of Radiation Oncology, Sir Chairles Gairdner Hospital, Perth, Western Australia, 6009, Australia
| | - Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
| | - Mark Reynolds
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
| | - Mahsheed Sabet
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia; Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
| | - Jake Kendrick
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia; Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
| | - Pejman Rowshan Farzad
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia; Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
| | - Martin Ebert
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia; Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
|
29
|
Sun H, Xi Q, Sun J, Fan R, Xie K, Ni X, Yang J. Research on new treatment mode of radiotherapy based on pseudo-medical images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106932. [PMID: 35671601 DOI: 10.1016/j.cmpb.2022.106932] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/02/2022] [Revised: 04/20/2022] [Accepted: 06/01/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Multi-modal medical images with multiple feature information are beneficial for radiotherapy. A new radiotherapy treatment mode based on a triangle generative adversarial network (TGAN) model was proposed to synthesize pseudo-medical images between multi-modal datasets. METHODS CBCT, MRI and CT images of 80 patients with nasopharyngeal carcinoma were selected. The TGAN model based on a multi-scale discriminant network was used for data training between different image domains. The generator of the TGAN model refers to cGAN and CycleGAN, and only one generation network can establish the non-linear mapping relationship between multiple image domains. The discriminator used a multi-scale discrimination network to guide the generator to synthesize pseudo-medical images that are similar to real images from both shallow and deep aspects. The accuracy of the pseudo-medical images was verified in anatomy and dosimetry. RESULTS In the three synthetic directions, namely, CBCT → CT, CBCT → MRI, and MRI → CT, significant differences (p < 0.05) were found in the three-fold cross-validation results on PSNR and SSIM metrics between the pseudo-medical images obtained based on TGAN and the real images. In the testing stage, for TGAN, the MAE metric results in the three synthesis directions (CBCT → CT, CBCT → MRI, and MRI → CT), presented as mean (standard deviation), were 68.67 (5.83), 83.14 (8.48), and 79.96 (7.59), and the NMI metric results were 0.8643 (0.0253), 0.8051 (0.0268), and 0.8146 (0.0267), respectively. In terms of dose verification, the differences in dose distribution between the pseudo-CT obtained by TGAN and the real CT were minimal. The H values of the measurement results of dose uncertainty in PGTV, PGTVnd, PTV1, and PTV2 were 42.510, 43.121, 17.054, and 7.795, respectively (P < 0.05); the differences were statistically significant. The gamma pass rate (2%/2 mm) of the pseudo-CT obtained by the new model was 94.94% (0.73%), and the numerical results were better than those of the three other comparison models. CONCLUSIONS The pseudo-medical images acquired based on TGAN were close to the real images in anatomy and dosimetry. The pseudo-medical images synthesized by the TGAN model have good application prospects in clinical adaptive radiotherapy.
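The NMI figures quoted in this abstract can be illustrated with a short NumPy sketch. The paper does not state which NMI variant it used, so the common definition NMI = (H(A) + H(B)) / H(A, B) and the 64-bin joint histogram below are assumptions, not the authors' exact setup:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B), one common definition.

    a, b: arrays of equal shape (e.g. image slices). Values range from
    1.0 (independent) to 2.0 (identical up to binning).
    """
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()        # joint distribution
    px = pxy.sum(axis=1)           # marginal of A
    py = pxy.sum(axis=0)           # marginal of B

    def entropy(p):
        p = p[p > 0]               # drop empty bins before the log
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy)
```

Identical inputs give NMI = 2.0, since the joint histogram collapses onto its diagonal and H(A, B) = H(A) = H(B).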
Affiliation(s)
- Hongfei Sun
- School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China.
| | - Qianyi Xi
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China.
| | - Jiawei Sun
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China.
| | - Rongbo Fan
- School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China.
| | - Kai Xie
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China.
| | - Xinye Ni
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China.
| | - Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China.
|
30
|
Li Y, Wei Z, Liu Z, Teng J, Chang Y, Xie Q, Zhang L, Shi J, Chen L. Quantifying the dosimetric effects of neck contour changes and setup errors on the spinal cord in patients with nasopharyngeal carcinoma: establishing a rapid estimation method. JOURNAL OF RADIATION RESEARCH 2022; 63:443-451. [PMID: 35373827 PMCID: PMC9124625 DOI: 10.1093/jrr/rrac009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Revised: 12/09/2021] [Indexed: 06/14/2023]
Abstract
The purpose of this study was to quantify the effect of neck contour changes and setup errors on spinal cord (SC) doses during the treatment of nasopharyngeal carcinoma (NPC) and to establish a rapid dose estimation method. The setup errors and contour changes in 60 cone-beam computed tomography (CBCT) images of 10 NPC patients were analysed in different regions of the neck (C1-C3, C4-C5 and C6-C7). The actual delivered dose to the SC was calculated using the CBCT images, univariate simulations were performed using the planning CT to evaluate the dose effects of each factor, and an index, Dmax_displaced, was introduced to estimate the SC dose. Compared with the planned dose, the mean (maximum) Dmax increases in the C1-C3, C4-C5 and C6-C7 regions of the SC were 2.1% (12.3%), 1.8% (8.2%) and 2.5% (9.2%), respectively. The simulation results showed that the effects of setup error in the C1-C3, C4-C5 and C6-C7 regions were 1.5% (9.7%), 0.9% (8.2%) and 1.3% (6.3%), respectively, and the effects of contour change were 0.4% (1.7%), 0.7% (2.5%) and 1.5% (4.9%), respectively. The linear regression model can be used to estimate the dose effect of contour changes (R2 > 0.975) and setup errors (R2 = 0.989). Setup errors may lead to a significant increase in the SC dose in some patients. This study established a rapid dose estimation method, which is of great significance for daily dose evaluation of the SC and for triggering adaptive re-planning.
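The linear-regression estimation described above (dose effect as a linear function of the geometric change, with R² reported as the goodness of fit) can be sketched in a few lines of NumPy. The numbers below are hypothetical illustration data, not the study's measurements:

```python
import numpy as np

# Hypothetical example: setup-error magnitude (mm) vs. relative increase
# in spinal-cord Dmax (%). The paper's patient data are not public.
setup_error_mm = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
dmax_increase_pct = np.array([0.1, 1.4, 3.1, 4.4, 6.2, 7.4])

# Least-squares fit: Dmax% = slope * error + intercept.
slope, intercept = np.polyfit(setup_error_mm, dmax_increase_pct, 1)
predicted = slope * setup_error_mm + intercept

# Coefficient of determination R^2.
ss_res = np.sum((dmax_increase_pct - predicted) ** 2)
ss_tot = np.sum((dmax_increase_pct - dmax_increase_pct.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

Once fitted, the model turns a daily setup-error measurement into an immediate Dmax estimate without recomputing the full dose, which is the "rapid estimation" idea of the study.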
Affiliation(s)
- Yinghui Li
- State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Sun Yat-Sen University of Medical Sciences, Guangzhou, 510060, Guangdong, China
- Physics Department of the Radiotherapy Department, The First People’s Hospital of FoShan (Affiliated FoShan Hospital of Sun Yat-sen University), Foshan, 528000, Guangdong, China
| | - Zhanfu Wei
- Radiotherapy Center of the Oncology Medical Center, The First People’s Hospital of ZhaoQing, Zhaoqing, 526000, Guangdong, China
| | - Zhibin Liu
- Physics Department of the Radiotherapy Department, The First People’s Hospital of FoShan (Affiliated FoShan Hospital of Sun Yat-sen University), Foshan, 528000, Guangdong, China
| | - Jianjian Teng
- Physics Department of the Radiotherapy Department, The First People’s Hospital of FoShan (Affiliated FoShan Hospital of Sun Yat-sen University), Foshan, 528000, Guangdong, China
| | - Yuanzhi Chang
- Physics Department of the Radiotherapy Department, The First People’s Hospital of FoShan (Affiliated FoShan Hospital of Sun Yat-sen University), Foshan, 528000, Guangdong, China
| | - Qiuying Xie
- Physics Department of the Radiotherapy Department, The First People’s Hospital of FoShan (Affiliated FoShan Hospital of Sun Yat-sen University), Foshan, 528000, Guangdong, China
| | - Liwen Zhang
- Physics Department of the Radiotherapy Department, The First People’s Hospital of FoShan (Affiliated FoShan Hospital of Sun Yat-sen University), Foshan, 528000, Guangdong, China
| | - Jinping Shi
- Corresponding author. Sun Yat-sen University State Key Laboratory of Oncology in South China, No. 651, Dongfeng Road East, Guangzhou, 510060, Guangdong, China. E-mail: ; The First People's Hospital of FoShan, No. 81, North Lingnan Avenue, Chancheng District, Foshan, 528000, Guangdong, China. E-mail:
| | - Lixin Chen
- Corresponding author. Sun Yat-sen University State Key Laboratory of Oncology in South China, No. 651, Dongfeng Road East, Guangzhou, 510060, Guangdong, China. E-mail: ; The First People's Hospital of FoShan, No. 81, North Lingnan Avenue, Chancheng District, Foshan, 528000, Guangdong, China. E-mail:
|
31
|
Deng L, Hu J, Wang J, Huang S, Yang X. Synthetic CT generation based on CBCT using respath-cycleGAN. Med Phys 2022; 49:5317-5329. [PMID: 35488299 DOI: 10.1002/mp.15684] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Revised: 04/08/2022] [Accepted: 04/13/2022] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Cone-beam computed tomography (CBCT) plays an important role in radiotherapy, but the presence of a large number of artifacts limits its application. The purpose of this study was to use respath-cycleGAN to synthesize CT (sCT) similar to planning CT (pCT) from CBCT for future clinical practice. METHODS The method integrates the respath concept into the original cycleGAN, called respath-cycleGAN, to map CBCT to pCT. Thirty patients were used for training, and 15 for testing. RESULTS The mean absolute error (MAE), root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and spatial non-uniformity (SNU) were calculated to assess the quality of sCT generated from CBCT. Compared with CBCT images, the MAE improved from 197.72 to 140.7, RMSE from 339.17 to 266.51, and PSNR from 22.07 to 24.44, while SSIM increased from 0.948 to 0.964. Both visually and quantitatively, sCT with respath is superior to sCT without respath. We also performed a generalization test of the head-and-neck (H&N) model on a pelvic dataset. The results again showed that our model was superior. CONCLUSION We developed a respath-cycleGAN method to synthesize CT with good quality from CBCT. In future clinical practice, this method may be used to develop radiotherapy plans. This article is protected by copyright. All rights reserved.
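The MAE, RMSE, and PSNR figures reported above are standard pixel-wise metrics and can be computed with a few lines of NumPy. This is a generic sketch; the `data_range` convention for PSNR (here defaulting to the reference image's dynamic range) is an assumption, since papers vary on this point, and SSIM is omitted as it is usually taken from a library such as scikit-image:

```python
import numpy as np

def mae(ref, test):
    """Mean absolute error between two equally shaped images."""
    return np.mean(np.abs(ref - test))

def rmse(ref, test):
    """Root-mean-square error between two equally shaped images."""
    return np.sqrt(np.mean((ref - test) ** 2))

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB.

    data_range: dynamic range of the images (e.g. on the order of
    2000 HU for CT); if omitted, the reference image's own range is
    used. Conventions vary between papers.
    """
    if data_range is None:
        data_range = ref.max() - ref.min()
    return 20.0 * np.log10(data_range / rmse(ref, test))
```

For CT/sCT comparisons these are typically evaluated inside a body mask rather than on the full array, another detail that differs between studies.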
Affiliation(s)
- Liwei Deng
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, Heilongjiang, 150080, China
| | - Jie Hu
- School of Automation, Harbin University of Science and Technology, Harbin, Heilongjiang, 150080, China
| | - Jing Wang
- School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou, Guangdong, 510520, China
| | - Sijuan Huang
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
| | - Xin Yang
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
|
32
|
MRA-free intracranial vessel localization on MR vessel wall images. Sci Rep 2022; 12:6240. [PMID: 35422490 PMCID: PMC9010428 DOI: 10.1038/s41598-022-10256-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Accepted: 03/31/2022] [Indexed: 11/08/2022] Open
Abstract
Analysis of vessel morphology is important in assessing intracranial atherosclerotic disease (ICAD). Recently, magnetic resonance (MR) vessel wall imaging (VWI) has been introduced to image ICAD and characterize the morphology of atherosclerotic lesions. In order to automatically perform quantitative analysis on VWI data, MR angiography (MRA) acquired in the same imaging session is typically used to localize the vessel segments of interest. However, MRA may be unavailable owing to the lack or failure of the sequence in a VWI protocol. This study aims to investigate the feasibility of inferring the vessel location directly from VWI. We propose to synergize an atlas-based method, which preserves the general vessel structure topology, with a deep learning network in the motion field domain to correct the residual geometric error. Performance is quantified by examining the agreement between the vessel structures extracted from the pair-acquired and alignment-corrected angiogram and the estimated output, using a cross-validation scheme. Our proposed pipeline yields clinically feasible performance in localizing intracranial vessels, demonstrating the promise of performing vessel morphology analysis using VWI alone.
|
33
|
Liu Y, Chen X, Zhu J, Yang B, Wei R, Xiong R, Quan H, Liu Y, Dai J, Men K. A two-step method to improve image quality of CBCT with phantom-based supervised and patient-based unsupervised learning strategies. Phys Med Biol 2022; 67. [PMID: 35354124 DOI: 10.1088/1361-6560/ac6289] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Accepted: 03/30/2022] [Indexed: 11/12/2022]
Abstract
Objective. In this study, we aimed to develop a deep learning framework to improve cone-beam computed tomography (CBCT) image quality for adaptive radiation therapy (ART) applications. Approach. Paired CBCT and planning CT images of 2 pelvic phantoms and 91 patients (15 patients for testing) diagnosed with prostate cancer were included in this study. First, well-matched images of rigid phantoms were used to train a U-net, the supervised learning strategy, to reduce serious artifacts. Second, the phantom-trained U-net generated intermediate CT images from the patient CBCT images. Finally, a cycle-consistent generative adversarial network (CycleGAN) was trained with intermediate CT images and deformed planning CT images, the unsupervised learning strategy, to learn the style of the patient images for further improvement. When testing or applying the trained model on patient CBCT images, the intermediate CT images were generated from the original CBCT image by the U-net, and then the synthetic CT images were generated by the generator of the CycleGAN with the intermediate CT images as input. The performance was compared with conventional methods (U-net/CycleGAN alone trained with patient images) on the test set. Results. The proposed two-step method effectively improved the CBCT image quality to the level of CT scans. It outperformed conventional methods for region-of-interest contouring and HU calibration, which are important to ART applications. Compared with the U-net alone, it maintained the structure of CBCT. Compared with CycleGAN alone, our method improved the accuracy of the CT numbers and effectively reduced the artifacts, making it more helpful for identifying the clinical target volume. Significance. This novel two-step method improves CBCT image quality by combining phantom-based supervised and patient-based unsupervised learning strategies. It has immense potential to be integrated into the ART workflow to improve radiotherapy accuracy.
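The two-step inference flow described above can be sketched as a simple function composition. The two stand-in functions below are hypothetical placeholders (in the paper, step 1 is a phantom-trained U-net and step 2 is the generator of a patient-trained CycleGAN); only the control flow is meant to be illustrative:

```python
import numpy as np

def unet_artifact_reduction(cbct):
    # Stand-in for the phantom-trained U-net (step 1, supervised).
    # Here it merely clips extreme artifact values to a plausible HU range.
    return np.clip(cbct, -1000.0, 2000.0)

def cyclegan_generator(intermediate_ct):
    # Stand-in for the CycleGAN generator (step 2, unsupervised).
    # Here it applies a trivial linear "style/HU" adjustment.
    return intermediate_ct * 0.98 + 5.0

def two_step_synthesis(cbct):
    """CBCT -> intermediate CT (U-net) -> synthetic CT (CycleGAN G)."""
    intermediate = unet_artifact_reduction(cbct)
    return cyclegan_generator(intermediate)
```

The design point is that the supervised stage removes gross artifacts using well-aligned phantom pairs, so the unsupervised stage only has to learn the patient-image style, a much easier target for a CycleGAN.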
Affiliation(s)
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China.,School of Physics and Technology, Wuhan University, Wuhan 430072, People's Republic of China
| | - Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Ran Wei
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Rui Xiong
- School of Physics and Technology, Wuhan University, Wuhan 430072, People's Republic of China
| | - Hong Quan
- School of Physics and Technology, Wuhan University, Wuhan 430072, People's Republic of China
| | - Yueping Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
|
34
|
Recent Applications of Artificial Intelligence in Radiotherapy: Where We Are and Beyond. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12073223] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
In recent decades, artificial intelligence (AI) tools have been applied in many medical fields, opening the possibility of finding novel solutions for managing very complex and multifactorial problems, such as those commonly encountered in radiotherapy (RT). We conducted a PubMed and Scopus search to identify the AI application field in RT limited to the last four years. In total, 1824 original papers were identified, and 921 were analyzed by considering the phase of the RT workflow according to the applied AI approaches. AI permits the processing of large quantities of information, data, and images stored in RT oncology information systems, a process that is not manageable for individuals or groups. AI allows the iterative application of complex tasks in large datasets (e.g., delineating normal tissues or finding optimal planning solutions) and might support the entire community working in the various sectors of RT, as summarized in this overview. AI-based tools are now on the roadmap for RT and have been applied to the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns were raised, including the need for harmonization while overcoming ethical, legal, and skill barriers.
|
35
|
Ma M, Kidd E, Fahimian BP, Han B, Niedermayr TR, Hristov D, Xing L, Yang Y. Dose Prediction for Cervical Cancer Brachytherapy Using 3-D Deep Convolutional Neural Network. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2022. [DOI: 10.1109/trpms.2021.3098507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
36
|
Zhang Y, Ding SG, Gong XC, Yuan XX, Lin JF, Chen Q, Li JG. Generating synthesized computed tomography from CBCT using a conditional generative adversarial network for head and neck cancer patients. Technol Cancer Res Treat 2022; 21:15330338221085358. [PMID: 35262422 PMCID: PMC8918752 DOI: 10.1177/15330338221085358] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Purpose: To overcome the imaging artifacts and Hounsfield unit inaccuracy limitations of cone-beam computed tomography, a conditional generative adversarial network is proposed to synthesize high-quality computed tomography-like images from cone-beam computed tomography images. Methods: A total of 120 paired cone-beam computed tomography and computed tomography scans of patients with head and neck cancer who were treated between January 2019 and December 2020 were retrospectively collected; the scans of 90 patients were assembled into training and validation datasets, and the scans of 30 patients were used in testing datasets. The proposed method integrates a U-Net backbone architecture with residual blocks into a conditional generative adversarial network framework to learn a mapping from cone-beam computed tomography images to paired planning computed tomography images. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio were used to assess the performance of this method compared with U-Net and CycleGAN. Results: The synthesized computed tomography images produced by the conditional generative adversarial network were visually similar to planning computed tomography images. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio calculated from test images generated by the conditional generative adversarial network were all significantly different from those of CycleGAN and U-Net. The mean absolute error, root-mean-square error, structural similarity index, and peak signal-to-noise ratio values between the synthesized computed tomography and the reference computed tomography were 16.75 ± 11.07 Hounsfield units, 58.15 ± 28.64 Hounsfield units, 0.92 ± 0.04, and 30.58 ± 3.86 dB for the conditional generative adversarial network; 20.66 ± 12.15 Hounsfield units, 66.53 ± 29.73 Hounsfield units, 0.90 ± 0.05, and 29.29 ± 3.49 dB for CycleGAN; and 16.82 ± 10.99 Hounsfield units, 58.68 ± 28.34 Hounsfield units, 0.92 ± 0.04, and 30.48 ± 3.83 dB for U-Net, respectively. Conclusions: The synthesized computed tomography generated from the cone-beam computed tomography-based conditional generative adversarial network method has accurate computed tomography numbers while keeping the same anatomical structure as cone-beam computed tomography. It can be used effectively for quantitative applications in radiotherapy.
Affiliation(s)
- Yun Zhang
- Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
| | - Sheng-gou Ding
- Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
| | - Xiao-chang Gong
- Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
| | - Xing-xing Yuan
- Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
| | - Jia-fan Lin
- Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
| | - Qi Chen
- MedMind Technology Co. Ltd, Beijing, People’s Republic of China
| | - Jin-gao Li
- Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
- Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma, Nanchang, Jiangxi, People’s Republic of China
- Medical College of Nanchang University, Nanchang, Jiangxi, People’s Republic of China
- Jin-gao Li, Department of Radiation Oncology, Jiangxi Cancer Hospital of Nanchang University, Nanchang, Jiangxi 330029, People’s Republic of China.
|
37
|
Liu J, Yan H, Cheng H, Liu J, Sun P, Wang B, Mao R, Du C, Luo S. CBCT-based synthetic CT generation using generative adversarial networks with disentangled representation. Quant Imaging Med Surg 2021; 11:4820-4834. [PMID: 34888192 DOI: 10.21037/qims-20-1056] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Accepted: 06/02/2021] [Indexed: 11/06/2022]
Abstract
Background Cone-beam computed tomography (CBCT) plays a key role in image-guided radiotherapy (IGRT); however, its poor image quality has limited its clinical application. In this study, we developed a deep-learning-based approach to translate CBCT images into synthetic CT (sCT) images that preserve both CT image quality and CBCT anatomical structures. Methods A novel synthetic CT generative adversarial network (sCTGAN) was proposed for CBCT-to-CT translation via disentangled representation. The approach of disentangled representation was employed to extract the anatomical information shared by the CBCT and CT image domains. Both on-board CBCT and planning CT of 40 patients were used for network learning, and those of another 12 patients were used for testing. The accuracy of our network was quantitatively evaluated using a series of statistical metrics, including the peak signal-to-noise ratio (PSNR), mean structural similarity index (SSIM), mean absolute error (MAE), and root-mean-square error (RMSE). The effectiveness of our network was compared against three state-of-the-art CycleGAN-based methods. Results The PSNR, SSIM, MAE, and RMSE between sCT generated by sCTGAN and deformed planning CT (dpCT) were 34.12 dB, 0.86, 32.70 HU, and 60.53 HU, while the corresponding values between the original CBCT and dpCT were 28.67 dB, 0.64, 70.56 HU, and 112.13 HU. The RMSE (60.53±14.38 HU) of sCT generated by sCTGAN was lower than that of sCT generated by all three comparison methods (72.40±16.03 HU by CycleGAN, 71.60±15.09 HU by CycleGAN-Unet512, 64.93±14.33 HU by CycleGAN-AG). Conclusions The sCT generated by our sCTGAN network was closer to the ground truth (dpCT) than that of all three comparison CycleGAN-based methods. It provides an effective way to generate high-quality sCT, which has wide application in IGRT and adaptive radiotherapy.
Affiliation(s)
- Jiwei Liu
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
| | - Hui Yan
- Department of Radiation Oncology, National Clinical Research Center for Cancer, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Hanlin Cheng
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
| | - Jianfei Liu
- School of Electrical Engineering and Automation, Anhui University, Hefei, China
| | - Pengjian Sun
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
| | - Boyi Wang
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
| | - Ronghu Mao
- Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University, Henan Cancer Hospital, Zhengzhou, China
| | - Chi Du
- Cancer Center, The Second People's Hospital of Neijiang, Neijiang, China
| | - Shengquan Luo
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
|
38
|
Dai Z, Zhang Y, Zhu L, Tan J, Yang G, Zhang B, Cai C, Jin H, Meng H, Tan X, Jian W, Yang W, Wang X. Geometric and Dosimetric Evaluation of Deep Learning-Based Automatic Delineation on CBCT-Synthesized CT and Planning CT for Breast Cancer Adaptive Radiotherapy: A Multi-Institutional Study. Front Oncol 2021; 11:725507. [PMID: 34858813 PMCID: PMC8630628 DOI: 10.3389/fonc.2021.725507] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Accepted: 10/12/2021] [Indexed: 12/29/2022] Open
Abstract
Purpose We developed a deep learning model to achieve automatic multitarget delineation on planning CT (pCT) and synthetic CT (sCT) images generated from cone-beam CT (CBCT) images. The geometric and dosimetric impact of the model was evaluated for breast cancer adaptive radiation therapy. Methods We retrospectively analyzed 1,127 patients treated with radiotherapy after breast-conserving surgery from two medical institutions. The CBCT images for patient setup, acquired under breath-hold guided by an optical surface monitoring system, were used to generate sCT with a generative adversarial network. Organs at risk (OARs), clinical target volume (CTV), and tumor bed (TB) were delineated automatically with a 3D U-Net model on pCT and sCT images. The geometric accuracy of the model was evaluated with metrics including the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD95). Dosimetric evaluation was performed by quick dose recalculation on sCT images relying on gamma analysis and dose-volume histogram (DVH) parameters. The relationship between ΔD95, ΔV95 and DSC-CTV was assessed to quantify the clinical impact of the geometric changes of the CTV. Results The ranges of DSC and HD95 were 0.73–0.97 and 2.22–9.36 mm for pCT and 0.63–0.95 and 2.30–19.57 mm for sCT from institution A, and 0.70–0.97 and 2.10–11.43 mm for pCT from institution B, respectively. The quality of sCT was excellent, with an average mean absolute error (MAE) of 71.58 ± 8.78 HU. The mean gamma pass rate (3%/3 mm criterion) was 91.46 ± 4.63%. A DSC-CTV down to 0.65 accounted for a variation of more than 6% of V95 and 3 Gy of D95; a DSC-CTV up to 0.80 accounted for a variation of less than 4% of V95 and 2 Gy of D95. The mean ΔD90/ΔD95 of CTV and TB were less than 2 Gy/4 Gy and 4 Gy/5 Gy, respectively, for all patients. The cardiac dose difference in left breast cancer cases was larger than that in right breast cancer cases. Conclusions Accurate multitarget delineation is achievable on pCT and sCT via deep learning. The results show that dose distribution needs to be considered when evaluating the clinical impact of geometric variations during breast cancer radiotherapy.
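The DSC and HD95 metrics used above are standard in contour evaluation and can be sketched compactly. This brute-force version computes distances over the full foreground point sets, which is adequate for an illustration; production code would use surface voxels and distance transforms, and exact HD95 definitions vary slightly between papers:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance in voxel units."""
    pa = np.argwhere(a.astype(bool)).astype(float)
    pb = np.argwhere(b.astype(bool)).astype(float)
    # pairwise Euclidean distances between every point of A and of B
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    directed_ab = d.min(axis=1)   # nearest-B distance for each A point
    directed_ba = d.min(axis=0)   # nearest-A distance for each B point
    return np.percentile(np.concatenate([directed_ab, directed_ba]), 95)
```

The 95th percentile (rather than the maximum) makes the distance robust to a few outlier voxels, which is why HD95 is preferred over the plain Hausdorff distance for auto-segmentation evaluation.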
Affiliation(s)
- Zhenhui Dai
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Lin Zhu
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Junwen Tan
- Department of Oncology, The Fourth Affiliated Hospital, Guangxi Medical University, Liuzhou, China
| | - Geng Yang
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Bailin Zhang
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Chunya Cai
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Huaizhi Jin
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Haoyu Meng
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Xiang Tan
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Wanwei Jian
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Xuetao Wang
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| |
|
39
|
Gao L, Xie K, Wu X, Lu Z, Li C, Sun J, Lin T, Sui J, Ni X. Generating synthetic CT from low-dose cone-beam CT by using generative adversarial networks for adaptive radiotherapy. Radiat Oncol 2021; 16:202. [PMID: 34649572 PMCID: PMC8515667 DOI: 10.1186/s13014-021-01928-w] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Accepted: 06/17/2021] [Indexed: 11/10/2022] Open
Abstract
OBJECTIVE To develop a method for generating high-quality synthetic CT (sCT) images from low-dose cone-beam CT (CBCT) images by using attention-guided generative adversarial networks (AGGAN), and to apply these images to dose calculation in radiotherapy. METHODS The CBCT/planning CT images of 170 patients undergoing thoracic radiotherapy were used for training and testing. The CBCT images were scanned under a fast protocol with 50% fewer clinical projection frames than the standard chest M20 protocol. Training with aligned paired images was performed using conditional adversarial networks (so-called pix2pix), and training with unpaired images was carried out with cycle-consistent adversarial networks (cycleGAN) and AGGAN, through which sCT images were generated. The image quality and Hounsfield unit (HU) values of the sCT images generated by the three neural networks were compared. The treatment plan was designed on CT and copied to the sCT images to calculate the dose distribution. RESULTS The image quality of the sCT images produced by all three methods was significantly improved compared with the original CBCT images. The AGGAN achieved the best image quality in the testing patients, with the smallest mean absolute error (MAE, 43.5 ± 6.69), largest structural similarity (SSIM, 93.7 ± 3.88), and highest peak signal-to-noise ratio (PSNR, 29.5 ± 2.36). The sCT images generated by all three methods showed superior dose calculation accuracy, with higher gamma passing rates than the original CBCT images. The AGGAN offered the highest gamma passing rate (91.4 ± 3.26) under the strictest criterion of 1 mm/1%. In the phantom study, the sCT images generated by AGGAN demonstrated the best image quality and the highest dose calculation accuracy. CONCLUSIONS High-quality sCT images were generated from low-dose thoracic CBCT images by the proposed AGGAN using unpaired CBCT and CT images. Dose distributions could be calculated accurately from sCT images in radiotherapy.
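The MAE and PSNR figures quoted in these abstracts follow standard definitions; a minimal NumPy sketch with invented toy values (not the study's data):

```python
import numpy as np

def mae(ref, img):
    """Mean absolute error (in HU when the inputs are CT numbers)."""
    return float(np.mean(np.abs(ref.astype(float) - img.astype(float))))

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the
    dynamic range of the reference image."""
    ref, img = ref.astype(float), img.astype(float)
    mse = np.mean((ref - img) ** 2)
    if data_range is None:
        data_range = ref.max() - ref.min()
    return float(10.0 * np.log10(data_range ** 2 / mse))

ct  = np.array([[0.0, 100.0], [200.0, 300.0]])   # toy reference CT (HU)
sct = np.array([[10.0, 110.0], [190.0, 310.0]])  # toy synthetic CT (HU)
print(mae(ct, sct))            # 10.0
print(round(psnr(ct, sct), 2)) # 29.54
```

Note that reported PSNR values depend on the chosen data range, so cross-paper comparisons should check that convention.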
Affiliation(s)
- Liugang Gao
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China.,Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
| | - Kai Xie
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China.,Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
| | - Xiaojin Wu
- Oncology Department, Xuzhou No.1 People's Hospital, Xuzhou, 221000, China
| | - Zhengda Lu
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China.,Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China.,School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing, 213000, China
| | - Chunying Li
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China.,Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
| | - Jiawei Sun
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China.,Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
| | - Tao Lin
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China.,Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
| | - Jianfeng Sui
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China.,Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
| | - Xinye Ni
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, 213003, China. .,Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China.
| |
|
40
|
Li S, Deng YQ, Zhu ZL, Hua HL, Tao ZZ. A Comprehensive Review on Radiomics and Deep Learning for Nasopharyngeal Carcinoma Imaging. Diagnostics (Basel) 2021; 11:1523. [PMID: 34573865 PMCID: PMC8465998 DOI: 10.3390/diagnostics11091523] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Revised: 08/10/2021] [Accepted: 08/19/2021] [Indexed: 12/23/2022] Open
Abstract
Nasopharyngeal carcinoma (NPC) is one of the most common malignant tumours of the head and neck, and improving the efficiency of its diagnosis and treatment strategies is an important goal. With recent advances in combining artificial intelligence (AI) technology with medical imaging, an increasing number of studies have applied AI tools, especially radiomics and artificial neural network methods, to image analysis of NPC. In this review, we present a comprehensive overview of NPC imaging research based on radiomics and deep learning. These studies point to promising prospects for the diagnosis and treatment of NPC. The deficiencies of the current studies and the potential of radiomics and deep learning for NPC imaging are discussed. We conclude that future research should establish a large-scale labelled dataset of NPC images and that studies focused on screening for NPC using AI are necessary.
Affiliation(s)
- Song Li
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China; (S.L.); (Y.-Q.D.); (H.-L.H.)
| | - Yu-Qin Deng
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China; (S.L.); (Y.-Q.D.); (H.-L.H.)
| | - Zhi-Ling Zhu
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital Affiliated to Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China;
| | - Hong-Li Hua
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China; (S.L.); (Y.-Q.D.); (H.-L.H.)
| | - Ze-Zhang Tao
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China; (S.L.); (Y.-Q.D.); (H.-L.H.)
| |
|
41
|
Spadea MF, Maspero M, Zaffino P, Seco J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med Phys 2021; 48:6537-6566. [PMID: 34407209 DOI: 10.1002/mp.15150] [Citation(s) in RCA: 80] [Impact Index Per Article: 26.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 06/06/2021] [Accepted: 07/13/2021] [Indexed: 01/22/2023] Open
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical methods. We present here a systematic review of these methods, grouping them into three categories according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography-based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Database searching was performed on journal articles published between January 2014 and December 2020. The key characteristics of the DL methods were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity, future trends, and potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.
Affiliation(s)
- Maria Francesca Spadea
- Department Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
| | - Matteo Maspero
- Division of Imaging & Oncology, Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands.,Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
| | - Paolo Zaffino
- Department Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
| | - Joao Seco
- Division of Biomedical Physics in Radiation Oncology, DKFZ German Cancer Research Center, Heidelberg, Germany.,Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
| |
|
42
|
Dong G, Zhang C, Liang X, Deng L, Zhu Y, Zhu X, Zhou X, Song L, Zhao X, Xie Y. A Deep Unsupervised Learning Model for Artifact Correction of Pelvis Cone-Beam CT. Front Oncol 2021; 11:686875. [PMID: 34350115 PMCID: PMC8327750 DOI: 10.3389/fonc.2021.686875] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Accepted: 06/25/2021] [Indexed: 11/13/2022] Open
Abstract
Purpose In recent years, cone-beam computed tomography (CBCT) has been increasingly used in adaptive radiation therapy (ART). However, compared with planning computed tomography (PCT), CBCT images have much more noise and many more imaging artifacts. Therefore, it is necessary to improve the image quality and HU accuracy of CBCT. In this study, we developed an unsupervised deep learning network (CycleGAN) model to calibrate CBCT images of the pelvis, to extend potential clinical applications in CBCT-guided ART. Methods To train CycleGAN to generate synthetic PCT (sPCT), we used unpaired CBCT and PCT images from 49 patients as inputs. Deformed PCT (dPCT) images, obtained by deformable registration of PCT to CBCT, were used as the ground truth for evaluation. The trained model converts uncorrected CBCT images into sPCT images, which exhibit the characteristics of PCT images while keeping the anatomical structure of the CBCT images unchanged. To demonstrate the effectiveness of the proposed CycleGAN, we used nine additional independent patients for testing. Results We compared the sPCT with dPCT images as the ground truth. The average mean absolute error (MAE) of the whole image on the testing data decreased from 49.96 ± 7.21 HU to 14.6 ± 2.39 HU, and the average MAE of the fat and muscle ROIs decreased from 60.23 ± 7.3 HU to 16.94 ± 7.5 HU and from 53.16 ± 9.1 HU to 13.03 ± 2.63 HU, respectively. Conclusion We developed an unsupervised learning method to generate high-quality corrected CBCT images (sPCT). With further evaluation and clinical implementation, it could replace CBCT in ART.
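The cycle-consistency idea behind CycleGAN can be illustrated without any deep learning framework: two mappings between domains are penalized when a round trip fails to reproduce the input. The affine "generators" below are toy stand-ins invented for this sketch (real generators are CNNs trained jointly with discriminators):

```python
import numpy as np

def l1(a, b):
    """Mean absolute difference, the norm typically used in the cycle loss."""
    return float(np.mean(np.abs(a - b)))

# Toy stand-ins for the generators: G maps CBCT -> sPCT, F maps PCT -> sCBCT.
G = lambda x: 1.1 * x + 5.0
F = lambda y: (y - 5.0) / 1.1

rng = np.random.default_rng(0)
cbct = rng.normal(size=(8, 8))  # fake CBCT patch
pct  = rng.normal(size=(8, 8))  # fake PCT patch

# Cycle-consistency loss: translate to the other domain and back,
# then compare with the original input.
loss_cyc = l1(F(G(cbct)), cbct) + l1(G(F(pct)), pct)
print(round(loss_cyc, 6))  # 0.0 here, since F is the exact inverse of G
```

In training, this loss is minimized alongside the adversarial losses, which is what lets CycleGAN learn from unpaired CBCT/PCT data.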
Affiliation(s)
- Guoya Dong
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China.,Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin, China
| | - Chenglong Zhang
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China.,Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin, China.,Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Xiaokun Liang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Lei Deng
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Yulin Zhu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Xuanyu Zhu
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, QLD, Australia
| | - Xuanru Zhou
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Liming Song
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Xiang Zhao
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, China
| | - Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
|
43
|
Zhao J, Chen Z, Wang J, Xia F, Peng J, Hu Y, Hu W, Zhang Z. MV CBCT-Based Synthetic CT Generation Using a Deep Learning Method for Rectal Cancer Adaptive Radiotherapy. Front Oncol 2021; 11:655325. [PMID: 34136391 PMCID: PMC8201514 DOI: 10.3389/fonc.2021.655325] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Accepted: 04/26/2021] [Indexed: 01/04/2023] Open
Abstract
Due to image quality limitations, online megavoltage cone-beam CT (MV CBCT), which captures the real online patient anatomy, cannot be used to perform adaptive radiotherapy (ART). In this study, we used a deep learning method, the cycle-consistent adversarial network (CycleGAN), to improve MV CBCT image quality and Hounsfield unit (HU) accuracy for rectal cancer patients, making the generated synthetic CT (sCT) eligible for ART. Forty rectal cancer patients treated with intensity-modulated radiotherapy (IMRT) were included in this study. The CT and MV CBCT images of 30 patients were used for model training, and the images of the remaining 10 patients were used for evaluation. The image quality, autosegmentation performance, and dose calculation accuracy (using an autoplanning technique) of the generated sCT were evaluated. The mean absolute error (MAE) was reduced from 135.84 ± 41.59 HU for the CT–CBCT comparison to 52.99 ± 12.09 HU for the CT–sCT comparison. The structural similarity (SSIM) index for the CT–sCT comparison was 0.81 ± 0.03, a great improvement over the 0.44 ± 0.07 for the CT–CBCT comparison. The autosegmentation model performance on sCT for the femoral heads was accurate and required almost no manual modification. For the CTV and bladder, although the autocontours needed modification, the Dice similarity coefficient (DSC) indices were high, at 0.93 and 0.94, respectively. For dose evaluation, the sCT-based plan had a much smaller dose deviation from the CT-based plan than the CBCT-based plan. The proposed method solved a key problem for realizing rectal cancer ART based on MV CBCT. The generated sCT enables ART based on the actual patient anatomy at the treatment position.
Affiliation(s)
- Jun Zhao
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Zhi Chen
- Department of Medical Physics, Shanghai Proton and Heavy Ion Center, Shanghai, China
| | - Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Fan Xia
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Jiayuan Peng
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Yiwen Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Zhen Zhang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| |
|
44
|
Zhang Y, Yue N, Su MY, Liu B, Ding Y, Zhou Y, Wang H, Kuang Y, Nie K. Improving CBCT quality to CT level using deep learning with generative adversarial network. Med Phys 2021; 48:2816-2826. [PMID: 33259647 DOI: 10.1002/mp.14624] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2020] [Revised: 10/26/2020] [Accepted: 11/04/2020] [Indexed: 11/08/2022] Open
Abstract
PURPOSE To improve the image quality and computed tomography (CT) number accuracy of daily cone-beam CT (CBCT) through a deep learning methodology with a generative adversarial network. METHODS One hundred and fifty paired pelvic CT and CBCT scans were used for model training and validation. An unsupervised deep learning method, a 2.5D pixel-to-pixel generative adversarial network (GAN) model with feature mapping, was proposed. A total of 12,000 slice pairs of CT and CBCT were used for model training, and ten-fold cross validation was applied to verify model robustness. Paired CT-CBCT scans from an additional 15 pelvic patients and 10 head-and-neck (HN) patients, with CBCT images collected on a different machine, were used for independent testing. Besides the proposed method, other network architectures were also tested: 2D vs 2.5D; the GAN model with vs without feature mapping; the GAN model with vs without an additional perceptual loss; and previously reported models such as U-net and cycleGAN with or without identity loss. The image quality of the deep-learning-generated synthetic CT (sCT) images was quantitatively compared against the reference CT (rCT) images using the mean absolute error (MAE) of Hounsfield units (HU) and the peak signal-to-noise ratio (PSNR). The dosimetric calculation accuracy was further evaluated with both photon and proton beams. RESULTS The deep-learning-generated sCTs showed improved image quality with reduced artifact distortion and improved soft-tissue contrast. The proposed 2.5D pix2pix GAN with feature matching (FM) was the best model among all tested methods, producing the highest PSNR and the lowest MAE relative to rCT. The dose distribution demonstrated high accuracy in the scope of photon-based planning, yet more work is needed for proton-based treatment. Once the model was trained, it took 11-12 ms to process one slice and could generate a 3D volume of dCBCT (80 slices) in less than a second using an NVIDIA GeForce GTX Titan X GPU (12 GB, Maxwell architecture). CONCLUSION The proposed deep learning algorithm is promising for improving CBCT image quality in an efficient way and thus has the potential to support online CBCT-based adaptive radiotherapy.
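Dose comparisons such as those above are commonly summarized with dose-volume histogram (DVH) statistics (e.g., D98, D2, V95); a minimal sketch on invented voxel doses:

```python
import numpy as np

def dvh_dx(dose, x):
    """D_x: the minimum dose received by the hottest x% of the volume,
    i.e., the (100 - x)th percentile of the voxel doses."""
    return float(np.percentile(dose, 100.0 - x))

def dvh_vx(dose, threshold):
    """V_threshold: fraction of the volume receiving at least `threshold` Gy."""
    return float(np.mean(dose >= threshold))

# Invented voxel doses (Gy) inside a target volume
dose = np.array([47.0, 48.0, 49.0, 50.0, 50.0, 50.5, 51.0, 52.0])
print(dvh_vx(dose, 47.5))  # 0.875
print(dvh_dx(dose, 2))     # near-maximum ("hot spot") dose
```

Clinical systems weight voxels by volume and interpolate the cumulative DVH, but the percentile view above captures the definitions.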
Affiliation(s)
- Yang Zhang
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA.,Department of Radiological Sciences, University of California, Irvine, CA, USA
| | - Ning Yue
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA
| | - Min-Ying Su
- Department of Radiological Sciences, University of California, Irvine, CA, USA
| | - Bo Liu
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA
| | - Yi Ding
- Department of Radiation Oncology, Hubei Cancer Hospital, Wuhan, China
| | - Yongkang Zhou
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Hao Wang
- Department of Radiation Oncology, Zhongshan Hospital, Shanghai, China
| | - Yu Kuang
- Department of Integrated Health Sciences, University of Nebraska, Las Vegas, NV, USA
| | - Ke Nie
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA
| |
|
45
|
Touati R, Le WT, Kadoury S. A feature invariant generative adversarial network for head and neck MRI/CT image synthesis. Phys Med Biol 2021; 66. [PMID: 33761478 DOI: 10.1088/1361-6560/abf1bb] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Accepted: 03/24/2021] [Indexed: 12/12/2022]
Abstract
With the emergence of online MRI radiotherapy treatments, MR-based workflows have increased in importance in the clinic. However, proper dose planning still requires CT images to calculate dose attenuation due to bony structures. In this paper, we present a novel deep image synthesis model that generates CT images from diagnostic MRI in an unsupervised manner for radiotherapy planning. The proposed model, based on a generative adversarial network (GAN), learns a new invariant representation to generate synthetic CT (sCT) images from high-frequency and appearance patterns. This new representation encodes each convolutional feature map of the convolutional GAN discriminator, making the training of the proposed model particularly robust in terms of image synthesis quality. Our model includes an analysis of common histogram features in the training process, reinforcing the generator such that the output sCT image exhibits a histogram matching that of the ground-truth CT. This CT-matched histogram is then embedded in a multi-resolution framework by assessing the evaluation over all layers of the discriminator network, which allows the model to robustly classify the output synthetic image. Experiments were conducted on head and neck images of 56 cancer patients with a wide range of shape sizes and spatial image resolutions. The obtained results confirm the efficiency of the proposed model compared to other generative models: the mean absolute error yielded by our model was 26.44 (0.62), with a Hounsfield unit error of 45.3 (1.87) and an overall Dice coefficient of 0.74 (0.05), demonstrating the potential of the synthesis model for radiotherapy planning applications.
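Histogram matching itself (independent of any network) can be approximated with a monotone quantile-to-quantile mapping. This sketch is illustrative only; the paper embeds histogram features inside GAN training, which this does not reproduce, and the toy intensity distributions are invented:

```python
import numpy as np

def match_histogram(src, ref, n_quantiles=256):
    """Monotonically remap `src` intensities so their distribution
    approximates that of `ref`, via quantile-to-quantile interpolation."""
    q = np.linspace(0.0, 100.0, n_quantiles)
    # np.interp clamps to the endpoints, so outputs stay in ref's range.
    return np.interp(src, np.percentile(src, q), np.percentile(ref, q))

rng = np.random.default_rng(1)
mri_like = rng.normal(0.0, 1.0, size=(64, 64))     # arbitrary MR-like scale
ct_like  = rng.normal(40.0, 200.0, size=(64, 64))  # HU-like reference
matched = match_histogram(mri_like, ct_like)

# The remapped image now lives in the reference intensity range.
print(ct_like.min() <= matched.min() and matched.max() <= ct_like.max())  # True
```

This purely monotone remapping changes intensities but not anatomy, which is why histogram constraints pair well with adversarial losses.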
Affiliation(s)
- Redha Touati
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
| | - William Trung Le
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
| | - Samuel Kadoury
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada.,CHUM Research Center, Montreal, QC, Canada
| |
|
46
|
Utena Y, Takatsu J, Sugimoto S, Sasai K. Trajectory log analysis and cone-beam CT-based daily dose calculation to investigate the dosimetric accuracy of intensity-modulated radiotherapy for gynecologic cancer. J Appl Clin Med Phys 2021; 22:108-117. [PMID: 33426810 PMCID: PMC7882102 DOI: 10.1002/acm2.13163] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2020] [Revised: 11/13/2020] [Accepted: 12/15/2020] [Indexed: 11/21/2022] Open
Abstract
This study evaluated unexpected dosimetric errors caused by machine control accuracy, patient setup errors, and patient weight changes/internal organ deformations. Trajectory log files were collected for 13 gynecologic plans with seven- or nine-beam dynamic multileaf collimator (MLC) intensity-modulated radiation therapy (IMRT), and differences between expected and actual MLC positions and MUs were evaluated. Effects of patient setup errors on dosimetry were estimated with in-house software. To simulate residual patient setup errors after image-guided patient repositioning, planned dose distributions were recalculated (blurred dose) after the positions were randomly moved in three dimensions by 0-2 mm (translation) and 0°-2° (rotation) 28 times per patient. Differences between planned and blurred doses in the clinical target volume (CTV) D98% and D2% were evaluated. Daily delivered doses were calculated from cone-beam computed tomography using the Hounsfield unit-to-density conversion method. Fractional and accumulated dose differences between the original plans and actual delivery were evaluated via CTV D98% and D2%. The significance of accumulated doses was tested with the paired t test. Trajectory log file analysis showed that MLC positional errors were -0.01 ± 0.02 mm and MU delivery errors were 0.10 ± 0.10 MU. Differences in CTV D98% and D2% were <0.5% for simulated patient setup errors. Differences in CTV D98% and D2% were 2.4% or less between the fractional planned and delivered doses, but 1.7% or less for the accumulated dose. Dosimetric errors were primarily caused by patient weight changes and internal organ deformation in gynecologic radiation therapy.
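The Hounsfield unit-to-density conversion mentioned above is typically a piecewise-linear calibration curve. The breakpoints below are invented placeholders, not a real scanner curve (real curves are scanner- and protocol-specific, established with a calibration phantom):

```python
import numpy as np

# Hypothetical calibration points (placeholders for illustration):
# CT numbers in HU and the corresponding mass densities in g/cm^3.
HU_PTS  = np.array([-1000.0, 0.0, 1000.0, 3000.0])
RHO_PTS = np.array([0.001, 1.0, 1.6, 2.8])

def hu_to_density(hu):
    """Piecewise-linear lookup from CT number to mass density."""
    return np.interp(hu, HU_PTS, RHO_PTS)

print(round(float(hu_to_density(-500.0)), 4))  # 0.5005 (air/water interpolation)
print(round(float(hu_to_density(0.0)), 4))     # 1.0 (water)
```

`np.interp` clamps outside the calibration range, which mirrors the usual clinical practice of saturating the lookup table at its endpoints.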
Affiliation(s)
- Yohei Utena
- Department of Radiation Oncology, Graduate School of Medicine, Juntendo University, Tokyo, Japan.,Department of Radiology, Toranomon Hospital, Tokyo, Japan
| | - Jun Takatsu
- Department of Radiation Oncology, Faculty of Medicine, Juntendo University, Tokyo, Japan
| | - Satoru Sugimoto
- Department of Radiation Oncology, Graduate School of Medicine, Juntendo University, Tokyo, Japan
| | - Keisuke Sasai
- Department of Radiation Oncology, Graduate School of Medicine, Juntendo University, Tokyo, Japan
| |
|
47
|
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538 PMCID: PMC7856512 DOI: 10.1002/acm2.13121] [Citation(s) in RCA: 94] [Impact Index Per Article: 31.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 11/12/2020] [Accepted: 11/21/2020] [Indexed: 02/06/2023] Open
Abstract
This paper reviews deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarize recent developments in deep learning-based methods for inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performances, with related clinical applications, for representative studies. The challenges identified across the reviewed studies are then summarized and discussed.
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Jacob F. Wynne
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Walter J. Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
| |
|
48
|
Eckl M, Hoppen L, Sarria GR, Boda-Heggemann J, Simeonova-Chergou A, Steil V, Giordano FA, Fleckenstein J. Evaluation of a cycle-generative adversarial network-based cone-beam CT to synthetic CT conversion algorithm for adaptive radiation therapy. Phys Med 2020; 80:308-316. [PMID: 33246190 DOI: 10.1016/j.ejmp.2020.11.007] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/10/2020] [Revised: 10/29/2020] [Accepted: 11/05/2020] [Indexed: 12/17/2022] Open
Abstract
PURPOSE Image-guided radiation therapy could benefit from implementing adaptive radiation therapy (ART) techniques. A cycle-generative adversarial network (cycle-GAN)-based cone-beam computed tomography (CBCT)-to-synthetic CT (sCT) conversion algorithm was evaluated regarding image quality, image segmentation, and dosimetric accuracy for the head and neck (H&N), thoracic, and pelvic body regions. METHODS Using a cycle-GAN, three body-site-specific models were first trained with independent paired CT and CBCT datasets from a kV imaging system (XVI, Elekta). sCT images were generated from the first-fraction CBCT for 15 patients of each body region. Mean errors (ME) and mean absolute errors (MAE) were analyzed for the sCT. On the sCT, manually delineated structures were compared to structures deformed from the planning CT (pCT) and evaluated with standard segmentation metrics. Treatment plans were recalculated on the sCT. A comparison of clinically relevant dose-volume parameters (D98, D50, and D2 of the target volume) and a 3D gamma (3%/3 mm) analysis were performed. RESULTS The mean ME and MAE were 1.4, 29.6, and 5.4 Hounsfield units (HU) and 77.2, 94.2, and 41.8 HU for the H&N, thoracic, and pelvic regions, respectively. Dice similarity coefficients varied between 66.7 ± 8.3% (seminal vesicles) and 94.9 ± 2.0% (lungs). The maximum mean surface distances were 6.3 mm (heart), followed by 3.5 mm (brainstem). The mean dosimetric differences of the target volumes did not exceed 1.7%. Mean 3D gamma pass rates greater than 97.8% were achieved in all cases. CONCLUSIONS The presented method generates sCT images with a quality close to pCT and yielded clinically acceptable dosimetric deviations. Thus, an important prerequisite towards the clinical implementation of CBCT-based ART is fulfilled.
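The 3%/3 mm gamma analysis reported above combines a dose-difference and a distance-to-agreement criterion. A minimal 1D global-gamma sketch on invented profiles (clinical implementations work on 2D/3D dose grids with sub-voxel search and low-dose thresholds):

```python
import numpy as np

def gamma_pass_rate(ref, ev, coords, dd=0.03, dta=3.0):
    """Fraction of reference points with global gamma <= 1, using a
    dose-difference criterion `dd` (fraction of the max reference dose)
    and a distance-to-agreement `dta` in mm."""
    dose_norm = dd * ref.max()
    passed = 0
    for i, r in enumerate(ref):
        # gamma^2 at point i: minimized over all evaluated points
        g2 = ((ev - r) / dose_norm) ** 2 + ((coords - coords[i]) / dta) ** 2
        if np.sqrt(g2.min()) <= 1.0:
            passed += 1
    return passed / len(ref)

x = np.arange(0.0, 50.0, 1.0)              # detector positions (mm)
ref = np.exp(-(((x - 25.0) / 10.0) ** 2))  # toy reference dose profile
ev  = np.exp(-(((x - 26.0) / 10.0) ** 2))  # same profile shifted by 1 mm
print(gamma_pass_rate(ref, ev, x))         # 1.0 (well within 3%/3 mm)
```

A 1 mm spatial shift passes easily under a 3 mm DTA, whereas a large dose scaling error would fail near the profile peak.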
Affiliation(s)
- Miriam Eckl: Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
- Lea Hoppen: Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
- Gustavo R Sarria: Department of Radiology and Radiation Oncology, University Hospital Bonn, Germany
- Judit Boda-Heggemann: Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
- Anna Simeonova-Chergou: Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
- Volker Steil: Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
- Frank A Giordano: Department of Radiology and Radiation Oncology, University Hospital Bonn, Germany
- Jens Fleckenstein: Department of Radiation Oncology, University Medical Center Mannheim, University of Heidelberg, Germany
49
Xie S, Liang Y, Yang T, Song Z. Contextual loss based artifact removal method on CBCT image. J Appl Clin Med Phys 2020; 21:166-177. [PMID: 33136307 PMCID: PMC7769412 DOI: 10.1002/acm2.13084] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2020] [Revised: 09/09/2020] [Accepted: 10/02/2020] [Indexed: 12/28/2022] Open
Abstract
Purpose Cone beam computed tomography (CBCT) offers advantages such as a high ray-utilization rate, identical spatial resolution within and between slices, and high precision, and it is one of the most actively studied topics in computed tomography (CT) research. However, its application is hindered by scatter artifacts. This paper proposes a novel scatter-artifact removal algorithm based on a convolutional neural network (CNN), with contextual loss as the loss function. Methods In the proposed method, contextual loss is added to a simple CNN to correct CBCT artifacts in the pelvic region. The algorithm learns the mapping from CBCT images to planning CT images. A total of 627 CBCT-CT pairs from 11 patients were used to train the network, and the proposed algorithm was evaluated in terms of the mean absolute error (MAE) and average peak signal-to-noise ratio (PSNR), among other metrics. The proposed method was compared with other methods to demonstrate its effectiveness. Results The proposed method removes artifacts (including streaking, shadowing, and cupping) in the CBCT image. Furthermore, key details such as the internal contours and texture information of the pelvic region are well preserved. Analysis of the average CT number, average MAE, and average PSNR indicated that the proposed method improved image quality. Test results obtained with chest data also indicated that the proposed method can be applied to other anatomies. Conclusions Although the CBCT-CT image pairs are not completely matched at the pixel level, the proposed method effectively corrects artifacts in CBCT slices and improves image quality. The average CT number of the regions of interest (including bones and skin) also exhibited a significant improvement. Furthermore, the proposed method can enhance performance in downstream applications such as dose estimation and segmentation.
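The MAE and PSNR metrics used to evaluate the artifact correction can be sketched as below; the data-range convention and the toy image values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mae(reference, test):
    """Mean absolute error between two images of the same shape."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return float(np.mean(np.abs(diff)))

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB; higher means the corrected image
    is closer to the reference. Data range is taken from the reference."""
    ref = reference.astype(np.float64)
    mse = np.mean((ref - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    data_range = ref.max() - ref.min()
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy 8-bit-style images (illustrative values, not CBCT data)
ref = np.array([[0.0, 255.0], [128.0, 64.0]])
deg = ref + 10.0  # uniform offset standing in for a cupping-like artifact
print(mae(ref, deg))   # 10.0
print(psnr(ref, deg))  # ~28.1 dB
```

A uniform offset like the one above is the simplest stand-in for a cupping artifact; real evaluations compare corrected CBCT slices against registered planning CT slices.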
Affiliation(s)
- Shipeng Xie: College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China
- Yingjuan Liang: College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China
- Tao Yang: College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China
- Zhenrong Song: College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu, China
50
Comparison of CBCT conversion methods for dose calculation in the head and neck region. Z Med Phys 2020; 30:289-299. [DOI: 10.1016/j.zemedi.2020.05.007] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Revised: 04/28/2020] [Accepted: 05/26/2020] [Indexed: 01/21/2023]