1
Yagi S, Usui K, Ogawa K. Scatter and beam hardening effect corrections in pelvic region cone beam CT images using a convolutional neural network. Radiol Phys Technol 2025. [PMID: 40183875] [DOI: 10.1007/s12194-025-00896-0]
Abstract
The aim of this study is to remove scattered photons and the beam hardening effect from cone beam CT (CBCT) images so that the images can be used for treatment planning. A convolutional neural network (CNN) was used for the correction; it was trained with distorted projection data containing scattered photons and the beam hardening effect as input and projection data calculated with monochromatic X-rays as supervision. The number of training projections was 17,280 with data augmentation, and the number of test projections was 540. The performance of the CNN was investigated with respect to the number of photons in the projection data used for training. Projection data of pelvic CBCT images (32 cases) were calculated with a Monte Carlo simulation at six count levels ranging from 0.5 to 3 million counts/pixel. Corrected images were evaluated with the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and the sum of absolute differences (SAD). The simulation results showed that the CNN could effectively remove scattered photons and the beam hardening effect, with significant improvements in PSNR, SSIM, and SAD. The number of photons in the training projection data was also found to be important for correction accuracy. Furthermore, a CNN model trained with projection data containing a sufficient number of photons performed well even when the input projection data contained a small number of photons.
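As a concrete illustration of the evaluation metrics named above, the following sketch computes PSNR, SSIM, and SAD for a corrected projection against a reference. It is a minimal example assuming numpy/scikit-image and stand-in arrays, not the authors' evaluation code.

```python
# Illustrative sketch (not the authors' code): PSNR, SSIM and SAD for a corrected
# projection against a monochromatic reference. Array contents are stand-ins.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_correction(corrected: np.ndarray, reference: np.ndarray) -> dict:
    data_range = reference.max() - reference.min()
    return {
        "psnr": peak_signal_noise_ratio(reference, corrected, data_range=data_range),
        "ssim": structural_similarity(reference, corrected, data_range=data_range),
        "sad": float(np.abs(corrected - reference).sum()),  # sum of absolute differences
    }

# Example with random stand-in projections (512 x 512 pixels).
rng = np.random.default_rng(0)
ref = rng.random((512, 512)).astype(np.float32)
cor = ref + 0.01 * rng.standard_normal((512, 512)).astype(np.float32)
print(evaluate_correction(cor, ref))
```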
Affiliation(s)
- Soya Yagi
- Department of Applied Informatics, Graduate School of Science and Engineering, Hosei University, 3-7-2 Kajinocho, Koganei, Tokyo, 184-0002, Japan
- Keisuke Usui
- Department of Radiological Technology, Faculty of Health Science, Juntendo University, 1-5-3 Yushima, Bunkyo-ku, Tokyo, 113-0034, Japan
- Koichi Ogawa
- Department of Applied Informatics, Faculty of Science and Engineering, Hosei University, 3-7-2 Kajinocho, Koganei, Tokyo, 184-0002, Japan
2
Zhao F, Liu M, Xiang M, Li D, Jiang X, Jin X, Lin C, Wang R. Unsupervised and Self-supervised Learning in Low-Dose Computed Tomography Denoising: Insights from Training Strategies. J Imaging Inform Med 2025;38:902-930. [PMID: 39231886] [PMCID: PMC11950483] [DOI: 10.1007/s10278-024-01213-8]
Abstract
In recent years, X-ray low-dose computed tomography (LDCT) has garnered widespread attention because it significantly reduces the risk of patient radiation exposure. However, LDCT images often contain a substantial amount of noise, adversely affecting diagnostic quality. To mitigate this, a plethora of LDCT denoising methods have been proposed. Among them, deep learning (DL) approaches have emerged as the most effective owing to their robust feature extraction capabilities. Yet the prevalent supervised training paradigm is often impractical because of the difficulty of acquiring paired low-dose and normal-dose CT scans in clinical settings. Consequently, unsupervised and self-supervised deep learning methods have been introduced for LDCT denoising and show considerable potential for clinical application. The efficacy of these methods hinges on their training strategies, yet no comprehensive review of these strategies appears to exist. Our review aims to address this gap, offering insights and guidance for researchers and practitioners. Based on training strategy, we categorize LDCT denoising methods into six groups: (i) cycle consistency-based, (ii) score matching-based, (iii) statistical characteristics of noise-based, (iv) similarity-based, (v) LDCT synthesis model-based, and (vi) hybrid methods. For each category, we delve into the theoretical underpinnings, training strategies, strengths, and limitations. In addition, we summarize the open-source code of the reviewed methods. Finally, the review concludes with a discussion of open issues and future research directions.
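To make category (i) concrete, here is a minimal sketch of a cycle-consistency loss for unpaired LDCT denoising, assuming two hypothetical generators (low-dose to normal-dose and back); it is illustrative only and not taken from any reviewed method.

```python
# Minimal sketch of the cycle-consistency idea: unpaired LDCT/NDCT batches and
# two hypothetical generators G_ld2nd and G_nd2ld are assumed.
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G_ld2nd, G_nd2ld, x_ld, x_nd, lam=10.0):
    # Forward cycle: LDCT -> denoised -> re-noised should recover the LDCT input.
    cycle_ld = G_nd2ld(G_ld2nd(x_ld))
    # Backward cycle: NDCT -> synthetic LDCT -> denoised should recover the NDCT input.
    cycle_nd = G_ld2nd(G_nd2ld(x_nd))
    return lam * (F.l1_loss(cycle_ld, x_ld) + F.l1_loss(cycle_nd, x_nd))

# Quick check with identity "generators" (loss is zero by construction).
ident = torch.nn.Identity()
print(cycle_consistency_loss(ident, ident, torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)))
```

In a full CycleGAN-style method this term is added to the adversarial losses of the two discriminators.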
Affiliation(s)
- Feixiang Zhao
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Ouhai District, Wenzhou, 325000, Zhejiang, China
- College of Nuclear Technology and Automation Engineering, Chengdu University of Technology, 1 East Third Road, Chengdu, 610059, Sichuan, China
- Mingzhe Liu
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Ouhai District, Wenzhou, 325000, Zhejiang, China
- College of Computer Science and Cyber Security, Chengdu University of Technology, 1 East Third Road, Chengdu, 610059, Sichuan, China
- Mingrong Xiang
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Ouhai District, Wenzhou, 325000, Zhejiang, China
- School of Information Technology, Deakin University, Melbourne Burwood Campus, 221 Burwood Hwy, Melbourne, 3125, Victoria, Australia
- Dongfen Li
- College of Computer Science and Cyber Security, Chengdu University of Technology, 1 East Third Road, Chengdu, 610059, Sichuan, China
- Xin Jiang
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Ouhai District, Wenzhou, 325000, Zhejiang, China
- Xiance Jin
- Department of Radiotherapy Center, The First Affiliated Hospital of Wenzhou Medical University, Ouhai District, Wenzhou, 325000, Zhejiang, China
- Cai Lin
- Department of Burn, Wound Repair and Regenerative Medicine Center, The First Affiliated Hospital of Wenzhou Medical University, Ouhai District, Wenzhou, 325000, Zhejiang, China
- Ruili Wang
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Ouhai District, Wenzhou, 325000, Zhejiang, China
- School of Mathematical and Computational Science, Massey University, SH17, Albany, 0632, Auckland, New Zealand
3
Hu Y, Zhou H, Cao N, Li C, Hu C. Synthetic CT generation based on CBCT using improved vision transformer CycleGAN. Sci Rep 2024;14:11455. [PMID: 38769329] [PMCID: PMC11106312] [DOI: 10.1038/s41598-024-61492-7]
Abstract
Cone-beam computed tomography (CBCT) is a crucial component of adaptive radiation therapy; however, it frequently suffers from artifacts and noise, significantly constraining its clinical utility. While CycleGAN is a widely employed method for CT image synthesis, it has notable limitations in capturing global features. To tackle these challenges, we introduce a refined unsupervised learning model called improved vision transformer CycleGAN (IViT-CycleGAN). First, we integrate a U-net framework that builds upon ViT. Next, we augment the feed-forward neural network by incorporating deep convolutional networks. Finally, we enhance the stability of model training by introducing a gradient penalty and integrating an additional loss term into the generator loss. Experiments demonstrate from multiple perspectives that the synthetic CT (sCT) generated by our model has significant advantages over other unsupervised learning models, validating the clinical applicability and robustness of our model. In future clinical practice, our model has the potential to assist clinicians in formulating precise radiotherapy plans.
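The "gradient penalty" mentioned above is commonly implemented in the WGAN-GP style; the sketch below shows one plausible form, with the discriminator and image batches as hypothetical placeholders rather than the authors' actual code.

```python
# Hedged sketch of a WGAN-GP-style gradient penalty, one plausible reading of the
# stabilisation term mentioned in the abstract. D, real and fake are placeholders.
import torch

def gradient_penalty(D, real, fake, weight=10.0):
    # Interpolate between real CT and generated sCT images.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    d_out = D(x_hat)
    grads = torch.autograd.grad(outputs=d_out, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_out),
                                create_graph=True)[0]
    # Penalise deviation of the gradient norm from 1 on the interpolated samples.
    return weight * ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
```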
Affiliation(s)
- Yuxin Hu
- School of Computer and Software, Hohai University, Nanjing, 211100, China
- Han Zhou
- School of Electronic Science and Engineering, Nanjing University, Nanjing, 210046, China
- Department of Radiation Oncology, The Fourth Affiliated Hospital of Nanjing Medical University, Nanjing, 210013, China
- Ning Cao
- School of Computer and Software, Hohai University, Nanjing, 211100, China
- Can Li
- Engineering Research Center of TCM Intelligence Health Service, School of Artificial Intelligence and Information Technology, Nanjing University of Chinese Medicine, Nanjing, 210023, China
- Can Hu
- School of Computer and Software, Hohai University, Nanjing, 211100, China
4
Winter JD, Reddy V, Li W, Craig T, Raman S. Impact of technological advances in treatment planning, image guidance, and treatment delivery on target margin design for prostate cancer radiotherapy: an updated review. Br J Radiol 2024;97:31-40. [PMID: 38263844] [PMCID: PMC11027310] [DOI: 10.1093/bjr/tqad041]
Abstract
Recent innovations in image guidance, treatment delivery, and adaptive radiotherapy (RT) have created a new paradigm for planning target volume (PTV) margin design for patients with prostate cancer. We reviewed the recent literature on PTV margin selection and design for intact prostate RT, excluding post-operative RT, brachytherapy, and proton therapy. Our review describes the increased focus on the prostate and seminal vesicles as heterogeneous, deforming structures, together with the emergence of intra-prostatic GTV boosts and concurrent pelvic lymph node treatment. To capture recent innovations, we highlight the evolution of cone beam CT guidance and the increasing use of MRI for improved target delineation, image registration, and support of online adaptive RT. Moreover, we summarize new and evolving image-guided treatment platforms as well as recent reports of novel immobilization strategies and motion tracking. Our report also captures recent implementations of artificial intelligence to support image guidance and adaptive RT. To characterize the clinical impact of PTV margin changes via model-based risk estimates and clinical trials, we highlight recent high-impact reports. Our report focuses on topics in the context of PTV margins but also showcases studies attempting to move beyond PTV margin recipes with robust optimization and probabilistic planning approaches. Although guidelines exist for target margins using conventional CT-based image guidance, further validation is required to understand the optimal margins for online adaptation, either alone or combined with real-time motion compensation, to minimize systematic and random uncertainties in the treatment of patients with prostate cancer.
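As an example of the "margin recipes" referred to above, the widely cited van Herk formula (M = 2.5Σ + 0.7σ, combining systematic and random setup uncertainties) can be evaluated as follows; the uncertainty values are purely illustrative and not taken from the review.

```python
# Worked example of the classic van Herk PTV margin recipe, M = 2.5*Sigma + 0.7*sigma.
# The input uncertainties below are illustrative numbers, not values from the paper.
def van_herk_margin(systematic_mm: float, random_mm: float) -> float:
    return 2.5 * systematic_mm + 0.7 * random_mm

# e.g. Sigma = 2.0 mm systematic and sigma = 3.0 mm random uncertainty
print(van_herk_margin(2.0, 3.0))  # -> 7.1 mm margin
```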
Affiliation(s)
- Jeff D Winter
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON M5G 2M9, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
- Varun Reddy
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON M5G 2M9, Canada
- Winnie Li
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON M5G 2M9, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
- Tim Craig
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON M5G 2M9, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
- Srinivas Raman
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON M5G 2M9, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
5
Yang B, Liu Y, Zhu J, Dai J, Men K. Deep learning framework to improve the quality of cone-beam computed tomography for radiotherapy scenarios. Med Phys 2023;50:7641-7653. [PMID: 37345371] [DOI: 10.1002/mp.16562]
Abstract
BACKGROUND The application of cone-beam computed tomography (CBCT) in image-guided radiotherapy and adaptive radiotherapy remains limited by its poor image quality. PURPOSE In this study, we aim to develop a deep learning framework to generate high-quality CBCT images for therapeutic applications. METHODS Synthetic CT (sCT) generation from CBCT was performed with a transformer-based network using a hybrid loss function. The network was trained and validated on data from 176 patients to produce a general model that can be applied broadly to enhance CBCT images. After the first treatment, each patient can receive paired CBCT/planning CT (pCT) scans, and these data were used to fine-tune the general model for further improvement, yielding a patient-specific, personalized model for subsequent treatment. In total, 34 patients were used for general model testing, and another six patients who underwent a repeat pCT scan were used for personalized model training and testing. RESULTS The general model decreased the mean absolute error (MAE) from 135 HU to 59 HU compared with the CBCT. The hybrid loss function demonstrated superior performance in CT number correction and noise/artifact reduction. The proposed transformer-based network also outperformed a classical convolutional neural network in CT number correction. The personalized model improved on the general model in some details, and the MAE was reduced from 59 HU (general model) to 57 HU (p < 0.05, Wilcoxon signed-rank test). CONCLUSION We established a transformer-based deep learning framework for clinical needs. The model demonstrated potential for continuous improvement through the suggested personalized training strategy, which is compatible with the clinical workflow.
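The general-to-personalized strategy described above can be sketched as a brief fine-tuning loop on a single patient's paired CBCT/pCT data; the model interface, learning rate, and epoch count are assumptions for illustration, not the paper's settings.

```python
# Conceptual sketch of "general model -> personalized model" fine-tuning.
# The model, data loader and hyper-parameters are hypothetical placeholders.
import torch

def personalize(general_model: torch.nn.Module, patient_loader, epochs: int = 20, lr: float = 1e-5):
    """Fine-tune a population-trained CBCT-to-sCT network on one patient's paired data."""
    optimizer = torch.optim.Adam(general_model.parameters(), lr=lr)  # small LR: gentle adaptation
    loss_fn = torch.nn.L1Loss()  # L1 in HU, in the spirit of the MAE metric reported above
    general_model.train()
    for _ in range(epochs):
        for cbct, pct in patient_loader:  # paired CBCT / planning-CT slices for this patient
            optimizer.zero_grad()
            loss = loss_fn(general_model(cbct), pct)
            loss.backward()
            optimizer.step()
    return general_model  # now a patient-specific model for subsequent fractions
```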
Affiliation(s)
- Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
6
Liu Y, Yang B, Chen X, Zhu J, Ji G, Liu Y, Chen B, Lu N, Yi J, Wang S, Li Y, Dai J, Men K. Efficient segmentation using domain adaptation for MRI-guided and CBCT-guided online adaptive radiotherapy. Radiother Oncol 2023;188:109871. [PMID: 37634767] [DOI: 10.1016/j.radonc.2023.109871]
Abstract
BACKGROUND Delineation of regions of interest (ROIs) is important for adaptive radiotherapy (ART) but is time-consuming and labor-intensive. AIM This study aims to develop efficient segmentation methods for magnetic resonance imaging-guided ART (MRIgART) and cone-beam computed tomography-guided ART (CBCTgART). MATERIALS AND METHODS The MRIgART and CBCTgART studies enrolled 242 prostate cancer patients and 530 nasopharyngeal carcinoma patients, respectively. A public CBCT dataset of 35 pancreatic cancer patients was adopted to test the framework. We designed two domain adaptation methods to learn and adapt features from planning computed tomography (pCT) to the MRI or CBCT modality: pCT was transformed to synthetic MRI (sMRI) for MRIgART, while CBCT was transformed to synthetic CT (sCT) for CBCTgART. Generalized segmentation models were trained on large population data, with sMRI as input for MRIgART and pCT as input for CBCTgART. Finally, a personalized model for each patient was established by fine-tuning the generalized model with that patient's contours on pCT. The proposed method was compared with deformable image registration (DIR), a regular deep learning (DL) model trained on the same modality (DL-regular), and the generalized model in our framework (DL-generalized). RESULTS The proposed method achieved better or comparable performance. For MRIgART of the prostate cancer patients, the mean dice similarity coefficient (DSC) of four ROIs was 87.2%, 83.75%, 85.36%, and 92.20% for DIR, DL-regular, DL-generalized, and the proposed method, respectively. For CBCTgART of the nasopharyngeal carcinoma patients, the mean DSC of two target volumes were 90.81% and 91.18%, 75.17% and 58.30%, for DIR, DL-regular, DL-generalized, and the proposed method, respectively. For CBCTgART of the pancreatic cancer patients, the mean DSC of two ROIs were 61.94% and 61.44%, 63.94% and 81.56%, for DIR, DL-regular, DL-generalized, and the proposed method, respectively. CONCLUSION The proposed method, utilizing personalized modeling, improved the segmentation accuracy of ART.
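For reference, the Dice similarity coefficient used in the comparison above can be computed from binary masks as in this small sketch (toy masks, not study data).

```python
# Small sketch of the Dice similarity coefficient (DSC) between two binary ROI masks.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

# Two overlapping toy squares: DSC ~ 0.875.
a = np.zeros((64, 64), dtype=bool); a[16:48, 16:48] = True
b = np.zeros((64, 64), dtype=bool); b[20:52, 16:48] = True
print(dice(a, b))
```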
Affiliation(s)
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Guangqian Ji
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yueping Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Bo Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Ningning Lu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Junlin Yi
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Shulian Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yexiong Li
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
7
Pang B, Si H, Liu M, Fu W, Zeng Y, Liu H, Cao T, Chang Y, Quan H, Yang Z. Comparison and evaluation of different deep learning models of synthetic CT generation from CBCT for nasopharynx cancer adaptive proton therapy. Med Phys 2023;50:6920-6930. [PMID: 37800874] [DOI: 10.1002/mp.16777]
Abstract
BACKGROUND Cone-beam computed tomography (CBCT) scanning is used for patient setup in image-guided radiotherapy. However, its inaccurate CT numbers limit its applicability to dose calculation and treatment planning. PURPOSE This study compares four deep learning methods for generating synthetic CT (sCT) to determine which is most appropriate and offers potential for further clinical exploration in adaptive proton therapy for nasopharynx cancer. METHODS CBCTs and deformed planning CTs (dCT) from 75 patients (60/5/10 for training, validation, and testing) were used to compare the cycle-consistent Generative Adversarial Network (cycleGAN), Unet, Unet+cycleGAN, and conditional Generative Adversarial Network (cGAN) for sCT generation. The sCT images generated by each method were evaluated against the dCT images using mean absolute error (MAE), structural similarity (SSIM), peak signal-to-noise ratio (PSNR), spatial non-uniformity (SNU), and radial averaging in the frequency domain. In addition, dosimetric accuracy was assessed through gamma analysis, differences in water equivalent thickness (WET), and dose-volume histogram metrics. RESULTS The cGAN model demonstrated the best performance of the four models across these indicators. In terms of image quality under the global condition, the average MAE was reduced to 16.39 HU, SSIM increased to 95.24%, and PSNR increased to 28.98. Regarding dosimetric accuracy, the gamma passing rate (2%/2 mm) reached 99.02%, and the WET difference was only 1.28 mm. The D95 of CTV coverage and the Dmax of the spinal cord and brainstem showed no significant differences between the dCT and the sCT generated by the cGAN model. CONCLUSIONS The cGAN model is a more suitable approach for generating sCT from CBCT, and the resulting sCT has potential for application in adaptive proton therapy.
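To make the gamma criterion concrete, the sketch below implements a brute-force global 2%/2 mm gamma analysis on a 1-D toy dose profile; clinical gamma tools work on 3-D dose grids, so this is a didactic simplification rather than the study's analysis.

```python
# Illustrative brute-force 1-D global gamma analysis (2% dose / 2 mm distance-to-agreement).
import numpy as np

def gamma_pass_rate(x_mm, dose_ref, dose_eval, dose_pct=2.0, dta_mm=2.0):
    dd = dose_pct / 100.0 * dose_ref.max()          # global dose-difference criterion
    pass_count = 0
    for xi, dr in zip(x_mm, dose_ref):
        # gamma at this reference point: minimum combined dose/distance metric over the evaluated curve
        g = np.sqrt(((x_mm - xi) / dta_mm) ** 2 + ((dose_eval - dr) / dd) ** 2).min()
        pass_count += g <= 1.0
    return 100.0 * pass_count / len(dose_ref)

x = np.linspace(0.0, 100.0, 501)                    # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)             # toy reference dose profile
ev = np.exp(-((x - 50.5) / 20.0) ** 2)              # evaluated profile shifted by 0.5 mm
print(gamma_pass_rate(x, ref, ev))
```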
Affiliation(s)
- Bo Pang
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Hang Si
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Muyu Liu
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Wensheng Fu
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yiling Zeng
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Hongyuan Liu
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ting Cao
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yu Chang
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hong Quan
- Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Zhiyong Yang
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
- Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
8
Dong Z, Yu S, Szmul A, Wang J, Qi J, Wu H, Li J, Lu Z, Zhang Y. Simulation of a new respiratory phase sorting method for 4D-imaging using optical surface information towards precision radiotherapy. Comput Biol Med 2023;162:107073. [PMID: 37290392] [PMCID: PMC10311359] [DOI: 10.1016/j.compbiomed.2023.107073]
Abstract
BACKGROUND Respiratory signal detection is critical for 4-dimensional (4D) imaging. This study proposes and evaluates a novel phase sorting method using optical surface imaging (OSI), aiming to improve the precision of radiotherapy. METHOD Based on the 4D Extended Cardiac-Torso (XCAT) digital phantom, OSI in point-cloud format was generated from the body segmentation, and image projections were simulated using the geometry of a Varian 4D kV cone-beam CT (CBCT). Respiratory signals were extracted from the segmented diaphragm images (reference method) and from the OSI, where a Gaussian Mixture Model and Principal Component Analysis (PCA) were used for image registration and dimension reduction, respectively. Breathing frequencies were compared using the Fast Fourier Transform. The consistency of 4D CBCT images reconstructed using the Maximum Likelihood Expectation Maximization algorithm was also evaluated quantitatively, where high consistency is indicated by a lower Root Mean Square Error (RMSE), a Structural Similarity Index (SSIM) value closer to 1, and a larger Peak Signal-to-Noise Ratio (PSNR). RESULTS High consistency of breathing frequencies was observed between the diaphragm-based (0.232 Hz) and OSI-based (0.251 Hz) signals, with a slight discrepancy of 0.019 Hz. Using the end of expiration (EOE) and end of inspiration (EOI) phases as examples, the mean ± 1 SD values over the 80 transverse, 100 coronal, and 120 sagittal planes were 0.967, 0.972, 0.974 (SSIM); 1.657 ± 0.368, 1.464 ± 0.104, 1.479 ± 0.297 (RMSE); and 40.501 ± 1.737, 41.532 ± 1.464, 41.553 ± 1.910 (PSNR) for the EOE; and 0.969, 0.973, 0.973 (SSIM); 1.686 ± 0.278, 1.422 ± 0.089, 1.489 ± 0.238 (RMSE); and 40.535 ± 1.539, 41.605 ± 0.534, 41.401 ± 1.496 (PSNR) for the EOI, respectively. CONCLUSIONS This work proposed and evaluated a novel respiratory phase sorting approach for 4D imaging using optical surface signals, which can potentially be applied to precision radiotherapy. Its potential advantages are that it is non-ionizing, non-invasive, non-contact, and more compatible with various anatomic regions and treatment/imaging systems.
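The OSI signal-extraction chain described above (PCA dimension reduction followed by an FFT frequency estimate) can be sketched as follows on a synthetic, sinusoidally moving point cloud; the data and parameters are stand-ins, not XCAT output.

```python
# Sketch of the surrogate-signal chain: PCA reduces each surface point cloud to one
# value per frame, then an FFT gives the dominant breathing frequency. Synthetic data.
import numpy as np
from sklearn.decomposition import PCA

fps, n_frames, n_points = 10.0, 200, 500
t = np.arange(n_frames) / fps
rng = np.random.default_rng(0)
base = rng.random((n_points, 3))
# Simulated chest surface: all points move along z with a 0.25 Hz breathing cycle.
clouds = np.stack([base + np.array([0, 0, 5.0]) * np.sin(2 * np.pi * 0.25 * ti)
                   for ti in t])                               # (frames, points, 3)

features = clouds.reshape(n_frames, -1)                        # flatten each frame's cloud
signal = PCA(n_components=1).fit_transform(features)[:, 0]     # 1-D respiratory surrogate

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
print(f"dominant breathing frequency: {freqs[spectrum.argmax()]:.3f} Hz")  # ~0.25 Hz
```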
Affiliation(s)
- Zhengkun Dong
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China; Institute of Medical Technology, Peking University Health Science Center, Beijing, 100191, China
- Shutong Yu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China; Institute of Medical Technology, Peking University Health Science Center, Beijing, 100191, China
- Adam Szmul
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Jingyuan Wang
- Department of Biostatistics, School of Public Health, Peking University, Beijing, China
- Junfeng Qi
- Department of Engineering Physics, Tsinghua University, Beijing, 100084, China
- Hao Wu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Junyu Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Zihong Lu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Yibao Zhang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
9
de Hond YJ, Kerckhaert CE, van Eijnatten MA, van Haaren PM, Hurkmans CW, Tijssen RH. Anatomical evaluation of deep-learning synthetic computed tomography images generated from male pelvis cone-beam computed tomography. Phys Imaging Radiat Oncol 2023;25:100416. [PMID: 36969503] [PMCID: PMC10037090] [DOI: 10.1016/j.phro.2023.100416]
Abstract
Background and purpose To improve cone-beam computed tomography (CBCT), deep-learning (DL) models are being explored to generate synthetic CTs (sCT). The evaluation of sCTs is mainly focused on image quality and CT number accuracy. However, correct representation of the daily anatomy captured by the CBCT is also important for sCTs in adaptive radiotherapy. The aim of this study was to emphasize the importance of anatomical correctness by quantitatively assessing sCT scans generated from CBCT scans using different paired and unpaired DL models. Materials and methods Planning CTs (pCT) and CBCTs of 56 prostate cancer patients were included to generate sCTs. Three different DL models, Dual-UNet, Single-UNet, and Cycle-consistent Generative Adversarial Network (CycleGAN), were evaluated on image quality and anatomical correctness. Image quality was assessed using image metrics such as the Mean Absolute Error (MAE). Anatomical correctness between sCT and CBCT was quantified using organ-at-risk volumes and average surface distances (ASD). Results The MAE was 24 Hounsfield units (HU) [range: 19-30 HU] for Dual-UNet, 40 HU [range: 34-56 HU] for Single-UNet, and 41 HU [range: 37-46 HU] for CycleGAN. Bladder ASD was 4.5 mm [range: 1.6-12.3 mm] for Dual-UNet, 0.7 mm [range: 0.4-1.2 mm] for Single-UNet, and 0.9 mm [range: 0.4-1.1 mm] for CycleGAN. Conclusions Although Dual-UNet performed best on standard image quality measures such as MAE, the contour-based anatomical comparison with the CBCT showed that Dual-UNet performed worst anatomically. This emphasizes the importance of adding anatomy-based evaluation of sCTs generated by DL models. For applications in the pelvic area, direct anatomical comparison with the CBCT may provide a useful method to assess the clinical applicability of DL-based sCT generation methods.
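The average surface distance used above can be illustrated with a symmetric nearest-neighbour formulation on two toy contours; real evaluations operate on 3-D organ surfaces, so this is a simplified sketch.

```python
# Sketch of a symmetric average surface distance (ASD) between two contours given
# as point sets in mm; a simplification of full 3-D organ-surface handling.
import numpy as np
from scipy.spatial import cKDTree

def average_surface_distance(surface_a: np.ndarray, surface_b: np.ndarray) -> float:
    d_ab = cKDTree(surface_b).query(surface_a)[0]    # nearest-neighbour distances A -> B
    d_ba = cKDTree(surface_a).query(surface_b)[0]    # and B -> A
    return float((d_ab.mean() + d_ba.mean()) / 2.0)

# Two circles of radius 20 mm and 21 mm as toy "bladder" contours: ASD ~ 1.0 mm.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
a = np.c_[20 * np.cos(theta), 20 * np.sin(theta)]
b = np.c_[21 * np.cos(theta), 21 * np.sin(theta)]
print(average_surface_distance(a, b))
```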
Affiliation(s)
- Yvonne J.M. de Hond
- Department of Radiation Oncology, Catharina Hospital, Eindhoven, The Netherlands
- Camiel E.M. Kerckhaert
- Department of Radiation Oncology, Catharina Hospital, Eindhoven, The Netherlands
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Paul M.A. van Haaren
- Department of Radiation Oncology, Catharina Hospital, Eindhoven, The Netherlands
- Coen W. Hurkmans
- Department of Radiation Oncology, Catharina Hospital, Eindhoven, The Netherlands
- Rob H.N. Tijssen
- Department of Radiation Oncology, Catharina Hospital, Eindhoven, The Netherlands
10
Papanastasiou G, García Seco de Herrera A, Wang C, Zhang H, Yang G, Wang G. Focus on machine learning models in medical imaging. Phys Med Biol 2022;68:010301. [PMID: 36594883] [DOI: 10.1088/1361-6560/aca069]
Affiliation(s)
- Heye Zhang
- Sun Yat-sen University, People's Republic of China
- Ge Wang
- Rensselaer Polytechnic Institute, United States of America
11
Chen X, Liu Y, Yang B, Zhu J, Yuan S, Xie X, Liu Y, Dai J, Men K. A more effective CT synthesizer using transformers for cone-beam CT-guided adaptive radiotherapy. Front Oncol 2022;12:988800. [PMID: 36091131] [PMCID: PMC9454309] [DOI: 10.3389/fonc.2022.988800]
Abstract
Purpose The challenge of cone-beam computed tomography (CBCT) is its low image quality, which limits its application to adaptive radiotherapy (ART). Despite recent substantial improvements in CBCT imaging using deep learning methods, image quality still needs to be improved for effective ART application. Spurred by the advantages of transformers, which employ multi-head attention mechanisms to capture long-range contextual relations between image pixels, we proposed a novel transformer-based network (called TransCBCT) to generate synthetic CT (sCT) from CBCT. This study aimed to further improve the accuracy and efficiency of ART. Materials and methods In this study, 91 patients diagnosed with prostate cancer were enrolled. We constructed a transformer-based hierarchical encoder–decoder structure with skip connections, called TransCBCT. The network also employs several convolutional layers to capture local context. The proposed TransCBCT was trained and validated on 6,144 paired CBCT/deformed CT images from 76 patients and tested on 1,026 paired images from 15 patients. Its performance was compared with a widely recognized style-transfer deep learning method, the cycle-consistent adversarial network (CycleGAN). We evaluated image quality and clinical value (application to auto-segmentation and dose calculation) for ART. Results TransCBCT had superior performance in generating sCT from CBCT. The mean absolute error of TransCBCT was 28.8 ± 16.7 HU, compared with 66.5 ± 13.2 HU for raw CBCT and 34.3 ± 17.3 HU for CycleGAN. It preserved the structure of the raw CBCT and reduced artifacts. When applied to auto-segmentation, the Dice similarity coefficients of the bladder and rectum between auto-segmentation and the oncologist's manual contours were 0.92 and 0.84 for TransCBCT, respectively, compared with 0.90 and 0.83 for CycleGAN. When applied to dose calculation, the gamma passing rate (1%/1 mm criterion) was 97.5% ± 1.1% for TransCBCT, compared with 96.9% ± 1.8% for CycleGAN. Conclusions The proposed TransCBCT can effectively generate sCT from CBCT and has the potential to improve radiotherapy accuracy.
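The long-range modelling ingredient attributed to TransCBCT, multi-head self-attention over patch embeddings, can be illustrated with a minimal PyTorch snippet; the patch size, embedding dimension, and head count below are arbitrary and not the published architecture.

```python
# Minimal sketch of multi-head self-attention over image patch embeddings, so each
# patch can attend to distant patches. Sizes are illustrative, not the paper's model.
import torch
import torch.nn as nn

patch, dim, heads = 16, 256, 8
patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)   # non-overlapping patches
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)

cbct_slice = torch.randn(1, 1, 256, 256)                 # one CBCT slice (batch, ch, H, W)
tokens = patch_embed(cbct_slice).flatten(2).transpose(1, 2)  # (1, 256 patches, dim)
attended, weights = attn(tokens, tokens, tokens)          # long-range context between patches
print(attended.shape, weights.shape)                      # (1, 256, 256), (1, 256, 256)
```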
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- National Cancer Center/National Clinical Research Center for Cancer/Hebei Cancer Hospital, Chinese Academy of Medical Sciences, Langfang, China
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- School of Physics and Technology, Wuhan University, Wuhan, China
- Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Siqi Yuan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xuejie Xie
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yueping Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Correspondence: Kuo Men