1
Sun H, Chen L, Li J, Yang Z, Zhu J, Wang Z, Ren G, Cai J, Zhao L. Synthesis of pseudo-PET/CT fusion images in radiotherapy based on a new transformer model. Med Phys 2025; 52:1070-1085. [PMID: 39569842] [DOI: 10.1002/mp.17512]
Abstract
BACKGROUND: PET/CT and planning CT are commonly used medical images in radiotherapy for esophageal and nasopharyngeal cancer. However, repeated scans expose patients to additional radiation dose and introduce registration errors, so this multimodal treatment approach still has room for improvement.
PURPOSE: A new Transformer model is proposed to obtain pseudo-PET/CT fusion images for esophageal and nasopharyngeal cancer radiotherapy.
METHODS: Data from 129 esophageal cancer cases and 141 nasopharyngeal cancer cases were retrospectively selected for training, validation, and testing. PET and CT images were used as input. A Transformer model with a "focus-disperse" attention mechanism and multi-consistency loss constraints captures the feature information in the two images and synthesizes pseudo-PET/CT fusion images with enhanced imaging of the tumor region. During the testing phase, the accuracy of the pseudo-PET/CT fusion images was verified anatomically and dosimetrically, and two prospective cases were selected for further dose verification.
RESULTS: For anatomical verification, the PET/CT fusion image obtained with a wavelet fusion algorithm and corrected by clinicians served as the ground truth. The evaluation metrics between the pseudo-fused images from the proposed model and the ground truth, reported as mean (standard deviation), were 37.82 (1.57) for peak signal-to-noise ratio, 95.23 (2.60) for structural similarity index, 29.70 (2.49) for mean absolute error, and 9.48 (0.32) for normalized root mean square error, outperforming state-of-the-art deep learning comparison models. For dosimetric validation, based on a 3%/2 mm gamma analysis, the average passing rates of the global and tumor regions between the pseudo-fused images (PET/CT weight ratio of 2:8) and the planning CT images were 97.2% and 95.5%, respectively, which were superior to those of pseudo-PET/CT fusion images with other weight ratios.
CONCLUSIONS: The pseudo-PET/CT fusion images obtained with the proposed model hold promise as a new modality in radiotherapy for esophageal and nasopharyngeal cancer.
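The anatomical evaluation in this abstract relies on standard image-similarity metrics (PSNR, SSIM, MAE, NRMSE). The following is a minimal sketch of how such metrics can be computed, assuming scikit-image and NumPy and with toy arrays standing in for the fused images; it is an illustration, not the authors' evaluation code.

```python
# Hedged sketch: PSNR, SSIM, MAE, and NRMSE between a pseudo-fused image and a
# reference fusion image. Array names, the toy data, and the normalization by
# dynamic range are illustrative assumptions.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def fusion_metrics(pred: np.ndarray, ref: np.ndarray) -> dict:
    data_range = float(ref.max() - ref.min())          # dynamic range for PSNR/SSIM
    mae = float(np.mean(np.abs(pred - ref)))           # mean absolute error
    nrmse = float(np.sqrt(np.mean((pred - ref) ** 2)) / data_range)
    return {
        "psnr": peak_signal_noise_ratio(ref, pred, data_range=data_range),
        "ssim": structural_similarity(ref, pred, data_range=data_range),
        "mae": mae,
        "nrmse": nrmse,
    }

# Toy 2-D slices standing in for fused PET/CT slices.
rng = np.random.default_rng(0)
reference = rng.normal(size=(256, 256)).astype(np.float32)
pseudo_fused = reference + 0.05 * rng.normal(size=(256, 256)).astype(np.float32)
print(fusion_metrics(pseudo_fused, reference))
```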
Affiliation(s)
- Hongfei Sun: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Liting Chen: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jie Li: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhi Yang: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jiarui Zhu: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Zhongfei Wang: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Ge Ren: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Lina Zhao: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
2
Farshchitabrizi AH, Sadeghi MH, Sina S, Alavi M, Feshani ZN, Omidi H. AI-enhanced PET/CT image synthesis using CycleGAN for improved ovarian cancer imaging. Pol J Radiol 2025; 90:e26-e35. [PMID: 40070416] [PMCID: PMC11891552] [DOI: 10.5114/pjr/196804]
Abstract
Purpose: Ovarian cancer is the fifth most fatal cancer among women. Positron emission tomography (PET), which offers detailed metabolic data, can be used effectively for early cancer screening, but proper attenuation correction is essential for interpreting the data obtained with this imaging modality. Computed tomography (CT) is commonly performed alongside PET for attenuation correction, which can introduce issues in spatial alignment and registration between the two modalities. This study aims to perform PET image attenuation correction using generative adversarial networks (GANs), without additional CT imaging.
Material and methods: PET/CT data from 55 ovarian cancer patients were used in this study. Three GAN architectures (conditional GAN, Wasserstein GAN, and CycleGAN) were evaluated for attenuation correction. The statistical performance of each model was assessed by calculating the mean squared error (MSE) and mean absolute error (MAE). Radiological performance was assessed by comparing the standardised uptake values and Hounsfield unit values of the whole body and selected organs in the synthetic and real PET and CT images.
Results: CycleGAN demonstrated effective attenuation correction and pseudo-CT generation with high accuracy. The MAE and MSE for all images were 2.15 ± 0.34 and 3.14 ± 0.56, respectively; for CT reconstruction, the corresponding values were 4.17 ± 0.96 and 5.66 ± 1.01.
Conclusions: The results show the potential of deep learning to reduce radiation exposure and improve the quality of PET imaging. Further refinement and clinical validation are needed for full clinical applicability.
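For readers unfamiliar with CycleGAN, the core idea is a cycle-consistency constraint that lets unpaired PET and CT volumes supervise each other. The sketch below is a toy PyTorch illustration of that generator objective, assuming placeholder networks and a cycle weight of 10; it does not reproduce the architectures or training setup of the study.

```python
# Toy illustration of the CycleGAN generator objective for PET -> CT synthesis:
# an adversarial term plus a cycle-consistency term. Networks are placeholders.
import torch
import torch.nn as nn

def tiny_net() -> nn.Module:
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

G_pet2ct, G_ct2pet = tiny_net(), tiny_net()     # the two generators
D_ct = nn.Sequential(nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
                     nn.Conv2d(16, 1, 4, stride=2, padding=1))  # CT discriminator

l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()
lambda_cycle = 10.0                             # common CycleGAN default, assumed here

pet = torch.randn(4, 1, 128, 128)               # unpaired PET batch (toy data)
fake_ct = G_pet2ct(pet)                         # PET -> pseudo-CT
rec_pet = G_ct2pet(fake_ct)                     # pseudo-CT -> reconstructed PET

d_out = D_ct(fake_ct)
loss_adv = bce(d_out, torch.ones_like(d_out))   # generator tries to fool D_ct
loss_cycle = l1(rec_pet, pet)                   # cycle-consistency: PET -> CT -> PET
loss_G = loss_adv + lambda_cycle * loss_cycle
loss_G.backward()
print(float(loss_G))
```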
Affiliation(s)
- Amir Hossein Farshchitabrizi: Namazi Hospital, Shiraz University of Medical Sciences, Shiraz, Iran; Radiation Research Centre, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
- Mohammad Hossein Sadeghi: Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
- Sedigheh Sina: Radiation Research Centre, School of Mechanical Engineering, Shiraz University, Shiraz, Iran; Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
- Mehrosadat Alavi: Ionising and Non-Ionising Radiation Protection Research Centre, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Hamid Omidi: Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
3
Sherif IA, Nser SY, Bobo A, Afridi A, Hamed A, Dunbar M, Boutefnouchet T. Can Ordinary AI-Powered Tools Replace a Clinician-Led Fracture Clinic Appointment? Cureus 2024; 16:e75440. [PMID: 39791069] [PMCID: PMC11717409] [DOI: 10.7759/cureus.75440]
Abstract
Introduction: Artificial intelligence (AI)-powered tools are increasingly integrated into healthcare. The purpose of the present study was to compare fracture management plans generated by clinicians with those obtained from ChatGPT (OpenAI, San Francisco, CA) and Google Gemini (Google, Inc., Mountain View, CA).
Methodology: A retrospective comparative analysis was conducted on 70 cases of isolated injuries treated at the authors' institution's fracture clinic. Complex or open fractures and non-specific diagnoses were excluded. All relevant clinical details were entered into ChatGPT and Google Gemini, and the AI-generated management plans were compared with the actual plans documented in the clinical records, focusing on treatment recommendations and follow-up strategies.
Results: In terms of agreement with the actual treatment plans, Google Gemini matched in only 13 cases (19%), with the disagreements in the remaining cases due to overgeneralisation, inadequate treatment, and ambiguity. In contrast, ChatGPT matched the actual plans in 24 cases (34%), with overgeneralisation being the principal cause of disagreement. The differences between the AI-powered tools and the actual clinician-led plans were statistically significant (p < 0.001).
Conclusion: Both AI-powered tools showed substantial disagreement with the actual clinical management plans. While ChatGPT aligned more closely with human expertise, particularly in treatment recommendations, both AI engines still lacked the clinical precision required for accurate fracture management. These findings highlight the current limitations of ordinary AI-powered tools and argue against their ability to replace a clinician-led fracture clinic appointment.
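The abstract does not state which statistical test produced p < 0.001. Purely as an illustration of how the reported agreement counts (13/70 for Gemini, 24/70 for ChatGPT) could be tabulated and compared, a chi-square test on the agree/disagree table is sketched below with SciPy; this is an assumption, not the authors' analysis.

```python
# Illustrative tabulation of agreement with clinician plans (counts from the
# abstract) and a chi-square test of independence as one possible comparison.
from scipy.stats import chi2_contingency

n_cases = 70
agreed = {"Google Gemini": 13, "ChatGPT": 24}   # cases matching clinician plans

table = [[k, n_cases - k] for k in agreed.values()]   # rows: tool; cols: agree/disagree
chi2, p, dof, _ = chi2_contingency(table)

for tool, k in agreed.items():
    print(f"{tool}: {k}/{n_cases} ({k / n_cases:.0%}) agreement")
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```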
Affiliation(s)
- Islam A Sherif: Trauma and Orthopaedics, Warwick Hospital, Birmingham, GBR
- Ahmed Bobo: Trauma and Orthopaedics, University Hospitals Birmingham National Health Service (NHS) Foundation Trust, Birmingham, GBR
- Asif Afridi: Trauma and Orthopaedics, Hayatabad Medical Complex Peshawar, Peshawar, PAK; Trauma and Orthopaedics, Queen Elizabeth Hospital Birmingham, Birmingham, GBR
- Ahmed Hamed: Trauma and Orthopaedics, University Hospitals Birmingham National Health Service (NHS) Foundation Trust, Birmingham, GBR
- Mark Dunbar: Trauma and Orthopaedics, Queen Elizabeth Hospital Birmingham, Birmingham, GBR
- Tarek Boutefnouchet: Orthopaedics, University Hospitals Birmingham National Health Service (NHS) Foundation Trust, Birmingham, GBR
4
Sun H, Yang Z, Zhu J, Li J, Gong J, Chen L, Wang Z, Yin Y, Ren G, Cai J, Zhao L. Pseudo-medical image-guided technology based on 'CBCT-only' mode in esophageal cancer radiotherapy. Comput Methods Programs Biomed 2024; 245:108007. [PMID: 38241802] [DOI: 10.1016/j.cmpb.2024.108007]
Abstract
Purpose: To minimize the various errors introduced by image-guided radiotherapy (IGRT) in esophageal cancer treatment, this study proposes a novel technique based on a 'CBCT-only' mode of pseudo-medical image guidance.
Methods: The framework consists of two pseudo-medical image synthesis models, one for the CBCT→CT direction and one for the CT→PET direction. The former uses a dual-domain parallel deep learning model, AWM-PNet, which incorporates attention waning mechanisms; it suppresses artifacts in CBCT images in both the sinogram and spatial domains while efficiently capturing important image features and contextual information. The latter leverages tumor location and shape information provided by clinical experts and introduces a PRAM-GAN model based on a prior region aware mechanism to establish a non-linear mapping between the CT and PET image domains, enabling the generation of pseudo-PET images that meet the clinical requirements of radiotherapy.
Results: NRMSE and multi-scale SSIM (MS-SSIM) were used to evaluate the test set, with results reported as median (lower quartile, upper quartile). For the AWM-PNet model, the NRMSE and MS-SSIM values were 0.0218 (0.0143, 0.0255) and 0.9325 (0.9141, 0.9410), respectively; for the PRAM-GAN model, they were 0.0404 (0.0356, 0.0476) and 0.9154 (0.8971, 0.9294). Statistical analysis revealed significant differences (p < 0.05) between these models and the other compared methods. Dose metrics, including D98%, Dmean, and D2%, validated the accuracy of the HU values in the pseudo-CT images synthesized by AWM-PNet, and the Dice coefficient results confirmed statistically significant differences (p < 0.05) in GTV delineation between the pseudo-PET images synthesized by PRAM-GAN and the other compared methods.
Conclusion: The AWM-PNet and PRAM-GAN models can generate accurate pseudo-CT and pseudo-PET images, respectively. The pseudo-image-guided technique based on the 'CBCT-only' mode shows promising prospects for application in esophageal cancer radiotherapy.
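One of the evaluation steps above compares GTV delineations via the Dice coefficient. A minimal NumPy sketch of that metric follows, with toy binary masks standing in for GTV contours; it is an illustration, not the study's code.

```python
# Hedged sketch: Dice similarity coefficient between two binary GTV masks.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0   # 1.0 if both masks empty

# Toy 3-D masks standing in for GTV contours on reference and pseudo-PET images.
gtv_reference = np.zeros((32, 64, 64), dtype=bool)
gtv_reference[10:20, 20:40, 20:40] = True
gtv_pseudo = np.zeros_like(gtv_reference)
gtv_pseudo[11:21, 22:42, 22:42] = True
print(f"Dice = {dice_coefficient(gtv_pseudo, gtv_reference):.3f}")
```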
Affiliation(s)
- Hongfei Sun: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhi Yang: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jiarui Zhu: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jie Li: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jie Gong: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Liting Chen: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhongfei Wang: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Yutian Yin: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Ge Ren: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Lina Zhao: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
5
Sun H, Xi Q, Sun J, Fan R, Xie K, Ni X, Yang J. Research on new treatment mode of radiotherapy based on pseudo-medical images. Comput Methods Programs Biomed 2022; 221:106932. [PMID: 35671601] [DOI: 10.1016/j.cmpb.2022.106932]
Abstract
BACKGROUND AND OBJECTIVE: Multi-modal medical images, which carry multiple types of feature information, are beneficial for radiotherapy. A new radiotherapy treatment mode based on a triangle generative adversarial network (TGAN) model was proposed to synthesize pseudo-medical images across multi-modal datasets.
METHODS: CBCT, MRI, and CT images of 80 patients with nasopharyngeal carcinoma were selected. The TGAN model, built on a multi-scale discriminant network, was trained on data from the different image domains. The generator of the TGAN model draws on cGAN and CycleGAN, so a single generation network can establish the non-linear mapping relationships among multiple image domains. The discriminator uses a multi-scale discrimination network to guide the generator to synthesize pseudo-medical images that resemble real images at both shallow and deep levels. The accuracy of the pseudo-medical images was verified anatomically and dosimetrically.
RESULTS: In the three synthesis directions (CBCT → CT, CBCT → MRI, and MRI → CT), the three-fold cross-validation results showed significant differences (p < 0.05) in PSNR and SSIM between the pseudo-medical images obtained with TGAN and the real images. In the testing stage, the MAE results for TGAN in the three synthesis directions, reported as mean (standard deviation), were 68.67 (5.83), 83.14 (8.48), and 79.96 (7.59), and the NMI results were 0.8643 (0.0253), 0.8051 (0.0268), and 0.8146 (0.0267), respectively. In terms of dose verification, the differences in dose distribution between the pseudo-CT obtained by TGAN and the real CT were minimal. The H values for dose uncertainty in PGTV, PGTVnd, PTV1, and PTV2 were 42.510, 43.121, 17.054, and 7.795, respectively (p < 0.05), and the differences were statistically significant. The gamma pass rate (2%/2 mm) of the pseudo-CT obtained with the new model was 94.94% (0.73%), better than those of the three comparison models.
CONCLUSIONS: The pseudo-medical images acquired with TGAN were close to the real images in anatomy and dosimetry and have good application prospects in clinical adaptive radiotherapy.
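The NMI values reported above measure how much intensity information the synthesized and real images share. A small NumPy sketch of a histogram-based NMI estimate follows; the bin count and the particular normalization (2·I(A;B)/(H(A)+H(B))) are illustrative choices and may differ from the paper's implementation.

```python
# Hedged sketch: histogram-based normalized mutual information between a
# synthesized image and the corresponding real image.
import numpy as np

def normalized_mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 64) -> float:
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p: np.ndarray) -> float:
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    h_a, h_b, h_ab = entropy(px), entropy(py), entropy(pxy.ravel())
    return 2.0 * (h_a + h_b - h_ab) / (h_a + h_b)   # 2*I(A;B)/(H(A)+H(B))

# Toy images standing in for a real CT slice and its synthesized counterpart.
rng = np.random.default_rng(1)
real_ct = rng.normal(size=(128, 128))
pseudo_ct = real_ct + 0.1 * rng.normal(size=(128, 128))
print(f"NMI = {normalized_mutual_information(pseudo_ct, real_ct):.4f}")
```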
Affiliation(s)
- Hongfei Sun: School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China
- Qianyi Xi: The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China
- Jiawei Sun: The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China
- Rongbo Fan: School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China
- Kai Xie: The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China
- Xinye Ni: The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China
- Jianhua Yang: School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China
6
Rao F, Wu Z, Han L, Yang B, Han W, Zhu W. Delayed PET imaging using image synthesis network and nonrigid registration without additional CT scan. Med Phys 2022; 49:3233-3245. [PMID: 35218053] [DOI: 10.1002/mp.15574]
Abstract
PURPOSE: Attenuation correction is critical for positron emission tomography (PET) image reconstruction. The standard protocol for obtaining attenuation information in a clinical PET scanner is via coregistered computed tomography (CT) images, so for delayed PET imaging the CT scan is performed twice, which increases the radiation dose to the patient. In this paper, we propose a zero-extra-dose delayed PET imaging method that requires no additional CT scan.
METHODS: A deep learning-based synthesis network is designed to convert the PET data of the delayed scan into a pseudo-CT image. Nonrigid registration is then performed between this pseudo-CT image and the CT image of the first scan, warping the first-scan CT into an estimated CT image for the delayed scan. Finally, the attenuation correction for the delayed PET scan is obtained from this estimated CT image. Experiments with clinical datasets were carried out to compare the proposed method against a well-recognized GAN method, using the average peak signal-to-noise ratio (PSNR) and the mean absolute percent error (MAPE). As a subjective measure, three experienced radiologists scored the PET images reconstructed by the GAN and by the proposed method for diagnostic consistency with the ground truth images.
RESULTS: The average PSNR of the reconstructed delayed PET images in our evaluation dataset was 47.04 dB for the proposed method versus 44.41 dB for the traditional GAN method. The average MAPEs across five organ regions of interest (ROIs) were 1.59% for the proposed method and 3.32% for the traditional GAN method. The radiologists' scores were 8.08 ± 0.60 for the GAN and 9.02 ± 0.52 for the proposed method, indicating that the proposed method yields PET images more consistent with the ground truth.
CONCLUSIONS: This work proposes a novel method for CT-less delayed PET imaging based on an image synthesis network and nonrigid image registration. The delayed PET images reconstructed with the proposed method have high image quality without artifacts and are quantitatively more accurate than those from the traditional GAN method.
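The pipeline described above hinges on a nonrigid registration between the synthesized pseudo-CT and the first-scan CT. The following is a minimal sketch of that step using SimpleITK, with illustrative parameter choices (mesh size, metric, optimizer settings) that are assumptions rather than the authors' configuration.

```python
# Hedged sketch: B-spline (nonrigid) registration warping the first-scan CT onto
# the delayed-scan geometry, using the pseudo-CT as the fixed image.
import SimpleITK as sitk

def warp_first_ct_to_delayed(pseudo_ct_delayed: sitk.Image, ct_first_scan: sitk.Image) -> sitk.Image:
    fixed = sitk.Cast(pseudo_ct_delayed, sitk.sitkFloat32)
    moving = sitk.Cast(ct_first_scan, sitk.sitkFloat32)

    # Coarse B-spline control-point grid; finer grids capture more deformation.
    initial_tx = sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8, 8])

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(initial_tx, inPlace=False)

    final_tx = reg.Execute(fixed, moving)
    # Resample the first-scan CT onto the delayed geometry (-1000 HU fill = air),
    # yielding the estimated CT used for attenuation correction.
    return sitk.Resample(moving, fixed, final_tx, sitk.sitkLinear, -1000.0)
```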
Affiliation(s)
- Fan Rao: Research Center for Healthcare Data Science, Zhejiang Lab, China
- Zhuoxuan Wu: Department of Medical Oncology, Sir Run Run Shaw Hospital, College of Medicine, Zhejiang University, China
- Lu Han: Research Center for Healthcare Data Science, Zhejiang Lab, China
- Bao Yang: Research Center for Healthcare Data Science, Zhejiang Lab, China
- Weidong Han: Department of Medical Oncology, Sir Run Run Shaw Hospital, College of Medicine, Zhejiang University, China
- Wentao Zhu: Research Center for Healthcare Data Science, Zhejiang Lab, China