1. Rossi M, Belotti G, Mainardi L, Baroni G, Cerveri P. Feasibility of proton dosimetry overriding planning CT with daily CBCT elaborated through generative artificial intelligence tools. Comput Assist Surg (Abingdon) 2024; 29:2327981. [PMID: 38468391] [DOI: 10.1080/24699322.2024.2327981]
Abstract
Radiotherapy commonly utilizes cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is considered safe for patients, making it suitable for the delivery of fractional doses. However, limitations such as a narrow field of view, beam hardening, scattered radiation artifacts, and variability in pixel intensity hinder the direct use of raw CBCT for dose recalculation during treatment. To address this issue, reliable correction techniques are necessary to remove artifacts and remap pixel intensity into Hounsfield Unit (HU) values. This study proposes a deep-learning framework for calibrating CBCT images acquired with narrow field of view (FOV) systems and demonstrates its potential use in proton treatment planning updates. A cycle-consistent generative adversarial network (cGAN) processes raw CBCT to reduce scatter and remap HU. Monte Carlo simulation is used to generate CBCT scans, making it possible to focus solely on the algorithm's ability to reduce artifacts and cupping effects without considering intra-patient longitudinal variability, and producing a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To showcase the viability of the approach with real-world data, experiments were also conducted using real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. The simulated CBCT calibration led to a difference in proton dosimetry of less than 2% compared to the planning CT. The potential toxicity effect on the organs at risk decreased from about 50% (uncalibrated) to about 2% (calibrated). The gamma pass rate at 3%/2 mm improved by about 37% after calibration (53.78% vs 90.26%). Real data confirmed this, with slightly lower performance for the same criterion (65.36% vs 87.20%). These results suggest that generative artificial intelligence brings the use of narrow-FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.
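The 3%/2 mm gamma criterion used above combines a dose-difference tolerance with a distance-to-agreement (DTA) tolerance. The following is a minimal 1D sketch of a global gamma analysis, for illustration only; clinical tools operate on interpolated 3D dose grids.

```python
import math

def gamma_pass_rate(ref, ev, spacing_mm, dose_pct=3.0, dta_mm=2.0):
    """Simplified 1D global gamma analysis.

    ref, ev: reference and evaluated dose profiles on the same grid.
    spacing_mm: grid spacing in mm.
    dose_pct: dose-difference tolerance as a percentage of the global
              maximum reference dose (global normalisation).
    dta_mm: distance-to-agreement tolerance in mm.
    Returns the percentage of reference points with gamma <= 1.
    """
    dd = dose_pct / 100.0 * max(ref)  # global dose tolerance
    passed = 0
    for i, r in enumerate(ref):
        best = math.inf
        for j, e in enumerate(ev):
            dist = (i - j) * spacing_mm
            g2 = (dist / dta_mm) ** 2 + ((e - r) / dd) ** 2
            best = min(best, g2)
        if best <= 1.0:  # gamma^2 <= 1  <=>  gamma <= 1
            passed += 1
    return 100.0 * passed / len(ref)
```

An identical pair of profiles passes at 100%; a spatially shifted profile loses points wherever the nearest matching dose lies farther away than the DTA tolerance.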
Affiliation(s)
- Matteo Rossi: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy; Laboratory of Innovation in Sleep Medicine, Istituto Auxologico Italiano, Milan, Italy
- Gabriele Belotti: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Luca Mainardi: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Guido Baroni: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy; Bioengineering Unit, Clinical Department, National Center for Oncological Hadrontherapy (CNAO), Pavia, Italy
- Pietro Cerveri: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy; Laboratory of Innovation in Sleep Medicine, Istituto Auxologico Italiano, Milan, Italy
2. Kaushik S, Ödén J, Sharma DS, Fredriksson A, Toma-Dasu I. Generation and evaluation of anatomy-preserving virtual CT for online adaptive proton therapy. Med Phys 2024; 51:1536-1546. [PMID: 38230803] [DOI: 10.1002/mp.16941]
Abstract
BACKGROUND Daily CTs generated by CBCT correction are required for daily replanning in online adaptive proton therapy (APT) to effectively deal with inter-fractional changes. Among currently available methods, the suitability of a daily CT generation method for proton dose calculation also depends on the anatomical site. PURPOSE We propose an anatomy-preserving virtual CT (APvCT) method as a hybrid CBCT-correction method that is especially suitable for large anatomical deformations. The accuracy of the hybrid method was assessed by comparison with the corrected CBCT (cCBCT) and virtual CT (vCT) methods in the context of online APT. METHODS Seventy-one daily CBCTs of four prostate cancer patients treated with intensity-modulated proton therapy (IMPT) were converted to daily CTs using cCBCT, vCT, and the newly proposed APvCT method. In APvCT, the planning CT (pCT) was mapped to the CBCT geometry using deformable image registration with boundary conditions on controlling regions of interest (ROIs) created with deep-learning segmentation on cCBCT. The relative frequency distributions (RFDs) of HU, mass density, and stopping power ratio (SPR) values were assessed and compared with the pCT. The ROIs in the APvCT and vCT were compared with cCBCT in terms of Dice similarity coefficient (DSC) and mean distance-to-agreement (mDTA). For each patient, a robustly optimized IMPT plan was created on the pCT and subsequent daily adaptive plans on the daily CTs. For dose distribution comparison on the same anatomy, the daily adaptive plans on cCBCT and vCT were recalculated on the corresponding APvCT. The dose distributions were compared in terms of isodose volumes and 3D global gamma-index passing rate (GPR) at the γ(2%, 2 mm) criterion. RESULTS For all patients, no noticeable difference in RFDs was observed among APvCT, vCT, and pCT, whereas cCBCT showed a noticeable difference. The minimum DSC value was 0.96 for contours in APvCT and 0.39 for contours in vCT. The average mDTA for APvCT was 0.01 cm for the clinical target volume and ≤0.01 cm for organs at risk, which increased to 0.18 cm and ≤0.52 cm for vCT. The mean GPR was 90.9%, 64.5%, and 67.0% for APvCT versus cCBCT, vCT versus cCBCT, and APvCT versus vCT, respectively. When recalculated on APvCT, the adaptive cCBCT and vCT plans resulted in mean GPRs of 89.5 ± 5.1% and 65.9 ± 19.1%, respectively. The mean DSC values for the 80.0%, 90.0%, 95.0%, 98.0%, and 100.0% isodose volumes were 0.97, 0.97, 0.97, 0.95, and 0.91 for recalculated cCBCT plans, and 0.89, 0.88, 0.87, 0.85, and 0.81 for recalculated vCT plans. The Hausdorff distance for the 100.0% isodose volume in some recalculated cCBCT plans on APvCT exceeded 1.00 cm. CONCLUSIONS APvCT contours showed good agreement with the reference contours of cCBCT, which indicates anatomy preservation in APvCT. A vCT with erroneous anatomy can result in an incorrect adaptive plan. Further, the slightly lower GPR between the APvCT- and cCBCT-based adaptive plans can be explained by the difference between the cCBCT's SPR RFD and that of the pCT.
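The Dice similarity coefficient used above to compare ROIs and isodose volumes is a simple overlap measure between two segmentations. The sketch below is illustrative only, operating on flat binary voxel masks rather than full 3D contour structures.

```python
def dice_similarity(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as equal-length sequences of 0/1 voxel labels.
    DSC = 2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * intersection / total if total else 1.0
```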
Affiliation(s)
- Suryakant Kaushik: RaySearch Laboratories AB (Publ), Stockholm, Sweden; Department of Physics, Medical Radiation Physics, Stockholm University, Stockholm, Sweden; Department of Oncology and Pathology, Medical Radiation Physics, Karolinska Institutet, Stockholm, Sweden
- Jakob Ödén: RaySearch Laboratories AB (Publ), Stockholm, Sweden
- Iuliana Toma-Dasu: Department of Physics, Medical Radiation Physics, Stockholm University, Stockholm, Sweden; Department of Oncology and Pathology, Medical Radiation Physics, Karolinska Institutet, Stockholm, Sweden
3. Kehayias CE, Yan Y, Bontempi D, Quirk S, Bitterman DS, Bredfeldt JS, Aerts HJWL, Mak RH, Guthier CV. Prospective deployment of an automated implementation solution for artificial intelligence translation to clinical radiation oncology. Front Oncol 2024; 13:1305511. [PMID: 38239639] [PMCID: PMC10794768] [DOI: 10.3389/fonc.2023.1305511]
Abstract
Introduction Artificial intelligence (AI)-based technologies offer countless potential solutions in radiation oncology, yet translation of AI-assisted software tools to actual clinical environments remains largely unrealized. We present the Deep Learning On-Demand Assistant (DL-ODA), a fully automated, end-to-end clinical platform that enables AI interventions for any disease site, featuring an automated model-training pipeline, auto-segmentations, and QA reporting. Materials and methods We developed, tested, and prospectively deployed the DL-ODA system at a large university-affiliated hospital center. Medical professionals activate the DL-ODA via two pathways: (1) On-Demand, used for immediate AI decision support for a patient-specific treatment plan, and (2) Ambient, in which QA is provided for all daily radiotherapy (RT) plans by comparing DL segmentations with manual delineations and calculating the dosimetric impact. To demonstrate the implementation of a new anatomy segmentation, we used the model-training pipeline to generate a breast segmentation model based on a large clinical dataset. Additionally, the contour QA functionality of existing models was assessed using a retrospective cohort of 3,399 lung and 885 spine RT cases. Ambient QA was performed for various disease sites, including spine RT and heart for dosimetric sparing. Results Successful training of the breast model was completed in less than a day and resulted in clinically viable whole-breast contours. For the retrospective analysis, we evaluated manual-versus-AI similarity for the ten most common structures. The DL-ODA detected high similarities in heart, lung, liver, and kidney delineations but lower similarities for the esophagus, trachea, stomach, and small bowel, due largely to incomplete manual contouring. The deployed Ambient QAs for heart and spine sites have prospectively processed over 2,500 cases over 9 months and 230 cases over 5 months, respectively, automatically alerting the RT personnel. Discussion The DL-ODA's capabilities in providing universal AI interventions were demonstrated for On-Demand contour QA, DL segmentations, and automated model training, and confirmed successful integration of the system into a large academic radiotherapy department. The novelty of deploying the DL-ODA as a multi-modal, fully automated, end-to-end AI clinical implementation solution marks a significant step towards a generalizable framework that leverages AI to improve the efficiency and reliability of RT systems.
Affiliation(s)
- Christopher E. Kehayias: Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Yujie Yan: Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Dennis Bontempi: Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States; Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, Netherlands
- Sarah Quirk: Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Danielle S. Bitterman: Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Jeremy S. Bredfeldt: Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Hugo J. W. L. Aerts: Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States; Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, Netherlands
- Raymond H. Mak: Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Christian V. Guthier: Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
4. Lv T, Xie C, Zhang Y, Liu Y, Zhang G, Qu B, Zhao W, Xu S. A qualitative study of improving megavoltage computed tomography image quality and maintaining dose accuracy using cycleGAN-based image synthesis. Med Phys 2024; 51:394-406. [PMID: 37475544] [DOI: 10.1002/mp.16633]
Abstract
BACKGROUND Due to inconsistent positioning, tumor shrinkage, and weight loss during fractionated treatment, the initial plan may no longer be appropriate after a few fractions, and the patient will then require adaptive helical tomotherapy (HT). Patients are scanned with megavoltage computed tomography (MVCT) before each fraction, which is utilized for patient setup and provides information for dose reconstruction. However, the low contrast and high noise of MVCT make it challenging to delineate treatment targets and organs at risk (OARs). PURPOSE This study developed a deep-learning-based approach to generate high-quality synthetic kilovoltage computed tomography (skVCT) from MVCT that meets clinical dose requirements. METHODS Data from 41 head and neck cancer patients were collected; 25 (2995 slices) were used for training and 16 (1898 slices) for testing. A cycle generative adversarial network (cycleGAN) based on attention gates and residual blocks was used to generate MVCT-based skVCT. For the 16 test patients, kVCT-based plans were transferred to the skVCT images and to electron density profile-corrected MVCT images to recalculate the dose. Quantitative indices and clinically relevant dosimetric metrics, including the mean absolute error (MAE), structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), gamma passing rates, and dose-volume histogram (DVH) parameters (Dmax, Dmean, Dmin), were used to assess the skVCT images. RESULTS The MAE, PSNR, and SSIM of MVCT were 109.6 ± 12.3 HU, 27.5 ± 1.1 dB, and 91.9% ± 1.7%, respectively, while those of skVCT were 60.6 ± 9.0 HU, 34.0 ± 1.9 dB, and 96.5% ± 1.1%. The image quality and contrast were enhanced, and the noise was reduced. The gamma passing rates improved from 98.31% ± 1.11% to 99.71% ± 0.20% (2 mm/2%) and from 99.77% ± 0.18% to 99.98% ± 0.02% (3 mm/3%). No significant differences (p > 0.05) were observed in DVH parameters between kVCT and skVCT. CONCLUSION With training on a small dataset (2995 slices), the model successfully generated skVCT with improved image quality, and the dose calculation accuracy was comparable to that of MVCT. MVCT-based skVCT can increase treatment accuracy and offers the possibility of implementing adaptive radiotherapy.
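The MAE and PSNR image-quality metrics reported above can be computed directly from paired voxel values; SSIM is more involved and is usually taken from an image-processing library. The sketch below is illustrative only, and the data_range used to normalise PSNR is an assumption that varies between studies.

```python
import math

def mae(img_ref, img_test):
    """Mean absolute error in HU between two images (flat voxel lists)."""
    return sum(abs(a - b) for a, b in zip(img_ref, img_test)) / len(img_ref)

def psnr(img_ref, img_test, data_range=4096.0):
    """Peak signal-to-noise ratio in dB.

    data_range is an assumed HU span used for normalisation;
    PSNR = 20 * log10(data_range / RMSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(img_ref, img_test)) / len(img_ref)
    if mse == 0.0:
        return math.inf  # identical images
    return 20.0 * math.log10(data_range / math.sqrt(mse))
```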
Affiliation(s)
- Tie Lv: Beihang University, School of Physics, Beijing, China; The First Medical Center of PLA General Hospital, Department of Radiation Oncology, Beijing, China
- Chuanbin Xie: Beihang University, School of Physics, Beijing, China; The First Medical Center of PLA General Hospital, Department of Radiation Oncology, Beijing, China
- Yihang Zhang: Beihang University, School of Physics, Beijing, China; The First Medical Center of PLA General Hospital, Department of Radiation Oncology, Beijing, China
- Yaoying Liu: Beihang University, School of Physics, Beijing, China; The First Medical Center of PLA General Hospital, Department of Radiation Oncology, Beijing, China
- Gaolong Zhang: Beihang University, School of Physics, Beijing, China
- Baolin Qu: The First Medical Center of PLA General Hospital, Department of Radiation Oncology, Beijing, China
- Wei Zhao: Beihang University, School of Physics, Beijing, China
- Shouping Xu: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
5. Aouadi S, Yoganathan SA, Torfeh T, Paloor S, Caparrotti P, Hammoud R, Al-Hammadi N. Generation of synthetic CT from CBCT using deep learning approaches for head and neck cancer patients. Biomed Phys Eng Express 2023; 9:055020. [PMID: 37489854] [DOI: 10.1088/2057-1976/acea27]
Abstract
Purpose. To create a synthetic CT (sCT) from daily CBCT using either a deep residual U-Net (DRUnet) or a conditional generative adversarial network (cGAN) for adaptive radiotherapy planning (ART). Methods. First-fraction CBCT and planning CT (pCT) were collected from 93 head and neck cancer patients who underwent external beam radiotherapy. The dataset was divided into training, validation, and test sets of 58, 10, and 25 patients, respectively. Three methods were used to generate sCT: (1) a nonlocal means patch-based method was modified to include multiscale patches, defining the multiscale patch-based method (MPBM); (2) an encoder-decoder 2D Unet with imbricated deep residual units was implemented; (3) DRUnet was integrated into the generator part of the cGAN, whereas a convolutional PatchGAN classifier was used as the discriminator. The accuracy of the sCT was evaluated geometrically using the mean absolute error (MAE). Clinical volumetric modulated arc therapy (VMAT) plans were copied from the pCT to the registered CBCT and sCT, and dosimetric analysis was performed by comparing dose-volume histogram (DVH) parameters of planning target volumes (PTVs) and organs at risk (OARs). Furthermore, 3D gamma analysis (2%/2 mm, global) between the dose on the sCT or CBCT and that on the pCT was performed. Results. The average MAE calculated between pCT and CBCT was 180.82 ± 27.37 HU. Overall, all approaches significantly reduced the uncertainties in CBCT. Deep-learning approaches outperformed patch-based methods, with MAE = 67.88 ± 8.39 HU (DRUnet) and MAE = 72.52 ± 8.43 HU (cGAN) compared to MAE = 90.69 ± 14.3 HU (MPBM). The percentage DVH metric deviations were below 0.55% for PTVs and 1.17% for OARs using DRUnet. The average gamma pass rate was 99.45 ± 1.86% for sCT generated using DRUnet. Conclusion. DL approaches outperformed MPBM. Specifically, DRUnet could be used to generate sCT with accurate intensities and a realistic description of patient anatomy, which could be beneficial for CBCT-based ART.
Affiliation(s)
- Souha Aouadi: Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- S A Yoganathan: Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Tarraf Torfeh: Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Satheesh Paloor: Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Palmira Caparrotti: Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Rabih Hammoud: Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Noora Al-Hammadi: Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
6. Park CS, Kang SR, Kim JE, Huh KH, Lee SS, Heo MS, Han JJ, Yi WJ. Validation of bone mineral density measurement using quantitative CBCT image based on deep learning. Sci Rep 2023; 13:11921. [PMID: 37488135] [PMCID: PMC10366160] [DOI: 10.1038/s41598-023-38943-8]
Abstract
Bone mineral density (BMD) measurement is a direct method of estimating human bone mass for diagnosing osteoporosis, and is performed to objectively evaluate bone quality before implant surgery in dental clinics. The objective of this study was to validate the accuracy and reliability of BMD measurements made using quantitative cone-beam CT (CBCT) images based on deep learning by applying the method to clinical data from actual patients. Datasets containing 7500 pairs of CT and CBCT axial slice images from 30 patients were used to train a previously developed deep-learning model (QCBCT-NET). We selected 36 volumes of interest in the CBCT images for each patient, in the bone regions of potential implant sites on the maxilla and mandible. We compared the BMDs shown in the quantitative CBCT (QCBCT) images with those in the conventional CBCT (CAL_CBCT) images at the various bone sites of interest across the entire field of view (FOV) using the performance metrics of MAE, RMSE, MAPE (mean absolute percentage error), R2 (coefficient of determination), and SEE (standard error of estimation). Compared with the ground-truth (QCT) images, the accuracy of the BMD measurements from the QCBCT images showed an RMSE of 83.41 mg/cm3, MAE of 67.94 mg/cm3, and MAPE of 8.32% across all the bone sites of interest, whereas for the CAL_CBCT images those values were 491.15 mg/cm3, 460.52 mg/cm3, and 54.29%, respectively. The linear regression between the QCBCT and QCT images showed a slope of 1.00 and an R2 of 0.85, whereas for the CAL_CBCT images those values were 0.32 and 0.24, respectively. The overall SEE between the QCBCT and QCT images was 81.06 mg/cm3, whereas the SEE for the CAL_CBCT images was 109.32 mg/cm3. The QCBCT images thus showed better accuracy, linearity, and uniformity than the CAL_CBCT images across the entire FOV. The BMD measurements from the quantitative CBCT images showed high accuracy, linearity, and uniformity regardless of the relative geometric positions of the bone in the potential implant site. When applied to actual patient CBCT images, the CBCT-based quantitative BMD measurement based on deep learning demonstrated high accuracy and reliability across the entire FOV.
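The slope, R2, and SEE reported above come from an ordinary least-squares fit of the measured BMD values against the ground-truth values. The following is a minimal, illustrative sketch of those three statistics.

```python
import math

def linear_fit_stats(x, y):
    """Ordinary least-squares fit y ~ slope*x + intercept.

    Returns (slope, r_squared, see), where see is the standard error of
    estimation, sqrt(SS_res / (n - 2)), used to gauge calibration quality.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r_squared = 1.0 - ss_res / ss_tot
    see = math.sqrt(ss_res / (n - 2))
    return slope, r_squared, see
```

A slope near 1.0 with high R2 and small SEE, as reported for QCBCT, indicates that measured values track the ground truth closely and uniformly.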
Grants
- Project Number: 1711174552, KMDF_PR_20200901_0147 Korea Medical Device Development Fund Grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety)
- Project Number: 1711174543, KMDF_PR_20200901_0011 Korea Medical Device Development Fund Grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety)
Affiliation(s)
- Chan-Soo Park: Department of Oral and Maxillofacial Radiology, School of Dentistry, Seoul National University, Seoul, South Korea
- Se-Ryong Kang: Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Jo-Eun Kim: Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Kyung-Hoe Huh: Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Sam-Sun Lee: Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Min-Suk Heo: Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Jeong-Joon Han: Department of Oral and Maxillofacial Surgery, School of Dentistry, Seoul National University, Seoul, South Korea
- Won-Jin Yi: Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea; Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
7. Yamaguchi N, Kosaka Y, Haga A, Sata M, Kusunose K. Artificial intelligence-assisted interpretation of systolic function by echocardiogram. Open Heart 2023; 10:e002287. [PMID: 37460267] [DOI: 10.1136/openhrt-2023-002287]
Abstract
OBJECTIVE Precise and reliable echocardiographic assessment of left ventricular ejection fraction (LVEF) is needed for clinical decision-making. Recently, artificial intelligence (AI) models have been developed to estimate LVEF accurately. The aim of this study was to evaluate whether an AI model could estimate an expert read of LVEF and reduce the interinstitutional variability of level 1 readers when the AI-LVEF is displayed on the echocardiographic screen. METHODS This prospective, multicentre echocardiographic study was conducted by five cardiologists of level 1 echocardiographic skill (the minimum level of competency to interpret images) from different hospitals. In protocol 1, visual LVEFs for the 48 cases were measured without input from the AI-LVEF. In protocol 2, the 48 cases were again shown to all readers with inclusion of AI-LVEF data. To assess the concordance and accuracy with or without AI-LVEF, each visual LVEF measurement was compared with the average of the estimates by five expert readers as a reference. RESULTS A good correlation was found between AI-LVEF and the reference LVEF (r=0.90, p<0.001) from the expert readers. For LVEF classification, the area under the curve was 0.95 for heart failure with preserved EF and 0.96 for heart failure with reduced EF. For precision, the SD was reduced from 6.1±2.3 to 2.5±0.9 (p<0.001) with AI-LVEF. For accuracy, the root-mean-squared error improved from 7.5±3.1 to 5.6±3.2 (p=0.004) with AI-LVEF. CONCLUSIONS AI can assist with the interpretation of systolic function on an echocardiogram for level 1 readers from different institutions.
Affiliation(s)
- Natsumi Yamaguchi: Department of Cardiovascular Medicine, Tokushima University Hospital, Tokushima, Japan
- Yoshitaka Kosaka: Department of Cardiovascular Medicine, Tokushima University Hospital, Tokushima, Japan
- Akihiko Haga: Graduate School of Biomedical Sciences, Tokushima University, Tokushima, Japan
- Masataka Sata: Department of Cardiovascular Medicine, Tokushima University Hospital, Tokushima, Japan
- Kenya Kusunose: Department of Cardiovascular Medicine, Nephrology, and Neurology, University of the Ryukyus, Okinawa, Japan
8. Szmul A, Taylor S, Lim P, Cantwell J, Moreira I, Zhang Y, D’Souza D, Moinuddin S, Gaze MN, Gains J, Veiga C. Deep learning based synthetic CT from cone beam CT generation for abdominal paediatric radiotherapy. Phys Med Biol 2023; 68:105006. [PMID: 36996837] [PMCID: PMC10160738] [DOI: 10.1088/1361-6560/acc921]
Abstract
Objective. Adaptive radiotherapy workflows require images with the quality of computed tomography (CT) for re-calculation and re-optimisation of radiation doses. In this work we aim to improve the quality of on-board cone beam CT (CBCT) images for dose calculation using deep learning. Approach. We propose a novel framework for CBCT-to-CT synthesis using cycle-consistent Generative Adversarial Networks (cycleGANs). The framework was tailored for paediatric abdominal patients, a challenging application due to the inter-fractional variability in bowel filling and small patient numbers. We introduced to the networks the concept of global-residuals-only learning and modified the cycleGAN loss function to explicitly promote structural consistency between source and synthetic images. Finally, to compensate for the anatomical variability and address the difficulties in collecting large datasets in the paediatric population, we applied a smart 2D slice selection based on the common field-of-view (abdomen) to our imaging dataset. This acted as a weakly paired data approach that allowed us to take advantage of scans from patients treated for a variety of malignancies (thoracic-abdominal-pelvic) for training purposes. We first optimised the proposed framework and benchmarked its performance on a development dataset. Later, a comprehensive quantitative evaluation was performed on an unseen dataset, which included calculating global image-similarity metrics, segmentation-based measures, and proton therapy-specific metrics. Main results. We found improved performance for our proposed method, compared to a baseline cycleGAN implementation, on image-similarity metrics such as the Mean Absolute Error calculated for a matched virtual CT (55.0 ± 16.6 HU proposed versus 58.9 ± 16.8 HU baseline). There was also a higher level of structural agreement for gastrointestinal gas between source and synthetic images, measured using the Dice similarity coefficient (0.872 ± 0.053 proposed versus 0.846 ± 0.052 baseline). Differences found in water-equivalent thickness metrics were also smaller for our method (3.3 ± 2.4% proposed versus 3.7 ± 2.8% baseline). Significance. Our findings indicate that our innovations to the cycleGAN framework improved the quality and structural consistency of the synthetic CTs generated.
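The water-equivalent thickness metric mentioned above is proton-specific: the geometric path through the patient is rescaled by the relative stopping power of each voxel it crosses. A minimal, illustrative sketch (real systems trace rays through a full 3D stopping-power-ratio volume):

```python
def water_equivalent_thickness(spr_along_ray, step_mm):
    """Water-equivalent thickness (WET) of a ray through the patient.

    spr_along_ray: relative proton stopping power ratio (SPR, water = 1.0)
                   sampled at equal steps along the ray.
    step_mm: geometric length of each step in mm.
    Returns the WET in mm: each step weighted by its voxel's SPR.
    """
    return sum(spr * step_mm for spr in spr_along_ray)
```

Small differences in the synthetic CT's HU (and hence SPR) values accumulate along the ray, which is why WET deviation is a sensitive proton-therapy quality metric.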
Affiliation(s)
- Adam Szmul: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Sabrina Taylor: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Pei Lim: Department of Oncology, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Jessica Cantwell: Radiotherapy, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Isabel Moreira: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Ying Zhang: Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Derek D’Souza: Radiotherapy Physics Services, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Syed Moinuddin: Radiotherapy, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Mark N. Gaze: Department of Oncology, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Jennifer Gains: Department of Oncology, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Catarina Veiga: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
Collapse
9
Zhang X, Sisniega A, Zbijewski WB, Lee J, Jones CK, Wu P, Han R, Uneri A, Vagdargi P, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Combining physics-based models with deep learning image synthesis and uncertainty in intraoperative cone-beam CT of the brain. Med Phys 2023; 50:2607-2624. [PMID: 36906915] [PMCID: PMC10175241] [DOI: 10.1002/mp.16351] [Received: 11/21/2022] [Revised: 02/03/2023] [Accepted: 02/27/2023] [Indexed: 03/13/2023]
Abstract
BACKGROUND Image-guided neurosurgery requires high localization and registration accuracy to enable effective treatment and avoid complications. However, accurate neuronavigation based on preoperative magnetic resonance (MR) or computed tomography (CT) images is challenged by brain deformation occurring during the surgical intervention. PURPOSE To facilitate intraoperative visualization of brain tissues and deformable registration with preoperative images, a 3D deep learning (DL) reconstruction framework (termed DL-Recon) was proposed for improved intraoperative cone-beam CT (CBCT) image quality. METHODS The DL-Recon framework combines physics-based models with deep learning CT synthesis and leverages uncertainty information to promote robustness to unseen features. A 3D generative adversarial network (GAN) with a conditional loss function modulated by aleatoric uncertainty was developed for CBCT-to-CT synthesis. Epistemic uncertainty of the synthesis model was estimated via Monte Carlo (MC) dropout. Using spatially varying weights derived from epistemic uncertainty, the DL-Recon image combines the synthetic CT with an artifact-corrected filtered back-projection (FBP) reconstruction. In regions of high epistemic uncertainty, DL-Recon includes greater contribution from the FBP image. Twenty paired real CT and simulated CBCT images of the head were used for network training and validation, and experiments evaluated the performance of DL-Recon on CBCT images containing simulated and real brain lesions not present in the training data. Performance among learning- and physics-based methods was quantified in terms of structural similarity (SSIM) of the resulting image to diagnostic CT and Dice similarity metric (DSC) in lesion segmentation compared to ground truth. A pilot study was conducted involving seven subjects with CBCT images acquired during neurosurgery to assess the feasibility of DL-Recon in clinical data. 
RESULTS CBCT images reconstructed via FBP with physics-based corrections exhibited the usual challenges to soft-tissue contrast resolution due to image non-uniformity, noise, and residual artifacts. GAN synthesis improved image uniformity and soft-tissue visibility but was subject to error in the shape and contrast of simulated lesions that were unseen in training. Incorporation of aleatoric uncertainty in synthesis loss improved estimation of epistemic uncertainty, with variable brain structures and unseen lesions exhibiting higher epistemic uncertainty. The DL-Recon approach mitigated synthesis errors while maintaining improvement in image quality, yielding 15%-22% increase in SSIM (image appearance compared to diagnostic CT) and up to 25% increase in DSC in lesion segmentation compared to FBP. Clear gains in visual image quality were also observed in real brain lesions and in clinical CBCT images. CONCLUSIONS DL-Recon leveraged uncertainty estimation to combine the strengths of DL and physics-based reconstruction and demonstrated substantial improvements in the accuracy and quality of intraoperative CBCT. The improved soft-tissue contrast resolution could facilitate visualization of brain structures and support deformable registration with preoperative images, further extending the utility of intraoperative CBCT in image-guided neurosurgery.
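The core DL-Recon idea above, blending the GAN-synthesized CT with the physics-based FBP reconstruction using spatially varying weights derived from epistemic uncertainty, can be sketched as follows (an illustrative voxel-wise linear blend; the paper derives its weights from MC-dropout uncertainty, and the exact weighting function is not reproduced here):

```python
import numpy as np

def dl_recon_fuse(synthetic_ct, fbp_recon, epistemic_uncertainty):
    """Fuse a synthetic CT with an artifact-corrected FBP reconstruction.

    Voxels with high epistemic uncertainty (unseen features such as lesions)
    draw more from the physics-based FBP image; low-uncertainty voxels draw
    more from the GAN synthesis. Illustrative weighting only.
    """
    u = np.clip(epistemic_uncertainty, 0.0, 1.0)  # normalized uncertainty map
    return (1.0 - u) * synthetic_ct + u * fbp_recon
```

With u = 0 everywhere the output is the pure synthetic CT; with u = 1 it falls back entirely to the FBP reconstruction.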
Affiliation(s)
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Wojciech B. Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Junghoon Lee
- Department of Radiation Oncology, Johns Hopkins University, Baltimore, MD 21218, USA
- Craig K. Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Pengwei Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Runze Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Prasad Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Mark Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- William S. Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- Jeffrey H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
10
Jihong C, Kerun Q, Kaiqiang C, Xiuchun Z, Yimin Z, Penggang B. CBCT-based synthetic CT generated using CycleGAN with HU correction for adaptive radiotherapy of nasopharyngeal carcinoma. Sci Rep 2023; 13:6624. [PMID: 37095147] [PMCID: PMC10125979] [DOI: 10.1038/s41598-023-33472-w] [Received: 01/16/2023] [Accepted: 04/13/2023] [Indexed: 04/26/2023]
Abstract
This study aims to utilize a hybrid approach of phantom correction and deep learning to generate synthesized CT (sCT) images from cone-beam CT (CBCT) images for nasopharyngeal carcinoma (NPC). 52 paired CBCT/CT image sets of NPC patients were used: 41 for model training and 11 for validation. Hounsfield Units (HU) of the CBCT images were calibrated with a commercially available CIRS phantom. The original CBCT and the corrected CBCT (CBCT_cor) were then trained separately with the same cycle generative adversarial network (CycleGAN) to generate sCT1 and sCT2. The mean error and mean absolute error (MAE) were used to quantify image quality. For validation, the contours and treatment plans in the CT images were transferred to the original CBCT, CBCT_cor, sCT1 and sCT2 for dosimetric comparison. Dose distribution, dosimetric parameters and the 3D gamma passing rate were analyzed. Compared with rigidly registered CT (RCT), the MAE of CBCT, CBCT_cor, sCT1 and sCT2 was 346.11 ± 13.58 HU, 145.95 ± 17.64 HU, 105.62 ± 16.08 HU and 83.51 ± 7.71 HU, respectively. Moreover, the average dosimetric parameter differences for CBCT_cor, sCT1 and sCT2 were 2.7% ± 1.4%, 1.2% ± 1.0% and 0.6% ± 0.6%, respectively. Using the dose distribution of the RCT images as reference, the 3D gamma passing rate of the hybrid method was significantly better than that of the other methods. These results confirm the effectiveness of CBCT-based sCT generated using CycleGAN with HU correction for adaptive radiotherapy of nasopharyngeal carcinoma. The image quality and dose accuracy of sCT2 outperformed those of the plain CycleGAN method. This finding has great significance for the clinical application of adaptive radiotherapy for NPC.
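The MAE-in-HU figures reported throughout these entries (e.g. 346.11 HU for raw CBCT down to 83.51 HU for the hybrid sCT) follow the standard definition, sketched here as an illustrative helper (the optional body mask is an assumption, not described in the abstract):

```python
import numpy as np

def mean_absolute_error_hu(sct, ref_ct, mask=None):
    """Mean absolute error in HU between a synthetic CT and a registered reference CT."""
    diff = np.abs(sct.astype(float) - ref_ct.astype(float))
    if mask is not None:
        diff = diff[mask.astype(bool)]  # e.g. restrict to the patient body contour
    return float(diff.mean())
```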
Affiliation(s)
- Chen Jihong
- Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, 350014, Fujian, China
- Quan Kerun
- Department of Radiation Oncology, Xiangtan City Central Hospital, Xiangtan, 411100, Hunan, China
- Chen Kaiqiang
- Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, 350014, Fujian, China
- Zhang Xiuchun
- Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, 350014, Fujian, China
- Zhou Yimin
- School of Nuclear Science and Technology, University of South China, Hengyang, 421001, China
- Bai Penggang
- Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, 350014, Fujian, China
11
Joseph J, Biji I, Babu N, Pournami PN, Jayaraj PB, Puzhakkal N, Sabu C, Patel V. Fan beam CT image synthesis from cone beam CT image using nested residual UNet based conditional generative adversarial network. Phys Eng Sci Med 2023; 46:703-717. [PMID: 36943626] [DOI: 10.1007/s13246-023-01244-5] [Received: 08/25/2022] [Accepted: 03/09/2023] [Indexed: 03/23/2023]
Abstract
A radiotherapy technique called Image-Guided Radiation Therapy adopts frequent imaging throughout a treatment session. Fan Beam Computed Tomography (FBCT)-based planning followed by Cone Beam Computed Tomography (CBCT)-based radiation delivery has drastically improved treatment accuracy. Further gains in radiation exposure and cost could be achieved if FBCT were replaced with CBCT. This paper proposes a Conditional Generative Adversarial Network (CGAN) for CBCT-to-FBCT synthesis. Specifically, a new architecture called Nested Residual UNet (NR-UNet) is introduced as the generator of the CGAN. A composite loss function, which comprises adversarial loss, Mean Squared Error (MSE), and Gradient Difference Loss (GDL), is used with the generator. The CGAN utilises the inter-slice dependency in the input by taking three consecutive CBCT slices to generate an FBCT slice. The model is trained using Head-and-Neck (H&N) FBCT-CBCT images of 53 cancer patients. The synthetic images exhibited a Peak Signal-to-Noise Ratio of 34.04±0.93 dB, a Structural Similarity Index Measure of 0.9751±0.001 and a Mean Absolute Error of 14.81±4.70 HU. On average, the proposed model achieves a Contrast-to-Noise Ratio four times higher than that of the input CBCT images. The model also minimised the MSE and alleviated blurriness. Compared to the CBCT-based plan, the synthetic image results in a treatment plan closer to the FBCT-based plan. The three-slice to single-slice translation captures the three-dimensional contextual information in the input while avoiding the computational complexity associated with a fully three-dimensional image synthesis model. Furthermore, the results demonstrate that the proposed model is superior to the state-of-the-art methods.
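The Gradient Difference Loss (GDL) term of the composite loss above penalizes mismatched edge strength between prediction and target, encouraging sharp synthetic images. A minimal 2D sketch of one common GDL formulation (squared difference of absolute finite-difference gradients; the paper's exact variant may differ):

```python
import numpy as np

def gradient_difference_loss(pred, target):
    """Gradient Difference Loss: mismatch between the absolute image gradients
    of prediction and target, averaged over both spatial directions."""
    dx_p, dx_t = np.diff(pred, axis=1), np.diff(target, axis=1)
    dy_p, dy_t = np.diff(pred, axis=0), np.diff(target, axis=0)
    return float(((np.abs(dx_t) - np.abs(dx_p)) ** 2).mean()
                 + ((np.abs(dy_t) - np.abs(dy_p)) ** 2).mean())
```

In training this term is added to the adversarial and MSE terms with a weighting coefficient.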
Affiliation(s)
- Jiffy Joseph
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Ivan Biji
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Naveen Babu
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- P N Pournami
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- P B Jayaraj
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Niyas Puzhakkal
- Department of Medical Physics, MVR Cancer Centre & Research Institute, Poolacode, Calicut, Kerala, 673601, India
- Christy Sabu
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Vedkumar Patel
- Computer Science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
12
Cui H, Jiang X, Tang W, Lu HM, Yang Y. A practical and robust method for beam blocker-based cone beam CT scatter correction. Phys Med Biol 2023; 68. [PMID: 36634362] [DOI: 10.1088/1361-6560/acb2aa] [Received: 06/02/2022] [Accepted: 01/12/2023] [Indexed: 01/14/2023]
Abstract
Objective. In traditional beam-blocker based cone beam CT (CBCT) scatter correction, the scatter measured in the region shaded by lead strips is multiplied by a correction factor to directly represent the scatter in the unblocked region. Optimizing this correction factor is a tedious process that lacks an objective stopping criterion. To skip the optimization process, an indirect scatter estimation method was developed and validated in phantom imaging. Approach. A beam-blocker made of lead strips was mounted between the x-ray source and the object for scatter estimation. The primary signal between lead strips in the blocked region was first calculated by subtracting the measured scatter, and then used to calculate the scatter signal in the unblocked region corresponding to the same attenuation path. The calculated scatter signal was smoothed via local filtration and used to correct the measured projection in the unblocked region. Finally, the CBCT was reconstructed via the Feldkamp-Davis-Kress algorithm. A Catphan and a head phantom were used to verify the performance of the proposed method in both full- and half-blocker scenarios, with and without a bow-tie filter. Main results. For scans without the bow-tie filter, the CT number error was reduced to 3.97±2.27 and 5.51±3.90 HU in the full- and half-blocker scenarios, respectively, for the Catphan, and to 4.01±2.18 and 7.97±4.05 HU for the head phantom. When the bow-tie filter was applied, the CT number error was reduced to 2.29±1.42 and 6.72±0.77 HU in the full- and half-blocker scenarios, respectively, for the Catphan, and to 2.35±1.25 and 4.96±1.89 HU for the head phantom. Significance. The proposed method effectively avoids the influence of the inserted beam blocker itself on the scatter intensity estimation and provides a more practical and robust approach to beam-blocker based scatter correction in CBCT scanning.
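The subtraction chain of the indirect estimation can be sketched in a simplified 1D form (an illustrative reading of the abstract under the stated same-attenuation-path assumption; function and variable names are hypothetical, and the paper's smoothing step is omitted):

```python
import numpy as np

def estimate_scatter_unblocked(measured_blocked_gap, scatter_under_strips,
                               measured_unblocked):
    """Indirect scatter estimation for beam-blocker CBCT (1D sketch).

    Under the strips the detector sees scatter only; in the gaps it sees
    primary + scatter, so subtracting the strip scatter yields the primary.
    For unblocked pixels sharing the same attenuation path, the primary is
    assumed equal, and their scatter follows by subtraction.
    """
    primary = measured_blocked_gap - scatter_under_strips
    scatter_unblocked = measured_unblocked - primary
    corrected = measured_unblocked - scatter_unblocked  # recovers the primary
    return scatter_unblocked, corrected
```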
Affiliation(s)
- Hehe Cui
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, 230026 People's Republic of China
- Xiao Jiang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, 230026 People's Republic of China
- Wei Tang
- Hefei Ion Medical Center, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 231283 People's Republic of China
- Hsiao-Ming Lu
- Hefei Ion Medical Center, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 231283 People's Republic of China
- Yidong Yang
- Department of Radiation Oncology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 230001 People's Republic of China
- School of Physical Sciences & Ion Medical Research Institute, University of Science and Technology of China, Hefei, Anhui, 230026 People's Republic of China
13
Pai S, Hadzic I, Rao C, Zhovannik I, Dekker A, Traverso A, Asteriadis S, Hortal E. Frequency-Domain-Based Structure Losses for CycleGAN-Based Cone-Beam Computed Tomography Translation. Sensors (Basel) 2023; 23:1089. [PMID: 36772129] [PMCID: PMC9920313] [DOI: 10.3390/s23031089] [Received: 11/20/2022] [Revised: 12/28/2022] [Accepted: 01/05/2023] [Indexed: 06/18/2023]
Abstract
Research exploring CycleGAN-based synthetic image generation has recently accelerated in the medical community due to its ability to leverage unpaired images effectively. However, a commonly established drawback of the CycleGAN, the introduction of artifacts in generated images, makes it unreliable for medical imaging use cases. In an attempt to address this, we explore the effect of structure losses on the CycleGAN and propose a generalized frequency-based loss that aims at preserving the content in the frequency domain. We apply this loss to the use-case of cone-beam computed tomography (CBCT) translation to computed tomography (CT)-like quality. Synthetic CT (sCT) images generated from our methods are compared against baseline CycleGAN along with other existing structure losses proposed in the literature. Our methods (MAE: 85.5, MSE: 20433, NMSE: 0.026, PSNR: 30.02, SSIM: 0.935) quantitatively and qualitatively improve over the baseline CycleGAN (MAE: 88.8, MSE: 24244, NMSE: 0.03, PSNR: 29.37, SSIM: 0.935) across all investigated metrics and are more robust than existing methods. Furthermore, no observable artifacts or loss in image quality were observed. Finally, we demonstrated that sCTs generated using our methods have superior performance compared to the original CBCT images on selected downstream tasks.
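One simple instance of a frequency-based structure loss in the spirit of the abstract above is an L1 distance between magnitude spectra, sketched here for illustration (the paper's generalized formulation is not reproduced; this only shows the preserve-content-in-the-frequency-domain idea):

```python
import numpy as np

def frequency_domain_loss(pred, target):
    """Illustrative frequency-domain structure loss: mean L1 distance between
    the 2D FFT magnitude spectra of two images. Matching frequency content
    discourages the generator from hallucinating or dropping structures."""
    mag_p = np.abs(np.fft.fft2(pred))
    mag_t = np.abs(np.fft.fft2(target))
    return float(np.abs(mag_p - mag_t).mean())
```

In a cycleGAN this term would be added to the adversarial and cycle-consistency losses with its own weight.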
Affiliation(s)
- Suraj Pai
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Ibrahim Hadzic
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Chinmay Rao
- Division of Image Processing, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Ivan Zhovannik
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Andre Dekker
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Alberto Traverso
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Stylianos Asteriadis
- Department of Advanced Computing Sciences, Maastricht University, 6229 EN Maastricht, The Netherlands
- Enrique Hortal
- Department of Advanced Computing Sciences, Maastricht University, 6229 EN Maastricht, The Netherlands
14
Han R, Zeng F, Li J, Yao Z, Guo W, Zhao J. A Dilated Residual Network for Turbine Blade ICT Image Artifact Removal. Sensors (Basel) 2023; 23:1028. [PMID: 36679825] [PMCID: PMC9866201] [DOI: 10.3390/s23021028] [Received: 11/14/2022] [Revised: 01/02/2023] [Accepted: 01/05/2023] [Indexed: 06/17/2023]
Abstract
Artifacts in Industrial Computed Tomography (ICT) images appear as divergent strips or dark stripes caused by large density differences among the components of scanned objects, and they can significantly distort the actual structure of the scanned objects. The presence of artifacts seriously affects the practical effectiveness of ICT in defect detection and dimensional measurement. In this paper, a series of convolutional neural network models are designed and implemented on purpose-built ICT image artifact removal datasets. Our findings indicate that the receptive field (RF) and the spatial resolution of the network significantly impact the effectiveness of artifact removal. We therefore propose a dilated residual network for turbine blade ICT image artifact removal (DRAR), which enlarges the RF of the network while maintaining spatial resolution with only a slight increase in computational load. Extensive experiments demonstrate that the DRAR achieves exceptional performance in artifact removal.
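The reason dilation enlarges the receptive field without sacrificing spatial resolution can be made concrete with the standard RF recurrence for stride-1 convolutions (an illustrative helper, not the paper's code):

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 dilated convolutions.

    Each layer is (kernel_size, dilation). With stride 1 the receptive field
    grows by (kernel_size - 1) * dilation per layer, so dilation widens the
    RF while the feature map keeps its full spatial resolution."""
    rf = 1
    for kernel_size, dilation in layers:
        rf += (kernel_size - 1) * dilation
    return rf
```

Three plain 3x3 layers give an RF of 7, while the same three layers with dilations 1, 2, 4 give an RF of 15 at the same cost.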
Affiliation(s)
- Rui Han
- State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Fengying Zeng
- China Gas Turbine Establishment, Aero Engine Corporation of China, Chengdu 610500, China
- Jing Li
- China Gas Turbine Establishment, Aero Engine Corporation of China, Chengdu 610500, China
- Zhenwen Yao
- State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Wenhua Guo
- State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Jiyuan Zhao
- School of Automation, Beijing Information Science and Technology University, Beijing 100192, China
15
Thummerer A, Seller Oria C, Zaffino P, Visser S, Meijers A, Guterres Marmitt G, Wijsman R, Seco J, Langendijk JA, Knopf AC, Spadea MF, Both S. Deep learning-based 4D-synthetic CTs from sparse-view CBCTs for dose calculations in adaptive proton therapy. Med Phys 2022; 49:6824-6839. [PMID: 35982630] [PMCID: PMC10087352] [DOI: 10.1002/mp.15930] [Received: 05/02/2022] [Revised: 07/20/2022] [Accepted: 08/08/2022] [Indexed: 12/13/2022]
Abstract
BACKGROUND Time-resolved 4D cone beam-computed tomography (4D-CBCT) allows a daily assessment of patient anatomy and respiratory motion. However, 4D-CBCTs suffer from imaging artifacts that affect the CT number accuracy and prevent accurate proton dose calculations. Deep learning can be used to correct CT numbers and generate synthetic CTs (sCTs) that can enable CBCT-based proton dose calculations. PURPOSE In this work, sparse view 4D-CBCTs were converted into 4D-sCT utilizing a deep convolutional neural network (DCNN). 4D-sCTs were evaluated in terms of image quality and dosimetric accuracy to determine if accurate proton dose calculations for adaptive proton therapy workflows of lung cancer patients are feasible. METHODS A dataset of 45 thoracic cancer patients was utilized to train and evaluate a DCNN to generate 4D-sCTs, based on sparse view 4D-CBCTs reconstructed from projections acquired with a 3D acquisition protocol. Mean absolute error (MAE) and mean error were used as metrics to evaluate the image quality of single phases and average 4D-sCTs against 4D-CTs acquired on the same day. The dosimetric accuracy was checked globally (gamma analysis) and locally for target volumes and organs-at-risk (OARs) (lung, heart, and esophagus). Furthermore, 4D-sCTs were also compared to 3D-sCTs. To evaluate CT number accuracy, proton radiography simulations in 4D-sCT and 4D-CTs were compared in terms of range errors. The clinical suitability of 4D-sCTs was demonstrated by performing a 4D dose reconstruction using patient specific treatment delivery log files and breathing signals. RESULTS 4D-sCTs resulted in average MAEs of 48.1 ± 6.5 HU (single phase) and 37.7 ± 6.2 HU (average). The global dosimetric evaluation showed gamma pass ratios of 92.3% ± 3.2% (single phase) and 94.4% ± 2.1% (average). The clinical target volume showed high agreement in D98 between 4D-CT and 4D-sCT, with differences below 2.4% for all patients. 
Larger dose differences were observed in mean doses of OARs (up to 8.4%). The comparison with 3D-sCTs showed no substantial differences in image quality or dosimetry for the 4D-sCT average; individual 4D-sCT phases showed slightly lower dosimetric accuracy. The range error evaluation revealed that lung tissues cause range errors about three times higher than the other tissues. CONCLUSION In this study, we have investigated the accuracy of deep learning-based 4D-sCTs for daily dose calculations in adaptive proton therapy. Despite image quality differences between 4D-sCTs and 3D-sCTs, comparable dosimetric accuracy was observed globally and locally. Further improvement of 3D and 4D lung sCTs could be achieved by increasing CT number accuracy in lung tissues.
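The gamma analysis used for the global dosimetric evaluation above (and the pass rates quoted throughout this list, e.g. 3%/2 mm) combines a dose tolerance and a distance-to-agreement tolerance. A deliberately simplified 1D global-normalization sketch follows; clinical gamma analysis is 3D, interpolated, and usually thresholded, so this is illustrative only:

```python
import numpy as np

def gamma_pass_rate_1d(dose_ref, dose_eval, spacing_mm,
                       dose_tol=0.03, dist_tol_mm=2.0):
    """Simplified 1D global gamma analysis (default 3%/2 mm criterion).

    For each reference point, gamma is the minimum combined dose/distance
    discrepancy over all evaluated points; a point passes if gamma <= 1.
    Returns the pass rate in percent.
    """
    positions = np.arange(len(dose_ref)) * spacing_mm
    dose_norm = dose_tol * dose_ref.max()  # global normalization
    passed = 0
    for i, d_ref in enumerate(dose_ref):
        dist = (positions - positions[i]) / dist_tol_mm
        ddiff = (dose_eval - d_ref) / dose_norm
        if np.sqrt(dist ** 2 + ddiff ** 2).min() <= 1.0:
            passed += 1
    return 100.0 * passed / len(dose_ref)
```

For production use, a validated implementation such as pymedphys' gamma routine would be the appropriate choice.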
Affiliation(s)
- Adrian Thummerer
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Carmen Seller Oria
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Paolo Zaffino
- Department of Experimental and Clinical Medicine, Magna Graecia University, Catanzaro, Italy
- Sabine Visser
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Arturs Meijers
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Center for Proton Therapy, Paul Scherrer Institute, Villigen, Switzerland
- Gabriel Guterres Marmitt
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Robin Wijsman
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Joao Seco
- Department of Biomedical Physics in Radiation Oncology, Deutsches Krebsforschungszentrum (DKFZ), Heidelberg, Germany
- Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Johannes Albertus Langendijk
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Antje Christin Knopf
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Department I of Internal Medicine, Center for Integrated Oncology Cologne, University Hospital of Cologne, Cologne, Germany
- Maria Francesca Spadea
- Department of Experimental and Clinical Medicine, Magna Graecia University, Catanzaro, Italy
- Stefan Both
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
16
Chen X, Liu Y, Yang B, Zhu J, Yuan S, Xie X, Liu Y, Dai J, Men K. A more effective CT synthesizer using transformers for cone-beam CT-guided adaptive radiotherapy. Front Oncol 2022; 12:988800. [PMID: 36091131] [PMCID: PMC9454309] [DOI: 10.3389/fonc.2022.988800] [Received: 07/07/2022] [Accepted: 07/27/2022] [Indexed: 11/13/2022]
Abstract
Purpose. The challenge of cone-beam computed tomography (CBCT) is its low image quality, which limits its application for adaptive radiotherapy (ART). Despite recent substantial improvement in CBCT imaging using deep learning methods, the image quality still needs to be improved for effective ART application. Spurred by the advantages of transformers, which employ multi-head attention mechanisms to capture long-range contextual relations between image pixels, we proposed a novel transformer-based network (called TransCBCT) to generate synthetic CT (sCT) from CBCT. This study aimed to further improve the accuracy and efficiency of ART. Materials and methods. In this study, 91 patients diagnosed with prostate cancer were enrolled. We constructed a transformer-based hierarchical encoder–decoder structure with skip connections, called TransCBCT. The network also employs several convolutional layers to capture local context. The proposed TransCBCT was trained and validated on 6,144 paired CBCT/deformed CT images from 76 patients and tested on 1,026 paired images from 15 patients. Its performance was compared with a widely recognized style-transfer deep learning method, the cycle-consistent adversarial network (CycleGAN). We evaluated image quality and clinical value (application in auto-segmentation and dose calculation) for ART needs. Results. TransCBCT had superior performance in generating sCT from CBCT. The mean absolute error of TransCBCT was 28.8 ± 16.7 HU, compared to 66.5 ± 13.2 HU for raw CBCT and 34.3 ± 17.3 HU for CycleGAN. It can preserve the structure of the raw CBCT and reduce artifacts. When applied in auto-segmentation, the Dice similarity coefficients of the bladder and rectum between auto-segmentation and oncologist manual contours were 0.92 and 0.84 for TransCBCT, respectively, compared to 0.90 and 0.83 for CycleGAN. When applied in dose calculation, the gamma passing rate (1%/1 mm criterion) was 97.5% ± 1.1% for TransCBCT, compared to 96.9% ± 1.8% for CycleGAN. Conclusions. The proposed TransCBCT can effectively generate sCT from CBCT and has the potential to improve radiotherapy accuracy.
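The multi-head attention that lets transformer-based synthesizers capture long-range relations between image pixels is built on scaled dot-product attention, sketched here in NumPy (a generic illustration of the mechanism, not the TransCBCT implementation):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Scaled dot-product attention for token matrices of shape (tokens, dim).

    Every query token attends to every key token, so information can flow
    between arbitrarily distant image patches in a single layer."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)               # pairwise token affinities
    scores -= scores.max(axis=-1, keepdims=True)  # softmax numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # attention-weighted values
```

Multi-head attention runs several such maps in parallel on learned projections of q, k and v and concatenates the results.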
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- National Cancer Center/National Clinical Research Center for Cancer/Hebei Cancer Hospital, Chinese Academy of Medical Sciences, Langfang, China
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- School of Physics and Technology, Wuhan University, Wuhan, China
- Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Siqi Yuan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xuejie Xie
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yueping Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- *Correspondence: Kuo Men
17
Santoro M, Strolin S, Paolani G, Della Gala G, Bartoloni A, Giacometti C, Ammendolia I, Morganti AG, Strigari L. Recent Applications of Artificial Intelligence in Radiotherapy: Where We Are and Beyond. Applied Sciences 2022; 12:3223. [DOI: 10.3390/app12073223] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
In recent decades, artificial intelligence (AI) tools have been applied in many medical fields, opening the possibility of finding novel solutions for managing very complex and multifactorial problems, such as those commonly encountered in radiotherapy (RT). We conducted a PubMed and Scopus search to identify the AI application field in RT limited to the last four years. In total, 1824 original papers were identified, and 921 were analyzed by considering the phase of the RT workflow according to the applied AI approaches. AI permits the processing of large quantities of information, data, and images stored in RT oncology information systems, a process that is not manageable for individuals or groups. AI allows the iterative application of complex tasks in large datasets (e.g., delineating normal tissues or finding optimal planning solutions) and might support the entire community working in the various sectors of RT, as summarized in this overview. AI-based tools are now on the roadmap for RT and have been applied to the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns were raised, including the need for harmonization while overcoming ethical, legal, and skill barriers.
18
Yang B, Chang Y, Liang Y, Wang Z, Pei X, Xu X, Qiu J. A Comparison Study Between CNN-Based Deformed Planning CT and CycleGAN-Based Synthetic CT Methods for Improving iCBCT Image Quality. Front Oncol 2022; 12:896795. [PMID: 35707352 PMCID: PMC9189355 DOI: 10.3389/fonc.2022.896795] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Accepted: 04/27/2022] [Indexed: 12/24/2022] Open
Abstract
Purpose: The aim of this study is to compare two methods for improving the image quality of the Varian Halcyon cone-beam CT (iCBCT) system: deformed planning CT (dpCT) based on a convolutional neural network (CNN) and synthetic CT (sCT) generation based on the cycle-consistent generative adversarial network (CycleGAN). Methods: A total of 190 paired pelvic CT and iCBCT image datasets were included in the study, of which 150 were used for model training and the remaining 40 for model testing. For the registration network, we proposed a 3D multi-stage registration network (MSnet) to deform planning CT images to agree with iCBCT images, and the contours from CT images were propagated to the corresponding iCBCT images through a deformation matrix. The overlap between the deformed contours (dpCT) and the fixed contours (iCBCT) was calculated to evaluate the registration accuracy. For sCT generation, we trained the 2D CycleGAN using the deformation-registered CT-iCBCT slices and generated the sCT from the corresponding iCBCT image data. Then, on sCT images, physicians re-delineated the contours, which were compared with the contours manually delineated on iCBCT images. The organs for contour comparison included the bladder, spinal cord, left femoral head, right femoral head, and bone marrow. The Dice similarity coefficient (DSC) was used to evaluate the accuracy of registration and the accuracy of sCT generation. Results: The DSC values of registration and sCT generation were 0.769 and 0.884 for the bladder (p < 0.05), 0.765 and 0.850 for the spinal cord (p < 0.05), 0.918 and 0.923 for the left femoral head (p > 0.05), 0.916 and 0.921 for the right femoral head (p > 0.05), and 0.878 and 0.916 for the bone marrow (p < 0.05), respectively. When the bladder volume difference between planning CT and iCBCT scans was more than double, the accuracy of sCT generation was significantly better than that of registration (DSC of bladder: 0.859 vs. 0.596, p < 0.05). Conclusion: Both registration and sCT generation can improve iCBCT image quality effectively, and sCT generation achieves higher accuracy when the difference between planning CT and iCBCT is large.
Affiliation(s)
- Bo Yang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Yankui Chang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Yongguang Liang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Zhiqun Wang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Xi Pei
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Technology Development Department, Anhui Wisdom Technology Co., Ltd., Hefei, China
- Xie George Xu
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Department of Radiation Oncology, First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Jie Qiu
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- *Correspondence: Jie Qiu,
19
Xue X, Ding Y, Shi J, Hao X, Li X, Li D, Wu Y, An H, Jiang M, Wei W, Wang X. Cone Beam CT (CBCT) Based Synthetic CT Generation Using Deep Learning Methods for Dose Calculation of Nasopharyngeal Carcinoma Radiotherapy. Technol Cancer Res Treat 2021; 20:15330338211062415. [PMID: 34851204 PMCID: PMC8649448 DOI: 10.1177/15330338211062415] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023] Open
Abstract
Objective: To generate synthetic CT (sCT) images with high quality from CBCT and planning CT (pCT) for dose calculation by using deep learning methods. Methods: 169 nasopharyngeal carcinoma (NPC) patients with a total of 20,926 slices of CBCT and pCT images were included. In this study the CycleGAN, Pix2pix, and U-Net models were used to generate the sCT images. The Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Peak Signal to Noise Ratio (PSNR), and Structural Similarity Index (SSIM) were used to quantify the accuracy of the proposed models in a testing cohort of 34 patients. Radiation doses were calculated on pCT and sCT following the same protocol. Dose distributions were evaluated for 4 patients by comparing the dose-volume histogram (DVH) and 2D gamma index analysis. Results: The average MAE and RMSE values between the sCT from the three models and the pCT decreased by at least 15.4 HU and 26.8 HU, while the mean PSNR and SSIM metrics between the sCT from the different models and the pCT increased by up to 10.6 and 0.05, respectively. There were only slight differences in the DVH of selected contours between different plans. The passing rates of the 2D gamma index analysis under the 3 mm/3%, 3 mm/2%, 2 mm/3%, and 2 mm/2% criteria were all higher than 95%. Conclusions: All the sCT achieved better evaluation metrics than the original CBCT, and the CycleGAN model performed best among the three methods. The dosimetric agreement confirmed the HU accuracy and consistent anatomical structures of the sCT generated by deep learning methods.
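The gamma index analysis cited in several of these abstracts combines a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1D sketch of a global gamma pass rate (the dose profiles, 3%/2 mm tolerances, and spacing below are illustrative assumptions, not data from the study):

```python
import math

def gamma_1d(ref, evl, spacing, dd=0.03, dta=2.0):
    """Global 1D gamma index per reference point. dd is the dose tolerance as a
    fraction of the maximum reference dose; dta is the distance-to-agreement
    in the same units as `spacing` (e.g. mm)."""
    dmax = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = math.inf
        for j, de in enumerate(evl):
            dist = (i - j) * spacing                 # spatial separation
            g2 = (dist / dta) ** 2 + ((de - dr) / (dd * dmax)) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

def pass_rate(gammas):
    """Percentage of points with gamma <= 1 (the usual passing criterion)."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)

ref = [0.0, 50.0, 100.0, 50.0, 0.0]   # hypothetical reference dose profile
evl = [0.0, 51.0, 101.0, 49.0, 0.0]   # hypothetical evaluated profile
print(pass_rate(gamma_1d(ref, evl, spacing=1.0)))  # 100.0: deviations within 3%/2 mm
```

Clinical tools evaluate this in 2D or 3D with interpolation between grid points; the brute-force search above only conveys the idea.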
Affiliation(s)
- Xudong Xue
- Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Yi Ding
- Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Jun Shi
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, China
- Xiaoyu Hao
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, China
- Xiangbin Li
- Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Dan Li
- Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Yuan Wu
- Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Hong An
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, China
- Man Jiang
- School of Energy and Power Engineering, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Wei Wei
- Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xiao Wang
- Rutgers-Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, New Brunswick, NJ, USA
20
Abstract
Radiation therapy treatments are typically planned based on a single image set, assuming that the patient's anatomy and its position relative to the delivery system remains constant during the course of treatment. Similarly, the prescription dose assumes constant biological dose-response over the treatment course. However, variations can and do occur on multiple time scales. For treatment sites with significant intra-fractional motion, geometric changes happen over seconds or minutes, while biological considerations change over days or weeks. At an intermediate timescale, geometric changes occur between daily treatment fractions. Adaptive radiation therapy is applied to consider changes in patient anatomy during the course of fractionated treatment delivery. While traditionally adaptation has been done off-line with replanning based on new CT images, online treatment adaptation based on on-board imaging has gained momentum in recent years due to advanced imaging techniques combined with treatment delivery systems. Adaptation is particularly important in proton therapy where small changes in patient anatomy can lead to significant dose perturbations due to the dose conformality and finite range of proton beams. This review summarizes the current state-of-the-art of on-line adaptive proton therapy and identifies areas requiring further research.
Affiliation(s)
- Harald Paganetti
- Department of Radiation Oncology, Physics Division, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
- Pablo Botas
- Department of Radiation Oncology, Physics Division, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
- Foundation 29 of February, Pozuelo de Alarcón, Madrid, Spain
- Gregory C Sharp
- Department of Radiation Oncology, Physics Division, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
- Brian Winey
- Department of Radiation Oncology, Physics Division, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
21
Chen X, Yang B, Li J, Zhu J, Ma X, Chen D, Hu Z, Men K, Dai J. A deep-learning method for generating synthetic kV-CT and improving tumor segmentation for helical tomotherapy of nasopharyngeal carcinoma. Phys Med Biol 2021; 66. [PMID: 34700300 DOI: 10.1088/1361-6560/ac3345] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Accepted: 10/26/2021] [Indexed: 12/11/2022]
Abstract
Objective: Megavoltage computed tomography (MV-CT) is used for setup verification and adaptive radiotherapy in tomotherapy. However, its low contrast and high noise lead to poor image quality. This study aimed to develop a deep-learning-based method to generate synthetic kilovoltage CT (skV-CT) and then evaluate its ability to improve image quality and tumor segmentation. Approach: The planning kV-CT and MV-CT images of 270 patients with nasopharyngeal carcinoma (NPC) treated on an Accuray TomoHD system were used. An improved cycle-consistent adversarial network, which used residual blocks as its generator, was adopted to learn the mapping between MV-CT and kV-CT and then generate skV-CT from MV-CT. A Catphan 700 phantom and 30 patients with NPC were used to evaluate image quality. The quantitative indices included contrast-to-noise ratio (CNR), uniformity, and signal-to-noise ratio (SNR) for the phantom, and the structural similarity index measure (SSIM), mean absolute error (MAE), and peak signal-to-noise ratio (PSNR) for patients. Next, we trained three models for segmentation of the clinical target volume (CTV): MV-CT, skV-CT, and MV-CT combined with skV-CT. The segmentation accuracy was compared using the Dice similarity coefficient (DSC) and mean distance agreement (MDA). Main results: Compared with MV-CT, skV-CT showed significant improvement in CNR (184.0%), image uniformity (34.7%), and SNR (199.0%) in the phantom study and improved SSIM (1.7%), MAE (24.7%), and PSNR (7.5%) in the patient study. For CTV segmentation with only MV-CT, only skV-CT, and MV-CT combined with skV-CT, the DSCs were 0.75 ± 0.04, 0.78 ± 0.04, and 0.79 ± 0.03, respectively, and the MDAs (in mm) were 3.69 ± 0.81, 3.14 ± 0.80, and 2.90 ± 0.62, respectively. Significance: The proposed method improved the image quality of MV-CT and thus tumor segmentation in helical tomotherapy. The method can potentially benefit adaptive radiotherapy.
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Jingwen Li
- Cloud Computing and Big Data Research Institute, China Academy of Information and Communications Technology, People's Republic of China
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Xiangyu Ma
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Deqi Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Zhihui Hu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
22
Lee H, Sung J, Choi Y, Kim JW, Lee IJ. Mutual Information-Based Non-Local Total Variation Denoiser for Low-Dose Cone-Beam Computed Tomography. Front Oncol 2021; 11:751057. [PMID: 34745978 PMCID: PMC8567105 DOI: 10.3389/fonc.2021.751057] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Accepted: 09/28/2021] [Indexed: 12/15/2022] Open
Abstract
Conventional non-local total variation (NLTV) approaches use the weight of a non-local means (NLM) filter, which degrades performance in low-dose cone-beam computed tomography (CBCT) images generated with a low milliampere-seconds (mAs) parameter value, because the local patch used to determine the pixel weights comprises noise-damaged pixels that reduce the similarity between corresponding patches. In this paper, we propose a novel type of NLTV that incorporates mutual information (MI): MI-NLTV. It is based on a statistical measure for similarity calculation between the corresponding bins of non-local patches and a reference patch. The weight is determined in terms of a statistical measure comprising the MI value between corresponding non-local patches and the reference-patch entropy. The MI-NLTV denoising process is applied to CBCT images generated by the analytical reconstruction algorithm using a ray-driven backprojector (RDB). The MI-NLTV objective function is minimized based on steepest gradient descent optimization to augment the difference between real structure and noise, cleaning noisy pixels without significant loss of the fine structure and details that remain in the reconstructed images. The proposed method was evaluated using patient data and actual phantom measurement data acquired with lower mAs. The results show that integrating the RDB further enhances the MI-NLTV denoising-based analytical reconstruction algorithm to achieve higher CBCT image quality compared with images generated by the NLTV denoising-based approach, with an average of 15.97% higher contrast-to-noise ratio, 2.67% lower root mean square error, 0.12% lower spatial non-uniformity, 1.14% higher correlation, and an average of 18.11% higher detectability index. These quantitative results indicate that the incorporation of MI makes the NLTV more stable and robust than the conventional NLM filter for low-dose CBCT imaging.
In addition, achieving clinically acceptable CBCT image quality despite low-mAs projection acquisition can reduce the burden on common online CBCT imaging, improving patient safety throughout the course of radiotherapy.
Affiliation(s)
- Ho Lee
- Department of Radiation Oncology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Jiwon Sung
- Department of Radiation Oncology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Yeonho Choi
- Department of Radiation Oncology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Jun Won Kim
- Department of Radiation Oncology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Ik Jae Lee
- Department of Radiation Oncology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
23
Rossi M, Belotti G, Paganelli C, Pella A, Barcellini A, Cerveri P, Baroni G. Image-based shading correction for narrow-FOV truncated pelvic CBCT with deep convolutional neural networks and transfer learning. Med Phys 2021; 48:7112-7126. [PMID: 34636429 PMCID: PMC9297981 DOI: 10.1002/mp.15282] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2020] [Revised: 09/29/2021] [Accepted: 10/01/2021] [Indexed: 11/21/2022] Open
Abstract
Purpose: Cone beam computed tomography (CBCT) is a standard solution for in-room image guidance for radiation therapy. It is used to evaluate and compensate for anatomopathological changes between the dose delivery plan and the fraction delivery day. CBCT is a fast and versatile solution, but it suffers from drawbacks like low contrast and requires proper calibration to derive density values. Although these limitations are even more prominent with in-room customized CBCT systems, strategies based on deep learning have shown potential in improving image quality. As such, this article presents a method based on a convolutional neural network and a novel two-step supervised training based on the transfer learning paradigm for shading correction in CBCT volumes with narrow field of view (FOV) acquired with an ad hoc in-room system. Methods: We designed a U-Net convolutional neural network, trained on axial slices of corresponding CT/CBCT pairs. To improve the generalization capability of the network, we exploited two-stage learning using two distinct data sets. At first, the network weights were trained using synthetic CBCT scans generated from a public data set, and then only the deepest layers of the network were trained again with real-world clinical data to fine-tune the weights. Synthetic data were generated according to real data acquisition parameters. The network takes a single grayscale volume as input and outputs the same volume with corrected shading and improved HU values. Results: Evaluation was carried out with a leave-one-out cross-validation, computed on 18 unique CT/CBCT pairs from six different patients from a real-world dataset. Comparing original CBCT to CT and improved CBCT to CT, we obtained an average improvement of 6 dB in peak signal-to-noise ratio (PSNR) and +2% in structural similarity index measure (SSIM). The median (interquartile range, IQR) Hounsfield unit (HU) difference between CBCT and CT improved from 161.37 (162.54) HU to 49.41 (66.70) HU. Region of interest (ROI)-based HU difference was narrowed by 75% in the spongy bone (femoral head), 89% in the bladder, 85% for fat, and 83% for muscle. The improvement in contrast-to-noise ratio for these ROIs was about 67%. Conclusions: We demonstrated that shading correction, obtaining CT-compatible data from narrow-FOV CBCTs acquired with a customized in-room system, is feasible. Moreover, the transfer learning approach proved particularly beneficial for such a shading correction approach.
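The PSNR gains quoted in image-quality studies like the one above follow directly from the mean squared error and the data's dynamic range. A minimal sketch (the HU values are hypothetical, and the 4000 HU dynamic range is an assumption for illustration, roughly spanning -1000 to 3000 HU):

```python
import math

def psnr(ref, test, data_range):
    """Peak signal-to-noise ratio in dB between two equally sized images
    (flat lists), given the dynamic range of the data."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10 * math.log10(data_range ** 2 / mse)

# Hypothetical HU values, for illustration only
ct       = [0.0, 40.0, 1000.0]   # reference CT
cbct     = [30.0, 80.0, 960.0]   # raw CBCT with shading errors
improved = [5.0, 45.0, 995.0]    # shading-corrected CBCT
print(psnr(ct, cbct, 4000) < psnr(ct, improved, 4000))  # True: smaller errors, higher PSNR
```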
Affiliation(s)
- Matteo Rossi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Gabriele Belotti
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Chiara Paganelli
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Andrea Pella
- Bioengineering Unit, Clinical Department, National Center for Oncological Hadrontherapy (CNAO), Pavia, Italy
- Amelia Barcellini
- Radiation Oncology Unit, Clinical Department, National Center for Oncological Hadrontherapy (CNAO), Pavia, Italy
- Pietro Cerveri
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Guido Baroni
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Bioengineering Unit, Clinical Department, National Center for Oncological Hadrontherapy (CNAO), Pavia, Italy
24
Taniguchi T, Hara T, Shimozato T, Hyodo F, Ono K, Nakaya S, Noda Y, Kato H, Tanaka O, Matsuo M. Effect of computed tomography value error on dose calculation in adaptive radiotherapy with Elekta X-ray volume imaging cone beam computed tomography. J Appl Clin Med Phys 2021; 22:271-279. [PMID: 34375008 PMCID: PMC8425939 DOI: 10.1002/acm2.13384] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Revised: 06/03/2021] [Accepted: 07/12/2021] [Indexed: 11/30/2022] Open
Abstract
Purpose We evaluated the effect of changing the scan mode of the Elekta X‐ray volume imaging cone beam computed tomography (CBCT) on the accuracy of dose calculation, which may be affected by computed tomography (CT) value errors in three dimensions. Methods We used the electron density phantom and measured the CT values in three dimensions. CT values were compared with planning computed tomography (pCT) values for various materials. The evaluated scan modes were for head and neck (S‐scan), chest (M‐scan), and pelvis (L‐scan) with various collimators and filter systems. To evaluate the effects of the CT value error of the CBCT on dose error, Monte Carlo calculations of dosimetry were performed using pCT and CBCT images. Results The L‐scan had a CT value error of approximately 800 HU at the isocenter compared with the pCT. Furthermore, inhomogeneity in the longitudinal CT value profile was observed in the bone material. The dose error for ±100 HU difference in CT values for the S‐scan and M‐scan was within ±2%. The center of the L‐scan had a CT error of approximately 800 HU and a dose error of approximately 6%. The dose error of the L‐scan occurred in the beam path in the case of both single field and two parallel opposed fields, and the maximum error occurred at the center of the phantom in the case of both the 4‐field box and single‐arc techniques. Conclusions We demonstrated the three‐dimensional CT value characteristics of the CBCT by evaluating the CT value error obtained under various imaging conditions. It was found that the L‐scan is considerably affected by not having a unique bowtie filter, and the S‐scan without the bowtie filter causes CT value errors in the longitudinal direction. Moreover, the CBCT dose errors for the 4‐field box and single‐arc irradiation techniques converge to the isocenter.
Affiliation(s)
- Takuya Taniguchi
- Department of Radiation Oncology, Asahi University Hospital, Gifu, Japan
- Department of Radiology, Gifu University, Gifu, Japan
- Takanori Hara
- Department of Medical Technology, Nakatsugawa Municipal General Hospital, Gifu, Japan
- Tomohiro Shimozato
- Faculty of Radiological Technology, School of Health Sciences, Gifu University of Medical Science, Seki, Japan
- Fuminori Hyodo
- Department of Radiology Frontier Science for Imaging, School of Medicine, Gifu University, Gifu, Japan
- Kose Ono
- Department of Radiation Oncology, Asahi University Hospital, Gifu, Japan
- Shuto Nakaya
- Department of Radiation Oncology, Asahi University Hospital, Gifu, Japan
- Hiroki Kato
- Department of Radiology, Gifu University, Gifu, Japan
- Osamu Tanaka
- Department of Radiation Oncology, Asahi University Hospital, Gifu, Japan
25
Rossi M, Cerveri P. Comparison of Supervised and Unsupervised Approaches for the Generation of Synthetic CT from Cone-Beam CT. Diagnostics (Basel) 2021; 11:diagnostics11081435. [PMID: 34441369 PMCID: PMC8395013 DOI: 10.3390/diagnostics11081435] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Revised: 07/30/2021] [Accepted: 08/07/2021] [Indexed: 12/04/2022] Open
Abstract
Due to major artifacts and uncalibrated Hounsfield units (HU), cone-beam computed tomography (CBCT) cannot be used readily for diagnostics and therapy planning purposes. This study addresses image-to-image translation by convolutional neural networks (CNNs) to convert CBCT to CT-like scans, comparing supervised and unsupervised training techniques on a publicly available pelvic CT/CBCT dataset. Interestingly, quantitative results favored the supervised over the unsupervised approach, showing improvements in HU accuracy (62% vs. 50%), structural similarity index (2.5% vs. 1.1%), and peak signal-to-noise ratio (15% vs. 8%). Qualitative results, conversely, showcased more anatomical artifacts in the synthetic CT generated by the supervised techniques. This is explained by the higher sensitivity of the supervised training technique to the pixel-wise correspondence contained in the loss function. The unsupervised technique does not require correspondence and mitigates this drawback, as it combines adversarial, cycle consistency, and identity loss functions. Overall, two main contributions qualify the paper: (a) demonstrating the feasibility of CNNs to generate accurate synthetic CT from CBCT images, which is fast and easy to use compared to traditional techniques applied in clinics; (b) proposing guidelines to drive the selection of the better training technique, which can be extended to more general image-to-image translation.
26
Eckl M, Sarria GR, Springer S, Willam M, Ruder AM, Steil V, Ehmann M, Wenz F, Fleckenstein J. Dosimetric benefits of daily treatment plan adaptation for prostate cancer stereotactic body radiotherapy. Radiat Oncol 2021; 16:145. [PMID: 34348765 PMCID: PMC8335467 DOI: 10.1186/s13014-021-01872-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Accepted: 07/27/2021] [Indexed: 01/02/2023] Open
Abstract
BACKGROUND Hypofractionation is increasingly being applied in radiotherapy for prostate cancer, requiring higher accuracy of daily treatment deliveries than in conventional image-guided radiotherapy (IGRT). Different adaptive radiotherapy (ART) strategies were evaluated with regard to their dosimetric benefits. METHODS Treatment plans for 32 patients were retrospectively generated and analyzed according to the PACE-C trial treatment scheme (40 Gy in 5 fractions). Using a previously trained cycle-generative adversarial network algorithm, synthetic CTs (sCT) were generated from five daily cone-beam CTs. Dose calculation on sCT was performed for four different adaptation approaches: IGRT without adaptation, adaptation via segment aperture morphing (SAM) and segment weight optimization (ART1) or additional shape optimization (ART2), as well as a full re-optimization (ART3). Dose distributions were evaluated regarding dose-volume parameters and a penalty score. RESULTS Compared to the IGRT approach, the ART1, ART2, and ART3 approaches substantially reduced the V37Gy(bladder) and V36Gy(rectum) from a mean of 7.4 cm3 and 2.0 cm3 to (5.9 cm3, 6.1 cm3, 5.2 cm3) and (1.4 cm3, 1.4 cm3, 1.0 cm3), respectively. Plan adaptation required on average 2.6 min for the ART1 approach and yielded rectum doses that were not significantly different from those of the ART2 approach. Accumulated over the total patient collective, a penalty score revealed dosimetric violations reduced by 79.2%, 75.7%, and 93.2% through adaptation. CONCLUSION Treatment plan adaptation was demonstrated to adequately restore relevant dose criteria on a daily basis. While the SAM adaptation approaches realized dosimetric benefits by ensuring sufficient target coverage, a full re-optimization mainly improved OAR sparing, which helps guide the decision of when to apply which adaptation strategy.
Affiliation(s)
- Miriam Eckl, Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Gustavo R Sarria, Department of Radiation Oncology, University Hospital Bonn, University of Bonn, Bonn, Germany
- Sandra Springer, Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Marvin Willam, Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Arne M Ruder, Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Volker Steil, Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Michael Ehmann, Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Frederik Wenz, University Medical Center Freiburg, University of Freiburg, Freiburg im Breisgau, Germany
- Jens Fleckenstein, Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
27
Yong TH, Yang S, Lee SJ, Park C, Kim JE, Huh KH, Lee SS, Heo MS, Yi WJ. QCBCT-NET for direct measurement of bone mineral density from quantitative cone-beam CT: a human skull phantom study. Sci Rep 2021; 11:15083. [PMID: 34301984 PMCID: PMC8302740 DOI: 10.1038/s41598-021-94359-2]
Abstract
The purpose of this study was to directly and quantitatively measure bone mineral density (BMD) from cone-beam CT (CBCT) images by enhancing the linearity and uniformity of bone intensities with a hybrid deep-learning model (QCBCT-NET) combining a generative adversarial network (Cycle-GAN) and a U-Net, and to compare the bone images enhanced by QCBCT-NET with those produced by Cycle-GAN or U-Net alone. We used two phantoms of human skulls encased in acrylic, one for the training and validation datasets and the other for the test dataset. QCBCT-NET consists of a Cycle-GAN with residual blocks and a multi-channel U-Net, trained on paired quantitative CT (QCT) and CBCT images. The BMD images produced by QCBCT-NET significantly outperformed those produced by Cycle-GAN or U-Net in mean absolute difference (MAD), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), structural similarity (SSIM), and linearity when compared to the original QCT image. QCBCT-NET improved the contrast of the bone images by locally reflecting the original BMD distribution of the QCT image using the Cycle-GAN, and improved their spatial uniformity by globally suppressing image artifacts and noise using the two-channel U-Net. QCBCT-NET substantially enhanced the linearity, uniformity, and contrast as well as the anatomical and quantitative accuracy of the bone images, and measured BMD in CBCT more accurately than Cycle-GAN or U-Net.
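The evaluation metrics named here are standard image-comparison quantities. A hedged numpy sketch of three of them (MAD, PSNR, NCC); the "reference" and "test" images below are random stand-ins, and the dynamic range is treated as a parameter:

```python
import numpy as np

def mad(a, b):
    """Mean absolute difference between two images."""
    return np.mean(np.abs(a - b))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ncc(a, b):
    """Normalized cross-correlation of two images."""
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0))

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, (64, 64))            # stand-in for a QCT slice
out = ref + rng.normal(0.0, 0.01, ref.shape)     # stand-in for an enhanced CBCT slice
scores = (mad(ref, out), psnr(ref, out, 1.0), ncc(ref, out))
```

Lower MAD and higher PSNR/NCC indicate closer agreement with the reference image; SSIM is typically taken from a library implementation rather than re-derived.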
Affiliation(s)
- Tae-Hoon Yong, Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
- Su Yang, Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
- Sang-Jeong Lee, Dental Research Institute, Seoul National University, Seoul, Korea
- Chansoo Park, Department of Oral and Maxillofacial Radiology, School of Dentistry, Seoul National University, Seoul, Korea
- Jo-Eun Kim, Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, Korea
- Kyung-Hoe Huh, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- Sam-Sun Lee, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- Min-Suk Heo, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- Won-Jin Yi, Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea; Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
28
Dong G, Zhang C, Liang X, Deng L, Zhu Y, Zhu X, Zhou X, Song L, Zhao X, Xie Y. A Deep Unsupervised Learning Model for Artifact Correction of Pelvis Cone-Beam CT. Front Oncol 2021; 11:686875. [PMID: 34350115 PMCID: PMC8327750 DOI: 10.3389/fonc.2021.686875]
Abstract
Purpose In recent years, cone-beam computed tomography (CBCT) has been increasingly used in adaptive radiation therapy (ART). However, compared with planning computed tomography (PCT), CBCT images contain much more noise and many more imaging artifacts, so the image quality and HU accuracy of CBCT need to be improved. In this study, we developed an unsupervised deep learning (CycleGAN) model to calibrate pelvic CBCT images and extend their potential clinical applications in CBCT-guided ART. Methods To train CycleGAN to generate synthetic PCT (sPCT), we used unpaired CBCT and PCT images from 49 patients as inputs. Deformed PCT (dPCT) images, obtained by deformably registering PCT to CBCT, served as the ground truth for evaluation. The trained model converts uncorrected CBCT images into sPCT images that have the intensity characteristics of PCT while leaving the anatomical structure of the CBCT unchanged. To demonstrate the effectiveness of the proposed CycleGAN, we tested it on nine additional independent patients. Results We compared the sPCT with the dPCT images as the ground truth. The average mean absolute error (MAE) of the whole image on the testing data decreased from 49.96 ± 7.21 HU to 14.6 ± 2.39 HU, and the average MAE of the fat and muscle ROIs decreased from 60.23 ± 7.3 HU to 16.94 ± 7.5 HU and from 53.16 ± 9.1 HU to 13.03 ± 2.63 HU, respectively. Conclusion We developed an unsupervised learning method to generate high-quality corrected CBCT images (sPCT). With further evaluation and clinical implementation, it could replace CBCT in ART.
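The reported numbers are mean absolute errors in HU, computed over the whole image and over tissue ROIs. A small numpy sketch of that evaluation (the "dPCT", "sPCT", and ROI mask below are toy stand-ins, not data from the study):

```python
import numpy as np

def mae_hu(img, ref, mask=None):
    """Mean absolute error in HU, optionally restricted to an ROI mask."""
    diff = np.abs(img.astype(float) - ref.astype(float))
    return diff[mask].mean() if mask is not None else diff.mean()

rng = np.random.default_rng(1)
dpct = rng.normal(0.0, 100.0, (128, 128))         # toy ground-truth dPCT slice
spct = dpct + rng.normal(0.0, 15.0, dpct.shape)   # toy synthetic PCT with residual error
fat_roi = np.zeros(dpct.shape, dtype=bool)
fat_roi[30:60, 30:60] = True                      # toy fat ROI

whole = mae_hu(spct, dpct)            # whole-image MAE
fat = mae_hu(spct, dpct, fat_roi)     # ROI-restricted MAE
```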
Affiliation(s)
- Guoya Dong, State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin, China
- Chenglong Zhang, State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin, China; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xiaokun Liang, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lei Deng, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yulin Zhu, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xuanyu Zhu, School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, QLD, Australia
- Xuanru Zhou, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Liming Song, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xiang Zhao, Department of Radiology, Tianjin Medical University General Hospital, Tianjin, China
- Yaoqin Xie, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
29
Zhao J, Chen Z, Wang J, Xia F, Peng J, Hu Y, Hu W, Zhang Z. MV CBCT-Based Synthetic CT Generation Using a Deep Learning Method for Rectal Cancer Adaptive Radiotherapy. Front Oncol 2021; 11:655325. [PMID: 34136391 PMCID: PMC8201514 DOI: 10.3389/fonc.2021.655325]
Abstract
Due to image quality limitations, online megavoltage cone-beam CT (MV CBCT), which captures the real online patient anatomy, cannot be used directly to perform adaptive radiotherapy (ART). In this study, we used a deep learning method, the cycle-consistent adversarial network (CycleGAN), to improve MV CBCT image quality and Hounsfield unit (HU) accuracy for rectal cancer patients, making the generated synthetic CT (sCT) eligible for ART. Forty rectal cancer patients treated with intensity-modulated radiotherapy (IMRT) were included. The CT and MV CBCT images of 30 patients were used for model training, and those of the remaining 10 patients were used for evaluation. Image quality, autosegmentation performance, and dose calculation accuracy (using an autoplanning technique) of the generated sCT were evaluated. The mean absolute error (MAE) was reduced from 135.84 ± 41.59 HU for the CT-CBCT comparison to 52.99 ± 12.09 HU for the CT-sCT comparison. The structural similarity (SSIM) index for the CT-sCT comparison was 0.81 ± 0.03, a large improvement over 0.44 ± 0.07 for the CT-CBCT comparison. Autosegmentation on sCT was accurate for the femoral heads and required almost no manual modification. For the CTV and bladder, although the autocontours needed modification, the Dice similarity coefficient (DSC) indices were high, at 0.93 and 0.94, respectively. For dose evaluation, the sCT-based plan deviated far less from the CT-based plan than the CBCT-based plan did. The proposed method solves a key problem for realizing rectal cancer ART based on MV CBCT: the generated sCT enables ART based on the actual patient anatomy at the treatment position.
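The Dice similarity coefficient (DSC) used to score the autosegmentation is straightforward to compute from binary masks. A minimal numpy sketch with toy contours (the mask shapes and offsets below are illustrative, not from the study):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty contours agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy "bladder" contours on CT vs. sCT: 40x40 squares shifted by 4 voxels.
ct_mask = np.zeros((100, 100), dtype=bool)
sct_mask = np.zeros((100, 100), dtype=bool)
ct_mask[20:60, 20:60] = True
sct_mask[24:64, 20:60] = True
dsc = dice(ct_mask, sct_mask)   # overlap 36x40 of 40x40 each -> 0.9
```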
Affiliation(s)
- Jun Zhao, Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Zhi Chen, Department of Medical Physics, Shanghai Proton and Heavy Ion Center, Shanghai, China
- Jiazhou Wang, Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Fan Xia, Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Jiayuan Peng, Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Yiwen Hu, Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Weigang Hu, Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Zhen Zhang, Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
30
Zhang Y, Yue N, Su MY, Liu B, Ding Y, Zhou Y, Wang H, Kuang Y, Nie K. Improving CBCT quality to CT level using deep learning with generative adversarial network. Med Phys 2021; 48:2816-2826. [PMID: 33259647 DOI: 10.1002/mp.14624]
Abstract
PURPOSE To improve the image quality and computed tomography (CT) number accuracy of daily cone-beam CT (CBCT) through a deep learning methodology with a generative adversarial network. METHODS One hundred and fifty paired pelvic CT and CBCT scans were used for model training and validation. A 2.5D pixel-to-pixel generative adversarial network (GAN) model with feature mapping was proposed. A total of 12 000 slice pairs of CT and CBCT were used for model training, and ten-fold cross validation was applied to verify model robustness. Paired CT-CBCT scans from an additional 15 pelvic patients and 10 head-and-neck (HN) patients, with CBCT images collected on a different machine, were used for independent testing. Besides the proposed method, other network architectures were also tested: 2D vs 2.5D; GAN with vs without feature mapping; GAN with vs without an additional perceptual loss; and previously reported models (U-Net and CycleGAN, with or without identity loss). Image quality of the deep-learning generated synthetic CT (sCT) images was quantitatively compared against the reference CT (rCT) image using the mean absolute error (MAE) of Hounsfield units (HU) and the peak signal-to-noise ratio (PSNR). Dosimetric calculation accuracy was further evaluated with both photon and proton beams. RESULTS The deep-learning generated sCTs showed improved image quality with reduced artifact distortion and improved soft tissue contrast. The proposed 2.5D Pix2pix GAN with feature matching (FM) was the best model among all tested methods, producing the highest PSNR and the lowest MAE relative to rCT. The dose distribution demonstrated high accuracy for photon-based planning, yet more work is needed for proton-based treatment. Once the model was trained, it took 11-12 ms to process one slice, and a 3D volume (80 slices) could be generated in less than a second on an NVIDIA GeForce GTX Titan X GPU (12 GB, Maxwell architecture). CONCLUSION The proposed deep learning algorithm is promising for improving CBCT image quality efficiently and thus has the potential to support online CBCT-based adaptive radiotherapy.
Affiliation(s)
- Yang Zhang, Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA; Department of Radiological Sciences, University of California, Irvine, CA, USA
- Ning Yue, Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Min-Ying Su, Department of Radiological Sciences, University of California, Irvine, CA, USA
- Bo Liu, Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Yi Ding, Department of Radiation Oncology, Hubei Cancer Hospital, Wuhan, China
- Yongkang Zhou, Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Hao Wang, Department of Radiation Oncology, Zhongshan Hospital, Shanghai, China
- Yu Kuang, Department of Integrated Health Sciences, University of Nevada, Las Vegas, NV, USA
- Ke Nie, Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA
31
Ip WY, Yeung FK, Yung SPF, Yu HCJ, So TH, Vardhanabhuti V. Current landscape and potential future applications of artificial intelligence in medical physics and radiotherapy. Artif Intell Med Imaging 2021; 2:37-55. [DOI: 10.35711/aimi.v2.i2.37]
Abstract
Artificial intelligence (AI) has seen tremendous growth over the past decade and stands to disrupt the medical industry. In medicine, it has been applied in medical imaging and other digitised disciplines, but in more traditional fields like medical physics the adoption of AI is still at an early stage. Though AI is anticipated to outperform humans at certain tasks, its rapid growth also raises increasing concerns about its usage. This paper focuses on the current landscape and potential future applications of AI in medical physics and radiotherapy. Topics including AI for image acquisition, image segmentation, treatment delivery, quality assurance and outcome prediction are explored, as well as the interaction between humans and AI. This gives insight into how we should approach and use the technology to enhance the quality of clinical practice.
Affiliation(s)
- Wing-Yan Ip, Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Fu-Ki Yeung, Medical Physics and Research Department, The Hong Kong Sanitorium & Hospital, Hong Kong SAR, China; Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Shang-Peng Felix Yung, Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Tsz-Him So, Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Varut Vardhanabhuti, Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
32
Chen W, Li Y, Yuan N, Qi J, Dyer BA, Sensoy L, Benedict SH, Shang L, Rao S, Rong Y. Clinical Enhancement in AI-Based Post-processed Fast-Scan Low-Dose CBCT for Head and Neck Adaptive Radiotherapy. Front Artif Intell 2021; 3:614384. [PMID: 33733226 PMCID: PMC7904899 DOI: 10.3389/frai.2020.614384]
Abstract
Purpose: To assess image quality and uncertainty in organ-at-risk segmentation on cone-beam computed tomography (CBCT) enhanced by a deep convolutional neural network (DCNN) for head and neck cancer. Methods: An in-house DCNN was trained using forty post-operative head and neck cancer patients with their planning CT and first-fraction CBCT images. Fifteen additional patients with a repeat simulation CT (rCT) and a CBCT scan taken on the same day (oCBCT) were used for validation and clinical utility assessment. Enhanced CBCT (eCBCT) images were generated from the oCBCT using the in-house DCNN. Image quality improvement was quantified using HU accuracy, signal-to-noise ratio (SNR), and the structural similarity index measure (SSIM). Organs-at-risk (OARs) were delineated on oCBCT and eCBCT and compared with manual structures on the same-day rCT. Contour accuracy was assessed using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and center-of-mass (COM) displacement. Users' confidence in manually segmenting OARs on eCBCT versus oCBCT was assessed qualitatively by visual scoring. Results: eCBCT showed significant improvements in mean pixel values, SNR (p < 0.05), and SSIM (p < 0.05) compared to oCBCT. The mean DSC of eCBCT-to-rCT (0.83 ± 0.06) was higher than that of oCBCT-to-rCT (0.70 ± 0.13). Mean HD improved for eCBCT-to-rCT (0.42 ± 0.13 cm) vs. oCBCT-to-rCT (0.72 ± 0.25 cm), and mean COM displacement was smaller for eCBCT-to-rCT (0.28 ± 0.19 cm) than for oCBCT-to-rCT (0.44 ± 0.22 cm). Visual scores showed that OAR segmentation was easier on eCBCT than on oCBCT images. Conclusion: The DCNN improved fast-scan low-dose CBCT in terms of HU accuracy, image contrast, and OAR delineation accuracy, demonstrating the potential of eCBCT for adaptive radiotherapy.
Affiliation(s)
- Wen Chen, Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha, China; Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, United States
- Yimin Li, Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, United States; Department of Radiation Oncology, Xiamen Cancer Center, The First Affiliated Hospital of Xiamen University, Xiamen, China
- Nimu Yuan, Department of Biomedical Engineering, University of California, Davis, CA, United States
- Jinyi Qi, Department of Biomedical Engineering, University of California, Davis, CA, United States
- Brandon A Dyer, Department of Radiation Oncology, University of Washington, Seattle, WA, United States
- Levent Sensoy, Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, United States
- Stanley H Benedict, Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, United States
- Lu Shang, Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, United States
- Shyam Rao, Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, United States
- Yi Rong, Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, United States; Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, United States
33
Paganetti H, Beltran C, Both S, Dong L, Flanz J, Furutani K, Grassberger C, Grosshans DR, Knopf AC, Langendijk JA, Nystrom H, Parodi K, Raaymakers BW, Richter C, Sawakuchi GO, Schippers M, Shaitelman SF, Teo BKK, Unkelbach J, Wohlfahrt P, Lomax T. Roadmap: proton therapy physics and biology. Phys Med Biol 2021; 66. [DOI: 10.1088/1361-6560/abcd16]
34
Lee H, Park J, Choi Y, Park KR, Min BJ, Lee IJ. Low-dose CBCT reconstruction via joint non-local total variation denoising and cubic B-spline interpolation. Sci Rep 2021; 11:3681. [PMID: 33574477 DOI: 10.1038/s41598-021-83266-1]
Abstract
This study develops an improved Feldkamp-Davis-Kress (FDK) reconstruction algorithm using non-local total variation (NLTV) denoising and a cubic B-spline interpolation-based backprojector to enhance the image quality of low-dose cone-beam computed tomography (CBCT). The NLTV objective function is minimized on all log-transformed projections using steepest gradient descent optimization with adaptive control of the step size to accentuate the difference between real structure and noise. The proposed algorithm was evaluated using a phantom dataset acquired with a low-dose protocol at lower milliampere-seconds (mAs). The combination of NLTV minimization and cubic B-spline interpolation produced enhanced reconstructions with significantly reduced noise compared to conventional FDK and local total variation with an anisotropic penalty, and artifacts were markedly suppressed. Quantitative analysis of images reconstructed from low-mAs projections showed a contrast-to-noise ratio and spatial resolution comparable to images reconstructed from high-mAs projections, with the lowest RMSE and the highest correlation among the compared methods. These results indicate that the proposed algorithm enables the conventional FDK algorithm to be applied to low-mAs reconstruction in low-dose CBCT imaging, eliminating the need for more computationally demanding algorithms. The substantial reduction in radiation exposure associated with low-mAs acquisition may facilitate wider practical application of daily online CBCT imaging.
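The backprojector here interpolates detector samples with a cubic B-spline, whose kernel is a short closed-form expression. A hedged numpy sketch (this is the standard uniform cubic B-spline kernel, not code from the paper, and it blends samples directly as coefficients, which slightly smooths the data; exact interpolation would prefilter them first):

```python
import numpy as np

def cubic_bspline(t):
    """Uniform cubic B-spline kernel B3(t), with support |t| < 2."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    near = t < 1
    far = (t >= 1) & (t < 2)
    out[near] = (4.0 - 6.0 * t[near] ** 2 + 3.0 * t[near] ** 3) / 6.0
    out[far] = (2.0 - t[far]) ** 3 / 6.0
    return out

def interp_bspline(samples, x):
    """Evaluate uniformly spaced `samples` at fractional position `x`
    by blending the four nearest samples with B3 weights."""
    i = int(np.floor(x))
    idx = np.arange(i - 1, i + 3)
    w = cubic_bspline(x - idx)
    return float(np.dot(w, samples[np.clip(idx, 0, len(samples) - 1)]))

# The four weights at any fractional offset form a partition of unity.
weights = cubic_bspline(np.array([0.3 - k for k in range(-1, 3)]))
```

Because the kernel has linear precision, interpolating a linear ramp returns the ramp value exactly, which is a convenient sanity check for a backprojector implementation.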
35
Tien HJ, Yang HC, Shueng PW, Chen JC. Cone-beam CT image quality improvement using Cycle-Deblur consistent adversarial networks (Cycle-Deblur GAN) for chest CT imaging in breast cancer patients. Sci Rep 2021; 11:1133. [PMID: 33441936 DOI: 10.1038/s41598-020-80803-2]
Abstract
Cone-beam computed tomography (CBCT) integrated with a linear accelerator is widely used to increase the accuracy of radiotherapy and plays an important role in image-guided radiotherapy (IGRT). Compared with fan-beam computed tomography (FBCT), however, CBCT image quality is degraded by X-ray scattering, noise, and artefacts. We proposed a deep learning model, "Cycle-Deblur GAN", combining the CycleGAN and Deblur-GAN models to improve the quality of chest CBCT images. A total of 8706 CBCT-FBCT image pairs were used for training and 1150 pairs for testing. The CBCT images generated by the Cycle-Deblur GAN model had CT values closer to FBCT in the lung, breast, mediastinum, and sternum than those from the CycleGAN and RED-CNN models, and quantitative evaluations of MAE, PSNR, and SSIM likewise favored the Cycle-Deblur GAN model. The Cycle-Deblur GAN model improved image quality and CT-value accuracy and preserved structural details in chest CBCT images.
36
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538 PMCID: PMC7856512 DOI: 10.1002/acm2.13121]
Abstract
This paper reviews deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarize recent developments in deep learning-based inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performances, together with the related clinical applications of representative studies. We then summarize and discuss the challenges across the reviewed studies.
Affiliation(s)
- Tonghe Wang, Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei, Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu, Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Jacob F. Wynne, Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J. Curran, Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu, Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang, Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
37
Korreman S, Eriksen JG, Grau C. The changing role of radiation oncology professionals in a world of AI - Just jobs lost - Or a solution to the under-provision of radiotherapy? Clin Transl Radiat Oncol 2020; 26:104-107. [PMID: 33364449 PMCID: PMC7752957 DOI: 10.1016/j.ctro.2020.04.012]
Affiliation(s)
- Stine Korreman, Department of Oncology, Aarhus University Hospital, Aarhus, Denmark; Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark; Department of Clinical Medicine, Health, Aarhus University, Aarhus, Denmark
- Jesper Grau Eriksen, Department of Oncology, Aarhus University Hospital, Aarhus, Denmark; Department of Experimental Clinical Oncology, Aarhus University Hospital, Aarhus, Denmark; Department of Clinical Medicine, Health, Aarhus University, Aarhus, Denmark
- Cai Grau, Department of Oncology, Aarhus University Hospital, Aarhus, Denmark; Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark; Department of Clinical Medicine, Health, Aarhus University, Aarhus, Denmark
38
Lagerwerf MJ, Pelt DM, Palenstijn WJ, Batenburg KJ. A Computationally Efficient Reconstruction Algorithm for Circular Cone-Beam Computed Tomography Using Shallow Neural Networks. J Imaging 2020; 6:135. [PMID: 34460532 PMCID: PMC8321184 DOI: 10.3390/jimaging6120135]
Abstract
Circular cone-beam (CCB) Computed Tomography (CT) has become an integral part of industrial quality control, materials science and medical imaging. The need to acquire and process each scan in a short time naturally leads to trade-offs between speed and reconstruction quality, creating a need for fast reconstruction algorithms capable of creating accurate reconstructions from limited data. In this paper, we introduce the Neural Network Feldkamp-Davis-Kress (NN-FDK) algorithm. This algorithm adds a machine learning component to the FDK algorithm to improve its reconstruction accuracy while maintaining its computational efficiency. Moreover, the NN-FDK algorithm is designed such that it has low training data requirements and is fast to train. This ensures that the proposed algorithm can be used to improve image quality in high-throughput CT scanning settings, where FDK is currently used to keep pace with the acquisition speed using readily available computational resources. We compare the NN-FDK algorithm to two standard CT reconstruction algorithms and to two popular deep neural networks trained to remove reconstruction artifacts from the 2D slices of an FDK reconstruction. We show that the NN-FDK reconstruction algorithm is substantially faster in computing a reconstruction than all the tested alternative methods except for the standard FDK algorithm and we show it can compute accurate CCB CT reconstructions in cases of high noise, a low number of projection angles or large cone angles. Moreover, we show that the training time of an NN-FDK network is orders of magnitude lower than the considered deep neural networks, with only a slight reduction in reconstruction accuracy.
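The core idea of the NN-FDK approach, a small trainable network that combines per-voxel values from several FDK-style filtered reconstructions into one output, can be illustrated with a minimal numpy sketch. This is our own toy illustration (random weights, a generic ReLU hidden layer, an assumed feature layout of one value per candidate filter), not the authors' implementation:

```python
import numpy as np

def shallow_nn_combine(features, W1, b1, w2, b2):
    """Combine per-voxel FDK-derived features with a one-hidden-layer network.

    features: (n_voxels, n_filters) array, one FDK-style reconstruction
    value per voxel for each of n_filters candidate filters.
    """
    hidden = np.maximum(features @ W1 + b1, 0.0)  # ReLU hidden layer
    return hidden @ w2 + b2                       # linear output per voxel

# Toy example: 4 voxels, 3 candidate filters, 2 hidden units.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))
W1 = rng.normal(size=(3, 2)); b1 = np.zeros(2)
w2 = rng.normal(size=2); b2 = 0.0
out = shallow_nn_combine(feats, W1, b1, w2, b2)
print(out.shape)  # (4,)
```

In the paper the weights are learned from a small amount of training data; here they are random, since only the data flow is being sketched.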
Affiliation(s)
- Marinus J. Lagerwerf: Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, The Netherlands
- Daniël M. Pelt: Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, The Netherlands; Leiden Institute of Advanced Computer Science, Universiteit Leiden, 2333 CA Leiden, The Netherlands
- Willem Jan Palenstijn: Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, The Netherlands
- Kees Joost Batenburg: Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, The Netherlands; Leiden Institute of Advanced Computer Science, Universiteit Leiden, 2333 CA Leiden, The Netherlands
39
Thummerer A, de Jong BA, Zaffino P, Meijers A, Marmitt GG, Seco J, Steenbakkers RJHM, Langendijk JA, Both S, Spadea MF, Knopf AC. Comparison of the suitability of CBCT- and MR-based synthetic CTs for daily adaptive proton therapy in head and neck patients. Phys Med Biol 2020; 65:235036. [DOI: 10.1088/1361-6560/abb1d6]
40
Lalonde A, Winey B, Verburg J, Paganetti H, Sharp GC. Evaluation of CBCT scatter correction using deep convolutional neural networks for head and neck adaptive proton therapy. Phys Med Biol 2020; 65. [DOI: 10.1088/1361-6560/ab9fcb]
41
Huynh E, Hosny A, Guthier C, Bitterman DS, Petit SF, Haas-Kogan DA, Kann B, Aerts HJWL, Mak RH. Artificial intelligence in radiation oncology. Nat Rev Clin Oncol 2020; 17:771-781. [PMID: 32843739 DOI: 10.1038/s41571-020-0417-8]
Abstract
Artificial intelligence (AI) has the potential to fundamentally alter the way medicine is practised. AI platforms excel in recognizing complex patterns in medical data and provide a quantitative, rather than purely qualitative, assessment of clinical conditions. Accordingly, AI could have particularly transformative applications in radiation oncology given the multifaceted and highly technical nature of this field of medicine with a heavy reliance on digital data processing and computer software. Indeed, AI has the potential to improve the accuracy, precision, efficiency and overall quality of radiation therapy for patients with cancer. In this Perspective, we first provide a general description of AI methods, followed by a high-level overview of the radiation therapy workflow with discussion of the implications that AI is likely to have on each step of this process. Finally, we describe the challenges associated with the clinical development and implementation of AI platforms in radiation oncology and provide our perspective on how these platforms might change the roles of radiotherapy medical professionals.
42
van der Heyden B, Uray M, Fonseca GP, Huber P, Us D, Messner I, Law A, Parii A, Reisz N, Rinaldi I, Vilches Freixas G, Deutschmann H, Verhaegen F, Steininger P. A Monte Carlo based scatter removal method for non-isocentric cone-beam CT acquisitions using a deep convolutional autoencoder. Phys Med Biol 2020; 65:145002. [DOI: 10.1088/1361-6560/ab8954]
43
Maspero M, Houweling AC, Savenije MHF, van Heijst TCF, Verhoeff JJC, Kotte ANTJ, van den Berg CAT. A single neural network for cone-beam computed tomography-based radiotherapy of head-and-neck, lung and breast cancer. Phys Imaging Radiat Oncol 2020; 14:24-31. [PMID: 33458310 PMCID: PMC7807541 DOI: 10.1016/j.phro.2020.04.002]
Abstract
A deep learning network facilitated dose calculation from CBCT. A single network achieved CBCT-based dose calculation generating synthetic CT for head-and-neck, lung, and breast cancer patients with similar performance to a network specifically trained for each anatomical site. Generation of synthetic-CT can be achieved within 10 s, facilitating online adaptive radiotherapy scenarios.
Background and purpose: Adaptive radiotherapy based on cone-beam computed tomography (CBCT) requires high CT number accuracy to ensure accurate dose calculations. Recently, deep learning has been proposed for fast CBCT artefact corrections on single anatomical sites. This study investigated the feasibility of applying a single convolutional network to facilitate dose calculation based on CBCT for head-and-neck, lung and breast cancer patients.
Materials and methods: Ninety-nine patients diagnosed with head-and-neck, lung or breast cancer undergoing radiotherapy with CBCT-based position verification were included in this study. The CBCTs were registered to the planning CT according to clinical procedures. Three cycle-consistent generative adversarial networks (cycle-GANs) were trained in an unpaired manner on 15 patients per anatomical site to generate synthetic CTs (sCTs). A fourth network was trained on all anatomical sites together. The performance of all four networks was compared and evaluated for image similarity against rescan CT (rCT). Clinical plans were recalculated on rCT and sCT and analysed through voxel-based dose differences and γ-analysis.
Results: A sCT was generated in 10 s. Image similarity was comparable between models trained on individual anatomical sites and a single model for all sites. Mean dose differences <0.5% were obtained in high-dose regions. Mean gamma (3%, 3 mm) pass-rates >95% were achieved for all sites.
Conclusion: Cycle-GAN reduced CBCT artefacts and increased similarity to CT, enabling sCT-based dose calculations. A single network achieved CBCT-based dose calculation generating synthetic CT for head-and-neck, lung and breast cancer patients, with performance similar to that of networks trained for each anatomical site separately.
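The unpaired training used here (and in several of the following entries) rests on a cycle-consistency term alongside the adversarial losses: mapping CBCT to synthetic CT and back should recover the input, and vice versa. A minimal numpy sketch of that L1 cycle loss, with toy invertible callables standing in for the two generator networks (names and shapes are ours):

```python
import numpy as np

def cycle_consistency_loss(G, F, cbct_batch, ct_batch):
    """L1 cycle loss used (alongside adversarial terms) in cycle-GAN
    training: F(G(cbct)) should recover cbct, and G(F(ct)) should
    recover ct. G: CBCT -> synthetic CT, F: CT -> synthetic CBCT."""
    loss_cbct = np.mean(np.abs(F(G(cbct_batch)) - cbct_batch))
    loss_ct = np.mean(np.abs(G(F(ct_batch)) - ct_batch))
    return loss_cbct + loss_ct

# Toy generators: a gain/offset and its inverse, so the cycle is exact.
G = lambda x: 1.1 * x + 5.0
F = lambda y: (y - 5.0) / 1.1
cbct = np.random.default_rng(1).normal(size=(2, 8, 8))
ct = np.random.default_rng(2).normal(size=(2, 8, 8))
loss = cycle_consistency_loss(G, F, cbct, ct)
print(round(loss, 6))  # 0.0 - the toy generators are exact inverses
```

In the actual method the generators are convolutional networks and the loss is minimised jointly with the discriminators; only the cycle term is sketched here.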
Affiliation(s)
- Matteo Maspero: Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, The Netherlands
- Antonetta C Houweling: Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, The Netherlands
- Mark H F Savenije: Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, The Netherlands
- Tristan C F van Heijst: Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, The Netherlands
- Joost J C Verhoeff: Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, The Netherlands
- Alexis N T J Kotte: Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, The Netherlands
- Cornelis A T van den Berg: Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, The Netherlands
44
Thummerer A, Zaffino P, Meijers A, Marmitt GG, Seco J, Steenbakkers RJHM, Langendijk JA, Both S, Spadea MF, Knopf AC. Comparison of CBCT based synthetic CT methods suitable for proton dose calculations in adaptive proton therapy. Phys Med Biol 2020; 65:095002. [PMID: 32143207 DOI: 10.1088/1361-6560/ab7d54]
Abstract
In-room imaging is a prerequisite for adaptive proton therapy. Onboard cone-beam computed tomography (CBCT) imaging, which is routinely acquired for patient position verification, can enable daily dose reconstructions and plan adaptation decisions. Image quality deficiencies, however, hamper dose calculation accuracy and make correction of CBCTs a necessity. This study compared three methods to correct CBCTs and create synthetic CTs that are suitable for proton dose calculations. CBCTs, planning CTs and repeated CTs (rCT) from 33 H&N cancer patients were used to compare a deep convolutional neural network (DCNN), deformable image registration (DIR) and an analytical image-based correction method (AIC) for synthetic CT (sCT) generation. Image quality of sCTs was evaluated by comparison with a same-day rCT, using mean absolute error (MAE), mean error (ME), Dice similarity coefficient (DSC), structural non-uniformity (SNU) and signal/contrast-to-noise ratios (SNR/CNR) as metrics. Dosimetric accuracy was investigated in an intracranial setting by performing gamma analysis and calculating range shifts. Neural-network-based sCTs resulted in the lowest MAE and ME (37 and 2 HU) and the highest DSC (0.96), whereas DIR and AIC generated images with MAEs of 44 and 77 HU, MEs of -8 and 1 HU, and DSCs of 0.94 and 0.90, respectively. Gamma and range-shift analysis showed almost no dosimetric difference between DCNN- and DIR-based sCTs. The lower image quality of AIC-based sCTs affected dosimetric accuracy, resulting in lower pass ratios and larger range shifts. Patient-specific differences highlighted the advantages and disadvantages of each method. For this set of patients, the DCNN created synthetic CTs with the highest image quality. Accurate proton dose calculations were achieved by both DCNN- and DIR-based sCTs, while the AIC method resulted in lower image quality and reduced dose calculation accuracy compared to the other methods.
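The image-similarity metrics this comparison relies on (MAE, ME, DSC) are simple per-voxel computations. A small sketch on toy HU arrays, with a threshold-derived mask pair for the Dice coefficient (the threshold and values are illustrative only, not from the paper):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two image arrays (e.g. sCT vs rCT)."""
    return np.mean(np.abs(a - b))

def mean_error(a, b):
    """Signed mean error; reveals systematic HU offsets MAE hides."""
    return np.mean(a - b)

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

sct = np.array([[10., 20.], [30., 40.]])   # toy synthetic-CT HU values
rct = np.array([[12., 18.], [30., 44.]])   # toy rescan-CT HU values
print(mae(sct, rct))             # 2.0
print(mean_error(sct, rct))      # -1.0
print(dice(sct > 15, rct > 15))  # 1.0 - identical masks here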
Affiliation(s)
- Adrian Thummerer: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
45
Kusunose K, Haga A, Inoue M, Fukuda D, Yamada H, Sata M. Clinically Feasible and Accurate View Classification of Echocardiographic Images Using Deep Learning. Biomolecules 2020; 10:E665. [PMID: 32344829 DOI: 10.3390/biom10050665]
Abstract
A proper echocardiographic study requires several video clips recorded from different acquisition angles for observation of the complex cardiac anatomy. However, these video clips are not necessarily labeled in a database, so identification of the acquired view becomes the first step in analyzing an echocardiogram. Currently, there is no consensus on whether mislabeled samples can be used to create a feasible clinical prediction model of ejection fraction (EF). The aim of this study was to test two types of input methods for the classification of images, and to test the accuracy of the prediction model for EF in a learning database containing mislabeled images that were not checked by observers. We enrolled 340 patients with five standard views (long axis, short axis, 3-chamber view, 4-chamber view and 2-chamber view) and 10 images per cycle, used for training a convolutional neural network to classify views (17,000 labeled images in total). All DICOM images were rigidly registered and rescaled into a reference image to fit the size of echocardiographic images. We employed 5-fold cross-validation to examine model performance. We tested models trained on two types of data: averaged images and 10 selected images. Our best model (from 10 selected images) classified video views with 98.1% overall test accuracy in the independent cohort. In our view classification model, 1.9% of the images were mislabeled. To determine whether this 98.1% accuracy was acceptable for creating the clinical prediction model using echocardiographic data, we tested the prediction model for EF using learning data with a 1.9% error rate. The accuracy of the prediction model for EF was maintained, even with training data containing 1.9% mislabeled images. The CNN algorithm can classify images into five standard views in a clinical setting. Our results suggest that this approach may provide a clinically feasible accuracy level of view classification for the analysis of echocardiographic data.
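The 5-fold cross-validation used for model evaluation above amounts to a shuffled index split; the helper below is our own sketch, not code from the paper:

```python
import numpy as np

def five_fold_indices(n_samples, seed=0):
    """Shuffled 5-fold split: yields (train_idx, val_idx) per fold."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, 5)
    for k in range(5):
        val = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, val

# 340 patients, as in the study: each fold holds out 68 patients.
splits = list(five_fold_indices(340))
print(len(splits), len(splits[0][1]))  # 5 68
```

Splitting by patient (rather than by image) avoids leaking the 10 images of one cycle across the train/validation boundary.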
46
Liu Y, Lei Y, Wang T, Fu Y, Tang X, Curran WJ, Liu T, Patel P, Yang X. CBCT-based synthetic CT generation using deep-attention cycleGAN for pancreatic adaptive radiotherapy. Med Phys 2020; 47:2472-2483. [PMID: 32141618 DOI: 10.1002/mp.14121]
Abstract
PURPOSE: Current clinical application of cone-beam CT (CBCT) is limited to patient setup. Imaging artifacts and Hounsfield unit (HU) inaccuracy make the process of CBCT-based adaptive planning presently impractical. In this study, we developed a deep-learning-based approach to improve CBCT image quality and HU accuracy for potential extended clinical use in CBCT-guided pancreatic adaptive radiotherapy.
METHODS: Thirty patients previously treated with pancreas SBRT were included. The CBCT acquired prior to the first fraction of treatment was registered to the planning CT for training and generation of synthetic CT (sCT). A self-attention cycle generative adversarial network (cycleGAN) was used to generate CBCT-based sCT. For the cohort of 30 patients, the CT-based contours and treatment plans were transferred to the first-fraction CBCTs and sCTs for dosimetric comparison.
RESULTS: In the abdomen, the mean absolute error (MAE) between CT and sCT was 56.89 ± 13.84 HU, compared to 81.06 ± 15.86 HU between CT and the raw CBCT. No significant differences (P > 0.05) were observed in the PTV and OAR dose-volume-histogram (DVH) metrics between the CT- and sCT-based plans, while significant differences (P < 0.05) were found between the CT- and CBCT-based plans.
CONCLUSIONS: The image similarity and dosimetric agreement between the CT- and sCT-based plans validated the dose calculation accuracy achieved with sCT. The CBCT-based sCT approach can potentially increase treatment precision and thus minimize gastrointestinal toxicity.
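DVH metrics of the kind compared above (e.g. D95, the minimum dose covering the hottest 95% of a structure) reduce to quantiles of the per-voxel dose distribution, which makes the CT-vs-sCT comparison easy to sketch. Toy dose values and function names are ours:

```python
import numpy as np

def d_x(dose, volume_fraction):
    """D_x DVH metric: the minimum dose received by the hottest
    `volume_fraction` of the volume, i.e. the (1 - volume_fraction)
    quantile of the per-voxel dose distribution."""
    return float(np.quantile(dose, 1.0 - volume_fraction))

def dvh_metric_difference(dose_ct, dose_sct, volume_fraction):
    """Relative difference (%) in a D_x metric between two dose grids,
    e.g. a CT-based and an sCT-based recalculation of the same plan."""
    ref = d_x(dose_ct, volume_fraction)
    return 100.0 * (d_x(dose_sct, volume_fraction) - ref) / ref

dose_ct = np.linspace(50.0, 68.0, 10)   # toy PTV dose samples (Gy)
dose_sct = dose_ct * 1.001              # sCT recalculation, 0.1% hotter
print(round(d_x(dose_ct, 0.95), 2))     # D95 of the toy distribution
print(round(dvh_metric_difference(dose_ct, dose_sct, 0.95), 3))  # 0.1
```

A real DVH is computed per contoured structure over the 3D dose grid; the quantile logic is the same.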
Affiliation(s)
- Yingzi Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yabo Fu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiangyang Tang: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
47
Yuan N, Dyer B, Rao S, Chen Q, Benedict S, Shang L, Kang Y, Qi J, Rong Y. Convolutional neural network enhancement of fast-scan low-dose cone-beam CT images for head and neck radiotherapy. Phys Med Biol 2020; 65:035003. [PMID: 31842014 PMCID: PMC8011532 DOI: 10.1088/1361-6560/ab6240]
Abstract
To improve image quality and CT number accuracy of fast-scan low-dose cone-beam computed tomography (CBCT) through a deep-learning convolutional neural network (CNN) methodology for head-and-neck (HN) radiotherapy. Fifty-five paired CBCT and CT images from HN patients were retrospectively analysed. Among them, 15 patients underwent adaptive replanning during treatment and thus had same-day CT/CBCT pairs. The remaining 40 patients (post-operative) had paired planning CT and first-fraction CBCT images with minimal anatomic changes. A 2D U-Net architecture with 27 layers in 5 depths was built for the CNN. CNN training was performed using data from 40 post-operative HN patients with 2080 paired CT/CBCT slices. The validation and test datasets comprised 5 same-day datasets with 260 slice pairs and 10 same-day datasets with 520 slice pairs, respectively. To examine the impact of training dataset selection and network performance as a function of training data size, additional networks were trained using 30, 40 and 50 datasets. Image quality of enhanced CBCT images was quantitatively compared against the CT image using mean absolute error (MAE) of Hounsfield units (HU), signal-to-noise ratio (SNR) and structural similarity (SSIM). Enhanced CBCT images reduced artifact distortion and improved soft tissue contrast. Networks trained with 40 datasets had imaging performance comparable to those trained with 50 datasets and outperformed those trained with 30 datasets. Comparison of CBCT and enhanced CBCT images demonstrated improvement in average MAE from 172.73 to 49.28 HU, SNR from 8.27 to 14.25 dB, and SSIM from 0.42 to 0.85. The image processing time is 2 s per patient using a NVIDIA GeForce GTX 1080 Ti GPU. The proposed deep-learning methodology was fast and effective for image quality enhancement of fast-scan low-dose CBCT. This method has potential to support fast online adaptive replanning for HN cancer patients.
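SSIM, one of the metrics reported above, has a closed form; a single-window (global) variant is sketched below with the standard constants. The full metric averages the same formula over local windows, but the per-window computation is identical:

```python
import numpy as np

def global_ssim(x, y, data_range):
    """Single-window (global) SSIM between two images.

    data_range: dynamic range L of the images; the stabilising
    constants are c1 = (0.01 L)^2 and c2 = (0.03 L)^2 as in the
    standard formulation.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.random.default_rng(3).uniform(0, 1, size=(32, 32))
print(round(global_ssim(img, img, data_range=1.0), 6))  # 1.0 for identical images
```

For HU images, `data_range` would be the HU span of the volume rather than 1.0.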
Affiliation(s)
- Nimu Yuan: Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, Liaoning, People's Republic of China; Department of Biomedical Engineering, University of California, Davis, CA, United States of America
- Brandon Dyer: Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, United States of America; Department of Radiation Oncology, University of Washington, Seattle, WA, United States of America
- Shyam Rao: Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, United States of America
- Quan Chen: Department of Radiation Medicine, University of Kentucky, Lexington, KY, United States of America
- Stanley Benedict: Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, United States of America
- Lu Shang: Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, United States of America
- Yan Kang: Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, Liaoning, People's Republic of China
- Jinyi Qi: Department of Biomedical Engineering, University of California, Davis, CA, United States of America
- Yi Rong: Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, United States of America
48
Kurz C, Maspero M, Savenije MHF, Landry G, Kamp F, Pinto M, Li M, Parodi K, Belka C, van den Berg CAT. CBCT correction using a cycle-consistent generative adversarial network and unpaired training to enable photon and proton dose calculation. Phys Med Biol 2019; 64:225004. [PMID: 31610527 DOI: 10.1088/1361-6560/ab4d8c]
Abstract
In the presence of inter-fractional anatomical changes, clinical benefits are anticipated from image-guided adaptive radiotherapy. Nowadays, cone-beam CT (CBCT) imaging is mostly utilized for pre-treatment position verification. Due to various artifacts, image quality is typically not sufficient for photon or proton dose calculation, thus demanding accurate CBCT correction, as potentially provided by deep learning techniques. This work aimed at investigating the feasibility of utilizing a cycle-consistent generative adversarial network (cycleGAN) for prostate CBCT correction using unpaired training. Thirty-three patients were included. The network was trained to translate uncorrected, original CBCT images (CBCTorg) into planning-CT-equivalent images (CBCTcycleGAN). HU accuracy was determined by comparison to a previously validated CBCT correction technique (CBCTcor). Dosimetric accuracy was inferred for volumetric-modulated arc photon therapy (VMAT) and opposing single-field uniform dose (OSFUD) proton plans, optimized on CBCTcor and recalculated on CBCTcycleGAN. Single-sided SFUD proton plans were utilized to assess proton range accuracy. The mean HU error of CBCTcycleGAN with respect to CBCTcor decreased from 24 HU for CBCTorg to -6 HU. Dose calculation accuracy was high for VMAT, with average pass-rates of 100%/89% for a 2%/1% dose difference criterion. For proton OSFUD plans, the average pass-rate for a 2% dose difference criterion was 80%; using a (2%, 2 mm) gamma criterion, the pass-rate was 96%. 93% of all analyzed SFUD profiles had a range agreement better than 3 mm. CBCT correction time was reduced from 6-10 min for CBCTcor to 10 s for CBCTcycleGAN. Our study demonstrated the feasibility of utilizing a cycleGAN for CBCT correction, achieving high dose calculation accuracy for VMAT; for proton therapy, further improvements may be required. Due to unpaired training, the approach does not rely on anatomically consistent training data or potentially inaccurate deformable image registration. The substantial speed-up for CBCT correction renders the method particularly interesting for adaptive radiotherapy.
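The gamma analysis used for the dosimetric comparisons in this and neighbouring entries combines a dose-difference criterion with a distance-to-agreement criterion. A simplified 1D global-gamma sketch (exhaustive point search, toy Gaussian profiles; a clinical implementation would interpolate between points and work in 3D):

```python
import numpy as np

def gamma_pass_rate(ref, evl, coords, dose_tol=0.02, dist_tol=2.0):
    """Simplified 1D global gamma: for each reference point, take the
    minimum over all evaluated points of
    sqrt((dose_diff/dose_tol)^2 + (distance/dist_tol)^2);
    a point passes if that minimum is <= 1. dose_tol is a fraction of
    the reference maximum (global criterion), dist_tol is in mm."""
    d_norm = dose_tol * ref.max()
    passed = 0
    for d_r, x_r in zip(ref, coords):
        dd = (evl - d_r) / d_norm
        dx = (coords - x_r) / dist_tol
        if np.sqrt(dd ** 2 + dx ** 2).min() <= 1.0:
            passed += 1
    return 100.0 * passed / ref.size

x = np.linspace(0, 10, 101)            # positions in mm
ref = np.exp(-((x - 5) / 2) ** 2)      # reference dose profile
evl = np.exp(-((x - 5.1) / 2) ** 2)    # evaluated profile, shifted 0.1 mm
print(gamma_pass_rate(ref, evl, x))    # 100.0 - well within 2%/2 mm
```

A 0.1 mm shift passes a 2%/2 mm criterion everywhere; a 3 mm shift would not.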
Affiliation(s)
- Christopher Kurz (corresponding author): Department of Radiation Oncology, University Hospital, LMU Munich, Munich, Germany; Department of Radiotherapy, Center for Image Sciences, Universitair Medisch Centrum Utrecht, Utrecht, The Netherlands; Department of Medical Physics, Fakultät für Physik, Ludwig-Maximilians-Universität München (LMU Munich), Garching, Germany
49
Harms J, Lei Y, Wang T, Zhang R, Zhou J, Tang X, Curran WJ, Liu T, Yang X. Paired cycle-GAN-based image correction for quantitative cone-beam computed tomography. Med Phys 2019; 46:3998-4009. [PMID: 31206709 DOI: 10.1002/mp.13656]
Abstract
PURPOSE: The incorporation of cone-beam computed tomography (CBCT) has allowed for enhanced image-guided radiation therapy. While CBCT allows for daily 3D imaging, the images suffer from severe artifacts, limiting the clinical potential of CBCT. In this work, a deep learning-based method for generating high-quality corrected CBCT (CCBCT) images is proposed.
METHODS: The proposed method integrates a residual-block concept into a cycle-consistent adversarial network (cycle-GAN) framework, called res-cycle GAN, to learn a mapping between CBCT images and paired planning CT images. Compared with a GAN, a cycle-GAN includes an inverse transformation from CBCT to CT images, which constrains the model by forcing calculation of both a CCBCT and a synthetic CBCT. A fully convolutional neural network with residual blocks is used in the generator to enable end-to-end CBCT-to-CT transformations. The proposed algorithm was evaluated using 24 sets of patient data in the brain and 20 sets of patient data in the pelvis. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) index and spatial non-uniformity (SNU) were used to quantify the correction accuracy of the proposed algorithm. The proposed method was compared to both a conventional scatter correction and another machine learning-based CBCT correction method.
RESULTS: Overall, the MAE, PSNR, NCC and SNU were 13.0 HU, 37.5 dB, 0.99 and 0.05 in the brain and 16.1 HU, 30.7 dB, 0.98 and 0.09 in the pelvis for the proposed method, improvements of 45%, 16%, 1% and 93% in the brain and 71%, 38%, 2% and 65% in the pelvis over the CBCT image. The proposed method showed superior image quality compared to the scatter correction method, reducing noise and artifact severity, and produced images with less noise and fewer artifacts than the comparison machine learning-based method.
CONCLUSIONS: The authors have developed a novel deep learning-based method to generate high-quality corrected CBCT images. The proposed method increases onboard CBCT image quality, making it comparable to that of the planning CT. With further evaluation and clinical implementation, this method could lead to quantitative adaptive radiation therapy.
Affiliation(s)
- Joseph Harms: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Rongxiao Zhang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Jun Zhou: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiangyang Tang: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
50
Li Y, Zhu J, Liu Z, Teng J, Xie Q, Zhang L, Liu X, Shi J, Chen L. A preliminary study of using a deep convolution neural network to generate synthesized CT images based on CBCT for adaptive radiotherapy of nasopharyngeal carcinoma. Phys Med Biol 2019; 64:145010. [PMID: 31170699 DOI: 10.1088/1361-6560/ab2770]
Abstract
This study aims to utilize a deep convolutional neural network (DCNN) to generate synthesized CT images from cone-beam CT (CBCT) and to apply those images to dose calculations for nasopharyngeal carcinoma (NPC). An encoder-decoder 2D U-Net neural network was built. A total of 70 CBCT/CT paired images of NPC patients were split into training (50), validation (10) and testing (10) datasets. The testing datasets were treated with the same prescription dose (70 Gy to the PTVnx70, 68 Gy to the PTVnd68, 62 Gy to the PTV62 and 54 Gy to the PTV54). The mean error (ME) and mean absolute error (MAE) with respect to the true CT images were calculated to evaluate the image quality of the synthesized CT. The dose-volume histogram (DVH) dose metric differences and 3D gamma pass rates with respect to the true CT images were calculated for dose analysis, and the results were compared with those for the CBCT images (original CBCT images without any correction) and a patient-specific calibration (PSC) method. Compared with CBCT, the range of the MAE for synthesized CT images improved from (60, 120) to (6, 27) Hounsfield units (HU), and the ME improved from (-74, 51) to (-26, 4) HU. Compared with the true CT method, the average DVH dose metric differences for the CBCT, PSC and synthesized CT methods were 0.8% ± 1.9%, 0.4% ± 0.7% and 0.2% ± 0.6%, respectively. The 1%/1 mm gamma pass rates within the body for the CBCT, PSC and synthesized CT methods were 90.8% ± 6.2%, 94.1% ± 4.4% and 95.5% ± 1.6%, respectively, and the rates within the PTVnx70 were 80.3% ± 16.6%, 87.9% ± 19.7% and 98.6% ± 2.9%, respectively. The DCNN model can generate high-quality synthesized CT images from CBCT images that can be used for accurate dose calculations for NPC patients. This finding has great significance for the clinical application of adaptive radiotherapy for NPC.
Affiliation(s)
- Yinghui Li: School of Physics, Sun Yat-sen University, Guangzhou, Guangdong, People's Republic of China; Physics Department of the Radiotherapy Department, The First People's Hospital of FoShan (Affiliated FoShan Hospital of Sun Yat-sen University), Foshan, Guangdong, People's Republic of China; State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Sun Yat-Sen University of Medical Sciences, Guangzhou, Guangdong, People's Republic of China