1
Fu Y, Zhang P, Fan Q, Cai W, Pham H, Rimner A, Cuaron J, Cervino L, Moran JM, Li T, Li X. Deep learning-based target decomposition for markerless lung tumor tracking in radiotherapy. Med Phys 2024;51:4271-4282. [PMID: 38507259] [DOI: 10.1002/mp.17039]
Abstract
BACKGROUND In radiotherapy, real-time tumor tracking can verify the tumor position during beam delivery, guide the radiation beam to target the tumor, and reduce the chance of a geometric miss. Markerless kV x-ray image-based tumor tracking is challenging due to the low tumor visibility caused by tumor-obscuring structures. Developing a new method to enhance tumor visibility for real-time tumor tracking is essential. PURPOSE To introduce a novel method for markerless kV image-based tracking of lung tumors via deep learning-based target decomposition. METHODS We utilized a conditional Generative Adversarial Network (cGAN), known as Pix2Pix, to build a patient-specific model and generate the synthetic decomposed target image (sDTI) to enhance tumor visibility on the real-time kV projection images acquired by the onboard kV imager of modern linear accelerators. We used 4DCT simulation images to generate the digitally reconstructed radiograph (DRR) and DTI image pairs for model training. We augmented the training dataset by randomly shifting the 4DCT in the superior-inferior, anterior-posterior, and left-right directions during the DRR and DTI generation process. We performed real-time 2D tumor tracking via template matching between the DTI generated from the CT simulation and the sDTI generated from the real-time kV projection images. We validated the proposed method using nine patients' datasets with implanted beacons near the tumor. RESULTS The sDTI effectively improved the image contrast around the lung tumors on the kV projection images for the nine patients. With the beacon motion as ground truth, the tracking errors were on average 0.8 ± 0.7 mm in the superior-inferior (SI) direction and 0.9 ± 0.8 mm in the in-plane left-right (IPLR) direction. The percentage of successful tracking, defined as a tracking error less than 2 mm in the SI direction, was 92.2% on the 4312 tested images. The patient-specific model took approximately 12 h to train. During testing, it took approximately 35 ms to generate one sDTI and 13 ms to perform the tumor tracking using template matching. CONCLUSIONS Our method offers a potential solution for near real-time markerless lung tumor tracking, achieving a high level of accuracy and a high tracking rate. Further development of 3D lung tumor tracking is warranted.
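The real-time step described in this abstract is template matching between a planning-derived DTI template and each incoming sDTI frame. As an illustration only (not the authors' implementation), a minimal 2D template-matching tracker using OpenCV's normalized cross-correlation might look like the sketch below; the array shapes, normalization, and toy data are assumptions.

```python
# Minimal sketch of 2D tumor tracking by template matching (illustrative only).
import numpy as np
import cv2


def track_tumor_2d(sdti_frame: np.ndarray, template: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) of the best template match in the frame."""
    # Normalized cross-correlation is robust to global intensity offsets.
    frame = cv2.normalize(sdti_frame.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
    templ = cv2.normalize(template.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
    score_map = cv2.matchTemplate(frame, templ, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score_map)   # max_loc is (x, y)
    return max_loc[1], max_loc[0]


# Toy usage: a synthetic frame with a bright blob and a matching template.
frame = np.zeros((256, 256), np.float32)
frame[100:120, 150:170] = 1.0
template = frame[95:125, 145:175].copy()
row, col = track_tumor_2d(frame, template)
print(f"template located at row={row}, col={col}")
```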
Affiliation(s)
- Yabo Fu
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
- Pengpeng Zhang
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
- Qiyong Fan
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
- Weixing Cai
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
- Hai Pham
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
- Andreas Rimner
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
- John Cuaron
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
- Laura Cervino
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
- Jean M Moran
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
- Tianfang Li
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
- Xiang Li
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
2
Wen X, Zhao C, Zhao B, Yuan M, Chang J, Liu W, Meng J, Shi L, Yang S, Zeng J, Yang Y. Application of deep learning in radiation therapy for cancer. Cancer Radiother 2024;28:208-217. [PMID: 38519291] [DOI: 10.1016/j.canrad.2023.07.015]
Abstract
In recent years, with the development of artificial intelligence, deep learning has been gradually applied to clinical treatment and research. It has also found its way into radiotherapy, a crucial method for cancer treatment. This study summarizes commonly used and recent deep learning algorithms (including transformer and diffusion models), introduces the workflows of different radiotherapy techniques, and illustrates the application of different algorithms in the various radiotherapy modules, as well as the shortcomings and challenges of deep learning in the field of radiotherapy, with the aim of supporting the development of automated radiotherapy for cancer.
Affiliation(s)
- X Wen
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- C Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Minhang District, Shanghai, China
- B Zhao
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- M Yuan
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- J Chang
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- W Liu
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- J Meng
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- L Shi
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- S Yang
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- J Zeng
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- Y Yang
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
3
Huang L, Kurz C, Freislederer P, Manapov F, Corradini S, Niyazi M, Belka C, Landry G, Riboldi M. Simultaneous object detection and segmentation for patient-specific markerless lung tumor tracking in simulated radiographs with deep learning. Med Phys 2024;51:1957-1973. [PMID: 37683107] [DOI: 10.1002/mp.16705]
Abstract
BACKGROUND Real-time tumor tracking is one motion management method to address motion-induced uncertainty. To date, fiducial markers are often required to reliably track lung tumors with X-ray imaging, which carries risks of complications and leads to prolonged treatment time. A markerless tracking approach is thus desirable. Deep learning-based approaches have shown promise for markerless tracking, but systematic evaluation and procedures to investigate applicability in individual cases are missing. Moreover, few efforts have been made to provide bounding box prediction and mask segmentation simultaneously, which could allow either rigid or deformable multi-leaf collimator tracking. PURPOSE The purpose of this study was to implement a deep learning-based markerless lung tumor tracking model exploiting patient-specific training which outputs both a bounding box and a mask segmentation simultaneously. We also aimed to compare the two kinds of predictions and to implement a specific procedure to understand the feasibility of markerless tracking on individual cases. METHODS We first trained a Retina U-Net baseline model on digitally reconstructed radiographs (DRRs) generated from a public dataset containing 875 CT scans and corresponding lung nodule annotations. Afterwards, we used an independent cohort of 97 lung patients to develop a patient-specific refinement procedure. In order to determine the optimal hyperparameters for automatic patient-specific training, we selected 13 patients for validation where the baseline model predicted a bounding box on planning CT (PCT)-DRR with intersection over union (IoU) with the ground-truth higher than 0.7. The final test set contained the remaining 84 patients with varying PCT-DRR IoU. For each testing patient, the baseline model was refined on the PCT-DRR to generate a patient-specific model, which was then tested on a separate 10-phase 4DCT-DRR to mimic the intrafraction motion during treatment. A template matching algorithm served as benchmark model. The testing results were evaluated by four metrics: the center of mass (COM) error and the Dice similarity coefficient (DSC) for segmentation masks, and the center of box (COB) error and the DSC for bounding box detections. Performance was compared to the benchmark model including statistical testing for significance. RESULTS A PCT-DRR IoU value of 0.2 was shown to be the threshold dividing inconsistent (68%) and consistent (100%) success (defined as mean bounding box DSC > 0.6) of PS models on 4DCT-DRRs. Thirty-seven out of the eighty-four testing cases had a PCT-DRR IoU above 0.2. For these 37 cases, the mean COM error was 2.6 mm, the mean segmentation DSC was 0.78, the mean COB error was 2.7 mm, and the mean box DSC was 0.83. Including the validation cases, the model was applicable to 50 out of 97 patients when using the PCT-DRR IoU threshold of 0.2. The inference time per frame was 170 ms. The model outperformed the benchmark model on all metrics, and the comparison was significant (p < 0.001) over the 37 PCT-DRR IoU > 0.2 cases, but not over the undifferentiated 84 testing cases. CONCLUSIONS The implemented patient-specific refinement approach based on a pre-trained baseline model was shown to be applicable to markerless tumor tracking in simulated radiographs for lung cases.
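The evaluation above relies on a few standard overlap and distance metrics. The sketch below shows one plausible way to compute bounding-box IoU, mask Dice, and center-of-mass error with NumPy; it is not the study's evaluation code, and the (x1, y1, x2, y2) box convention and pixel spacing are assumptions.

```python
# Illustrative metric helpers for detection/segmentation-based tracking.
import numpy as np


def box_iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


def com_error_mm(mask_a, mask_b, pixel_spacing_mm=1.0):
    """Center-of-mass distance between two binary masks, in millimeters."""
    ca = np.array(np.nonzero(mask_a)).mean(axis=1)
    cb = np.array(np.nonzero(mask_b)).mean(axis=1)
    return float(np.linalg.norm(ca - cb) * pixel_spacing_mm)


print(box_iou((10, 10, 50, 50), (20, 20, 60, 60)))  # ~0.39
```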
Affiliation(s)
- Lili Huang
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Department of Medical Physics, Faculty of Physics, Ludwig-Maximilians-Universität München, München, Germany
- Christopher Kurz
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Philipp Freislederer
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Farkhad Manapov
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Stefanie Corradini
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Maximilian Niyazi
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Claus Belka
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- German Cancer Consortium (DKTK), partner site Munich, a partnership between DKFZ and LMU University Hospital Munich, Germany
- Bavarian Cancer Research Center (BZKF), Munich, Germany
- Guillaume Landry
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Marco Riboldi
- Department of Medical Physics, Faculty of Physics, Ludwig-Maximilians-Universität München, München, Germany
4
Winter JD, Reddy V, Li W, Craig T, Raman S. Impact of technological advances in treatment planning, image guidance, and treatment delivery on target margin design for prostate cancer radiotherapy: an updated review. Br J Radiol 2024;97:31-40. [PMID: 38263844] [PMCID: PMC11027310] [DOI: 10.1093/bjr/tqad041]
Abstract
Recent innovations in image guidance, treatment delivery, and adaptive radiotherapy (RT) have created a new paradigm for planning target volume (PTV) margin design for patients with prostate cancer. We performed a review of the recent literature on PTV margin selection and design for intact prostate RT, excluding post-operative RT, brachytherapy, and proton therapy. Our review describes the increased focus on the prostate and seminal vesicles as heterogeneous deforming structures, with the further emergence of intra-prostatic GTV boosts and concurrent pelvic lymph node treatment. To capture recent innovations, we highlight the evolution of cone beam CT guidance and the increasing use of MRI for improved target delineation, image registration, and support of online adaptive RT. Moreover, we summarize new and evolving image-guidance treatment platforms as well as recent reports of novel immobilization strategies and motion tracking. Our report also captures recent implementations of artificial intelligence to support image guidance and adaptive RT. To characterize the clinical impact of PTV margin changes via model-based risk estimates and clinical trials, we highlight recent high-impact reports. Our report focuses on topics in the context of PTV margins but also showcases studies attempting to move beyond PTV margin recipes with robust optimization and probabilistic planning approaches. Although guidelines exist for target margins with conventional CT-based image guidance, further validation is required to understand the optimal margins for online adaptation, either alone or combined with real-time motion compensation, to minimize systematic and random uncertainties in the treatment of patients with prostate cancer.
Affiliation(s)
- Jeff D Winter
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON M5G 2M9, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
- Varun Reddy
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON M5G 2M9, Canada
- Winnie Li
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON M5G 2M9, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
- Tim Craig
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON M5G 2M9, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
- Srinivas Raman
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON M5G 2M9, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
5
Zhang X, Liu W, Xu F, He W, Song Y, Li G, Zhang Y, Dai G, Xiao Q, Meng Q, Zeng X, Bai S, Zhong R. Neural signals-based respiratory motion tracking: a proof-of-concept study. Phys Med Biol 2023;68:195015. [PMID: 37683675] [DOI: 10.1088/1361-6560/acf819]
Abstract
Objective. Respiratory motion tracking techniques can provide optimal treatment accuracy for thoracoabdominal radiotherapy and robotic surgery. However, conventional imaging-based respiratory motion tracking techniques are time-lagged owing to the system latency of medical linear accelerators and surgical robots. This study aims to investigate the precursor time of respiratory-related neural signals and analyze the potential of neural signals-based respiratory motion tracking. Approach. The neural signals and respiratory motion from eighteen healthy volunteers were acquired simultaneously using a 256-channel scalp electroencephalography (EEG) system. The neural signals were preprocessed using the MNE python package to extract respiratory-related EEG neural signals. Cross-correlation analysis was performed to assess the precursor time and cross-correlation coefficient between respiratory-related EEG neural signals and respiratory motion. Main results. Respiratory-related neural signals that precede the emergence of respiratory motion are detectable via non-invasive EEG. On average, the precursor time of respiratory-related EEG neural signals was 0.68 s. The representative cross-correlation coefficients between EEG neural signals and respiratory motion of the eighteen healthy subjects varied from 0.22 to 0.87. Significance. Our findings suggest that neural signals have the potential to compensate for the system latency of medical linear accelerators and surgical robots. This indicates that neural signals-based respiratory motion tracking is a potentially promising solution to respiratory motion management and could be useful in thoracoabdominal radiotherapy and robotic surgery.
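The central analysis above is a lagged cross-correlation between an EEG-derived respiratory surrogate and the measured respiratory motion. A minimal sketch of that analysis on synthetic signals is shown below; the sampling rate, signal shapes, and the 0.7 s lead are invented for illustration and do not reproduce the study data.

```python
# Estimate the precursor time of a leading surrogate signal via cross-correlation.
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                 # one minute of synthetic data
motion = np.sin(2 * np.pi * 0.25 * t)        # ~4 s breathing period
eeg_surrogate = np.sin(2 * np.pi * 0.25 * (t + 0.7))   # leads motion by 0.7 s

xcorr = correlate(eeg_surrogate - eeg_surrogate.mean(),
                  motion - motion.mean(), mode="full")
lags = correlation_lags(len(eeg_surrogate), len(motion), mode="full")
best_lag = lags[np.argmax(xcorr)]            # negative lag: surrogate leads motion
print(f"estimated precursor time: {-best_lag / fs:.2f} s")   # ~0.70 s
```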
Affiliation(s)
- Xiangbin Zhang
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, People's Republic of China
- Wenjie Liu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, People's Republic of China
- Feng Xu
- Lung Cancer Center, West China Hospital, Sichuan University, Chengdu, People's Republic of China
- Weizhong He
- Magstim Electrical Geodesics, Inc, Plymouth, MA, United States of America
- Yingpeng Song
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, People's Republic of China
- Guangjun Li
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, People's Republic of China
- Yingjie Zhang
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, People's Republic of China
- Guyu Dai
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, People's Republic of China
- Qing Xiao
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, People's Republic of China
- Qianqian Meng
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, People's Republic of China
- Xianhu Zeng
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, People's Republic of China
- Sen Bai
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, People's Republic of China
- Renming Zhong
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, People's Republic of China
6
Teuwen J, Gouw ZA, Sonke JJ. Artificial Intelligence for Image Registration in Radiation Oncology. Semin Radiat Oncol 2022;32:330-342. [DOI: 10.1016/j.semradonc.2022.06.003]
7
Liang X, Bassenne M, Hristov DH, Islam T, Zhao W, Jia M, Zhang Z, Gensheimer M, Beadle B, Le Q, Xing L. Human-level comparable control volume mapping with a deep unsupervised-learning model for image-guided radiation therapy. Comput Biol Med 2022;141:105139. [PMID: 34942395] [PMCID: PMC8810749] [DOI: 10.1016/j.compbiomed.2021.105139]
Abstract
PURPOSE To develop a deep unsupervised learning method with control volume (CV) mapping from daily patient positioning CT (dCT) to planning computed tomography (pCT) for precise patient positioning. METHODS We propose an unsupervised learning framework that maps CVs from dCT to pCT to automatically generate the couch shifts, including translation and rotation dimensions. The network inputs are the dCT, the pCT, and the CV positions in the pCT. The output is the transformation parameter of the dCT used to set up the head and neck cancer (HNC) patients. The network is trained to maximize image similarity between the CV in the pCT and the CV in the dCT. A total of 554 CT scans from 158 HNC patients were used for the evaluation of the proposed model; each patient had multiple CT scans acquired at different points in time. For testing, couch shifts were calculated by averaging the translations and rotations derived from the CVs. The ground truth of the shifts comes from bone landmarks determined by an experienced radiation oncologist. RESULTS The systematic positioning errors of translation and rotation are less than 0.47 mm and 0.17°, respectively. The random positioning errors of translation and rotation are less than 1.13 mm and 0.29°, respectively. The proposed method enhanced the proportion of cases registered within a preset tolerance (2.0 mm/1.0°) from 66.67% to 90.91% as compared to standard registrations. CONCLUSIONS We proposed a deep unsupervised learning architecture for patient positioning with inclusion of CV mapping, which weights the CV regions differently to mitigate any potential adverse influence of image artifacts on the registration. Our experimental results show that the proposed method achieved efficient and effective HNC patient positioning.
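The training objective described above maximizes image similarity inside the control volumes rather than over the whole image. A standalone sketch of such a CV-restricted similarity measure (normalized cross-correlation over a masked region) is given below; it illustrates the idea only and is not the authors' network or loss implementation.

```python
# Normalized cross-correlation evaluated only inside a control-volume mask.
import numpy as np


def cv_ncc(pct: np.ndarray, dct_warped: np.ndarray, cv_mask: np.ndarray) -> float:
    """Similarity between two volumes restricted to voxels inside the CV mask."""
    a = pct[cv_mask > 0].astype(np.float64)
    b = dct_warped[cv_mask > 0].astype(np.float64)
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))   # 1.0 means perfect agreement inside the CV


# Toy example: nearly identical images inside the CV give NCC close to 1.
vol = np.random.rand(32, 32, 32)
mask = np.zeros_like(vol)
mask[10:20, 10:20, 10:20] = 1
print(cv_ncc(vol, vol + 0.01 * np.random.rand(*vol.shape), mask))
```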
Affiliation(s)
- Xiaokun Liang
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA.
- Maxime Bassenne
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA.
- Dimitre H. Hristov
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA.
- Tauhidul Islam
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA.
- Wei Zhao
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA.
- Mengyu Jia
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA.
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA.
- Michael Gensheimer
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA.
- Beth Beadle
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA.
- Quynh Le
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA.
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA.
8
Zhou D, Nakamura M, Mukumoto N, Yoshimura M, Mizowaki T. Development of a deep learning-based patient-specific target contour prediction model for markerless tumor positioning. Med Phys 2022;49:1382-1390. [PMID: 35026057] [DOI: 10.1002/mp.15456]
Abstract
PURPOSE For pancreatic cancer patients, image guided radiation therapy and real-time tumor tracking (RTTT) techniques can deliver radiation to the target accurately. Currently, for radiation therapy machines with kV X-ray imaging systems, internal markers must be implanted to facilitate tumor tracking. The purpose of this study was to develop a markerless deep learning-based pancreatic tumor positioning procedure for real-time tumor tracking with a kV X-ray imaging system. METHODS AND MATERIALS Fourteen pancreatic cancer patients treated with intensity-modulated radiation therapy from six fixed gantry angles with a gimbal-head radiotherapy system were included in this study. For a gimbal-head radiotherapy system, the three-dimensional (3D) intrafraction target position can be determined using an orthogonal kV X-ray imaging system. All patients underwent four-dimensional computed tomography (4DCT) simulations for treatment planning, which were divided into 10 respiratory phases. After a patient's 4DCT was acquired, for each X-ray tube angle, 10 digitally reconstructed radiograph (DRR) images were obtained. Then, a data augmentation procedure was conducted. The data augmentation procedure first rotated the CT volume around the superior-inferior and anterior-posterior directions from -3° to 3° in 1.5° intervals. Then, the Super-SloMo model was adapted to interpolate 10 frames between respiratory phases. In total, the data augmentation procedure expanded the data scale 250-fold. In this study, for each patient, 12 datasets containing the DRR images from each specific X-ray tube angle based on the radiation therapy plan were obtained. The augmented dataset was randomly divided into training and testing datasets. The training dataset contained 2000 DRR images with clinical target volume (CTV) contours labeled for fine-tuning the pre-trained target contour prediction model. After the fine-tuning, the patient- and X-ray tube angle-specific CTV contour prediction model was acquired. The testing dataset contained the remaining 500 images to evaluate the performance of the CTV contour prediction model. The dice similarity coefficient (DSC) between the area enclosed by the CTV contour and the predicted contour was calculated to evaluate the model's contour prediction performance. The 3D position of the CTV was calculated based on the centroid of the contour in the orthogonal DRR images, and the 3D error of the predicted position was calculated to evaluate the CTV positioning performance. For each patient, the DSC results from 12 X-ray tube angles and the 3D errors from 6 gantry angles were calculated, which constitutes the novelty of this study. RESULTS The mean and standard deviation (SD) of all patients' DSCs were 0.98 and 0.015, respectively. The mean and SD of the 3D error were 0.29 mm and 0.14 mm, respectively. The global maximum 3D error was 1.66 mm, and the global minimum DSC was 0.81. The mean calculation time for CTV contour prediction was 55 ms per image, which fulfills the requirement of RTTT. CONCLUSIONS Regarding positioning accuracy and calculation efficiency, the presented procedure can provide a solution for markerless real-time tumor tracking for pancreatic cancer patients.
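The positioning step above derives the 3D target position from the centroids of the predicted CTV contours in two orthogonal projections. The following sketch illustrates that geometry under strongly simplifying assumptions (isocentric orthogonal views, no magnification, view A in the x-z plane and view B in the y-z plane); it is not the published implementation.

```python
# Simplified 3D positioning from contour centroids in two orthogonal views.
import numpy as np


def centroid_2d(mask: np.ndarray) -> np.ndarray:
    rows, cols = np.nonzero(mask)
    return np.array([cols.mean(), rows.mean()])      # (u, v) in pixels


def target_position_3d(mask_a, mask_b, pixel_mm=1.0):
    ua, va = centroid_2d(mask_a)      # view A: u -> x, v -> z
    ub, vb = centroid_2d(mask_b)      # view B: u -> y, v -> z
    x, y = ua * pixel_mm, ub * pixel_mm
    z = 0.5 * (va + vb) * pixel_mm    # z is seen in both views; average it
    return np.array([x, y, z])


def error_3d(pred_mm, truth_mm):
    """3D positioning error against a ground-truth position."""
    return float(np.linalg.norm(np.asarray(pred_mm) - np.asarray(truth_mm)))


# Toy check: two slightly offset square contours in orthogonal views.
view_a = np.zeros((100, 100))
view_a[40:60, 40:60] = 1
view_b = np.roll(view_a, 2, axis=1)
print(target_position_3d(view_a, view_b))
```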
Affiliation(s)
- Dejun Zhou
- Division of Medical Physics, Department of Information Technology and Medical Engineering, Human Health Sciences, Graduate School of Medicine, Kyoto University, 53 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
- Mitsuhiro Nakamura
- Division of Medical Physics, Department of Information Technology and Medical Engineering, Human Health Sciences, Graduate School of Medicine, Kyoto University, 53 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
- Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
- Nobutaka Mukumoto
- Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
- Michio Yoshimura
- Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
- Takashi Mizowaki
- Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
9
Zhao W, Shen L, Islam MT, Qin W, Zhang Z, Liang X, Zhang G, Xu S, Li X. Artificial intelligence in image-guided radiotherapy: a review of treatment target localization. Quant Imaging Med Surg 2021;11:4881-4894. [PMID: 34888196] [PMCID: PMC8611462] [DOI: 10.21037/qims-21-199]
Abstract
Modern conformal beam delivery techniques require image guidance to ensure that the prescribed dose is delivered as planned. Recent advances in artificial intelligence (AI) have greatly augmented our ability to accurately localize the treatment target while sparing the normal tissues. In this paper, we review the applications of AI-based algorithms in image-guided radiotherapy (IGRT), and discuss the implications of these applications for the future clinical practice of radiotherapy. The benefits, limitations and some important trends in research and development of AI-based IGRT techniques are also discussed. AI-based IGRT techniques have the potential to monitor tumor motion, reduce treatment uncertainty and improve treatment precision. In particular, these techniques also allow more healthy tissue to be spared while keeping tumor coverage the same or even better.
Affiliation(s)
- Wei Zhao
- School of Physics, Beihang University, Beijing, China
- Liyue Shen
- Department of Radiation Oncology, Stanford University, Stanford, USA
- Md Tauhidul Islam
- Department of Radiation Oncology, Stanford University, Stanford, USA
- Wenjian Qin
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, USA
- Xiaokun Liang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Gaolong Zhang
- School of Physics, Beihang University, Beijing, China
- Shouping Xu
- Department of Radiation Oncology, PLA General Hospital, Beijing, China
- Xiaomeng Li
- Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong, China
10
Mylonas A, Booth J, Nguyen DT. A review of artificial intelligence applications for motion tracking in radiotherapy. J Med Imaging Radiat Oncol 2021;65:596-611. [PMID: 34288501] [DOI: 10.1111/1754-9485.13285]
Abstract
During radiotherapy, the organs and tumour move as a result of the dynamic nature of the body; this is known as intrafraction motion. Intrafraction motion can result in tumour underdose and healthy tissue overdose, thereby reducing the effectiveness of the treatment while increasing toxicity to the patients. There is a growing appreciation of intrafraction target motion management by the radiation oncology community. Real-time image-guided radiation therapy (IGRT) can track the target and account for the motion, improving the radiation dose to the tumour and reducing the dose to healthy tissue. Recently, artificial intelligence (AI)-based approaches have been applied to motion management and have shown great potential. In this review, four main categories of motion management using AI are summarised: marker-based tracking, markerless tracking, full anatomy monitoring and motion prediction. Marker-based and markerless tracking approaches focus on tracking the individual target throughout the treatment. Full anatomy algorithms monitor for intrafraction changes in the full anatomy within the field of view. Motion prediction algorithms can be used to account for the latencies due to the time for the system to localise, process and act.
Affiliation(s)
- Adam Mylonas
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- School of Biomedical Engineering, University of Technology Sydney, Sydney, New South Wales, Australia
- Jeremy Booth
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia
- Institute of Medical Physics, School of Physics, The University of Sydney, Sydney, New South Wales, Australia
- Doan Trang Nguyen
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- School of Biomedical Engineering, University of Technology Sydney, Sydney, New South Wales, Australia
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia
11
Field M, Hardcastle N, Jameson M, Aherne N, Holloway L. Machine learning applications in radiation oncology. Phys Imaging Radiat Oncol 2021;19:13-24. [PMID: 34307915] [PMCID: PMC8295850] [DOI: 10.1016/j.phro.2021.05.007]
Abstract
Machine learning technology has a growing impact on radiation oncology with an increasing presence in research and industry. The prevalence of diverse data including 3D imaging and the 3D radiation dose delivery presents potential for future automation and scope for treatment improvements for cancer patients. Harnessing this potential requires standardization of tools and data, and focused collaboration between fields of expertise. The rapid advancement of radiation oncology treatment technologies presents opportunities for machine learning integration, with investments targeted towards data quality, data extraction, software, and engagement with clinical expertise. In this review, we provide an overview of machine learning concepts before reviewing advances in applying machine learning to radiation oncology and integrating these techniques into the radiation oncology workflows. Several key areas are outlined in the radiation oncology workflow where machine learning has been applied and where it can have a significant impact in terms of efficiency, consistency in treatment and overall treatment outcomes. This review highlights that machine learning has key early applications in radiation oncology due to the repetitive nature of many tasks that also currently have human review. Standardized data management of routinely collected imaging and radiation dose data is also highlighted as enabling engagement in research utilizing machine learning and the ability to integrate these technologies into the clinical workflow to benefit patients. Physicists need to be part of the conversation to facilitate this technical integration.
Affiliation(s)
- Matthew Field
- South Western Sydney Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia
- Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
- Nicholas Hardcastle
- Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
- Michael Jameson
- GenesisCare, Alexandria, NSW, Australia
- St Vincent's Clinical School, Faculty of Medicine, University of New South Wales, Australia
- Noel Aherne
- Mid North Coast Cancer Institute, NSW, Australia
- Rural Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia
- Lois Holloway
- South Western Sydney Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia
- Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
- Cancer Therapy Centre, Liverpool Hospital, Sydney, NSW, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
12
Development and Validation of an Interpretable Artificial Intelligence Model to Predict 10-Year Prostate Cancer Mortality. Cancers (Basel) 2021;13:3064. [PMID: 34205398] [PMCID: PMC8234681] [DOI: 10.3390/cancers13123064]
Abstract
Simple Summary: This article presents a gradient-boosted model that can predict 10-year prostate cancer mortality with high accuracy. The model was developed and validated on prospective multicenter data from the PLCO trial. Using XGBoost and Shapley values, it provides interpretability to understand its predictions. It can be used online to provide predictions and support informed decision-making in PCa treatment.
Abstract: Prostate cancer treatment strategies are guided by risk stratification. This stratification can be difficult in some patients with known comorbidities. New models are needed to guide strategies and determine which patients are at risk of prostate cancer mortality. This article presents a gradient-boosting model to predict the risk of prostate cancer mortality within 10 years after a cancer diagnosis, and to provide an interpretable prediction. This work uses prospective data from the PLCO Cancer Screening Trial and selected patients who were diagnosed with prostate cancer. During follow-up, 8776 patients were diagnosed with prostate cancer. The dataset was randomly split into a training (n = 7021) and testing (n = 1755) dataset. Accuracy was 0.98 (±0.01), and the area under the receiver operating characteristic curve was 0.80 (±0.04). This model can be used to support informed decision-making in prostate cancer treatment. AI interpretability provides a novel understanding of the predictions to the users.
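For readers unfamiliar with the modelling stack mentioned in the summary (gradient boosting plus Shapley-value explanations), the sketch below shows the general pattern with XGBoost and the shap package on synthetic data; the features, hyperparameters, and cohort are placeholders, not the PLCO data or the published model.

```python
# Gradient-boosted classifier with Shapley-value interpretability (illustrative).
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data with a rare positive class (mortality event).
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05,
                          eval_metric="logloss")
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# Shapley values attribute each prediction to individual feature contributions,
# which is what makes the model's output interpretable to users.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("per-feature mean |SHAP|:", np.abs(shap_values).mean(axis=0).round(3))
```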
13
Ip WY, Yeung FK, Yung SPF, Yu HCJ, So TH, Vardhanabhuti V. Current landscape and potential future applications of artificial intelligence in medical physics and radiotherapy. Artif Intell Med Imaging 2021;2:37-55. [DOI: 10.35711/aimi.v2.i2.37]
Abstract
Artificial intelligence (AI) has seen tremendous growth over the past decade and stands to disrupt the medical industry. In medicine, it has been applied in medical imaging and other digitised medical disciplines, but in more traditional fields like medical physics, the adoption of AI is still at an early stage. Though AI is anticipated to be better than humans at certain tasks, with its rapid growth there are increasing concerns about its usage. The focus of this paper is on the current landscape and potential future applications of artificial intelligence in medical physics and radiotherapy. Topics on AI for image acquisition, image segmentation, treatment delivery, quality assurance and outcome prediction will be explored, as well as the interaction between humans and AI. This will give insights into how we should approach and use the technology for enhancing the quality of clinical practice.
Affiliation(s)
- Wing-Yan Ip
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Fu-Ki Yeung
- Medical Physics and Research Department, The Hong Kong Sanitorium & Hospital, Hong Kong SAR, China and Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Shang-Peng Felix Yung
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Tsz-Him So
- Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Varut Vardhanabhuti
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
14
Estimating dual-energy CT imaging from single-energy CT data with material decomposition convolutional neural network. Med Image Anal 2021;70:102001. [PMID: 33640721] [DOI: 10.1016/j.media.2021.102001]
Abstract
Dual-energy computed tomography (DECT) is of great significance for clinical practice due to its huge potential to provide material-specific information. However, DECT scanners are usually more expensive than standard single-energy CT (SECT) scanners and thus are less accessible to undeveloped regions. In this paper, we show that the energy-domain correlation and anatomical consistency between standard DECT images can be harnessed by a deep learning model to provide high-performance DECT imaging from fully-sampled low-energy data together with single-view high-energy data. We demonstrate the feasibility of the approach with two independent cohorts (the first cohort including contrast-enhanced DECT scans of 5753 image slices from 22 patients and the second cohort including spectral CT scans without contrast injection of 2463 image slices from other 22 patients) and show its superior performance on DECT applications. The deep-learning-based approach could be useful to further significantly reduce the radiation dose of current premium DECT scanners and has the potential to simplify the hardware of DECT imaging systems and to enable DECT imaging using standard SECT scanners.
15
16
Dhont J, Verellen D, Mollaert I, Vanreusel V, Vandemeulebroucke J. RealDRR - Rendering of realistic digitally reconstructed radiographs using locally trained image-to-image translation. Radiother Oncol 2020;153:213-219. [PMID: 33039426] [DOI: 10.1016/j.radonc.2020.10.004]
Abstract
INTRODUCTION Digitally reconstructed radiographs (DRRs) represent valuable patient-specific pre-treatment training data for tumor tracking algorithms. However, with current rendering methods, the similarity of the DRRs to real X-ray images is limited, and the methods require time-consuming measurements and/or are computationally expensive. In this study we present RealDRR, a novel framework for highly realistic and computationally efficient DRR rendering. MATERIALS AND METHODS RealDRR consists of two components applied sequentially to render a DRR. First, a raytracer is applied for forward projection from 3D CT data to a 2D image. Second, a conditional Generative Adversarial Network (cGAN) is applied to translate the 2D forward projection to a realistic 2D DRR. The planning CT and CBCT projections from a CIRS thorax phantom and 6 radiotherapy patients (3 prostate, 3 brain) were split into training and test sets for evaluating the intra-patient, inter-patient and inter-anatomical region generalization performance of the trained framework. Several image similarity metrics, as well as a verification based on template matching, were used between the rendered DRRs and respective CBCT projections in the test sets, and results were compared to those of a current state-of-the-art DRR rendering method. RESULTS When trained on 800 CBCT projection images from two patients and tested on a third unseen patient from either anatomical region, RealDRR outperformed the current state-of-the-art with statistical significance on all metrics (two-sample t-test, p < 0.05). Once trained, the framework is able to render 100 highly realistic DRRs in under two minutes. CONCLUSION A novel framework for realistic and efficient DRR rendering was proposed. As the framework requires a reasonable amount of computational resources, the internal parameters can be tailored to imaging systems and protocols through on-site training on retrospective imaging data.
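The first RealDRR stage is a conventional forward projection of the CT volume, which the cGAN then translates into a realistic DRR. The snippet below sketches only an idealized parallel-beam version of that first stage; the actual framework uses a cone-beam raytracer and a trained Pix2Pix-style network, neither of which is reproduced here, and the attenuation constant is a rough assumption.

```python
# Idealized parallel-beam forward projection of a CT volume to a 2D image.
import numpy as np

MU_WATER = 0.02  # assumed linear attenuation of water per mm at kV energies


def forward_project(ct_hu: np.ndarray, voxel_mm: float = 1.0) -> np.ndarray:
    """Project a (z, y, x) HU volume along the y axis into a (z, x) image."""
    mu = MU_WATER * (1.0 + ct_hu / 1000.0)      # rough HU -> attenuation
    mu = np.clip(mu, 0.0, None)
    line_integral = mu.sum(axis=1) * voxel_mm   # integrate along the ray path
    return np.exp(-line_integral)               # idealized detected intensity


# Toy usage: a water block in air projects to a darker band.
ct = np.full((64, 64, 64), -1000.0)             # air
ct[:, 20:44, 20:44] = 0.0                       # water block
drr = forward_project(ct)
print(drr.shape, float(drr.min()), float(drr.max()))
```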
Affiliation(s)
- Jennifer Dhont
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium; Imec, Leuven, Belgium; Faculty of Medicine and Pharmaceutical Sciences, Vrije Universiteit Brussel, Brussels, Belgium.
- Dirk Verellen
- Iridium Kankernetwerk, Antwerp, Belgium; University of Antwerp, Faculty of Medicine and Health Sciences, Antwerp, Belgium
- Jef Vandemeulebroucke
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium; Imec, Leuven, Belgium
17
Siddique S, Chow JC. Artificial intelligence in radiotherapy. Rep Pract Oncol Radiother 2020;25:656-666. [PMID: 32617080] [PMCID: PMC7321818] [DOI: 10.1016/j.rpor.2020.03.015]
Abstract
Artificial intelligence (AI) has already been implemented widely in the medical field in recent years. This paper first reviews the background of AI and radiotherapy. It then explores the basic concepts of the different AI algorithms and machine learning methods, such as neural networks, that are available to us today and how they are being implemented in radiotherapy and diagnostic processes, such as medical imaging, treatment planning, patient simulation, quality assurance and radiation dose delivery. It also explores ongoing research on AI methods that are to be implemented in radiotherapy in the future. The review shows very promising progress and a promising future for AI to be widely used in various areas of radiotherapy. However, based on various concerns, such as the availability and security of big data and the further work needed to polish and test AI algorithms, we may not yet be ready to rely on AI as a primary tool in radiotherapy at the moment.
Affiliation(s)
- Sarkar Siddique
- Department of Physics, Ryerson University, Toronto, ON M5B 2K3, Canada
- James C.L. Chow
- Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, ON M5G 1X6, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON M5T 1P5, Canada
18
Shan H, Jia X, Yan P, Li Y, Paganetti H, Wang G. Synergizing medical imaging and radiotherapy with deep learning. Mach Learn Sci Technol 2020. [DOI: 10.1088/2632-2153/ab869f]
19
Kai Y, Arimura H, Ninomiya K, Saito T, Shimohigashi Y, Kuraoka A, Maruyama M, Toya R, Oya N. Semi-automated prediction approach of target shifts using machine learning with anatomical features between planning and pretreatment CT images in prostate radiotherapy. J Radiat Res 2020;61:285-297. [PMID: 31994702] [PMCID: PMC7246080] [DOI: 10.1093/jrr/rrz105]
Abstract
The goal of this study was to develop a semi-automated prediction approach of target shifts using machine learning architecture (MLA) with anatomical features for prostate radiotherapy. Our hypothesis was that anatomical features between planning computed tomography (pCT) and pretreatment cone-beam computed tomography (CBCT) images could be used to predict the target, i.e. clinical target volume (CTV) shifts, with small errors. The pCT and daily CBCT images of 20 patients with prostate cancer were selected. The first 10 patients were employed for the development, and the second 10 patients for a validation test. The CTV position errors between the pCT and CBCT images were determined as reference CTV shifts (teacher data) after an automated bone-based registration. The anatomical features associated with rectum, bladder and prostate were calculated from the pCT and CBCT images. The features were fed as the input with the teacher data into five MLAs, i.e. three types of artificial neural networks, support vector regression (SVR) and random forests. Since the CTV shifts along the left-right direction were negligible, the MLAs were developed along the superior-inferior and anterior-posterior directions. The proposed framework was evaluated from the residual errors between the reference and predicted CTV shifts. In the validation test, the mean residual error with its standard deviation was 1.01 ± 1.09 mm in SVR using only one feature (one click), which was associated with positional difference of the upper rectal wall. The results suggested that MLAs with anatomical features could be useful in prediction of CTV shifts for prostate radiotherapy.
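The best result above came from support vector regression on a single anatomical feature. A minimal scikit-learn sketch of that setup on synthetic data is shown below; the feature values, kernel, and regularization settings are assumptions rather than the study's tuned model.

```python
# SVR mapping one anatomical feature to a CTV shift (illustrative only).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic "teacher data": feature is a positional difference (mm) between
# pCT and CBCT, target is the reference CTV shift along one axis (mm).
rectal_wall_diff_mm = rng.uniform(-8, 8, size=(100, 1))
ctv_shift_mm = 0.6 * rectal_wall_diff_mm[:, 0] + rng.normal(0, 1, 100)

model = SVR(kernel="rbf", C=10.0, epsilon=0.5)
model.fit(rectal_wall_diff_mm, ctv_shift_mm)

new_feature = np.array([[3.2]])          # one-click feature from a new CBCT
print("predicted CTV shift (mm):", float(model.predict(new_feature)[0]))
```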
Affiliation(s)
- Yudai Kai
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku Fukuoka 812-8582, Japan
- Department of Radiological Technology, Kumamoto University Hospital, 1-1-1, Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Hidetaka Arimura
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku Fukuoka 812-8582, Japan
- Kenta Ninomiya
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku Fukuoka 812-8582, Japan
- Tetsuo Saito
- Department of Radiation Oncology, Kumamoto University Hospital, 1-1-1, Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Yoshinobu Shimohigashi
- Department of Radiological Technology, Kumamoto University Hospital, 1-1-1, Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Akiko Kuraoka
- Department of Radiological Technology, Kumamoto University Hospital, 1-1-1, Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Masato Maruyama
- Department of Radiological Technology, Kumamoto University Hospital, 1-1-1, Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Ryo Toya
- Department of Radiation Oncology, Kumamoto University Hospital, 1-1-1, Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Natsuo Oya
- Department of Radiation Oncology, Kumamoto University Hospital, 1-1-1, Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
20
Chamunyonga C, Edwards C, Caldwell P, Rutledge P, Burbery J. The Impact of Artificial Intelligence and Machine Learning in Radiation Therapy: Considerations for Future Curriculum Enhancement. J Med Imaging Radiat Sci 2020;51:214-220. [PMID: 32115386] [DOI: 10.1016/j.jmir.2020.01.008]
Abstract
Artificial intelligence (AI) and machine learning (ML) approaches have caught the attention of many in health care. Current literature suggests there are many potential benefits that could transform future clinical workflows and decision making. Embedding AI and ML concepts in radiation therapy education could be a fundamental step in equipping radiation therapists (RTs) to engage in competent and safe practice as they utilise clinical technologies. In this discussion paper, the authors provide a brief review of some applications of AI and ML in radiation therapy and discuss pertinent considerations for radiation therapy curriculum enhancement. As the current literature suggests, AI and ML approaches will impose changes to routine clinical radiation therapy tasks. The emphasis in RT education could be on critical evaluation of AI and ML application in routine clinical workflows and gaining an understanding of the impact on quality assurance, provision of quality of care and safety in radiation therapy as well as research. It is also imperative RTs have a broader understanding of AI/ML impact on health care, including ethical and legal considerations. The paper concludes with recommendations and suggestions to deliberately embed AI and ML aspects in RT education to empower future RT practitioners.
Affiliation(s)
- Crispen Chamunyonga
- School of Clinical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia.
- Christopher Edwards
- School of Clinical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Peter Caldwell
- School of Clinical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Peta Rutledge
- School of Clinical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- Julie Burbery
- School of Clinical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
21
Wang C, Hunt M, Zhang L, Rimner A, Yorke E, Lovelock M, Li X, Li T, Mageras G, Zhang P. Technical Note: 3D localization of lung tumors on cone beam CT projections via a convolutional recurrent neural network. Med Phys 2020;47:1161-1166. [PMID: 31899807] [DOI: 10.1002/mp.14007]
Abstract
PURPOSE To design a convolutional recurrent neural network (CRNN) that calculates three-dimensional (3D) positions of lung tumors from continuously acquired cone beam computed tomography (CBCT) projections, and facilitates the sorting and reconstruction of 4D-CBCT images. METHOD Under an IRB-approved clinical lung protocol, kilovoltage (kV) projections of the setup CBCT were collected in free-breathing. Concurrently, an electromagnetic signal-guided system recorded motion traces of three transponders implanted in or near the tumor. Convolutional recurrent neural network was designed to utilize a convolutional neural network (CNN) for extracting relevant features of the kV projections around the tumor, followed by a recurrent neural network for analyzing the temporal patterns of the moving features. Convolutional recurrent neural network was trained on the simultaneously collected kV projections and motion traces, subsequently utilized to calculate motion traces solely based on the continuous feed of kV projections. To enhance performance, CRNN was also facilitated by frequent calibrations (e.g., at 10° gantry rotation intervals) derived from cross-correlation-based registrations between kV projections and templates created from the planning 4DCT. Convolutional recurrent neural network was validated on a leave-one-out strategy using data from 11 lung patients, including 5500 kV images. The root-mean-square error between the CRNN and motion traces was calculated to evaluate the localization accuracy. RESULT Three-dimensional displacement around the simulation position shown in the Calypso traces was 3.4 ± 1.7 mm. Using motion traces as ground truth, the 3D localization error of CRNN with calibrations was 1.3 ± 1.4 mm. CRNN had a success rate of 86 ± 8% in determining whether the motion was within a 3D displacement window of 2 mm. The latency was 20 ms when CRNN ran on a high-performance computer cluster. CONCLUSIONS CRNN is able to provide accurate localization of lung tumors with aid from frequent recalibrations using the conventional cross-correlation-based registration approach, and has the potential to remove reliance on the implanted fiducials.
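The architecture above couples a CNN feature extractor with a recurrent layer over the stream of kV projections. A toy PyTorch sketch of such a CNN + LSTM regressor is given below; the layer sizes, patch size, and sequence length are illustrative and do not correspond to the published network.

```python
# Toy convolutional recurrent network regressing a 3D position per kV frame.
import torch
import torch.nn as nn


class CRNNTracker(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-frame feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),     # -> 32 * 4 * 4 features
        )
        self.rnn = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)               # (x, y, z) per frame

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.rnn(feats)                        # temporal pattern of features
        return self.head(out)                           # (batch, time, 3)


# Toy usage: a batch of 2 sequences, 10 kV frames each, 64x64 patches.
model = CRNNTracker()
positions = model(torch.randn(2, 10, 1, 64, 64))
print(positions.shape)   # torch.Size([2, 10, 3])
```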
Affiliation(s)
- Chuang Wang
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
- Margie Hunt
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
- Lei Zhang
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
- Andreas Rimner
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
- Ellen Yorke
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
- Michael Lovelock
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
- Xiang Li
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
- Tianfang Li
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
- Gig Mageras
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
- Pengpeng Zhang
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
22
Zhao W, Lv T, Lee R, Chen Y, Xing L. Obtaining dual-energy computed tomography (CT) information from a single-energy CT image for quantitative imaging analysis of living subjects by using deep learning. Pac Symp Biocomput 2020;25:139-148. [PMID: 31797593] [PMCID: PMC6938283]
Abstract
Computed tomography (CT) is a fundamental imaging modality used to generate cross-sectional views of the internal anatomy of a living subject or to interrogate the material composition of an object, and it has been routinely used in clinical applications and nondestructive testing. In a standard CT image, pixels having the same Hounsfield Units (HU) can correspond to different materials, and it is therefore challenging to differentiate and quantify materials. Dual-energy CT (DECT) is desirable for differentiating multiple materials, but costly DECT scanners are not as widely available as single-energy CT (SECT) scanners. Recent advancements in deep learning provide an enabling tool to map images between different modalities with incorporated prior knowledge. Here we develop a deep learning approach to perform DECT imaging by using standard SECT data. The end point of the approach is a model capable of providing the high-energy CT image for a given input low-energy CT image. The feasibility of the deep learning-based DECT imaging method using SECT data is demonstrated using contrast-enhanced DECT images and evaluated using clinically relevant indexes. This work opens new opportunities for numerous DECT clinical applications with standard SECT data and may enable significantly simplified hardware design, reduced scanning dose and lower imaging cost for future DECT systems.
Affiliation(s)
- Rena Lee
- Department of Bioengineering, Ehwa Womens University, Seoul, Korea
- Lei Xing
- Department of Radiation Oncology, Stanford University, Palo Alto, CA 94306, USA
23
Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning. Nat Biomed Eng 2019;3:880-888. [PMID: 31659306] [PMCID: PMC6858583] [DOI: 10.1038/s41551-019-0466-4]
Abstract
Tomographic imaging via penetrating waves generates cross-sectional views of the internal anatomy of a living subject. For artefact-free volumetric imaging, projection views from a large number of angular positions are required. Here, we show that a deep-learning model trained to map projection radiographs of a patient to the corresponding 3D anatomy can subsequently generate volumetric tomographic X-ray images of the patient from a single projection view. We demonstrate the feasibility of the approach with upper-abdomen, lung, and head-and-neck computed tomography scans from three patients. Volumetric reconstruction via deep learning could be useful in image-guided interventional procedures such as radiation therapy and needle biopsy, and might help simplify the hardware of tomographic imaging systems.
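The key idea above is a learned mapping from a single 2D projection to a 3D volume. The following PyTorch sketch shows one simple way such a 2D-encoder/3D-decoder mapping can be wired up; the layer sizes and the 128-cubed output grid are assumptions for illustration, not the published architecture.

```python
# Toy 2D-to-3D mapping: encode a projection radiograph, decode a volume.
import torch
import torch.nn as nn


class SingleViewReconstructor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                                # 1 x 128 x 128 input
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 x 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 x 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 x 16
        )
        # Reinterpret the 2D feature maps as a coarse 3D feature volume, then
        # upsample with transposed 3D convolutions to the target grid.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(8, 16, 4, stride=2, padding=1), nn.ReLU(),  # 32^3
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),  # 64^3
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1),              # 128^3
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x)                        # (b, 128, 16, 16)
        vol = feats.view(x.size(0), 8, 16, 16, 16)     # (b, 8, 16, 16, 16)
        return self.decoder(vol)                       # (b, 1, 128, 128, 128)


model = SingleViewReconstructor()
print(model(torch.randn(1, 1, 128, 128)).shape)        # torch.Size([1, 1, 128, 128, 128])
```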