1. Keall PJ, El Naqa I, Fast MF, Hewson EA, Hindley N, Poulsen P, Sengupta C, Tyagi N, Waddington DEJ. Critical Review: Real-Time Dose-Guided Radiation Therapy. Int J Radiat Oncol Biol Phys 2025:S0360-3016(25)00386-4. PMID: 40327027; DOI: 10.1016/j.ijrobp.2025.04.019.
Abstract
Dramatic strides have been made in real-time adaptive radiation therapy, where treating single tumors as dynamic but rigid bodies has demonstrated a halving of toxicities for prostate cancer. However, the human body is much more complex than a rigid body. This review explores the ongoing development and future potential of dose-guided radiation therapy, where the three core process steps of volumetric imaging of the patient, dose accumulation, and dose-guided treatment adaptation occur quasi-continuously during treatment, fully accounting for the complexity of the dynamic human body. The clinical evidence supporting real-time adaptive radiation therapy was reviewed. The foundational studies, status, and potential of real-time volumetric imaging using both x-ray and magnetic resonance imaging technology were described. The development of real-time dose accumulation to the dynamic patient was evaluated, and a method to measure real-time dose delivery was assessed. The growth of real-time treatment adaptation was examined. Literature demonstrates continued improvements in patient outcomes because the treatment becomes more conformal to the dynamic patient. Real-time volumetric imaging using both x-ray and magnetic resonance imaging technology is poised for broader implementation. Real-time dose accumulation has demonstrated clinical feasibility, with approximations made to achieve real-time operation. Real-time treatment adaptation to deforming targets and multiple targets has been experimentally demonstrated. Tying together the inputs of the real-time volumetric anatomy and dose accumulation is real-time treatment adaptation that uses the available degrees of freedom to optimize the dose delivered to the patient, maximizing the treatment intent. Opportunities exist for artificial intelligence to accelerate the application of dose-guided radiation therapy to broader patient use. In summary, the emerging field of real-time dose-guided radiation therapy has the potential to significantly improve patient outcomes. The advances are primarily software-driven and therefore could be widely available and cost-effective upgrades to improve imaging and targeting cancer.
Affiliation(s)
- Paul J Keall
- Image X Institute, University of Sydney, Sydney, Australia.
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, Florida
- Martin F Fast
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- Emily A Hewson
- Image X Institute, University of Sydney, Sydney, Australia
- Per Poulsen
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Neelam Tyagi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York
2. Shao HC, Mengke T, Pan T, Zhang Y. Real-time CBCT imaging and motion tracking via a single arbitrarily-angled x-ray projection by a joint dynamic reconstruction and motion estimation (DREME) framework. Phys Med Biol 2025; 70:025026. PMID: 39746309; PMCID: PMC11747166; DOI: 10.1088/1361-6560/ada519.
Abstract
Objective. Real-time cone-beam computed tomography (CBCT) provides instantaneous visualization of patient anatomy for image guidance, motion tracking, and online treatment adaptation in radiotherapy. While many real-time imaging and motion tracking methods have leveraged patient-specific prior information to alleviate under-sampling challenges and meet the temporal constraint (<500 ms), the prior information can be outdated and introduce biases, thus compromising the imaging and motion tracking accuracy. To address this challenge, we developed a joint dynamic reconstruction and motion estimation (DREME) framework for real-time CBCT imaging and motion estimation, without relying on patient-specific prior knowledge. Approach. DREME incorporates a deep learning-based real-time CBCT imaging and motion estimation method into a dynamic CBCT reconstruction framework. The reconstruction framework reconstructs a dynamic sequence of CBCTs in a data-driven manner from a standard pre-treatment scan, without requiring patient-specific prior knowledge. Meanwhile, a convolutional neural network-based motion encoder is jointly trained during the reconstruction to learn motion-related features relevant for real-time motion estimation, based on a single arbitrarily-angled x-ray projection. DREME was tested on digital phantom simulations and real patient studies. Main Results. DREME accurately solved 3D respiration-induced anatomical motion in real time (∼1.5 ms inference time for each x-ray projection). For the digital phantom studies, it achieved an average lung tumor center-of-mass localization error of 1.2 ± 0.9 mm (mean ± SD). For the patient studies, it achieved a real-time tumor localization accuracy of 1.6 ± 1.6 mm in the projection domain. Significance. DREME achieves CBCT and volumetric motion estimation in real time from a single x-ray projection at arbitrary angles, paving the way for future clinical applications in intra-fractional motion management. In addition, it can be used for dose tracking and treatment assessment when combined with real-time dose calculation.
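For orientation, a minimal sketch of DREME's inference-time idea follows: a CNN motion encoder maps a single projection to a few motion coefficients, which weight motion-basis DVFs (learned jointly during reconstruction) to warp a reference CBCT. This is not the authors' released code; all shapes, layer sizes, and the grid_sample-based warping are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionEncoder(nn.Module):
    """Toy CNN that maps one x-ray projection to K motion coefficients."""
    def __init__(self, n_coeffs=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_coeffs)

    def forward(self, proj):                      # proj: (B, 1, H, W)
        return self.head(self.features(proj).flatten(1))  # (B, K)

def warp_volume(ref_vol, dvf):
    """Warp a reference volume (B,1,D,H,W) by a DVF (B,3,D,H,W), with the
    DVF expressed in normalized [-1,1] coordinates, via trilinear resampling."""
    B = ref_vol.shape[0]
    base = F.affine_grid(torch.eye(3, 4).unsqueeze(0).repeat(B, 1, 1),
                         ref_vol.shape, align_corners=False)  # (B,D,H,W,3)
    grid = base + dvf.permute(0, 2, 3, 4, 1)   # add displacement per voxel
    return F.grid_sample(ref_vol, grid, align_corners=False)

# Inference: coefficients weight K basis DVFs learned during reconstruction.
K, D, H, W = 3, 32, 64, 64
encoder, basis = MotionEncoder(K), torch.randn(K, 3, D, H, W) * 0.01
ref_cbct, proj = torch.rand(1, 1, D, H, W), torch.rand(1, 1, 128, 128)
coeffs = encoder(proj)                                   # (1, K)
dvf = torch.einsum('bk,kcdhw->bcdhw', coeffs, basis)     # (1, 3, D, H, W)
live_cbct = warp_volume(ref_cbct, dvf)                   # real-time estimate
```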
Affiliation(s)
- Hua-Chieh Shao
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Tielige Mengke
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Tinsu Pan
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, United States of America
- You Zhang
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
3. Salari E, Wang J, Wynne JF, Chang C, Wu Y, Yang X. Artificial intelligence-based motion tracking in cancer radiotherapy: A review. J Appl Clin Med Phys 2024; 25:e14500. PMID: 39194360; PMCID: PMC11540048; DOI: 10.1002/acm2.14500.
Abstract
Radiotherapy aims to deliver a prescribed dose to the tumor while sparing neighboring organs at risk (OARs). Increasingly complex treatment techniques such as volumetric modulated arc therapy (VMAT), stereotactic radiosurgery (SRS), stereotactic body radiotherapy (SBRT), and proton therapy have been developed to deliver doses more precisely to the target. While such technologies have improved dose delivery, the implementation of intra-fraction motion management to verify tumor position at the time of treatment has become increasingly relevant. Artificial intelligence (AI) has recently demonstrated great potential for real-time tracking of tumors during treatment. However, AI-based motion management faces several challenges, including bias in training data, poor transparency, difficult data collection, complex workflows and quality assurance, and limited sample sizes. This review presents the AI algorithms used for chest, abdomen, and pelvic tumor motion management/tracking in radiotherapy and provides a literature summary on the topic. We also discuss the limitations of these AI-based studies and propose potential improvements.
Affiliation(s)
- Elahheh Salari
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Chih‐Wei Chang
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Yizhou Wu
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
4. Zhang H, Chen K, Xu X, You T, Sun W, Dang J. Spatiotemporal correlation enhanced real-time 4D-CBCT imaging using convolutional LSTM networks. Front Oncol 2024; 14:1390398. PMID: 39161388; PMCID: PMC11330803; DOI: 10.3389/fonc.2024.1390398.
Abstract
Purpose: To enhance the accuracy of real-time four-dimensional cone-beam CT (4D-CBCT) imaging by incorporating spatiotemporal correlation from sequential projection images into the single-projection-based 4D-CBCT estimation process. Methods: We first derived 4D deformation vector fields (DVFs) from patient 4D-CT. Principal component analysis (PCA) was then employed to extract distinctive feature labels for each DVF, focusing on the first three PCA coefficients. To simulate a wide range of respiratory motion, we expanded the motion amplitude and used random sampling to generate approximately 900 sets of PCA labels. These labels were used to produce 900 simulated 4D-DVFs, which in turn deformed the 0%-phase 4D-CT to obtain 900 CBCT volumes with continuous motion amplitudes. Following this, forward projection was performed at one angle to generate all of the digitally reconstructed radiographs (DRRs). These DRRs and the PCA labels were used as the training dataset. To capture the spatiotemporal correlation in the projections, we propose using a convolutional LSTM (ConvLSTM) network for PCA coefficient estimation. For network testing, when several online CBCT projections (with different motion amplitudes that cover the full respiration range) are acquired and fed into the network, the corresponding 4D PCA coefficients are obtained and finally lead to a full online 4D-CBCT prediction. A phantom experiment was first performed with the XCAT phantom; a pilot clinical evaluation was then conducted. Results: Results on the XCAT phantom and the patient data show that the proposed approach outperformed other networks in terms of visual inspection and quantitative metrics. For the XCAT phantom experiment, ConvLSTM achieved the highest quantification accuracy, with MAPE (Mean Absolute Percentage Error), PSNR (Peak Signal-to-Noise Ratio), and RMSE (Root Mean Squared Error) of 0.0459, 64.6742, and 0.0011, respectively. For the patient pilot clinical experiment, ConvLSTM also achieved the best quantification accuracy, with corresponding values of 0.0934, 63.7294, and 0.0019, respectively. The quantitative evaluation metrics used were 1) the Mean Absolute Error (MAE), 2) the Normalized Cross-Correlation (NCC), 3) the Structural Similarity Index Measure (SSIM), 4) the Peak Signal-to-Noise Ratio (PSNR), 5) the Root Mean Squared Error (RMSE), and 6) the Mean Absolute Percentage Error (MAPE). Conclusion: Spatiotemporal correlation-based respiratory motion modeling offers a potential solution for accurate real-time 4D-CBCT reconstruction.
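The following is a toy sketch of the two-stage recipe the abstract describes: offline PCA of training DVFs to obtain three-coefficient labels, then a recurrent network over sequential projections that regresses those coefficients. PyTorch has no built-in ConvLSTM, so a per-frame CNN followed by an nn.LSTM stands in for it; all dimensions are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

# --- Offline: PCA motion model from training DVFs (toy dimensions) ---
n_phases, n_vox = 10, 5000                 # flattened 3*D*H*W per DVF
dvfs = np.random.rand(n_phases, n_vox)     # stand-in for 4D-CT derived DVFs
mean = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
basis = Vt[:3]                             # first three principal components
labels = (dvfs - mean) @ basis.T           # per-phase PCA coefficient labels

# --- Online: recurrent net maps a DRR/projection sequence to coefficients ---
class SeqCoeffNet(nn.Module):
    """CNN per frame + LSTM across frames (stand-in for a ConvLSTM)."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(16, 32, batch_first=True)
        self.head = nn.Linear(32, 3)

    def forward(self, seq):                # seq: (B, T, 1, H, W)
        B, T = seq.shape[:2]
        feats = self.cnn(seq.flatten(0, 1)).view(B, T, -1)
        out, _ = self.lstm(feats)          # temporal correlation across frames
        return self.head(out[:, -1])       # coefficients for the latest frame

net = SeqCoeffNet()
coeffs = net(torch.rand(1, 5, 1, 64, 64))            # (1, 3)
dvf_est = mean + coeffs.detach().numpy()[0] @ basis  # reconstructed DVF
```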
Affiliation(s)
- Hua Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Kai Chen
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Xiaotong Xu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Tao You
- Department of Radiation Oncology, The Affiliated Hospital of Jiangsu University, Zhenjiang, Jiangsu, China
- Wenzheng Sun
- Department of Radiation Oncology, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Jun Dang
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
5. Shao HC, Mengke T, Pan T, Zhang Y. Dynamic CBCT imaging using prior model-free spatiotemporal implicit neural representation (PMF-STINR). Phys Med Biol 2024; 69:115030. PMID: 38697195; PMCID: PMC11133878; DOI: 10.1088/1361-6560/ad46dc.
Abstract
Objective. Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is only captured by one or a few x-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g., breathing). Approach. We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired x-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from singular x-ray projections. Specifically, PMF-STINR uses a spatial implicit neural representation (INR) to reconstruct a reference CBCT volume, and it applies a temporal INR to represent the intra-scan dynamic motion of the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. Compared with previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning. Main results. PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (∼0.1 s) resolution and sub-millimeter accuracy. Significance. PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management, offering richer motion information than traditional 4D-CBCT.
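A minimal sketch of the spatial-INR ingredient: an MLP with random Fourier-feature encoding maps a 3D coordinate to an attenuation value and is fitted on the fly against projection data. The temporal INR and B-spline motion model are omitted, and the dummy target below stands in for a scanner-specific line-integral forward projector; everything here is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Random Fourier encoding of 3D coordinates for coordinate MLPs."""
    def __init__(self, n_freqs=64, scale=10.0):
        super().__init__()
        self.register_buffer('B', torch.randn(3, n_freqs) * scale)

    def forward(self, xyz):                      # (N, 3) in [-1, 1]
        proj = 2 * torch.pi * xyz @ self.B
        return torch.cat([proj.sin(), proj.cos()], dim=-1)  # (N, 2F)

class SpatialINR(nn.Module):
    """MLP mapping encoded coordinates to attenuation values."""
    def __init__(self, n_freqs=64):
        super().__init__()
        self.enc = FourierFeatures(n_freqs)
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_freqs, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, xyz):
        return self.mlp(self.enc(xyz)).squeeze(-1)           # (N,)

# One-shot fitting loop (conceptual): sample points along rays, render the
# INR, and match the measured projections. A dummy target stands in for the
# geometry-specific line-integral forward projector.
inr = SpatialINR()
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
pts = torch.rand(4096, 3) * 2 - 1            # sampled points along rays
target = torch.rand(4096)                    # stand-in measured values
for _ in range(10):
    loss = ((inr(pts) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```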
Affiliation(s)
- Hua-Chieh Shao
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Tielige Mengke
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Tinsu Pan
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX, 77030, United States of America
- You Zhang
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
6. Zhu M, Fu Q, Liu B, Zhang M, Li B, Luo X, Zhou F. RT-SRTS: Angle-agnostic real-time simultaneous 3D reconstruction and tumor segmentation from single X-ray projection. Comput Biol Med 2024; 173:108390. PMID: 38569234; DOI: 10.1016/j.compbiomed.2024.108390.
Abstract
Radiotherapy is one of the primary treatment methods for tumors, but organ movement caused by respiration limits its accuracy. Recently, 3D imaging from a single X-ray projection has received extensive attention as a promising approach to address this issue. However, current methods can only reconstruct 3D images without directly locating the tumor, and they have only been validated for fixed-angle imaging, which fails to fully meet the requirements of motion control in radiotherapy. In this study, a novel imaging method, RT-SRTS, is proposed that integrates 3D imaging and tumor segmentation into one network based on multi-task learning (MTL) and achieves real-time simultaneous 3D reconstruction and tumor segmentation from a single X-ray projection at any angle. Furthermore, attention-enhanced calibrator (AEC) and uncertain-region elaboration (URE) modules are proposed to aid feature extraction and improve segmentation accuracy. The proposed method was evaluated on fifteen patient cases and compared with three state-of-the-art methods. It not only delivers superior 3D reconstruction but also demonstrates commendable tumor segmentation results. Simultaneous reconstruction and segmentation can be completed in approximately 70 ms, significantly faster than the required time threshold for real-time tumor tracking. The efficacies of both AEC and URE have also been validated in ablation studies. The code is available at https://github.com/ZywooSimple/RT-SRTS.
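A toy sketch of the multi-task layout the abstract describes: a shared encoder on the single projection feeds two heads, one regressing the 3D volume and one predicting the tumor mask, trained with a combined fidelity-plus-Dice loss. The AEC and URE modules are omitted, and all shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReconSegNet(nn.Module):
    """Shared 2D encoder -> two 3D heads (reconstruction + segmentation).
    A toy stand-in for the multi-task design; AEC/URE modules omitted."""
    def __init__(self, d=32):
        super().__init__()
        self.encoder = nn.Sequential(              # (B,1,128,128) -> (B,256)
            nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 256), nn.ReLU())
        self.d = d
        self.to_vol = nn.Linear(256, d * d * d)    # reconstruction head
        self.to_seg = nn.Linear(256, d * d * d)    # segmentation head

    def forward(self, proj):
        z = self.encoder(proj)
        vol = self.to_vol(z).view(-1, 1, self.d, self.d, self.d)
        seg = torch.sigmoid(self.to_seg(z)).view(-1, 1, self.d, self.d, self.d)
        return vol, seg

net = ReconSegNet()
vol, seg = net(torch.rand(2, 1, 128, 128))
# Multi-task loss: image fidelity + Dice-style overlap for the tumor mask.
gt_vol, gt_seg = torch.rand_like(vol), (torch.rand_like(seg) > 0.5).float()
dice = 1 - (2 * (seg * gt_seg).sum() + 1) / (seg.sum() + gt_seg.sum() + 1)
loss = nn.functional.mse_loss(vol, gt_vol) + dice
```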
Affiliation(s)
- Miao Zhu
- Image Processing Center, Beihang University, Beijing, 100191, PR China
- Qiming Fu
- Image Processing Center, Beihang University, Beijing, 100191, PR China
- Bo Liu
- Image Processing Center, Beihang University, Beijing, 100191, PR China
- Mengxi Zhang
- Image Processing Center, Beihang University, Beijing, 100191, PR China
- Bojian Li
- Image Processing Center, Beihang University, Beijing, 100191, PR China
- Xiaoyan Luo
- Image Processing Center, Beihang University, Beijing, 100191, PR China
- Fugen Zhou
- Image Processing Center, Beihang University, Beijing, 100191, PR China
7. Dai J, Dong G, Zhang C, He W, Liu L, Wang T, Jiang Y, Zhao W, Zhao X, Xie Y, Liang X. Volumetric tumor tracking from a single cone-beam X-ray projection image enabled by deep learning. Med Image Anal 2024; 91:102998. PMID: 37857066; DOI: 10.1016/j.media.2023.102998.
Abstract
Radiotherapy serves as a pivotal treatment modality for malignant tumors. However, the accuracy of radiotherapy is significantly compromised due to respiratory-induced fluctuations in the size, shape, and position of the tumor. To address this challenge, we introduce a deep learning-anchored, volumetric tumor tracking methodology that employs single-angle X-ray projection images. This process involves aligning the intraoperative two-dimensional (2D) X-ray images with the pre-treatment three-dimensional (3D) planning Computed Tomography (CT) scans, enabling the extraction of the 3D tumor position and segmentation. Prior to therapy, a bespoke patient-specific tumor tracking model is formulated, leveraging a hybrid data augmentation, style correction, and registration network to create a mapping from single-angle 2D X-ray images to the corresponding 3D tumors. During the treatment phase, real-time X-ray images are fed into the trained model, producing the respective 3D tumor positioning. Rigorous validation conducted on actual patient lung data and lung phantoms attests to the high localization precision of our method at lowered radiation doses, thus heralding promising strides towards enhancing the precision of radiotherapy.
Affiliation(s)
- Jingjing Dai
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Guoya Dong
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Hebei Key Laboratory of Bioelectromagnetics and Neural Engineering, Tianjin Key Laboratory of Bioelectricity and Intelligent Health, 300130, Tianjin, China
- Chulong Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wenfeng He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Lin Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Tangsheng Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yuming Jiang
- Department of Radiation Oncology, Wake Forest University School of Medicine, Winston-Salem, North Carolina, 27157, USA
- Wei Zhao
- School of Physics, Beihang University, Beijing, 100191, China
- Xiang Zhao
- Department of Radiology, Tianjin Medical University General Hospital, 300050, China
- Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xiaokun Liang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
8. Shao HC, Li Y, Wang J, Jiang S, Zhang Y. Real-time liver motion estimation via deep learning-based angle-agnostic X-ray imaging. Med Phys 2023; 50:6649-6662. PMID: 37922461; PMCID: PMC10629841; DOI: 10.1002/mp.16691.
Abstract
BACKGROUND: Real-time liver imaging is challenged by the short imaging time (within hundreds of milliseconds) required to meet the temporal constraint posed by rapid patient breathing, resulting in extreme under-sampling for the desired 3D imaging. Deep learning (DL)-based real-time imaging/motion estimation techniques are emerging as promising solutions, which can use a single X-ray projection to estimate 3D moving liver volumes via solved deformable motion. However, such techniques were mostly developed for a specific, fixed X-ray projection angle, making them impractical for verifying and guiding arc-based radiotherapy with continuous gantry rotation. PURPOSE: To enable deformable motion estimation and 3D liver imaging from individual X-ray projections acquired at arbitrary scan angles, and to further improve the accuracy of single-X-ray-driven motion estimation. METHODS: We developed a DL-based method, X360, to estimate the deformable motion of the liver boundary using an X-ray projection acquired at an arbitrary gantry angle (angle-agnostic). X360 incorporates patient-specific prior information from planning 4D-CTs to address the under-sampling issue and adopts a deformation-driven approach that deforms a prior liver surface mesh to new meshes reflecting real-time motion. The liver mesh motion is solved via motion-related image features encoded in the arbitrary-angle X-ray projection, through a sequential combination of rigid and deformable registration modules. To achieve angle agnosticism, a geometry-informed X-ray feature pooling layer was developed to allow X360 to extract angle-dependent image features for motion estimation. As a liver boundary motion solver, X360 was also combined with previously developed, DL-based optical surface imaging and biomechanical modeling techniques for intra-liver motion estimation and tumor localization. RESULTS: With geometry-aware feature pooling, X360 can solve the liver boundary motion from an arbitrary-angle X-ray projection. Evaluated on a set of 10 liver patient cases, the mean (± s.d.) 95-percentile Hausdorff distance between the solved liver boundary and the "ground truth" decreased from 10.9 (±4.5) mm (before motion estimation) to 5.5 (±1.9) mm (X360). When X360 was further integrated with surface imaging and biomechanical modeling for liver tumor localization, the mean (± s.d.) center-of-mass localization error of the liver tumors decreased from 9.4 (±5.1) mm to 2.2 (±1.7) mm. CONCLUSION: X360 can achieve fast and robust liver boundary motion estimation from arbitrary-angle X-ray projections for real-time imaging guidance. Serving as a surface motion solver, X360 can be integrated into a combined framework to achieve accurate, real-time, and marker-less liver tumor localization.
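To illustrate the angle-agnostic idea, here is a hedged stand-in for the geometry-informed feature pooling layer: projection features are pooled and then modulated by the gantry angle (FiLM-style conditioning), so a single network can regress liver-mesh motion at any angle. The actual X360 layer differs; the shapes, the FiLM mechanism, and the vertex count are assumptions.

```python
import torch
import torch.nn as nn

class AngleConditionedPooling(nn.Module):
    """Illustrative stand-in for geometry-informed feature pooling:
    pooled projection features are modulated by the gantry angle
    (FiLM-style), making downstream motion regression angle-aware."""
    def __init__(self, c_feat=32):
        super().__init__()
        self.film = nn.Linear(2, 2 * c_feat)     # angle -> (scale, shift)

    def forward(self, feat, angle_rad):          # feat: (B, C, H, W)
        pooled = feat.mean(dim=(2, 3))           # global average pool: (B, C)
        ang = torch.stack([angle_rad.sin(), angle_rad.cos()], dim=-1)
        scale, shift = self.film(ang).chunk(2, dim=-1)
        return pooled * (1 + scale) + shift      # angle-dependent features

class X360Like(nn.Module):
    """Projection CNN + angle-aware pooling -> liver mesh vertex offsets."""
    def __init__(self, n_vertices=500):
        super().__init__()
        self.n_vertices = n_vertices
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.pool = AngleConditionedPooling(32)
        self.head = nn.Linear(32, n_vertices * 3)

    def forward(self, proj, angle_rad):
        offsets = self.head(self.pool(self.cnn(proj), angle_rad))
        return offsets.view(-1, self.n_vertices, 3)  # per-vertex displacement

net = X360Like()
dxyz = net(torch.rand(1, 1, 128, 128), torch.tensor([1.2]))  # any gantry angle
```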
Affiliation(s)
- Hua-Chieh Shao
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Dallas, Texas, USA
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Dallas, Texas, USA
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Yunxiang Li
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Dallas, Texas, USA
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Dallas, Texas, USA
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jing Wang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Dallas, Texas, USA
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Dallas, Texas, USA
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Steve Jiang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Dallas, Texas, USA
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Dallas, Texas, USA
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- You Zhang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Dallas, Texas, USA
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Dallas, Texas, USA
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
9. Xiao H, Xue X, Zhu M, Jiang X, Xia Q, Chen K, Li H, Long L, Peng K. Deep learning-based lung image registration: A review. Comput Biol Med 2023; 165:107434. PMID: 37696177; DOI: 10.1016/j.compbiomed.2023.107434.
Abstract
Lung image registration can effectively describe the relative motion of lung tissues, thereby helping to solve a series of problems in clinical applications. Since the lungs are soft and fairly passive organs, they are influenced by respiration and heartbeat, resulting in discontinuity of lung motion and large deformation of anatomic features. This poses great challenges for accurate registration of lung images and its applications. The recent application of deep learning (DL) methods in the field of medical image registration has brought promising results. However, a versatile registration framework has not yet emerged, owing to the diverse challenges of registration for different regions of interest (ROI). DL-based image registration methods used for other ROIs cannot achieve satisfactory results in the lungs. In addition, few review articles are available on DL-based lung image registration. In this review, the development of conventional methods for lung image registration is briefly described, and a more comprehensive survey of DL-based methods for lung image registration is presented. The DL-based methods are classified according to supervision type, including fully-supervised, weakly-supervised, and unsupervised. The contributions of researchers in addressing various challenges are described, as well as the limitations of these approaches. This review also presents a comprehensive statistical analysis of the cited papers in terms of evaluation metrics and loss functions. In addition, publicly available datasets for lung image registration are summarized. Finally, the remaining challenges and potential trends in DL-based lung image registration are discussed.
Affiliation(s)
- Hanguang Xiao
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Xufeng Xue
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Mi Zhu
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Xin Jiang
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Qingling Xia
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Kai Chen
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Huanqi Li
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Li Long
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Ke Peng
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
10. Shao HC, Li Y, Wang J, Jiang S, Zhang Y. Real-time liver tumor localization via combined surface imaging and a single x-ray projection. Phys Med Biol 2023; 68:065002. PMID: 36731143; PMCID: PMC10394117; DOI: 10.1088/1361-6560/acb889.
Abstract
Objective. Real-time imaging, a building block of real-time adaptive radiotherapy, provides instantaneous knowledge of anatomical motion to drive delivery adaptation to improve patient safety and treatment efficacy. The temporal constraint of real-time imaging (<500 milliseconds) significantly limits the imaging signals that can be acquired, rendering volumetric imaging and 3D tumor localization extremely challenging. Real-time liver imaging is particularly difficult, compounded by the low soft tissue contrast within the liver. We proposed a deep learning (DL)-based framework (Surf-X-Bio), to track 3D liver tumor motion in real-time from combined optical surface image and a single on-board x-ray projection.Approach. Surf-X-Bio performs mesh-based deformable registration to track/localize liver tumors volumetrically via three steps. First, a DL model was built to estimate liver boundary motion from an optical surface image, using learnt motion correlations between the respiratory-induced external body surface and liver boundary. Second, the residual liver boundary motion estimation error was further corrected by a graph neural network-based DL model, using information extracted from a single x-ray projection. Finally, a biomechanical modeling-driven DL model was applied to solve the intra-liver motion for tumor localization, using the liver boundary motion derived via prior steps.Main results. Surf-X-Bio demonstrated higher accuracy and better robustness in tumor localization, as compared to surface-image-only and x-ray-only models. By Surf-X-Bio, the mean (±s.d.) 95-percentile Hausdorff distance of the liver boundary from the 'ground-truth' decreased from 9.8 (±4.5) (before motion estimation) to 2.4 (±1.6) mm. The mean (±s.d.) center-of-mass localization error of the liver tumors decreased from 8.3 (±4.8) to 1.9 (±1.6) mm.Significance. Surf-X-Bio can accurately track liver tumors from combined surface imaging and x-ray imaging. The fast computational speed (<250 milliseconds per inference) allows it to be applied clinically for real-time motion management and adaptive radiotherapy.
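The three-step flow of Surf-X-Bio can be summarized as a pipeline sketch, with toy stand-ins for the three learned models (surface model, x-ray correction model, biomechanical model); only the data flow, not the models themselves, reflects the abstract.

```python
import numpy as np

def surface_to_liver_boundary(surface_img, surf_model):
    """Step 1: a DL model estimates liver boundary motion from the optical
    body-surface image (learned external-internal motion correlation)."""
    return surf_model(surface_img)                 # (n_vertices, 3) motion

def xray_residual_correction(boundary_motion, xray_proj, gnn_model):
    """Step 2: a graph-network model corrects the residual boundary motion
    error using features from a single on-board x-ray projection."""
    return boundary_motion + gnn_model(boundary_motion, xray_proj)

def biomechanical_intra_liver(boundary_motion, biomech_model):
    """Step 3: a biomechanics-driven model propagates the boundary motion
    into the liver interior to localize the tumor."""
    return biomech_model(boundary_motion)          # tumor center (3,)

# Toy stand-ins so the pipeline runs end to end; real models are learned.
surf_model = lambda img: np.zeros((500, 3)) + img.mean() * 0.01
gnn_model = lambda bm, proj: 0.1 * (proj.mean() - bm)
biomech_model = lambda bm: bm.mean(axis=0)

boundary = surface_to_liver_boundary(np.random.rand(64, 64), surf_model)
boundary = xray_residual_correction(boundary, np.random.rand(128, 128), gnn_model)
tumor_com = biomechanical_intra_liver(boundary, biomech_model)
```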
Affiliation(s)
- Hua-Chieh Shao
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Yunxiang Li
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Jing Wang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Steve Jiang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- You Zhang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
11. Zhang Y, Shao HC, Pan T, Mengke T. Dynamic cone-beam CT reconstruction using spatial and temporal implicit neural representation learning (STINR). Phys Med Biol 2023; 68:045005. PMID: 36638543; PMCID: PMC10087494; DOI: 10.1088/1361-6560/acb30d.
Abstract
Objective. Dynamic cone-beam CT (CBCT) imaging is highly desired in image-guided radiation therapy to provide volumetric images with high spatial and temporal resolution, enabling applications including tumor motion tracking/prediction and intra-delivery dose calculation/accumulation. However, dynamic CBCT reconstruction is a substantially challenging spatiotemporal inverse problem, due to the extremely limited projection samples available for each CBCT reconstruction (one projection for one CBCT volume). Approach. We developed a simultaneous spatial and temporal implicit neural representation (STINR) method for dynamic CBCT reconstruction. STINR maps the unknown image and the evolution of its motion into spatial and temporal multi-layer perceptrons (MLPs), and iteratively optimizes the neuron weightings of the MLPs via acquired projections to represent the dynamic CBCT series. In addition to the MLPs, we also introduced prior knowledge, in the form of principal component analysis (PCA)-based patient-specific motion models, to reduce the complexity of the temporal mapping and address the ill-conditioned dynamic CBCT reconstruction problem. We used the extended-cardiac-torso (XCAT) phantom and a patient 4D-CBCT dataset to simulate different lung motion scenarios to evaluate STINR. The scenarios contain motion variations including motion baseline shifts, motion amplitude/frequency variations, and motion non-periodicity. The XCAT scenarios also contain inter-scan anatomical variations including tumor shrinkage and tumor position change. Main results. STINR shows consistently higher image reconstruction and motion tracking accuracy than a traditional PCA-based method and a polynomial-fitting-based neural representation method. STINR tracks the lung target to an average center-of-mass error of 1-2 mm, with corresponding relative errors of the reconstructed dynamic CBCTs around 10%. Significance. STINR offers a general framework allowing accurate dynamic CBCT reconstruction for image-guided radiotherapy. It is a one-shot learning method that does not rely on pre-training and is not susceptible to generalizability issues. It also allows natural super-resolution and can be readily applied to other imaging modalities.
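A minimal sketch of STINR's temporal half: a small MLP maps normalized scan time to the weights of a PCA-based patient-specific motion model, and the weighted sum of motion modes yields the DVF at that instant. Component counts, volume shape, and the random stand-in modes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TemporalINR(nn.Module):
    """Temporal implicit representation: maps scan time t to weights of a
    PCA-based patient-specific motion model (shapes are illustrative)."""
    def __init__(self, n_comp=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_comp))

    def forward(self, t):                  # t: (B, 1), normalized scan time
        return self.mlp(t)                 # (B, n_comp) PCA mode weights

# PCA motion model from planning 4D data: mean DVF + principal motion modes.
n_comp, n_vox = 3, 3 * 16 * 32 * 32        # flattened (3, D, H, W)
mean_dvf = torch.zeros(n_vox)
modes = torch.randn(n_comp, n_vox) * 0.01  # stand-in principal components

t_inr = TemporalINR(n_comp)
t = torch.tensor([[0.37]])                 # any time within the scan
dvf_t = mean_dvf + t_inr(t) @ modes        # (1, n_vox) DVF at time t
dvf_t = dvf_t.view(1, 3, 16, 32, 32)       # ready to warp the spatial image
```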
Affiliation(s)
- You Zhang
- Advanced Imaging and Informatics in Radiation Therapy (AIRT) Laboratory, Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, 75235, United States of America
- Hua-Chieh Shao
- Advanced Imaging and Informatics in Radiation Therapy (AIRT) Laboratory, Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, 75235, United States of America
- Tinsu Pan
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX, 77030, United States of America
- Tielige Mengke
- Advanced Imaging and Informatics in Radiation Therapy (AIRT) Laboratory, Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, 75235, United States of America
12. Dong G, Dai J, Li N, Zhang C, He W, Liu L, Chan Y, Li Y, Xie Y, Liang X. 2D/3D Non-Rigid Image Registration via Two Orthogonal X-ray Projection Images for Lung Tumor Tracking. Bioengineering (Basel) 2023; 10:144. PMID: 36829638; PMCID: PMC9951849; DOI: 10.3390/bioengineering10020144.
Abstract
Two-dimensional (2D)/three-dimensional (3D) registration is critical in clinical applications. However, existing methods suffer from long alignment times and high imaging doses. In this paper, a non-rigid 2D/3D registration method based on deep learning with orthogonal-angle projections is proposed. The method can quickly achieve alignment using only two orthogonal-angle projections. We tested the method on lungs (with and without tumors) and phantom data. The results show that the Dice and normalized cross-correlation scores are greater than 0.97 and 0.92, respectively, and the registration time is less than 1.2 seconds. In addition, the proposed model demonstrated the ability to track lung tumors, highlighting the clinical potential of the method.
Affiliation(s)
- Guoya Dong
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300130, China
- Hebei Key Laboratory of Bioelectromagnetics and Neural Engineering, Tianjin 300130, China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Tianjin 300130, China
- Jingjing Dai
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300130, China
- Hebei Key Laboratory of Bioelectromagnetics and Neural Engineering, Tianjin 300130, China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Tianjin 300130, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Na Li
- Department of Biomedical Engineering, Guangdong Medical University, Dongguan 523808, China
- Chulong Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Wenfeng He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Lin Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yinping Chan
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yunhui Li
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaokun Liang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
13. Zhang M, Wei R, Liu B, Xu S, Bai X, Zhou F. CG-DRR: digital reconstructed radiograph generation algorithm based on Cycle-GAN. Journal of Image and Graphics 2023; 28:1212-1222. DOI: 10.11834/jig.210868.
14. Shao HC, Wang J, Bai T, Chun J, Park JC, Jiang S, Zhang Y. Real-time liver tumor localization via a single x-ray projection using deep graph neural network-assisted biomechanical modeling. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac6b7b.
Abstract
Objective. Real-time imaging is highly desirable in image-guided radiotherapy, as it provides instantaneous knowledge of patients' anatomy and motion during treatments and enables online treatment adaptation to achieve the highest tumor targeting accuracy. Due to extremely limited acquisition time, only one or a few x-ray projections can be acquired for real-time imaging, which poses a substantial challenge to localizing the tumor from the scarce projections. For liver radiotherapy, this challenge is further exacerbated by the diminished contrast between the tumor and the surrounding normal liver tissues. Here, we propose a framework combining graph neural network-based deep learning and biomechanical modeling to track liver tumors in real time from a single onboard x-ray projection. Approach. Liver tumor tracking is achieved in two steps. First, a deep learning network is developed to predict the liver surface deformation using image features learned from the x-ray projection. Second, the intra-liver deformation is estimated through biomechanical modeling, using the liver surface deformation as the boundary condition to solve tumor motion by finite element analysis. The accuracy of the proposed framework was evaluated using a dataset of 10 patients with liver cancer. Main results. The results show accurate liver surface registration from the graph neural network-based deep learning model, which translates into accurate, fiducial-less liver tumor localization after biomechanical modeling (<1.2 (±1.2) mm average localization error). Significance. The method demonstrates potential for intra-treatment, real-time 3D liver tumor monitoring and localization. It could be applied to facilitate 4D dose accumulation, multi-leaf collimator tracking, and real-time plan adaptation, and it can be adapted to other anatomical sites as well.
15. Hsu CF, Chien TW, Yan YH. An application for classifying perceptions on my health bank in Taiwan using convolutional neural networks and web-based computerized adaptive testing: A development and usability study. Medicine (Baltimore) 2021; 100:e28457. PMID: 34967385; PMCID: PMC8718177; DOI: 10.1097/md.0000000000028457.
Abstract
BACKGROUND: Classifying respondents' online opinions into positive and negative classes using a minimal number of questions is gradually moving from technique to practice. A survey incorporating convolutional neural networks (CNNs) into web-based computerized adaptive testing (CAT) was used to collect perceptions of My Health Bank (MHB) from users in Taiwan. This study designed an online module to accurately and efficiently classify respondents' perceptions into positive and negative classes using CNNs and web-based CAT. METHODS: In all, 640 patients, family members, and caregivers aged 20 to 70 years who were registered MHB users were invited in 2019 to complete a 3-domain, 26-item, 5-category questionnaire about their perceptions of MHB (PMHB26). A CNN algorithm and k-means clustering were used to divide respondents into unsatisfied and satisfied classes and to build a PMHB26 predictive model for parameter estimation. Exploratory factor analysis, the Rasch model, and descriptive statistics were used to examine the demographic characteristics and PMHB26 factors suitable for use in CNNs and Rasch multidimensional CAT (MCAT). An application was then designed to classify MHB perceptions. RESULTS: We found that three construct factors were extracted from PMHB26. The reliability of PMHB26 for each subscale was beyond 0.94, based on internal consistency and stability in the data. We further found the following: for accuracy, the PMHB26 CNN model yields a high accuracy rate (0.98) with an area under the curve of 0.98 (95% confidence interval, 0.97-0.99), based on the 391 returned questionnaires; for efficiency, approximately one-third of the items did not need to be answered when using Rasch MCAT, reducing the respondents' burden. CONCLUSIONS: The PMHB26 CNN model, combined with Rasch online MCAT, is recommended for improving the accuracy and efficiency of classifying patients' perceptions of MHB utility. An application developed to help respondents self-assess the MHB co-creation of value can be applied to other surveys in the future.
Affiliation(s)
- Chen-Fang Hsu
- Department of Pediatrics, Chi Mei Medical Center, Tainan, Taiwan
- School of Medicine, College of Medicine, Chung Shan Medical University, Taichung, Taiwan
- School of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Tsair-Wei Chien
- Department of Medical Research, Chi-Mei Medical Center, Tainan, Taiwan
- Yu-Hua Yan
- Superintendent Office, Tainan Municipal Hospital (Managed by Show Chwan Medical Care Corporation), Tainan, Taiwan
- Department of Hospital and Health Care Administration, Chia Nan University of Pharmacy and Science, Tainan, Taiwan
16. Hayashi R, Miyazaki K, Takao S, Yokokawa K, Tanaka S, Matsuura T, Taguchi H, Katoh N, Shimizu S, Umegaki K, Miyamoto N. Real-time CT image generation based on voxel-by-voxel modeling of internal deformation by utilizing the displacement of fiducial markers. Med Phys 2021; 48:5311-5326. PMID: 34260755; DOI: 10.1002/mp.15095.
Abstract
PURPOSE: To show the feasibility of a real-time CT image generation technique utilizing internal fiducial markers that facilitate the evaluation of internal deformation. METHODS: In the proposed method, a linear regression model that can derive internal deformation from the displacement of fiducial markers is built for each voxel in a training process before the treatment session. Marker displacement and internal deformation are derived from a four-dimensional computed tomography (4DCT) dataset. In the treatment session, the three-dimensional deformation vector field is derived according to the marker displacement, which is monitored by the real-time imaging system. The whole CT image can then be synthesized in real time by deforming the reference CT image with the deformation vector field. To show the feasibility of the technique, image synthesis accuracy and tumor localization accuracy were evaluated using datasets generated by the extended NURBS-based cardiac-torso (XCAT) phantom and clinical 4DCT datasets from six patients, containing 10 CT datasets each. In the validation with the XCAT phantom, the motion ranges of the tumor in the training data and validation data were about 10 and 15 mm, respectively, to simulate motion variation between 4DCT acquisition and the treatment session. In the validation with the patient 4DCT datasets, eight CT datasets from each 4DCT dataset were used in the training process; the two excluded inhale CT datasets can be regarded as datasets with deformations larger than those of the training data. CT images were generated for each respiratory phase using the corresponding marker displacement. Root mean squared error (RMSE), normalized RMSE (NRMSE), and the structural similarity index measure (SSIM) between the original and synthesized CT images were evaluated as quantitative indices of image synthesis accuracy. The accuracy of tumor localization was also evaluated. RESULTS: In the validation with the XCAT phantom, the mean NRMSE, SSIM, and three-dimensional tumor localization error were 7.5 ± 1.1%, 0.95 ± 0.02, and 0.4 ± 0.3 mm, respectively. In the validation with the patient 4DCT datasets, the mean RMSE, NRMSE, SSIM, and three-dimensional tumor localization error over six patients were 73.7 ± 19.6 HU, 9.2 ± 2.6%, 0.88 ± 0.04, and 0.8 ± 0.6 mm, respectively. These results suggest that the accuracy of the proposed technique is adequate when the respiratory motion is within the range of the training dataset. In the evaluation with marker displacement larger than that of the training dataset, the mean RMSE, NRMSE, and tumor localization error were about 100 HU, 13%, and <2.0 mm, respectively, except for one case with large motion variation. The performance of the proposed method was similar to those of previous studies. The processing time to generate a volumetric image was <100 ms. CONCLUSION: We have shown the feasibility of the proposed real-time CT image generation technique for volumetric imaging.
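The per-voxel linear model lends itself to a compact sketch: fit displacement = A·(marker displacement) + b for every voxel by least squares across 4DCT phases, then predict a full DVF from live marker positions. Dimensions and the random stand-in data are illustrative assumptions; the authors' registration-derived training data are assumed.

```python
import numpy as np

# Training: per-voxel linear model  d_voxel = A @ d_markers + b,
# fit across 4DCT phases (dimensions are illustrative).
n_phases, n_markers, n_vox = 10, 3, 2000
marker_disp = np.random.rand(n_phases, 3 * n_markers)   # (T, 9)
voxel_disp = np.random.rand(n_phases, n_vox, 3)         # (T, V, 3) from DIR

X = np.hstack([marker_disp, np.ones((n_phases, 1))])    # add bias column
# Solve least squares once for all voxels/components: coef is (9+1, V*3).
coef, *_ = np.linalg.lstsq(X, voxel_disp.reshape(n_phases, -1), rcond=None)

def predict_dvf(markers_now):
    """Real-time step: marker displacement (9,) -> full DVF (V, 3)."""
    x = np.append(markers_now, 1.0)
    return (x @ coef).reshape(n_vox, 3)

dvf_now = predict_dvf(np.random.rand(3 * n_markers))
# The reference CT is then deformed by dvf_now to synthesize the live CT.
```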
Affiliation(s)
- Risa Hayashi
- Graduate School of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
- Koichi Miyazaki
- Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
- Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Seishin Takao
- Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
- Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Kohei Yokokawa
- Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
- Sodai Tanaka
- Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
- Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Taeko Matsuura
- Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
- Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Hiroshi Taguchi
- Department of Radiation Oncology, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Norio Katoh
- Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Shinichi Shimizu
- Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Kikuo Umegaki
- Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
- Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Naoki Miyamoto
- Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
- Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
17. Dang J, You T, Sun W, Xiao H, Li L, Chen X, Dai C, Li Y, Song Y, Zhang T, Chen D. Fully Automatic Sliding Motion Compensated and Simultaneous 4D-CBCT via Bilateral Filtering. Front Oncol 2021; 10:568627. PMID: 33537233; PMCID: PMC7849763; DOI: 10.3389/fonc.2020.568627.
Abstract
Purpose: To incorporate bilateral filtering into deformable vector field (DVF)-based 4D-CBCT reconstruction to realize fully automatic sliding-motion-compensated 4D-CBCT. Materials and Methods: Initially, a motion-compensated simultaneous algebraic reconstruction technique (mSART) is used to generate a high-quality reference phase (e.g., the 0% phase) by using all phase projections together with the initial 4D-DVFs. The initial 4D-DVFs were generated via Demons registration between the 0% phase and each other phase image. The 4D-DVFs are then iteratively updated by matching the forward projection of the deformed high-quality 0% phase with the measured projection of the target phase. The loss function during this optimization contains a projection intensity difference matching criterion plus a DVF smoothing constraint term. We introduce a bilateral filtering kernel into the DVF constraint term to estimate the sliding motion automatically. The bilateral filtering kernel contains three sub-kernels: 1) a spatial-domain Gaussian kernel; 2) an image intensity-domain Gaussian kernel; and 3) a DVF-domain Gaussian kernel. By choosing suitable kernel variances, the sliding motion can be extracted. A non-linear conjugate gradient optimizer was used. We validated the algorithm on a non-uniform rotational B-spline based cardiac-torso (NCAT) phantom and four anonymized patient datasets. For quantification, we used: 1) the root-mean-square error (RMSE) together with the maximum error (MaxE); 2) the Dice coefficient of the lung contour extracted from the final reconstructed images; and 3) the relative reconstruction error (RE). Results: For the NCAT phantom, the motion trajectory's RMSE/MaxE are 0.796/1.02 mm with bilateral filtering reconstruction and 2.704/4.08 mm with the original reconstruction. For the patient pilot study, the 4D Dice coefficients obtained with bilateral filtering are consistently higher than those without. Meanwhile, image features such as rib position, heart edge definition, and fibrous structures were all better recovered with bilateral filtering. Conclusion: We developed a bilateral filtering-based, fully automatic sliding-motion-compensated 4D-CBCT scheme. Both the digital phantom and initial patient pilot studies confirmed improved motion estimation and image reconstruction. It can be used as a 4D-CBCT image guidance tool for lung SBRT treatment.
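The three-part bilateral kernel reduces to a product of Gaussians, sketched below with illustrative (not the paper's) variances: neighbors that differ strongly in intensity or DVF, e.g., across a sliding lung-chest wall interface, receive near-zero weight, so the smoothing constraint preserves the motion discontinuity.

```python
import numpy as np

def bilateral_dvf_weight(dx, di, dv, sig_x=5.0, sig_i=50.0, sig_v=2.0):
    """Product of the three Gaussian sub-kernels used to regularize the DVF:
    spatial distance dx (mm), intensity difference di (HU), and DVF
    difference dv (mm). The variances here are illustrative assumptions."""
    return (np.exp(-dx**2 / (2 * sig_x**2)) *
            np.exp(-di**2 / (2 * sig_i**2)) *
            np.exp(-dv**2 / (2 * sig_v**2)))

# Weighted smoothing of one voxel's DVF from its 3x3x3 neighborhood:
# neighbors across a sliding interface (large di and dv) get near-zero
# weight, so smoothing does not blur the motion discontinuity.
rng = np.random.default_rng(0)
nbr_dvf = rng.normal(0, 1, (27, 3))            # neighborhood DVF vectors
dx = rng.uniform(0, 3, 27)                     # distances to center (mm)
di = rng.uniform(0, 200, 27)                   # HU differences to center
dv = np.linalg.norm(nbr_dvf - nbr_dvf[13], axis=1)
w = bilateral_dvf_weight(dx, di, dv)
smoothed_center = (w[:, None] * nbr_dvf).sum(0) / w.sum()
```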
Affiliation(s)
- Jun Dang
- Department of Oncology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Tao You
- Department of Radiation Oncology, The Affiliated Hospital of Jiangsu University, Zhenjiang, China
- Wenzheng Sun
- Department of Radiation Oncology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Hanguan Xiao
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Longhao Li
- Department of Oncology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Xiaopin Chen
- Department of Oncology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Chunhua Dai
- Department of Radiation Oncology, The Affiliated Hospital of Jiangsu University, Zhenjiang, China
- Ying Li
- Department of Oncology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Yanbo Song
- Department of Oncology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Tao Zhang
- Department of Oncology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Deyu Chen
- Department of Radiation Oncology, The Affiliated Hospital of Jiangsu University, Zhenjiang, China