1
Zhang H, Chen K, Xu X, You T, Sun W, Dang J. Spatiotemporal correlation enhanced real-time 4D-CBCT imaging using convolutional LSTM networks. Front Oncol 2024; 14:1390398. PMID: 39161388; PMCID: PMC11330803; DOI: 10.3389/fonc.2024.1390398.
Abstract
Purpose: To enhance the accuracy of real-time four-dimensional cone beam CT (4D-CBCT) imaging by incorporating spatiotemporal correlation from the sequential projection images into the single-projection-based 4D-CBCT estimation process. Methods: We first derived 4D deformation vector fields (DVFs) from patient 4D-CT. Principal component analysis (PCA) was then employed to extract distinctive feature labels for each DVF, focusing on the first three PCA coefficients. To simulate a wide range of respiratory motion, we expanded the motion amplitude and used random sampling to generate approximately 900 sets of PCA labels. These labels were used to produce 900 simulated 4D-DVFs, which in turn deformed the 0% phase 4D-CT to obtain 900 CBCT volumes with continuous motion amplitudes. Forward projection was then performed at one angle to obtain the digitally reconstructed radiographs (DRRs). These DRRs and the PCA labels were used as the training data set. To capture the spatiotemporal correlation in the projections, we propose using a convolutional LSTM (ConvLSTM) network for PCA coefficient estimation. For network testing, when several online CBCT projections (with different motion amplitudes that cover the full respiration range) are acquired and fed into the network, the corresponding 4D PCA coefficients are obtained and finally lead to a full online 4D-CBCT prediction. A phantom experiment is first performed with the XCAT phantom, followed by a pilot clinical evaluation. Results: Results on the XCAT phantom and the patient data show that the proposed approach outperformed other networks in terms of visual inspection and quantitative metrics. For the XCAT phantom experiment, ConvLSTM achieves the highest quantification accuracy, with MAPE (Mean Absolute Percentage Error), PSNR (Peak Signal-to-Noise Ratio), and RMSE (Root Mean Squared Error) of 0.0459, 64.6742, and 0.0011, respectively. For the patient pilot clinical experiment, ConvLSTM also achieves the best quantification accuracy, with MAPE, PSNR, and RMSE of 0.0934, 63.7294, and 0.0019, respectively. The quantitative evaluation metrics used are 1) the Mean Absolute Error (MAE), 2) the Normalized Cross Correlation (NCC), 3) the Structural Similarity Index Measure (SSIM), 4) the Peak Signal-to-Noise Ratio (PSNR), 5) the Root Mean Squared Error (RMSE), and 6) the Mean Absolute Percentage Error (MAPE). Conclusion: Spatiotemporal correlation-based respiratory motion modeling provides a potential solution for accurate real-time 4D-CBCT reconstruction.
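The PCA-based parameterization of the respiratory DVFs can be illustrated with a short sketch (Python/NumPy); the array shapes, function names, and the three-component truncation below are illustrative assumptions, not the authors' implementation. A ConvLSTM (or any regressor) would then map the incoming projection sequence to the coefficient vector `w`, from which the full DVF, and hence the online 4D-CBCT, is recovered.

```python
import numpy as np

def fit_pca_motion_model(dvfs, n_components=3):
    """Fit a PCA motion model to training DVFs.

    dvfs: array of shape (n_samples, n_voxels * 3), one flattened DVF per row.
    Returns the mean DVF, the leading principal components, and the
    per-sample PCA coefficients that serve as training labels.
    """
    mean_dvf = dvfs.mean(axis=0)
    centered = dvfs - mean_dvf
    # SVD of the centered DVF matrix; rows of vt are the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]            # (n_components, n_voxels * 3)
    labels = centered @ components.T          # (n_samples, n_components)
    return mean_dvf, components, labels

def dvf_from_coefficients(mean_dvf, components, w):
    """Rebuild a DVF from a predicted coefficient vector w (length n_components)."""
    return mean_dvf + w @ components
```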
Affiliation(s)
- Hua Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Kai Chen
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Xiaotong Xu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Tao You
- Department of Radiation Oncology, The Affiliated Hospital of Jiangsu University, Zhenjiang, Jiangsu, China
- Wenzheng Sun
- Department of Radiation Oncology, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Jun Dang
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
2
Zhang Y, Jiang Z, Zhang Y, Ren L. A review on 4D cone-beam CT (4D-CBCT) in radiation therapy: Technical advances and clinical applications. Med Phys 2024; 51:5164-5180. PMID: 38922912; PMCID: PMC11321939; DOI: 10.1002/mp.17269.
Abstract
Cone-beam CT (CBCT) is the most commonly used onboard imaging technique for target localization in radiation therapy. Conventional 3D CBCT acquires x-ray cone-beam projections at multiple angles around the patient to reconstruct 3D images of the patient in the treatment room. However, despite its wide usage, 3D CBCT is limited in imaging disease sites affected by respiratory motions or other dynamic changes within the body, as it lacks time-resolved information. To overcome this limitation, 4D-CBCT was developed to incorporate a time dimension in the imaging to account for the patient's motion during the acquisitions. For example, respiration-correlated 4D-CBCT divides the breathing cycles into different phase bins and reconstructs 3D images for each phase bin, ultimately generating a complete set of 4D images. 4D-CBCT is valuable for localizing tumors in the thoracic and abdominal regions where the localization accuracy is affected by respiratory motions. This is especially important for hypofractionated stereotactic body radiation therapy (SBRT), which delivers much higher fractional doses in fewer fractions than conventional fractionated treatments. Nonetheless, 4D-CBCT does face certain limitations, including long scanning times, high imaging doses, and compromised image quality due to the necessity of acquiring sufficient x-ray projections for each respiratory phase. In order to address these challenges, numerous methods have been developed to achieve fast, low-dose, and high-quality 4D-CBCT. This paper aims to review the technical developments surrounding 4D-CBCT comprehensively. It will explore conventional algorithms and recent deep learning-based approaches, delving into their capabilities and limitations. Additionally, the paper will discuss the potential clinical applications of 4D-CBCT and outline a future roadmap, highlighting areas for further research and development. Through this exploration, the readers will better understand 4D-CBCT's capabilities and potential to enhance radiation therapy.
Affiliation(s)
- Yawei Zhang
- University of Florida Proton Therapy Institute, Jacksonville, FL 32206, USA
- Department of Radiation Oncology, University of Florida College of Medicine, Gainesville, FL 32608, USA
- Zhuoran Jiang
- Medical Physics Graduate Program, Duke University, Durham, NC 27710, USA
- You Zhang
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Lei Ren
- Department of Radiation Oncology, University of Maryland, Baltimore, MD 21201, USA
3
Dong G, Zhang C, Deng L, Zhu Y, Dai J, Song L, Meng R, Niu T, Liang X, Xie Y. A deep unsupervised learning framework for the 4D CBCT artifact correction. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac55a5.
Abstract
Objective. Four-dimensional cone-beam computed tomography (4D CBCT) has unique advantages in moving target localization, tracking, and therapeutic dose accumulation in adaptive radiotherapy. However, the severe fringe artifacts and noise degradation caused by 4D CBCT reconstruction restrict its clinical application. We propose a novel deep unsupervised learning model to generate high-quality 4D CBCT from poor-quality 4D CBCT. Approach. The proposed model uses a contrastive loss function to preserve the anatomical structure in the corrected image. To preserve the relationship between the input and output images, we use a multilayer, patch-based method rather than operating on entire images. Furthermore, we draw negatives from within the input 4D CBCT rather than from the rest of the dataset. Main results. The results showed that the streak and motion artifacts were significantly suppressed. The spatial resolution of the pulmonary vessels and microstructure was also improved. To demonstrate the results in different directions, we provide an animation in the supplementary material showing different views of the predicted correction image. Significance. The proposed method can be integrated into any 4D CBCT reconstruction method and may be a practical way to enhance the image quality of 4D CBCT.
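The patch-based contrastive idea, with negatives drawn from other locations of the same input scan, can be sketched as an InfoNCE-style loss; the function name, feature shapes, and temperature below are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def patch_contrastive_loss(feat_corrected, feat_input, temperature=0.07):
    """InfoNCE-style patch loss: each corrected-image patch should match the
    input patch at the same location (positive), while patches from other
    locations of the same input act as negatives.

    feat_corrected, feat_input: (n_patches, feat_dim) features sampled at
    identical spatial locations of the corrected and input 4D CBCT.
    """
    q = F.normalize(feat_corrected, dim=1)
    k = F.normalize(feat_input, dim=1)
    logits = q @ k.t() / temperature                      # all-pairs similarities
    targets = torch.arange(q.size(0), device=q.device)    # positives on the diagonal
    return F.cross_entropy(logits, targets)
```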
4
Jailin C, Roux S, Sarrut D, Rit S. Projection-based dynamic tomography. Phys Med Biol 2021; 66. PMID: 34663759; DOI: 10.1088/1361-6560/ac309e.
Abstract
Objective. This paper proposes a 4D dynamic tomography framework that allows a moving sample to be imaged in a tomograph, and the associated space-time kinematics to be measured, with nothing more than a single conventional x-ray scan. Approach. The method exploits the consistency of the projection/reconstruction operations through a multi-scale procedure. The procedure is composed of two main parts solved alternately: a motion-compensated reconstruction algorithm and a projection-based measurement procedure that estimates the displacement field directly on each projection. Main results. The method is applied to two studies: a numerical simulation of breathing from chest computed tomography (4D-CT) and a clinical cone-beam CT of a breathing patient acquired for image guidance of radiotherapy. The reconstructed volume, initially blurred by the motion, is cleared of motion artifacts. Significance. Applying the proposed approach results in an improved reconstruction quality showing sharper edges and finer details.
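The alternation between the two steps can be written as a structural sketch; `mc_reconstruct` and `estimate_dvf_on_projection` are assumed callables standing in for the motion-compensated reconstruction and the per-projection displacement estimation, and are not part of any published implementation.

```python
def projection_based_dynamic_tomography(projections, angles, n_iterations,
                                        mc_reconstruct, estimate_dvf_on_projection):
    """Alternate between (1) estimating a displacement field on each projection
    against the re-projected current volume and (2) motion-compensated
    reconstruction of the volume using the updated displacement fields."""
    dvfs = [None] * len(projections)            # one displacement field per projection
    volume = mc_reconstruct(projections, angles, dvfs)
    for _ in range(n_iterations):
        dvfs = [estimate_dvf_on_projection(volume, proj, angle, dvf)
                for proj, angle, dvf in zip(projections, angles, dvfs)]
        volume = mc_reconstruct(projections, angles, dvfs)
    return volume, dvfs
```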
Affiliation(s)
- Clément Jailin
- Université Paris-Saclay, ENS Paris-Saclay, CNRS, LMT-Laboratoire de Mécanique et Technologie, F-91190, Gif-sur-Yvette, France
- GE Healthcare, F-78530 Buc, France
- Stéphane Roux
- Université Paris-Saclay, ENS Paris-Saclay, CNRS, LMT-Laboratoire de Mécanique et Technologie, F-91190, Gif-sur-Yvette, France
- David Sarrut
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69373, Lyon, France
- Simon Rit
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69373, Lyon, France
5
Zhang Y. An unsupervised 2D-3D deformable registration network (2D3D-RegNet) for cone-beam CT estimation. Phys Med Biol 2021; 66. PMID: 33631734; DOI: 10.1088/1361-6560/abe9f6.
Abstract
Acquiring CBCTs from a limited scan angle can help to reduce the imaging time, save imaging dose, and allow continuous target localization through arc-based treatments with high temporal resolution. However, insufficient scan angle sampling leads to severe distortions and artifacts in the reconstructed CBCT images, limiting their clinical applicability. 2D-3D deformable registration can map a prior fully-sampled CT/CBCT volume to estimate a new CBCT, based on limited-angle on-board cone-beam projections. The resulting CBCT images estimated by 2D-3D deformable registration successfully suppress the distortions and artifacts, and reflect up-to-date patient anatomy. However, the traditional iterative 2D-3D deformable registration algorithm is computationally expensive and time-consuming, taking hours to generate a high-quality deformation vector field (DVF) and the corresponding CBCT. In this work, we developed an unsupervised, end-to-end, 2D-3D deformable registration framework using convolutional neural networks (2D3D-RegNet) to address the speed bottleneck of the conventional iterative 2D-3D deformable registration algorithm. The 2D3D-RegNet was able to solve the DVFs within 5 seconds for 90 orthogonally-arranged projections covering a combined 90° scan angle, with DVF accuracy superior to 3D-3D deformable registration and on par with the conventional 2D-3D deformable registration algorithm. We also performed a preliminary robustness analysis of 2D3D-RegNet with respect to projection angular sampling frequency variations and scan angle offsets. The synergy of 2D3D-RegNet with biomechanical modeling was also evaluated, demonstrating that 2D3D-RegNet can function as a fast DVF solution core for further DVF refinement.
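The core of such unsupervised 2D-3D registration, warping the prior volume with a predicted DVF and comparing its re-projection against measured projections, can be sketched in PyTorch as below; the parallel-beam summation stands in for a true cone-beam projector, and all names, shapes, and normalizations are illustrative assumptions rather than the 2D3D-RegNet implementation.

```python
import torch
import torch.nn.functional as F

def warp_volume(volume, dvf):
    """Warp a prior volume with a DVF.
    volume: (1, 1, D, H, W); dvf: (1, D, H, W, 3) displacements in the
    normalized [-1, 1] coordinates expected by grid_sample (x, y, z order)."""
    d, h, w = volume.shape[2:]
    zz, yy, xx = torch.meshgrid(torch.linspace(-1, 1, d),
                                torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
    identity = torch.stack((xx, yy, zz), dim=-1).unsqueeze(0)
    return F.grid_sample(volume, identity + dvf, align_corners=True)

def projection_consistency_loss(prior_volume, dvf, measured_projection):
    """Re-project the warped prior (parallel-beam stand-in: sum along depth)
    and penalize the mismatch with the measured on-board projection."""
    drr = warp_volume(prior_volume, dvf).sum(dim=2)      # (1, 1, H, W)
    return F.mse_loss(drr, measured_projection)
```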
Affiliation(s)
- You Zhang
- Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75235, United States of America
6
Dang J, You T, Sun W, Xiao H, Li L, Chen X, Dai C, Li Y, Song Y, Zhang T, Chen D. Fully Automatic Sliding Motion Compensated and Simultaneous 4D-CBCT via Bilateral Filtering. Front Oncol 2021; 10:568627. PMID: 33537233; PMCID: PMC7849763; DOI: 10.3389/fonc.2020.568627.
Abstract
Purpose: To incorporate bilateral filtering into deformation vector field (DVF)-based 4D-CBCT reconstruction to realize a fully automatic sliding-motion-compensated 4D-CBCT. Materials and Methods: Initially, a motion-compensated simultaneous algebraic reconstruction technique (mSART) is used to generate a high-quality reference phase (e.g., the 0% phase) by using all phase projections together with the initial 4D-DVFs. The initial 4D-DVFs were generated via Demons registration between the 0% phase and each other phase image. The 4D-DVFs are then iteratively updated by matching the forward projection of the deformed high-quality 0% phase with the measured projection of the target phase. The loss function during this optimization contains a projection intensity difference matching criterion plus a DVF smoothing constraint term. We introduce a bilateral filtering kernel into the DVF constraint term to estimate the sliding motion automatically. The bilateral filtering kernel contains three sub-kernels: 1) a spatial-domain Gaussian kernel; 2) an image intensity-domain Gaussian kernel; and 3) a DVF-domain Gaussian kernel. By choosing suitable kernel variances, the sliding motion can be extracted. A nonlinear conjugate gradient optimizer was used. We validated the algorithm on a non-uniform rotational B-spline based cardiac-torso (NCAT) phantom and four anonymized patient datasets. For quantification, we used 1) the Root Mean Square Error (RMSE) together with the Maximum Error (MaxE); 2) the Dice coefficient of the lung contour extracted from the final reconstructed images; and 3) the relative reconstruction error (RE) to evaluate the algorithm's performance. Results: For the NCAT phantom, the motion trajectory's RMSE/MaxE is 0.796/1.02 mm for the bilateral filtering reconstruction and 2.704/4.08 mm for the original reconstruction. For the patient pilot study, the 4D Dice coefficients obtained with bilateral filtering are consistently higher than those obtained without it. Meanwhile, image features such as rib position, heart edge definition, and fibrous structures are all better recovered with bilateral filtering. Conclusion: We developed a bilateral filtering based, fully automatic sliding-motion-compensated 4D-CBCT scheme. Both the digital phantom and the initial patient pilot study confirmed its improved motion estimation and image reconstruction ability. It can be used as a 4D-CBCT image guidance tool for lung SBRT treatment.
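A minimal sketch of the three-part bilateral weight described above, the product of spatial, intensity, and DVF Gaussian kernels, is given below; the variable names and pairwise formulation are illustrative assumptions, not the published implementation.

```python
import numpy as np

def bilateral_dvf_weight(dist, intensity_diff, dvf_diff,
                         sigma_spatial, sigma_intensity, sigma_dvf):
    """Smoothing weight between two voxels in the DVF constraint term.
    Large intensity or DVF differences (e.g., across a sliding interface such
    as the lung-chest-wall boundary) drive the weight toward zero, so the
    smoothing is relaxed there and the sliding motion is preserved."""
    return (np.exp(-dist ** 2 / (2.0 * sigma_spatial ** 2))
            * np.exp(-intensity_diff ** 2 / (2.0 * sigma_intensity ** 2))
            * np.exp(-dvf_diff ** 2 / (2.0 * sigma_dvf ** 2)))
```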
Affiliation(s)
- Jun Dang
- Department of Oncology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Tao You
- Department of Radiation Oncology, The Affiliated Hospital of Jiangsu University, Zhenjiang, China
- Wenzheng Sun
- Department of Radiation Oncology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Hanguan Xiao
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Longhao Li
- Department of Oncology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Xiaopin Chen
- Department of Oncology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Chunhua Dai
- Department of Radiation Oncology, The Affiliated Hospital of Jiangsu University, Zhenjiang, China
- Ying Li
- Department of Oncology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Yanbo Song
- Department of Oncology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Tao Zhang
- Department of Oncology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Deyu Chen
- Department of Radiation Oncology, The Affiliated Hospital of Jiangsu University, Zhenjiang, China
7
Vergalasova I, Cai J. A modern review of the uncertainties in volumetric imaging of respiratory-induced target motion in lung radiotherapy. Med Phys 2020; 47:e988-e1008. PMID: 32506452; DOI: 10.1002/mp.14312.
Abstract
Radiotherapy has become a critical component for the treatment of all stages and types of lung cancer, often being the primary gateway to a cure. However, given that radiation can cause harmful side effects depending on how much surrounding healthy tissue is exposed, treatment of the lung can be particularly challenging due to the presence of moving targets. Careful implementation of every step in the radiotherapy process is absolutely integral for attaining optimal clinical outcomes. With the advent and now widespread use of stereotactic body radiation therapy (SBRT), where extremely large doses are delivered, accurate and precise dose targeting is especially vital to achieve an optimal risk-to-benefit ratio. This has largely become possible due to the rapid development of image-guided technology. Although imaging is critical to the success of radiotherapy, it can often be plagued with uncertainties due to respiratory-induced target motion. There has been, and continues to be, an immense research effort aimed at acknowledging and addressing these uncertainties to further our ability to more precisely target radiation treatment. Thus, the goal of this article is to provide a detailed review of the prevailing uncertainties that remain to be investigated across the different imaging modalities, as well as to highlight the more modern solutions to imaging motion and their role in addressing the current challenges.
Affiliation(s)
- Irina Vergalasova
- Department of Radiation Oncology, Rutgers Cancer Institute of New Jersey, Rutgers University, New Brunswick, NJ, USA
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong
8
Zhang H, Zeng D, Lin J, Zhang H, Bian Z, Huang J, Gao Y, Zhang S, Zhang H, Feng Q, Liang Z, Chen W, Ma J. Iterative reconstruction for dual energy CT with an average image-induced nonlocal means regularization. Phys Med Biol 2017; 62:5556-5574. PMID: 28471750; PMCID: PMC5497789; DOI: 10.1088/1361-6560/aa7122.
Abstract
Reducing radiation dose in dual energy computed tomography (DECT) is highly desirable, but it may lead to excessive noise in the filtered backprojection (FBP) reconstructed DECT images, which inevitably increases diagnostic uncertainty. To obtain clinically acceptable DECT images from low-mAs acquisitions, in this work we develop a novel scheme based on the measured DECT data. In this scheme, inspired by the success of edge-preserving non-local means (NLM) filtering in CT imaging and the intrinsic characteristics underlying DECT images, i.e., global correlation and non-local similarity, an averaged-image-induced NLM-based (aviNLM) regularization is incorporated into the penalized weighted least-squares (PWLS) framework. Specifically, the presented NLM-based regularization is designed by averaging the acquired DECT images, which takes the image similarity between the two energies into consideration. In addition, the weighted least-squares term takes into account the DECT data-dependent variance. For simplicity, the presented scheme is termed 'PWLS-aviNLM'. The performance of the presented PWLS-aviNLM algorithm was validated and evaluated on digital phantom, physical phantom, and patient data. The extensive experiments validated that the presented PWLS-aviNLM algorithm outperforms the FBP, PWLS-TV, and PWLS-NLM algorithms quantitatively. More importantly, it delivers the best qualitative results with the finest details and the fewest noise-induced artifacts, due to the aviNLM regularization learned from the DECT images. This study demonstrated the feasibility and efficacy of the presented PWLS-aviNLM algorithm to improve DECT reconstruction and the resulting material decomposition.
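The averaged-image-induced NLM weighting can be sketched as follows: patch similarities are computed on the average of the two energy images so that both channels share one set of regularization weights. The function name, 2D slice setting, parameter defaults, and border handling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def avi_nlm_weights(low_kvp, high_kvp, i, j, search=5, patch=2, h=0.02):
    """Non-local means weights for pixel (i, j) of a 2D slice, computed on the
    averaged DECT image so the low- and high-kVp channels are regularized with
    the same weights. Assumes (i, j) is far enough from the image border."""
    avg = 0.5 * (low_kvp + high_kvp)
    ref = avg[i - patch:i + patch + 1, j - patch:j + patch + 1]
    weights = {}
    for m in range(i - search, i + search + 1):
        for n in range(j - search, j + search + 1):
            cand = avg[m - patch:m + patch + 1, n - patch:n + patch + 1]
            dist2 = np.mean((ref - cand) ** 2)       # patch similarity on the averaged image
            weights[(m, n)] = np.exp(-dist2 / (h ** 2))
    total = sum(weights.values())
    return {pos: w / total for pos, w in weights.items()}
```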
Affiliation(s)
- Houjin Zhang
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China
- Dong Zeng
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China
- Jiahui Lin
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China
- Hao Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA
- Zhaoying Bian
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China
- Jing Huang
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China
- Yuanyuan Gao
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China
- Shanli Zhang
- The First Affiliated Hospital of Guangzhou University of Traditional Chinese Medicine, Guangzhou, Guangdong, China
- Hua Zhang
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China
- Qianjin Feng
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China
- Zhengrong Liang
- Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Wufan Chen
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China
- Jianhua Ma
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Guangzhou 510515, China