1. Wei R, Song Z, Pan Z, Cao Y, Song Y, Dai J. Non-coplanar CBCT image reconstruction using a generative adversarial network for non-coplanar radiotherapy. J Appl Clin Med Phys 2024:e14487. PMID: 39186746. DOI: 10.1002/acm2.14487.
Abstract
PURPOSE To develop a non-coplanar cone-beam computed tomography (CBCT) image reconstruction method using projections within a limited angle range for non-coplanar radiotherapy. METHODS A generative adversarial network (GAN) was used to reconstruct non-coplanar CBCT images. Data from 40 patients with brain tumors and two head phantoms were used in this study. In the training stage, the generator of the GAN took coplanar CBCT images and non-coplanar projections as input, and an encoder with a dual-branch structure extracted features from the coplanar CBCT and the non-coplanar projections separately. Non-coplanar CBCT images were then reconstructed by a decoder that combined the extracted features. To improve the reconstruction accuracy of image details, the generator was adversarially trained against a patch-based convolutional neural network discriminator. A newly designed joint loss, rather than the conventional GAN loss, was used to improve global structure consistency. The proposed model was evaluated using data from eight patients and two phantoms at the four couch angles (±45°, ±90°) most commonly used for brain non-coplanar radiotherapy in our department. Reconstruction accuracy was evaluated by calculating the root mean square error (RMSE) and an overall registration error ε, computed by integrating the rigid transformation parameters. RESULTS In both the patient and phantom studies, qualitative and quantitative results indicated that the ±45° couch-angle models performed better than the ±90° models, and the differences were statistically significant. In the patient study, the mean RMSE and ε values at couch angles of 45°, -45°, 90°, and -90° were 58.5 HU and 0.42 mm, 56.8 HU and 0.41 mm, 73.6 HU and 0.48 mm, and 65.3 HU and 0.46 mm, respectively. In the phantom study, the corresponding values were 91.2 HU and 0.46 mm, 95.0 HU and 0.45 mm, 114.6 HU and 0.58 mm, and 102.9 HU and 0.52 mm. CONCLUSIONS The results show that the reconstructed non-coplanar CBCT images can potentially enable intra-treatment three-dimensional position verification for non-coplanar radiotherapy.
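As a concrete illustration of the reported image-quality metric, the volumetric RMSE in Hounsfield units can be computed as sketched below (a minimal numpy sketch; the function name and toy volumes are illustrative, and the abstract does not specify how the rigid transformation parameters are integrated into ε, so only the RMSE part is shown):

```python
import numpy as np

def rmse_hu(recon, reference):
    """Root mean square error between two CT volumes, in HU."""
    recon = np.asarray(recon, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    return float(np.sqrt(np.mean((recon - reference) ** 2)))

# Toy example: a reconstruction that is uniformly 50 HU off its reference.
ref = np.zeros((4, 4, 4))
rec = ref + 50.0
print(rmse_hu(rec, ref))  # 50.0
```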
Affiliation(s)
- Ran Wei, Zhiyue Song, Ziqi Pan, Ying Cao, Yongli Song, Jianrong Dai: Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
2. Indramohan K, Moinuddin S, Akter T, Webster A, Wilson E, Fersht N, Kosmin M. Assessing inter and intrafraction uncertainties in adult brain cancer patients using 2D/3D kV and CBCT imaging. Radiography (Lond) 2024;30:1249-1257. PMID: 38970885. DOI: 10.1016/j.radi.2024.04.010.
Abstract
METHOD 2D/3D kV imaging and CBCT data using six degrees of freedom (6DoF) were compared to evaluate inter- and intrafraction motion. RESULTS Intrafraction errors were low, and interfraction errors were within institutional protocols. CONCLUSION These findings support using low-dose 2D/3D kV imaging to confirm daily patient setup and reserving pre-treatment CBCT for once-weekly additional imaging information. IMPLICATIONS FOR PRACTICE Further research is necessary to assess other uncertainties, enable the calculation of a setup margin, and determine the feasibility of reducing that margin further.
Affiliation(s)
- T Akter, E Wilson, N Fersht: Department of Radiotherapy UCLH, UK
- M Kosmin: Department of Radiotherapy UCLH and NIHR University College London Hospitals Biomedical Research Centre, UK
3. Song Z, Li T, Zuo L, Song Y, Wei R, Dai J. A grayscale compression method to segment bone structures for 2D-3D registration of setup images in non-coplanar radiotherapy. Biomed Phys Eng Express 2024;10:035014. PMID: 38442730. DOI: 10.1088/2057-1976/ad3050.
Abstract
Purpose. To evaluate the performance of an automated 2D-3D bone registration algorithm incorporating a grayscale compression method for quantifying patient position errors in non-coplanar radiotherapy. Methods. An automated 2D-3D registration algorithm incorporating a grayscale compression method to segment bone structures was proposed. Portal images containing only bone structures (Portal_bone) and digitally reconstructed radiographs containing only bone structures (DRR_bone) were used for registration. First, the portal image was filtered by a high-pass finite impulse response (FIR) filter. Then the grayscale range of the filtered portal image was compressed. Thresholds were determined based on the difference in gray values of bone structures in the filtered and compressed portal image to obtain Portal_bone. Another threshold was applied to generate DRR_bone when the CT image was processed with the ray-casting algorithm to generate DRR images. The compression performance was assessed by registering the DRR_bone with the Portal_bone obtained by compressing the portal image into various grayscale ranges. The proposed registration method was quantitatively and visually validated using (1) a CT image of an anthropomorphic head phantom and its portal images obtained in different poses and (2) CT images and pre-treatment portal images of 20 patients treated with non-coplanar radiotherapy. Results. Mean absolute registration errors for the best compression grayscale range test were 0.642 mm, 0.574 mm, and 0.643 mm, with calculation times of 50.6 min, 42.2 min, and 49.6 min for grayscale ranges of 0-127, 0-63, and 0-31, respectively. For accuracy validation (1), the mean absolute registration errors for couch angles 0°, 45°, 90°, 270°, and 315° were 0.694 mm, 0.839 mm, 0.726 mm, 0.833 mm, and 0.873 mm, respectively. Among the six transformation parameters, the translation error in the vertical direction contributed the most to the registration errors. Visual inspection of the patient registration results revealed success in every instance. Conclusions. The implemented grayscale compression method successfully enhances and segments bone structures in portal images, allowing for accurate determination of patient setup errors in non-coplanar radiotherapy.
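The filter-compress-threshold pipeline described in this abstract can be sketched as follows (a toy numpy illustration only: the box-blur-based high-pass filter, the compression target, and the threshold value are stand-ins for the paper's tuned FIR filter and thresholds, which are not specified here):

```python
import numpy as np

def box_blur(img, k=3):
    """Crude k-by-k box blur with edge-replicated padding."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def segment_bone(portal, gray_max=63, threshold=10):
    """High-pass filter the portal image, compress its grayscale range
    to 0..gray_max, then threshold to keep high-contrast (bone-like) structures."""
    high_pass = portal.astype(np.float64) - box_blur(portal)  # simple high-pass
    hp = np.clip(high_pass, 0.0, None)                        # keep positive edges
    compressed = hp / max(hp.max(), 1e-12) * gray_max         # compress grayscale range
    return (compressed > threshold).astype(np.uint8)

# Toy portal image: flat background with one bright bone-like ridge.
img = np.zeros((8, 8))
img[:, 4] = 100.0
mask = segment_bone(img)
```

In the toy image, only the high-contrast ridge survives the compression and thresholding, mimicking how bone structures are retained while soft-tissue background is suppressed.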
Affiliation(s)
- Zhiyue Song, Tantan Li, Lijing Zuo, Yongli Song, Ran Wei, Jianrong Dai: Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
4. Dong G, Dai J, Li N, Zhang C, He W, Liu L, Chan Y, Li Y, Xie Y, Liang X. 2D/3D Non-Rigid Image Registration via Two Orthogonal X-ray Projection Images for Lung Tumor Tracking. Bioengineering (Basel) 2023;10(2):144. PMID: 36829638. PMCID: PMC9951849. DOI: 10.3390/bioengineering10020144.
Abstract
Two-dimensional (2D)/three-dimensional (3D) registration is critical in clinical applications. However, existing methods suffer from long alignment times and high imaging doses. In this paper, a non-rigid 2D/3D registration method based on deep learning with orthogonal-angle projections is proposed. The method can quickly achieve alignment using only two orthogonal-angle projections. We tested the method with lung data (with and without tumors) and phantom data. The results show that the Dice coefficient and normalized cross-correlation are greater than 0.97 and 0.92, respectively, and the registration time is less than 1.2 s. In addition, the proposed model showed the ability to track lung tumors, highlighting the clinical potential of the proposed method.
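The two evaluation metrics quoted in this abstract, the Dice coefficient and normalized cross-correlation, have standard definitions that can be sketched in a few lines of numpy (function names and toy arrays are illustrative, not from the paper):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def ncc(a, b):
    """Normalized cross-correlation between two mean-centered images."""
    a = np.asarray(a, dtype=np.float64) - np.mean(a)
    b = np.asarray(b, dtype=np.float64) - np.mean(b)
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Toy check: partially overlapping masks and a self-correlated image.
m1 = np.array([[1, 1], [0, 0]])
m2 = np.array([[1, 0], [0, 0]])
img = np.array([[0.0, 1.0], [2.0, 3.0]])
d = dice(m1, m2)   # 2*1 / (2+1) = 2/3
c = ncc(img, img)  # identical images correlate perfectly: 1.0
```

A Dice above 0.97 and NCC above 0.92, as reported, therefore indicate near-perfect overlap of the warped and target structures.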
Affiliation(s)
- Guoya Dong: School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300130, China; Hebei Key Laboratory of Bioelectromagnetics and Neural Engineering, Tianjin 300130, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Tianjin 300130, China
- Jingjing Dai: School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300130, China; Hebei Key Laboratory of Bioelectromagnetics and Neural Engineering, Tianjin 300130, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Tianjin 300130, China; Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Na Li: Department of Biomedical Engineering, Guangdong Medical University, Dongguan 523808, China
- Chulong Zhang, Wenfeng He, Lin Liu, Yinping Chan, Yunhui Li, Yaoqin Xie, Xiaokun Liang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
5. Zhang Y, Qin H, Li P, Pei Y, Guo Y, Xu T, Zha H. Deformable registration of lateral cephalogram and cone-beam computed tomography image. Med Phys 2021;48:6901-6915. PMID: 34496039. DOI: 10.1002/mp.15214.
Abstract
PURPOSE This study aimed to design and evaluate a novel method for the registration of 2D lateral cephalograms and 3D craniofacial cone-beam computed tomography (CBCT) images, providing patient-specific 3D structures from a 2D lateral cephalogram without additional radiation exposure. METHODS We developed a cross-modal deformable registration model based on a deep convolutional neural network. Our approach took advantage of a low-dimensional deformation-field encoding and an iterative feedback scheme to infer coarse-to-fine volumetric deformations. In particular, we constructed a statistical subspace of deformation fields and parameterized the nonlinear mapping function from an image pair, consisting of the target 2D lateral cephalogram and the reference volumetric CBCT, to a latent encoding of the deformation field. Instead of one-shot registration by the learned mapping function, a feedback scheme was introduced to progressively update the reference volumetric image and to infer coarse-to-fine deformation fields, accounting for the shape variations of anatomical structures. A total of 220 clinically obtained CBCTs were used to train and validate the proposed model, among which 120 CBCTs were used to generate a training dataset with 24k paired synthetic lateral cephalograms and CBCTs. The proposed approach was evaluated on the deformable 2D-3D registration of clinically obtained lateral cephalograms and CBCTs from growing and adult orthodontic patients. RESULTS Strong structural consistency was observed between the deformed CBCT and the target lateral cephalogram by all criteria. The proposed method achieved state-of-the-art performance, with mean contour deviations of 0.41 ± 0.12 mm on the anterior cranial base, 0.48 ± 0.17 mm on the mandible, and 0.35 ± 0.08 mm on the maxilla. The mean surface-mesh deviation ranged from 0.78 to 0.97 mm on various craniofacial structures, and the landmark errors (LREs) ranged from 0.83 to 1.24 mm over 14 landmarks on the growing-patient datasets. The proposed iterative feedback scheme handled the structural details and improved the registration. The resultant deformed volumetric image was consistent with the target lateral cephalogram in both the 2D projective planes and the 3D volumetric space across the multicategory craniofacial structures. CONCLUSIONS The results suggest that the deep learning-based 2D-3D registration model enables deformable alignment of 2D lateral cephalograms and CBCTs and estimates patient-specific 3D craniofacial structures.
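The key idea of a low-dimensional deformation-field encoding, as used above, is that a full 3D deformation field is represented as a mean field plus a linear combination of a few basis fields, so the network only has to regress a short latent code. A toy numpy sketch of this decode/encode round trip (random stand-ins replace the paper's PCA-derived subspace and CNN regressor):

```python
import numpy as np

rng = np.random.default_rng(0)
V = 100   # number of voxels (toy size); each voxel has a 3-vector displacement
K = 5     # latent dimension of the deformation subspace

# In the paper these would come from statistics over training deformations;
# here they are random stand-ins with the right shapes.
mean_field = rng.normal(size=3 * V)
basis = rng.normal(size=(3 * V, K))   # columns span the deformation subspace

def decode(z):
    """Map a K-dim latent code to a full deformation field (3V values)."""
    return mean_field + basis @ z

def encode(field):
    """Least-squares projection of a field onto the subspace."""
    z, *_ = np.linalg.lstsq(basis, field - mean_field, rcond=None)
    return z

z_true = rng.normal(size=K)
field = decode(z_true)   # a field that lies exactly in the subspace
z_rec = encode(field)    # projecting it back recovers the code
```

In the actual model, `encode` is replaced by a learned CNN that predicts the latent code from the cephalogram/CBCT pair, and the feedback loop repeatedly re-decodes and re-warps the reference volume.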
Affiliation(s)
- Yungeng Zhang, Haifang Qin, Peixin Li, Yuru Pei, Hongbin Zha: Key Laboratory of Machine Perception (MOE), Department of Machine Intelligence, Peking University, Beijing, China
- Yuke Guo: Luoyang Institute of Science and Technology, Luoyang, China
- Tianmin Xu: School of Stomatology, Stomatology Hospital, Peking University, Beijing, China
6. Hayashi R, Miyazaki K, Takao S, Yokokawa K, Tanaka S, Matsuura T, Taguchi H, Katoh N, Shimizu S, Umegaki K, Miyamoto N. Real-time CT image generation based on voxel-by-voxel modeling of internal deformation by utilizing the displacement of fiducial markers. Med Phys 2021;48:5311-5326. PMID: 34260755. DOI: 10.1002/mp.15095.
Abstract
PURPOSE To show the feasibility of a real-time CT image generation technique utilizing internal fiducial markers that facilitate the evaluation of internal deformation. METHODS In the proposed method, a linear regression model that derives internal deformation from the displacement of fiducial markers is built for each voxel in the training process before the treatment session. Marker displacement and internal deformation are derived from a four-dimensional computed tomography (4DCT) dataset. In the treatment session, the three-dimensional deformation vector field is derived from the marker displacement, which is monitored by the real-time imaging system. The whole CT image can then be synthesized in real time by deforming the reference CT image with the deformation vector field. To show the feasibility of the technique, image synthesis accuracy and tumor localization accuracy were evaluated using a dataset generated by the extended NURBS-based cardiac-torso (XCAT) phantom and clinical 4DCT datasets from six patients, each containing 10 CT datasets. In the validation with the XCAT phantom, the motion ranges of the tumor in the training and validation data were about 10 and 15 mm, respectively, to simulate motion variation between 4DCT acquisition and the treatment session. In the validation with the patient 4DCT datasets, eight CT datasets from each 4DCT dataset were used in the training process. The two excluded inhale CT datasets can be regarded as datasets with larger deformations than those in the training data. CT images were generated for each respiratory phase using the corresponding marker displacement. Root mean squared error (RMSE), normalized RMSE (NRMSE), and the structural similarity index measure (SSIM) between the original and synthesized CT images were evaluated as quantitative indices of image synthesis accuracy. The accuracy of tumor localization was also evaluated. RESULTS In the validation with the XCAT phantom, the mean NRMSE, SSIM, and three-dimensional tumor localization error were 7.5 ± 1.1%, 0.95 ± 0.02, and 0.4 ± 0.3 mm, respectively. In the validation with the patient 4DCT datasets, the mean RMSE, NRMSE, SSIM, and three-dimensional tumor localization error over the six patients were 73.7 ± 19.6 HU, 9.2 ± 2.6%, 0.88 ± 0.04, and 0.8 ± 0.6 mm, respectively. These results suggest that the accuracy of the proposed technique is adequate when the respiratory motion is within the range of the training dataset. In the evaluation with marker displacements larger than those of the training dataset, the mean RMSE, NRMSE, and tumor localization error were about 100 HU, 13%, and <2.0 mm, respectively, except for one case with large motion variation. The performance of the proposed method was similar to those of previous studies. The processing time to generate a volumetric image was <100 ms. CONCLUSION We have shown the feasibility of the real-time CT image generation technique for volumetric imaging.
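The voxel-by-voxel linear regression at the heart of this method, marker displacements in, per-voxel displacements out, can be sketched with a single least-squares solve over all voxels at once (a toy numpy sketch on synthetic linear data; array sizes, the intercept term, and the noiseless setup are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy training set mimicking 4DCT phases: N samples of marker displacement
# (3 markers x 3 components = 9 features) and the per-voxel displacement
# field (50 voxels x 3 components, flattened). Sizes are illustrative only.
N, F, V = 20, 9, 50
markers = rng.normal(size=(N, F))
true_W = rng.normal(size=(F, 3 * V))
deform = markers @ true_W                  # noiseless linear toy ground truth

# One least-squares solve fits an independent linear model per voxel component.
X = np.hstack([markers, np.ones((N, 1))])  # add an intercept column
coef, *_ = np.linalg.lstsq(X, deform, rcond=None)

def predict(marker_disp):
    """Predict the full deformation field from one marker displacement."""
    return np.append(marker_disp, 1.0) @ coef

# At treatment time, an unseen marker displacement maps to a full field,
# which would then warp the reference CT.
new_m = rng.normal(size=F)
pred = predict(new_m)
```

Because each voxel's model is just a dot product, evaluating the whole field is one matrix multiply, which is consistent with the sub-100 ms generation time reported above.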
Affiliation(s)
- Risa Hayashi: Graduate School of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
- Koichi Miyazaki, Seishin Takao, Sodai Tanaka, Taeko Matsuura, Kikuo Umegaki, Naoki Miyamoto: Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan; Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Kohei Yokokawa: Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
- Hiroshi Taguchi: Department of Radiation Oncology, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Norio Katoh: Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Shinichi Shimizu: Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan; Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
7. Frysch R, Pfeiffer T, Rose G. A novel approach to 2D/3D registration of X-ray images using Grangeat's relation. Med Image Anal 2020;67:101815. PMID: 33065470. DOI: 10.1016/j.media.2020.101815.
Abstract
Fast and accurate 2D/3D registration plays an important role in many applications, ranging from scientific and engineering domains to medical care. Today's predominant methods are based on computationally expensive approaches, such as virtual forward or back projections, that limit the real-time applicability of the routines. Here, we present a novel concept that uses Grangeat's relation to intertwine information from the 3D volume and the 2D projection space in a way that allows pre-computation of all time-intensive steps. The main effort within actual registration tasks is reduced to simple resampling of the pre-calculated values, which can be executed rapidly on modern GPU hardware. We analyze the applicability of the proposed method on simulated data under various conditions and evaluate the findings on real data from a C-arm CT scanner. Our results show high registration quality in both simulated and real-data scenarios and demonstrate a reduction in computation time for the crucial computation step by a factor of six to eight compared with state-of-the-art routines. With minor trade-offs in accuracy, this speed-up can be increased up to a factor of 100 in particular settings. To our knowledge, this is the first application of Grangeat's relation to 2D/3D registration. Due to its high computational efficiency and broad range of potential applications, we believe it constitutes a highly relevant approach for various problems dealing with cone-beam transmission images.
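For readers unfamiliar with it, one common statement of Grangeat's relation links the radial derivative of the 3D Radon transform of the object to a weighted integral of the cone-beam data from a single source position (sign and normalization conventions vary across the literature; this distributional form is only one of them):

```latex
% Cone-beam data from source position a along direction theta:
%   g(\mathbf{a}, \boldsymbol{\theta}) = \int_0^\infty f(\mathbf{a} + t\,\boldsymbol{\theta})\, dt
\left.\frac{\partial}{\partial s}\,\mathcal{R}f(\boldsymbol{\beta}, s)\right|_{s=\langle \mathbf{a},\boldsymbol{\beta}\rangle}
  \;=\; -\int_{S^{2}} \delta'\!\left(\langle \boldsymbol{\theta},\boldsymbol{\beta}\rangle\right)\,
        g(\mathbf{a},\boldsymbol{\theta})\,\mathrm{d}\boldsymbol{\theta}
```

The left-hand side depends only on the 3D volume and the right-hand side only on one measured projection, so both can be pre-computed independently and then matched by resampling, which is presumably what enables the pre-computation and speed-up described in the abstract.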
Affiliation(s)
- Robert Frysch, Tim Pfeiffer, Georg Rose: Institute for Medical Engineering and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany