1. Hu D, Zhang C, Fei X, Yao Y, Xi Y, Liu J, Zhang Y, Coatrieux G, Coatrieux JL, Chen Y. DPI-MoCo: Deep Prior Image Constrained Motion Compensation Reconstruction for 4D CBCT. IEEE Transactions on Medical Imaging 2025; 44:1243-1256. [PMID: 39423082] [DOI: 10.1109/tmi.2024.3483451]
Abstract
4D cone-beam computed tomography (CBCT) plays a critical role in adaptive radiation therapy for lung cancer. However, extremely sparse projection sampling causes severe streak artifacts in 4D CBCT images. Existing deep learning (DL) methods rely heavily on large labeled training datasets, which are difficult to obtain in practical scenarios. Constrained by this limitation, DL models often struggle to simultaneously retain dynamic motion, remove streak degradations, and recover fine details. To address this problem, we introduce a Deep Prior Image Constrained Motion Compensation framework (DPI-MoCo) that decouples 4D CBCT reconstruction into two sub-tasks: coarse image restoration and structural detail fine-tuning. In the first stage, DPI-MoCo combines prior image guidance, a generative adversarial network, and contrastive learning to globally suppress artifacts while maintaining respiratory movement. To further enhance local anatomical structures, a motion estimation and compensation technique is then applied. Notably, our framework requires no paired datasets, ensuring practicality in clinical cases. On a Monte Carlo simulation dataset, DPI-MoCo achieves competitive quantitative performance compared to state-of-the-art (SOTA) methods. Furthermore, we test DPI-MoCo on clinical lung cancer datasets, and experiments validate that it not only restores small anatomical structures and lesions but also preserves motion information.
2. Dehdab R, Brendlin AS, Grözinger G, Almansour H, Brendel JM, Gassenmaier S, Ghibes P, Werner S, Nikolaou K, Afat S. Enhancing Cone-Beam CT Image Quality in TIPSS Procedures Using AI Denoising. Diagnostics (Basel) 2024; 14:1989. [PMID: 39272773] [PMCID: PMC11394631] [DOI: 10.3390/diagnostics14171989]
Abstract
Purpose: This study evaluates a deep learning-based denoising algorithm to improve the trade-off between radiation dose, image noise, and motion artifacts in TIPSS procedures, aiming for shorter acquisition times and reduced radiation with maintained diagnostic quality. Methods: In this retrospective study, TIPSS patients were divided based on CBCT acquisition times of 6 s and 3 s. Traditional weighted filtered back projection (Original) and an AI denoising algorithm (AID) were used for image reconstruction. Objective assessment of image quality included contrast, noise levels, and contrast-to-noise ratios (CNRs) through place-consistent region-of-interest (ROI) measurements across critical areas pertinent to the TIPSS procedure. Subjective assessments were conducted by two blinded radiologists who rated overall image quality, sharpness, contrast, and motion artifacts for each dataset combination. Statistical significance was determined using a mixed-effects model (p ≤ 0.05). Results: From an initial cohort of 60 TIPSS patients, 44 were selected and paired. The mean dose-area product (DAP) for the 6 s acquisitions was 5138.50 ± 1325.57 µGy·m², significantly higher than the 2514.06 ± 691.59 µGy·m² obtained for the 3 s series. CNR was highest in the 6 s AID series (p < 0.05). Both denoised and original series showed consistent contrast for 6 s and 3 s acquisitions, with no significant noise difference between the 6 s Original and 3 s AID images (p > 0.9). Subjective assessments indicated superior quality in the 6 s AID images, with no significant overall quality difference between the 6 s Original and 3 s AID series (p > 0.9). Conclusions: The AI denoising algorithm enhances CBCT image quality in TIPSS procedures, allowing shorter scans that reduce radiation exposure and minimize motion artifacts.
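The objective assessment above rests on contrast-to-noise ratios computed from paired ROI measurements. A minimal sketch of that computation, assuming the common definition of contrast over background noise (the study's exact ROI protocol and CNR formula are not given in the abstract, so this is illustrative only):

```python
import numpy as np

def cnr(roi_signal: np.ndarray, roi_background: np.ndarray) -> float:
    """Contrast-to-noise ratio between a signal ROI and a background ROI.

    Contrast is the absolute difference of mean intensities; noise is the
    standard deviation of intensities inside the background ROI.
    """
    contrast = abs(float(np.mean(roi_signal)) - float(np.mean(roi_background)))
    noise = float(np.std(roi_background))
    return contrast / noise
```

Under this definition, a denoiser that lowers the background standard deviation without shifting the mean intensities raises CNR directly, which is consistent with the 6 s AID series scoring highest.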
Affiliation(s)
- Reza Dehdab, Andreas S Brendlin, Gerd Grözinger, Haidara Almansour, Jan Michael Brendel, Sebastian Gassenmaier, Patrick Ghibes, Sebastian Werner, Konstantin Nikolaou, Saif Afat: Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, D-72076 Tübingen, Germany
3. Zhang Y, Jiang Z, Zhang Y, Ren L. A review on 4D cone-beam CT (4D-CBCT) in radiation therapy: Technical advances and clinical applications. Med Phys 2024; 51:5164-5180. [PMID: 38922912] [PMCID: PMC11321939] [DOI: 10.1002/mp.17269]
Abstract
Cone-beam CT (CBCT) is the most commonly used onboard imaging technique for target localization in radiation therapy. Conventional 3D CBCT acquires x-ray cone-beam projections at multiple angles around the patient to reconstruct 3D images of the patient in the treatment room. However, despite its wide usage, 3D CBCT is limited in imaging disease sites affected by respiratory motions or other dynamic changes within the body, as it lacks time-resolved information. To overcome this limitation, 4D-CBCT was developed to incorporate a time dimension in the imaging to account for the patient's motion during the acquisitions. For example, respiration-correlated 4D-CBCT divides the breathing cycles into different phase bins and reconstructs 3D images for each phase bin, ultimately generating a complete set of 4D images. 4D-CBCT is valuable for localizing tumors in the thoracic and abdominal regions, where localization accuracy is affected by respiratory motion. This is especially important for hypofractionated stereotactic body radiation therapy (SBRT), which delivers much higher fractional doses in fewer fractions than conventional fractionated treatments. Nonetheless, 4D-CBCT does face certain limitations, including long scanning times, high imaging doses, and compromised image quality due to the necessity of acquiring sufficient x-ray projections for each respiratory phase. To address these challenges, numerous methods have been developed to achieve fast, low-dose, and high-quality 4D-CBCT. This paper aims to review the technical developments surrounding 4D-CBCT comprehensively. It will explore conventional algorithms and recent deep learning-based approaches, delving into their capabilities and limitations. Additionally, the paper will discuss the potential clinical applications of 4D-CBCT and outline a future roadmap, highlighting areas for further research and development. Through this exploration, the readers will better understand 4D-CBCT's capabilities and potential to enhance radiation therapy.
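The respiration-correlated sorting described above assigns each projection a phase bin derived from a breathing surrogate. A simplified sketch of that binning step, assuming peak-to-peak linear phase within each breathing cycle (clinical phase sorting uses more robust peak detection and handles irregular breathing; this is illustrative only):

```python
import numpy as np

def phase_bin_projections(breathing_signal: np.ndarray, n_bins: int = 10) -> np.ndarray:
    """Assign each projection a respiratory phase bin in [0, n_bins).

    Inhale peaks of the surrogate signal mark cycle boundaries; within
    each cycle, phase advances linearly from 0 (peak) to 1 (next peak).
    Samples outside any complete cycle stay in bin 0.
    """
    # Local maxima of the surrogate serve as cycle boundaries.
    peaks = [i for i in range(1, len(breathing_signal) - 1)
             if breathing_signal[i] >= breathing_signal[i - 1]
             and breathing_signal[i] > breathing_signal[i + 1]]
    bins = np.zeros(len(breathing_signal), dtype=int)
    for start, end in zip(peaks[:-1], peaks[1:]):
        cycle = np.arange(start, end)
        phase = (cycle - start) / (end - start)  # 0..1 within the cycle
        bins[cycle] = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    return bins
```

Each reconstruction then uses only the projections whose bin matches the target phase, which is precisely why per-phase projection counts, and hence image quality, drop as the number of bins grows.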
Affiliation(s)
- Yawei Zhang: University of Florida Proton Therapy Institute, Jacksonville, FL 32206, USA; Department of Radiation Oncology, University of Florida College of Medicine, Gainesville, FL 32608, USA
- Zhuoran Jiang: Medical Physics Graduate Program, Duke University, Durham, NC 27710, USA
- You Zhang: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Lei Ren: Department of Radiation Oncology, University of Maryland, Baltimore, MD 21201, USA
4. Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Frontiers in Radiology 2024; 4:1385742. [PMID: 38601888] [PMCID: PMC11004271] [DOI: 10.3389/fradi.2024.1385742]
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on cone-beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis. This was accomplished by contrasting the provided methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, revealing that DL-based sCTs have achieved considerable popularity while also showing the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani: Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
5. Lauria M, Miller C, Singhrao K, Lewis J, Lin W, O'Connell D, Naumann L, Stiehl B, Santhanam A, Boyle P, Raldow AC, Goldin J, Barjaktarevic I, Low DA. Motion compensated cone-beam CT reconstruction using an a priori motion model from CT simulation: a pilot study. Phys Med Biol 2024; 69:075022. [PMID: 38452385] [DOI: 10.1088/1361-6560/ad311b]
Abstract
Objective. To combat the motion artifacts present in traditional 4D-CBCT reconstruction, an iterative technique known as the motion-compensated simultaneous algebraic reconstruction technique (MC-SART) was previously developed. MC-SART employs a 4D-CBCT reconstruction to obtain an initial model, which suffers from a lack of sufficient projections in each bin. The purpose of this study is to demonstrate the feasibility of introducing a motion model acquired during CT simulation to MC-SART, coined model-based CBCT (MB-CBCT). Approach. For each of 5 patients, we acquired 5DCTs during simulation and pre-treatment CBCTs with a simultaneous breathing surrogate. We cross-calibrated the 5DCT and CBCT breathing waveforms by matching the diaphragms and employed the 5DCT motion model parameters for MC-SART. We introduced the Amplitude Reassignment Motion Modeling technique, which measures the ability of the model to control diaphragm sharpness by reassigning projection amplitudes with varying resolution. We evaluated the sharpness of tumors and compared them between MB-CBCT and 4D-CBCT. We quantified sharpness by fitting an error function across anatomical boundaries. Furthermore, we compared our MB-CBCT approach to the traditional MC-SART approach. We evaluated MB-CBCT's robustness over time by reconstructing multiple fractions for each patient and measuring consistency in tumor centroid locations between 4D-CBCT and MB-CBCT. Main results. We found that the diaphragm sharpness rose consistently with increasing amplitude resolution for 4/5 patients. We observed consistently high image quality across multiple fractions, and observed stable tumor centroids with an average 0.74 ± 0.31 mm difference between the 4D-CBCT and MB-CBCT. Overall, vast improvements over 3D-CBCT and 4D-CBCT were demonstrated by our MB-CBCT technique in terms of both diaphragm sharpness and overall image quality. Significance. This work is an important extension of the MC-SART technique. We demonstrated the ability of a priori 5DCT models to provide motion compensation for CBCT reconstruction. We showed improvements in image quality over both 4D-CBCT and the traditional MC-SART approach.
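Sharpness here was quantified by fitting an error function across anatomical boundaries. A sketch of such a fit, assuming a standard erf edge model and SciPy's `curve_fit` (the paper's exact parameterization is not given in the abstract, so the model and fitting routine are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, low, high, center, width):
    """Error-function intensity profile across a boundary:
    transitions from `low` to `high` around `center` over scale `width`."""
    return low + 0.5 * (high - low) * (1.0 + erf((x - center) / (width * np.sqrt(2.0))))

def fit_edge_width(x: np.ndarray, profile: np.ndarray) -> float:
    """Fit the erf model to a 1D line profile drawn across an edge.

    Returns the fitted width parameter; a smaller width means a
    sharper anatomical boundary (e.g. diaphragm or tumor edge).
    """
    p0 = [float(profile.min()), float(profile.max()), float(x[len(x) // 2]), 1.0]
    popt, _ = curve_fit(edge_model, x, profile, p0=p0)
    return abs(popt[3])
```

Comparing the fitted widths of the same boundary in 4D-CBCT and MB-CBCT reconstructions gives a single scalar per edge, which is what makes the per-patient sharpness comparisons in the study possible.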
Affiliation(s)
- Michael Lauria: UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Claudia Miller: UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Kamal Singhrao: Brigham and Women's Hospital, Dana Farber Cancer Institute and Harvard Medical School, Department of Radiation Oncology, Boston, MA, United States of America
- John Lewis: Cedars-Sinai Medical Center, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Weicheng Lin: UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Dylan O'Connell: UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Louise Naumann: UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Bradley Stiehl: Cedars-Sinai Medical Center, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Anand Santhanam: UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Peter Boyle: UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Ann C Raldow: UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
- Jonathan Goldin: UCLA, Department of Radiological Sciences, Los Angeles, CA, United States of America
- Igor Barjaktarevic: UCLA, Department of Pulmonary and Critical Care Medicine, Los Angeles, CA, United States of America
- Daniel A Low: UCLA, Department of Radiation Oncology, Los Angeles, CA, United States of America
6. Amirian M, Montoya-Zegarra JA, Herzig I, Eggenberger Hotz P, Lichtensteiger L, Morf M, Züst A, Paysan P, Peterlik I, Scheib S, Füchslin RM, Stadelmann T, Schilling FP. Mitigation of motion-induced artifacts in cone beam computed tomography using deep convolutional neural networks. Med Phys 2023; 50:6228-6242. [PMID: 36995003] [DOI: 10.1002/mp.16405]
Abstract
BACKGROUND Cone beam computed tomography (CBCT) is often employed on radiation therapy treatment devices (linear accelerators) used in image-guided radiation therapy (IGRT). For each treatment session, it is necessary to obtain the image of the day in order to accurately position the patient and to enable adaptive treatment capabilities including auto-segmentation and dose calculation. Reconstructed CBCT images often suffer from artifacts, in particular those induced by patient motion. Deep-learning based approaches promise ways to mitigate such artifacts. PURPOSE We propose a novel deep-learning based approach with the goal to reduce motion-induced artifacts in CBCT images and improve image quality. It is based on supervised learning and includes neural network architectures employed as pre- and/or post-processing steps during CBCT reconstruction. METHODS Our approach is based on deep convolutional neural networks which complement the standard CBCT reconstruction, which is performed either with the analytical Feldkamp-Davis-Kress (FDK) method, or with an iterative algebraic reconstruction technique (SART-TV). The neural networks, which are based on refined U-net architectures, are trained end-to-end in a supervised learning setup. Labeled training data are obtained by means of a motion simulation, which uses the two extreme phases of 4D CT scans, their deformation vector fields, as well as time-dependent amplitude signals as input. The trained networks are validated against ground truth using quantitative metrics, as well as by using real patient CBCT scans for a qualitative evaluation by clinical experts. RESULTS The presented novel approach is able to generalize to unseen data and yields significant reductions in motion-induced artifacts as well as improvements in image quality compared with existing state-of-the-art CBCT reconstruction algorithms (up to +6.3 dB and +0.19 improvements in peak signal-to-noise ratio, PSNR, and structural similarity index measure, SSIM, respectively), as evidenced by validation with an unseen test dataset, and confirmed by a clinical evaluation on real patient scans (up to 74% preference for motion artifact reduction over standard reconstruction). CONCLUSIONS For the first time, it is demonstrated, also by means of clinical evaluation, that inserting deep neural networks as pre- and post-processing plugins in the existing 3D CBCT reconstruction, trained end-to-end, yields significant improvements in image quality and reduction of motion artifacts.
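The quantitative gains above are reported as PSNR and SSIM improvements. A minimal PSNR implementation for interpreting such numbers, assuming the standard definition over a fixed intensity range (SSIM is omitted here since it involves windowed local statistics):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB for a given intensity range.

    PSNR = 10 * log10(data_range^2 / MSE), where MSE is the mean
    squared error between the reference and test images.
    """
    mse = float(np.mean((reference.astype(float) - test.astype(float)) ** 2))
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Because PSNR is logarithmic in the mean squared error, the reported +6.3 dB gain corresponds to roughly a 4.3-fold reduction in MSE (10^(6.3/10) ≈ 4.27) relative to the baseline reconstruction.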
Affiliation(s)
- Mohammadreza Amirian: Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland; Institute of Neural Information Processing, Ulm University, Ulm, Germany
- Javier A Montoya-Zegarra: Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Ivo Herzig: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Peter Eggenberger Hotz: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Lukas Lichtensteiger: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Marco Morf: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Alexander Züst: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland
- Pascal Paysan: Varian Medical Systems Imaging Laboratory GmbH, Baden, Switzerland
- Igor Peterlik: Varian Medical Systems Imaging Laboratory GmbH, Baden, Switzerland
- Stefan Scheib: Varian Medical Systems Imaging Laboratory GmbH, Baden, Switzerland
- Rudolf Marcel Füchslin: Institute for Applied Mathematics and Physics IAMP, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland; European Centre for Living Technology, Venice, Italy
- Thilo Stadelmann: Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland; European Centre for Living Technology, Venice, Italy
- Frank-Peter Schilling: Centre for Artificial Intelligence CAI, Zurich University of Applied Sciences ZHAW, Winterthur, Switzerland