1
Cao YH, Bourbonne V, Lucia F, Schick U, Bert J, Jaouen V, Visvikis D. CT respiratory motion synthesis using joint supervised and adversarial learning. Phys Med Biol 2024; 69:095001. [PMID: 38537289] [DOI: 10.1088/1361-6560/ad388a]
Abstract
Objective. Four-dimensional computed tomography (4DCT) imaging consists of reconstructing a CT acquisition into multiple phases to track internal organ and tumor motion. It is commonly used in radiotherapy treatment planning to establish planning target volumes. However, 4DCT increases protocol complexity, may not align with patient breathing during treatment, and leads to higher radiation delivery. Approach. In this study, we propose a deep synthesis method to generate pseudo-respiratory CT phases from static images for motion-aware treatment planning. The model produces patient-specific deformation vector fields (DVFs) by conditioning synthesis on external patient surface-based estimation, mimicking respiratory monitoring devices. A key methodological contribution is to encourage DVF realism through supervised DVF training while jointly applying an adversarial term not only to the warped image but also to the magnitude of the DVF itself. In this way, we avoid the excessive smoothness typically obtained through deep unsupervised learning and encourage correlations with the respiratory amplitude. Main results. Performance is evaluated using real 4DCT acquisitions with smaller tumor volumes than previously reported. Results demonstrate for the first time that the generated pseudo-respiratory CT phases can capture organ and tumor motion with accuracy similar to repeated 4DCT scans of the same patient. Mean inter-scan tumor center-of-mass distances and Dice similarity coefficients were 1.97 mm and 0.63, respectively, for real 4DCT phases and 2.35 mm and 0.71 for synthetic phases, which compares favorably to a state-of-the-art technique (RMSim). Significance. This study presents a deep image synthesis method that addresses the limitations of conventional 4DCT by generating pseudo-respiratory CT phases from static images.
Although further studies are needed to assess the dosimetric impact of the proposed method, this approach has the potential to reduce radiation exposure in radiotherapy treatment planning while maintaining accurate motion representation. Our training and testing code can be found at https://github.com/cyiheng/Dynagan.
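The joint objective described in the abstract, a supervised DVF term combined with adversarial terms on both the warped image and the DVF magnitude, can be sketched as a single loss function. This is an illustrative numpy sketch, not the authors' implementation: the weights `w_sup`, `w_img`, `w_dvf` and the non-saturating log generator loss are assumptions.

```python
import numpy as np

def joint_loss(dvf_pred, dvf_gt, d_warped, d_dvf_mag,
               w_sup=1.0, w_img=0.1, w_dvf=0.1):
    """Objective in the spirit of the abstract: a supervised DVF term plus
    adversarial (non-saturating generator) terms on the discriminator scores
    for the warped image and for the DVF magnitude. Weights and the log-based
    generator loss are illustrative assumptions, not the authors' formulation."""
    sup = np.mean((dvf_pred - dvf_gt) ** 2)        # supervised DVF loss
    adv_img = -np.mean(np.log(d_warped + 1e-8))    # fool the image discriminator
    adv_mag = -np.mean(np.log(d_dvf_mag + 1e-8))   # fool the |DVF| discriminator
    return w_sup * sup + w_img * adv_img + w_dvf * adv_mag
```

A perfect DVF prediction with fully fooled discriminators drives the loss toward zero, while errors in either term increase it.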
Affiliation(s)
- Y-H Cao
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- V Bourbonne
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- CHRU Brest University Hospital, Brest, France
- F Lucia
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- CHRU Brest University Hospital, Brest, France
- U Schick
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- CHRU Brest University Hospital, Brest, France
- J Bert
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- CHRU Brest University Hospital, Brest, France
- V Jaouen
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
- IMT Atlantique, Brest, France
- D Visvikis
- LaTIM, UMR Inserm 1101, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
2
Liu L, Shen L, Johansson A, Balter JM, Cao Y, Vitzthum L, Xing L. Volumetric MRI with sparse sampling for MR-guided 3D motion tracking via sparse prior-augmented implicit neural representation learning. Med Phys 2024; 51:2526-2537. [PMID: 38014764] [PMCID: PMC10994763] [DOI: 10.1002/mp.16845]
Abstract
BACKGROUND Volumetric reconstruction of magnetic resonance imaging (MRI) from sparse samples is desirable for 3D motion tracking and promises to improve magnetic resonance (MR)-guided radiation treatment precision. Data-driven sparse MRI reconstruction, however, requires large-scale training datasets for prior learning, which is time-consuming and challenging to acquire in clinical settings. PURPOSE To investigate volumetric reconstruction of MRI from sparse samples of two orthogonal slices aided by sparse priors of two static 3D MRIs through implicit neural representation (NeRP) learning, in support of 3D motion tracking during MR-guided radiotherapy. METHODS A multi-layer perceptron network was trained to parameterize the NeRP model of a patient-specific MRI dataset, where the network takes 4D data coordinates of voxel locations and motion states as inputs and outputs corresponding voxel intensities. By first training the network to learn the NeRP of two static 3D MRIs with different breathing motion states, prior information of patient breathing motion was embedded into network weights through optimization. The prior information was then augmented from two motion states to 31 motion states by querying the optimized network at interpolated and extrapolated motion state coordinates. Starting from the prior-augmented NeRP model as an initialization point, we further trained the network to fit sparse samples of two orthogonal MRI slices, and the final volumetric reconstruction was obtained by querying the trained network at 3D spatial locations. We evaluated the proposed method using 5-min volumetric MRI time series with 340 ms temporal resolution for seven abdominal patients with hepatocellular carcinoma, acquired using a golden-angle radial MRI sequence and reconstructed through retrospective sorting. Two volumetric MRIs, one at inhale and one at exhale, were selected from the first 30 s of the time series for prior embedding and augmentation.
The remaining 4.5-min time series was used for volumetric reconstruction evaluation, where we retrospectively subsampled each MRI to two orthogonal slices and compared model-reconstructed images to ground truth images in terms of image quality and the capability of supporting 3D target motion tracking. RESULTS Across the seven patients evaluated, the peak signal-to-noise ratio between model-reconstructed and ground truth MR images was 38.02 ± 2.60 dB and the structural similarity index measure was 0.98 ± 0.01. Throughout the 4.5-min time period, gross tumor volume (GTV) motion estimated by deforming a reference state MRI to model-reconstructed and ground truth MRI showed good consistency. The 95-percentile Hausdorff distance between GTV contours was 2.41 ± 0.77 mm, which is less than the voxel dimension. The mean GTV centroid position difference between ground truth and model estimation was less than 1 mm in all three orthogonal directions. CONCLUSION A prior-augmented NeRP model has been developed to reconstruct volumetric MRI from sparse samples of orthogonal cine slices. Only one exhale and one inhale 3D MRI were needed to train the model to learn prior information of patient breathing motion for sparse image reconstruction. The proposed model has the potential to support 3D motion tracking during MR-guided radiotherapy for improved treatment precision and promises a major simplification of the workflow by eliminating the need for large-scale training datasets.
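The coordinate-based interface of the NeRP model (4D voxel-location and motion-state coordinates in, voxel intensity out) can be illustrated with a tiny multilayer perceptron. The layer sizes, ReLU activations, and random initialization below are placeholder assumptions; the actual network is trained per patient as described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    """Randomly initialized weights for a small MLP. In the paper the network
    is trained per patient; random weights here only illustrate the interface."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def nerp_forward(params, coords):
    """coords: (N, 4) array of (x, y, z, motion-state) values in [0, 1].
    Returns one scalar intensity per coordinate, mirroring the
    coordinates-in, intensity-out mapping described in the abstract."""
    h = coords
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers
    return h[:, 0]

params = make_mlp([4, 32, 32, 1])   # 4D coordinate -> intensity
coords = rng.random((8, 4))         # 8 query points
vals = nerp_forward(params, coords)
```

Volumetric reconstruction then amounts to querying such a trained network on a dense grid of 3D spatial coordinates at a fixed motion state.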
Affiliation(s)
- Lianli Liu
- Department of Radiation Oncology, Stanford University, Palo Alto, California, USA
- Liyue Shen
- Department of Electrical Engineering, Stanford University, Palo Alto, California, USA
- Adam Johansson
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan, USA
- Department of Immunology, Genetics and Pathology, Uppsala University, Uppsala, Sweden
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- James M Balter
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan, USA
- Yue Cao
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan, USA
- Lucas Vitzthum
- Department of Radiation Oncology, Stanford University, Palo Alto, California, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University, Palo Alto, California, USA
- Department of Electrical Engineering, Stanford University, Palo Alto, California, USA
3
Cobanaj M, Corti C, Dee EC, McCullum L, Boldrini L, Schlam I, Tolaney SM, Celi LA, Curigliano G, Criscitiello C. Advancing equitable and personalized cancer care: Novel applications and priorities of artificial intelligence for fairness and inclusivity in the patient care workflow. Eur J Cancer 2024; 198:113504. [PMID: 38141549] [DOI: 10.1016/j.ejca.2023.113504]
Abstract
Patient care workflows are highly multimodal and intertwined: the intersection of data outputs provided from different disciplines and in different formats remains one of the main challenges of modern oncology. Artificial Intelligence (AI) has the potential to revolutionize the current clinical practice of oncology owing to advancements in digitalization, database expansion, computational technologies, and algorithmic innovations that facilitate discernment of complex relationships in multimodal data. Within oncology, radiation therapy (RT) represents an increasingly complex working procedure, involving many labor-intensive and operator-dependent tasks. In this context, AI has gained momentum as a powerful tool to standardize treatment performance and reduce inter-observer variability in a time-efficient manner. This review explores the hurdles associated with the development, implementation, and maintenance of AI platforms and highlights current measures in place to address them. In examining AI's role in oncology workflows, we underscore that a thorough and critical consideration of these challenges is the only way to ensure equitable and unbiased care delivery, ultimately serving patients' survival and quality of life.
Affiliation(s)
- Marisa Cobanaj
- National Center for Radiation Research in Oncology, OncoRay, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- Chiara Corti
- Breast Oncology Program, Dana-Farber Brigham Cancer Center, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Edward C Dee
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Lucas McCullum
- Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Laura Boldrini
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Ilana Schlam
- Department of Hematology and Oncology, Tufts Medical Center, Boston, MA, USA; Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Sara M Tolaney
- Breast Oncology Program, Dana-Farber Brigham Cancer Center, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Leo A Celi
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA; Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Giuseppe Curigliano
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Carmen Criscitiello
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
4
Bengs M, Sprenger J, Gerlach S, Neidhardt M, Schlaefer A. Real-Time Motion Analysis With 4D Deep Learning for Ultrasound-Guided Radiotherapy. IEEE Trans Biomed Eng 2023; 70:2690-2699. [PMID: 37030809] [DOI: 10.1109/tbme.2023.3262422]
Abstract
Motion compensation in radiation therapy is a challenging scenario that requires estimating and forecasting the motion of tissue structures to deliver the target dose. Ultrasound offers direct imaging of tissue in real time and is considered for image guidance in radiation therapy. Recently, fast volumetric ultrasound has gained traction, but motion analysis with such high-dimensional data remains difficult. While deep learning could bring many advantages, such as fast data processing and high performance, it remains unclear how to process sequences of hundreds of image volumes efficiently and effectively. We present a 4D deep learning approach for real-time motion estimation and forecasting using long-term 4D ultrasound data. Using motion traces acquired during radiation therapy combined with various tissue types, our results demonstrate that long-term motion estimation can be performed without markers, with a tracking error of 0.35 ± 0.2 mm and an inference time of less than 5 ms. We also demonstrate forecasting directly from the image data up to 900 ms into the future. Overall, our findings highlight that 4D deep learning is a promising approach for motion analysis during radiotherapy.
5
Huttinga NRF, Bruijnen T, van den Berg CAT, Sbrizzi A. Gaussian Processes for real-time 3D motion and uncertainty estimation during MR-guided radiotherapy. Med Image Anal 2023; 88:102843. [PMID: 37245435] [DOI: 10.1016/j.media.2023.102843]
Abstract
Respiratory motion during radiotherapy causes uncertainty in the tumor's location, which is typically addressed by an increased radiation area and a decreased dose. As a result, the treatment's efficacy is reduced. The recently proposed hybrid MR-linac scanner holds the promise of efficiently dealing with such respiratory motion through real-time adaptive MR-guided radiotherapy (MRgRT). For MRgRT, motion-fields should be estimated from MR-data and the radiotherapy plan should be adapted in real-time according to the estimated motion-fields, all with a total latency of at most 200 ms, including data acquisition and reconstruction. A measure of confidence in such estimated motion-fields is highly desirable, for instance to ensure the patient's safety in case of unexpected and undesirable motion. In this work, we propose a framework based on Gaussian Processes to infer 3D motion-fields and uncertainty maps in real-time from only three readouts of MR-data. We demonstrated an inference frame rate of up to 69 Hz including data acquisition and reconstruction, thereby exploiting the limited amount of required MR-data. Additionally, we designed a rejection criterion based on the motion-field uncertainty maps to demonstrate the framework's potential for quality assurance. The framework was validated in silico and in vivo on healthy volunteer data (n=5) acquired using an MR-linac, thereby taking into account different breathing patterns and controlled bulk motion. Results indicate end-point errors with a 75th percentile below 1 mm in silico, and correct detection of erroneous motion estimates with the rejection criterion. Altogether, the results show the potential of the framework for application in real-time MR-guided radiotherapy with an MR-linac.
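The core machinery here, Gaussian Process regression with a predictive variance that can serve as an uncertainty map, can be sketched in a few lines. The toy 1D inputs and outputs stand in for the paper's MR-readout data and motion-field estimates; the squared-exponential kernel, lengthscale, and noise level are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two sets of row-vector inputs."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-0.5 * d2 / ls**2)

def gp_predict(X, y, Xs, noise=1e-4):
    """Exact Gaussian Process regression: posterior mean and variance at test
    inputs Xs, the variance playing the role of the uncertainty map. In the
    paper the inputs come from three MR readouts and the outputs are
    motion-field estimates; here both are toy 1D stand-ins."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = rbf(Xs, X) @ alpha                       # posterior mean
    v = np.linalg.solve(L, rbf(X, Xs))
    var = np.clip(np.diag(rbf(Xs, Xs)) - np.sum(v**2, axis=0), 0.0, None)
    return mean, var
```

Far from the training inputs the predictive variance grows toward the prior variance, which is exactly the behavior a rejection criterion for unexpected motion can exploit.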
Affiliation(s)
- Niek R F Huttinga
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, The Netherlands; Computational Imaging Group for MR diagnostics & therapy, Center for Image Sciences, University Medical Center Utrecht, The Netherlands
- Tom Bruijnen
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, The Netherlands; Computational Imaging Group for MR diagnostics & therapy, Center for Image Sciences, University Medical Center Utrecht, The Netherlands
- Cornelis A T van den Berg
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, The Netherlands; Computational Imaging Group for MR diagnostics & therapy, Center for Image Sciences, University Medical Center Utrecht, The Netherlands
- Alessandro Sbrizzi
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, The Netherlands; Computational Imaging Group for MR diagnostics & therapy, Center for Image Sciences, University Medical Center Utrecht, The Netherlands
6
Pastor-Serrano O, Habraken S, Hoogeman M, Lathouwers D, Schaart D, Nomura Y, Xing L, Perkó Z. A probabilistic deep learning model of inter-fraction anatomical variations in radiotherapy. Phys Med Biol 2023; 68:085018. [PMID: 36958058] [PMCID: PMC10481950] [DOI: 10.1088/1361-6560/acc71d]
Abstract
Objective. In radiotherapy, the internal movement of organs between treatment sessions causes errors in the final radiation dose delivery. To assess the need for adaptation, motion models can be used to simulate dominant motion patterns and assess anatomical robustness before delivery. Traditionally, such models are based on principal component analysis (PCA) and are either patient-specific (requiring several scans per patient) or population-based, applying the same set of deformations to all patients. We present a hybrid approach which, based on population data, allows prediction of patient-specific inter-fraction variations for an individual patient. Approach. We propose a probabilistic deep learning framework that generates deformation vector fields warping a patient's planning computed tomography (CT) into possible patient-specific anatomies. This daily anatomy model (DAM) uses a few random variables capturing groups of correlated movements. Given a new planning CT, DAM estimates the joint distribution over the variables, with each sample from the distribution corresponding to a different deformation. We train our model using a dataset of 312 CT pairs with prostate, bladder, and rectum delineations from 38 prostate cancer patients. For 2 additional patients (22 CTs), we compute the contour overlap between real and generated images, and compare the sampled and 'ground truth' distributions of volume and center of mass changes. Results. With a Dice score of 0.86 ± 0.05 and a distance between prostate contours of 1.09 ± 0.93 mm, DAM matches and improves upon previously published PCA-based models, using as few as 8 latent variables. The overlap between distributions further indicates that DAM's sampled movements match the range and frequency of clinically observed daily changes on repeat CTs. Significance.
Conditioned only on planning CT values and organ contours of a new patient, without any pre-processing, DAM can accurately predict deformations seen during subsequent treatment sessions, enabling anatomically robust treatment planning and robustness evaluation against inter-fraction anatomical changes.
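The generative side of DAM, sampling a handful of latent variables from a prior and decoding each draw into a deformation vector field, can be illustrated with a toy linear decoder. The grid size and the fixed random decoder matrix are placeholders for the trained network; only the latent dimensionality of 8 comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

GRID = (4, 4, 4)   # toy voxel grid; the real model operates on full planning CTs
N_LATENT = 8       # the abstract reports as few as 8 latent variables

# Fixed random linear "decoder" standing in for the trained network that maps
# latent variables to a 3-component deformation vector field (DVF).
W_dec = rng.standard_normal((N_LATENT, int(np.prod(GRID)) * 3)) * 0.05

def sample_deformations(n_samples):
    """Draw latent codes from a standard-normal prior and decode each into a
    DVF over the grid -- one plausible daily anatomy per sample."""
    z = rng.standard_normal((n_samples, N_LATENT))
    return (z @ W_dec).reshape(n_samples, *GRID, 3)

dvfs = sample_deformations(5)   # five candidate inter-fraction deformations
```

Warping the planning CT with each sampled DVF would yield a set of simulated anatomies against which plan robustness can be evaluated.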
Affiliation(s)
- Oscar Pastor-Serrano
- Delft University of Technology, Department of Radiation Science & Technology, Delft, The Netherlands
- Stanford University, Department of Radiation Oncology, Stanford, CA, United States of America
- Steven Habraken
- Erasmus University Medical Center, Department of Radiotherapy, Rotterdam, The Netherlands
- HollandPTC, Department of Medical Physics and Informatics, Delft, The Netherlands
- Mischa Hoogeman
- Erasmus University Medical Center, Department of Radiotherapy, Rotterdam, The Netherlands
- HollandPTC, Department of Medical Physics and Informatics, Delft, The Netherlands
- Danny Lathouwers
- Delft University of Technology, Department of Radiation Science & Technology, Delft, The Netherlands
- Dennis Schaart
- Delft University of Technology, Department of Radiation Science & Technology, Delft, The Netherlands
- HollandPTC, Department of Medical Physics and Informatics, Delft, The Netherlands
- Yusuke Nomura
- Stanford University, Department of Radiation Oncology, Stanford, CA, United States of America
- Lei Xing
- Stanford University, Department of Radiation Oncology, Stanford, CA, United States of America
- Zoltán Perkó
- Delft University of Technology, Department of Radiation Science & Technology, Delft, The Netherlands
7
Lombardo E, Rabe M, Xiong Y, Nierer L, Cusumano D, Placidi L, Boldrini L, Corradini S, Niyazi M, Reiner M, Belka C, Kurz C, Riboldi M, Landry G. Evaluation of real-time tumor contour prediction using LSTM networks for MR-guided radiotherapy. Radiother Oncol 2023; 182:109555. [PMID: 36813166] [DOI: 10.1016/j.radonc.2023.109555]
Abstract
BACKGROUND AND PURPOSE Magnetic resonance imaging guided radiotherapy (MRgRT) with deformable multileaf collimator (MLC) tracking would make it possible to tackle both rigid displacement and tumor deformation without prolonging treatment. However, the system latency must be accounted for by predicting future tumor contours in real-time. We compared the performance of three artificial intelligence (AI) algorithms based on long short-term memory (LSTM) modules for the prediction of 2D contours 500 ms into the future. MATERIALS AND METHODS Models were trained (52 patients, 3.1 h of motion), validated (18 patients, 0.6 h) and tested (18 patients, 1.1 h) with cine MRs from patients treated at one institution. Additionally, we used three patients (2.9 h) treated at another institution as a second testing set. We implemented 1) a classical LSTM network (LSTM-shift) predicting tumor centroid positions in the superior-inferior and anterior-posterior directions, which are used to shift the last observed tumor contour. The LSTM-shift model was optimized both in an offline and an online fashion. We also implemented 2) a convolutional LSTM model (ConvLSTM) to directly predict future tumor contours and 3) a convolutional LSTM combined with spatial transformer layers (ConvLSTM-STL) to predict displacement fields used to warp the last tumor contour. RESULTS The online LSTM-shift model was found to perform slightly better than the offline LSTM-shift and significantly better than the ConvLSTM and ConvLSTM-STL. It achieved a 50% Hausdorff distance of 1.2 mm and 1.0 mm for the two testing sets, respectively. Larger motion ranges were found to lead to more substantial performance differences across the models. CONCLUSION LSTM networks predicting future centroids and shifting the last tumor contour are the most suitable for tumor contour prediction. The obtained accuracy would allow residual tracking errors to be reduced during MRgRT with deformable MLC tracking.
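The LSTM-shift strategy (predict the future centroid, then rigidly translate the last observed contour) can be illustrated with the LSTM swapped for a simple linear extrapolator, an assumption made purely to keep the sketch dependency-free; the real model learns the centroid dynamics from cine-MR sequences.

```python
import numpy as np

def shift_contour(centroids, contour, horizon=1):
    """LSTM-shift idea with the LSTM replaced by linear extrapolation:
    predict the future SI/AP centroid from the last two observations, then
    rigidly translate the last observed contour by the predicted displacement.
    centroids: (T, 2) past centroid positions; contour: (N, 2) points."""
    step = centroids[-1] - centroids[-2]   # most recent per-step motion
    shift = horizon * step                 # extrapolate `horizon` steps ahead
    return contour + shift                 # rigidly shifted contour points
```

The key design point from the abstract is that only a 2D shift is predicted; the contour shape itself is carried over unchanged, which proved more robust than predicting whole contours or dense displacement fields.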
Affiliation(s)
- Elia Lombardo
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich 81377, Germany
- Moritz Rabe
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich 81377, Germany
- Yuqing Xiong
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich 81377, Germany
- Lukas Nierer
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich 81377, Germany
- Davide Cusumano
- Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome 00168, Italy
- Lorenzo Placidi
- Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome 00168, Italy
- Luca Boldrini
- Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome 00168, Italy
- Stefanie Corradini
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich 81377, Germany
- Maximilian Niyazi
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich 81377, Germany
- Michael Reiner
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich 81377, Germany
- Claus Belka
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich 81377, Germany; German Cancer Consortium (DKTK), Munich 81377, Germany
- Christopher Kurz
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich 81377, Germany
- Marco Riboldi
- Department of Medical Physics, Faculty of Physics, Ludwig-Maximilians-Universität München, Garching b. München 85748, Germany
- Guillaume Landry
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich 81377, Germany
8
Lee D, Yorke E, Zarepisheh M, Nadeem S, Hu YC. RMSim: controlled respiratory motion simulation on static patient scans. Phys Med Biol 2023; 68. [PMID: 36652721] [DOI: 10.1088/1361-6560/acb484]
Abstract
Objective. This work aims to generate realistic anatomical deformations from static patient scans. Specifically, we present a method to generate these deformations/augmentations via deep learning driven respiratory motion simulation that provides the ground truth for validating deformable image registration (DIR) algorithms and driving more accurate deep learning based DIR. Approach. We present a novel 3D Seq2Seq deep learning respiratory motion simulator (RMSim) that learns from 4D-CT images and predicts future breathing phases given a static CT image. The predicted respiratory patterns, represented by time-varying displacement vector fields (DVFs) at different breathing phases, are modulated through auxiliary inputs of 1D breathing traces so that a larger amplitude in the trace results in more significant predicted deformation. Stacked 3D-ConvLSTMs are used to capture the spatial-temporal respiration patterns. The training loss includes a smoothness loss on the DVF and the mean-squared error between the predicted and ground truth phase images. A spatial transformer deforms the static CT with the predicted DVF to generate the predicted phase image. 10-phase 4D-CTs of 140 internal patients were used to train and test RMSim. The trained RMSim was then used to augment a public DIR challenge dataset for training VoxelMorph to show the effectiveness of RMSim-generated deformation augmentation. Main results. We validated our RMSim output with both private and public benchmark datasets (healthy and cancer patients). The structural similarity index measure (SSIM) between predicted breathing phases and ground truth 4D CT images was 0.92 ± 0.04, demonstrating RMSim's potential to generate realistic respiratory motion.
Moreover, the landmark registration error in a public DIR dataset was improved from 8.12 ± 5.78 mm to 6.58 ± 6.38 mm using RMSim-augmented training data. Significance. The proposed approach can be used for validating DIR algorithms as well as for patient-specific augmentations to improve deep learning DIR algorithms. The code, pretrained models, and augmented DIR validation datasets will be released at https://github.com/nadeemlab/SeqX2Y.
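The training objective described in the abstract, image MSE between the predicted (warped) phase and the ground-truth phase plus a smoothness penalty on the DVF, can be sketched as follows. The finite-difference gradient penalty and the weight `w_smooth` are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def rmsim_loss(dvf, pred_phase, gt_phase, w_smooth=0.01):
    """Loss in the spirit of the abstract: mean-squared error between the
    predicted (spatially transformed) phase image and the ground-truth phase,
    plus a smoothness penalty on finite-difference gradients of the DVF.
    dvf has shape (X, Y, Z, 3); the weight w_smooth is illustrative."""
    mse = np.mean((pred_phase - gt_phase) ** 2)
    grads = np.gradient(dvf, axis=(0, 1, 2))      # spatial gradients only
    smooth = sum(np.mean(g ** 2) for g in grads)  # penalize abrupt DVF changes
    return mse + w_smooth * smooth
```

The smoothness term discourages physically implausible, abrupt changes in the predicted displacement field while the MSE term keeps the warped image faithful to the target phase.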
Affiliation(s)
- Donghoon Lee
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America
- Ellen Yorke
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America
- Masoud Zarepisheh
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America
- Saad Nadeem
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America
- Yu-Chi Hu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America
9
Li N, Tous C, Dimov IP, Fei P, Zhang Q, Lessard S, Moran G, Jin N, Kadoury S, Tang A, Martel S, Soulez G. Design of a Patient-Specific Respiratory-Motion-Simulating Platform for In Vitro 4D Flow MRI. Ann Biomed Eng 2022; 51:1028-1039. [PMID: 36580223] [DOI: 10.1007/s10439-022-03117-6]
Abstract
Four-dimensional (4D) flow magnetic resonance imaging (MRI) is a leading-edge imaging technique with numerous medical applications. In vitro 4D flow MRI can offer advantages over in vivo acquisition, especially in accurately controlling flow rate (the gold standard), removing patient- and user-specific variations, and minimizing animal testing. Here, a complete testing method and a respiratory-motion-simulating platform are proposed for in vitro validation of 4D flow MRI. A silicone phantom based on the hepatic arteries of a living pig is made. Under free breathing, a human volunteer's liver motion (inferior-superior direction) is tracked using a pencil-beam MRI navigator and is extracted and converted into velocity-distance pairs to program the respiratory-motion-simulating platform. With a magnitude displacement of about 1.3 cm, the difference between the motions obtained from the volunteer and our platform is ≤ 1 mm, which is within the positioning error of the MRI navigator. The influence of the platform on the MRI signal-to-noise ratio can be eliminated even if the actuator is placed in the MRI room. The 4D flow measurement errors are 0.4% (stationary phantom), 9.4% (gating window = 3 mm), 27.3% (gating window = 4 mm) and 33.1% (gating window = 7 mm), respectively. Vessel resolution decreased as the gating window increased. The low-cost simulation system, assembled from commercially available components, is easy to duplicate.
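The conversion of a navigator-tracked motion trace into velocity-distance pairs for programming the motion stage can be sketched as a finite-difference step. Uniform sampling and mm/s units are assumptions made for this illustration.

```python
import numpy as np

def trace_to_velocity_distance(positions, dt):
    """Convert a uniformly sampled 1D motion trace (e.g., a navigator-tracked
    liver position in the inferior-superior direction) into per-segment
    (velocity, distance) pairs of the kind used to program a motion stage.
    positions: 1D array in mm; dt: sample interval in seconds (assumed units)."""
    distances = np.diff(positions)   # mm travelled over each sample interval
    velocities = distances / dt      # mm/s over each interval
    return np.column_stack([velocities, distances])
```

Each row then describes one motion segment the stage controller should execute: move `distance` mm at `velocity` mm/s.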
Collapse
Affiliation(s)
- Ning Li
- Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), 900 Rue Saint-Denis, Montreal, QC, H2X 0A9, Canada
- Université de Montréal, 2900 Boulevard Édouard-Montpetit, Montreal, QC, H3T 1J4, Canada
- Cyril Tous
- Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), 900 Rue Saint-Denis, Montreal, QC, H2X 0A9, Canada
- Université de Montréal, 2900 Boulevard Édouard-Montpetit, Montreal, QC, H3T 1J4, Canada
- Ivan P Dimov
- Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), 900 Rue Saint-Denis, Montreal, QC, H2X 0A9, Canada
- Université de Montréal, 2900 Boulevard Édouard-Montpetit, Montreal, QC, H3T 1J4, Canada
- Phillip Fei
- Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), 900 Rue Saint-Denis, Montreal, QC, H2X 0A9, Canada
- Université de Montréal, 2900 Boulevard Édouard-Montpetit, Montreal, QC, H3T 1J4, Canada
- Quan Zhang
- Shanghai University, 266 Jufengyuan Rd, Shanghai, 200444, China
- Simon Lessard
- Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), 900 Rue Saint-Denis, Montreal, QC, H2X 0A9, Canada
- Université de Montréal, 2900 Boulevard Édouard-Montpetit, Montreal, QC, H3T 1J4, Canada
- Gerald Moran
- Siemens Canada, 1577 North Service Rd E, Oakville, ON, L6H 0H6, Canada
- Ning Jin
- Siemens Medical Solutions Inc., 40 Liberty Boulevard, Malvern, PA, 19355, USA
- Samuel Kadoury
- Polytechnique Montréal, 2500 Chemin de Polytechnique, Montreal, QC, H3T 1J4, Canada
- An Tang
- Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), 900 Rue Saint-Denis, Montreal, QC, H2X 0A9, Canada
- Université de Montréal, 2900 Boulevard Édouard-Montpetit, Montreal, QC, H3T 1J4, Canada
- Department of Radiology, Centre hospitalier de l'Université de Montréal (CHUM), 1000 Rue Saint-Denis, Montreal, QC, H2X 0C1, Canada
- Sylvain Martel
- Polytechnique Montréal, 2500 Chemin de Polytechnique, Montreal, QC, H3T 1J4, Canada
- Gilles Soulez
- Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM), 900 Rue Saint-Denis, Montreal, QC, H2X 0A9, Canada
- Université de Montréal, 2900 Boulevard Édouard-Montpetit, Montreal, QC, H3T 1J4, Canada
- Department of Radiology, Centre hospitalier de l'Université de Montréal (CHUM), 1000 Rue Saint-Denis, Montreal, QC, H2X 0C1, Canada
|
10
|
Liu C, Wang Q, Si W, Ni X. NuTracker: a coordinate-based neural network representation of lung motion for intrafraction tumor tracking with various surrogates in radiotherapy. Phys Med Biol 2022; 68. [PMID: 36537890 DOI: 10.1088/1361-6560/aca873] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2022] [Accepted: 12/01/2022] [Indexed: 12/03/2022]
Abstract
Objective. Tracking tumors and surrounding tissues in real time is critical for reducing errors and uncertainties during radiotherapy. Existing methods are either limited by linear motion representations or scale poorly with volume resolution. To address both issues, we propose a novel coordinate-based neural network representation of lung motion that predicts the instantaneous 3D volume at arbitrary spatial resolution from various surrogates: patient surface, fiducial marker, and single kV projection.Approach. The proposed model, NuTracker, decomposes the 4DCT into a template volume and dense displacement fields (DDFs), and uses two coordinate neural networks to predict them from spatial coordinates and surrogate states. The predicted template is spatially warped with the predicted DDF to produce the deformed volume for a given surrogate state. The nonlinear coordinate networks can represent complex motion at arbitrary resolution, and the decomposition allows different regularizations to be imposed on the spatial and temporal domains. Meta-learning and multi-task learning are used to train NuTracker across patients and tasks so that commonalities and differences can be exploited. NuTracker was evaluated on seven patients implanted with markers using a leave-one-phase-out procedure.Main results. The 3D marker localization error is 0.66 mm on average and <1 mm at the 95th percentile, roughly a 26% and 32% improvement over the predominant linear methods. Tumor coverage and image quality improve by 5.7% and 11% in terms of Dice and PSNR. Differences in localization error across surrogates are small and not statistically significant. Cross-population learning and multi-task learning both contribute to performance, and the model tolerates surrogate drift to a certain extent.Significance. NuTracker provides accurate estimates of the entire tumor volume from various surrogates at arbitrary resolution. The coordinate-network approach holds great potential for other imaging modalities (e.g. 4DCBCT) and other tasks (e.g. 4D dose calculation).
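The decomposition described above — a template volume and a DDF, each predicted by a coordinate network from continuous coordinates, combined by warping — can be sketched with toy stand-ins for the two networks; `template_net`, `ddf_net`, and their analytic forms are illustrative assumptions, not the paper's trained MLPs:

```python
import numpy as np

def template_net(coords):
    """Stand-in for the template coordinate network: maps (x, y) in [0, 1]^2
    to an intensity. The real model would be a neural network over coordinates."""
    return np.sin(2 * np.pi * coords[..., 0]) * np.cos(2 * np.pi * coords[..., 1])

def ddf_net(coords, surrogate):
    """Stand-in for the DDF coordinate network: here simply a uniform shift
    scaled by a 1D surrogate state (e.g. a surface amplitude)."""
    return np.full_like(coords, 0.05 * surrogate)

def deformed_volume(coords, surrogate):
    """Warp the template by querying it at displaced coordinates. Because both
    nets accept continuous coordinates, any output resolution can be sampled."""
    return template_net(coords + ddf_net(coords, surrogate))

# Query a 4x4 grid; the grid size is arbitrary since the inputs are continuous.
g = np.stack(np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4),
                         indexing="ij"), axis=-1)
vol = deformed_volume(g, surrogate=0.0)
```

At surrogate state 0 the displacement vanishes and the deformed volume equals the template, which mirrors how the model reduces to the template at the reference phase.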
Affiliation(s)
- Cong Liu
- Radiation Oncology Center, The Affiliated Changzhou Second People's Hospital of Nanjing Medical University, Changzhou, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, People's Republic of China; Faculty of Business Information, Shanghai Business School, Shanghai, People's Republic of China
- Qingxin Wang
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, People's Republic of China
- Wen Si
- Faculty of Business Information, Shanghai Business School, Shanghai, People's Republic of China; Huashan Hospital, Fudan University, Shanghai, People's Republic of China
- Xinye Ni
- Radiation Oncology Center, The Affiliated Changzhou Second People's Hospital of Nanjing Medical University, Changzhou, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, People's Republic of China
|
11
|
Wei R, Chen J, Liang B, Chen X, Men K, Dai J. Real-time 3D MRI reconstruction from cine-MRI using unsupervised network in MRI-guided radiotherapy for liver cancer. Med Phys 2022. [PMID: 36510442 DOI: 10.1002/mp.16141] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Revised: 10/12/2022] [Accepted: 11/07/2022] [Indexed: 12/15/2022] Open
Abstract
PURPOSE Respiration has a major impact on the accuracy of radiation treatment for thorax and abdominal tumours. Instantaneous volumetric imaging could provide precise knowledge of the three-dimensional (3D) movement of the tumour and normal organs, which is key to reducing the negative effect of breathing motion. This study therefore proposes a real-time 3D-MRI reconstruction method from cine-MRI using an unsupervised network. METHODS AND MATERIALS Cine-MRI and setup 3D-MRI from eight patients with liver cancer were used to establish and validate the deep learning network for 3D-MRI reconstruction. Unlike previous methods that required 4D-MRI for network training, the proposed method uses a reference 3D-MRI and cine-MRI to generate the training data. A network is then trained in an unsupervised manner to estimate the relationship between cine-MRI acquired on the coronal plane and the deformation vector field (DVF) that describes the patient's breathing motion. After training, coronal cine-MRI images are input into the network and the corresponding DVF is obtained; warping the reference 3D-MRI with the generated DVF reconstructs the 3D-MRI. RESULTS The reconstructed 3D-MRI slices were compared with the corresponding phase-sorted cine-MRI using Dice similarity coefficients (DSCs) of liver contours and blood vessel localization error. Across all patients, the liver DSC had a mean value >96.1% with standard deviation <1.3%, and the blood vessel localization error had a mean value <2.6 mm with standard deviation <2.0 mm. Moreover, 3D-MRI reconstruction took approximately 100 ms. These results indicate that the proposed method can accurately reconstruct the 3D-MRI in real time. CONCLUSIONS The proposed method can accurately reconstruct 3D-MRI from cine-MRI in real time and has great potential for improving the accuracy of radiotherapy for moving tumours.
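The final reconstruction step described above — warping the reference 3D-MRI with the network-predicted DVF — can be sketched as below; `warp_volume` and the voxel-unit DVF convention are assumptions for illustration, using SciPy's `map_coordinates` for trilinear resampling rather than the authors' implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(reference, dvf):
    """Warp a reference 3D volume with a deformation vector field (DVF):
    output voxel v samples the reference at v + dvf[:, v]. The DVF has
    shape (3, Z, Y, X) in voxel units; order=1 gives trilinear interpolation."""
    grid = np.indices(reference.shape, dtype=float)  # identity coordinates
    return map_coordinates(reference, grid + dvf, order=1, mode="nearest")

# Hypothetical reference volume; a zero DVF must reproduce it exactly.
ref = np.random.default_rng(0).random((8, 8, 8))
out = warp_volume(ref, np.zeros((3, 8, 8, 8)))
```

In the pipeline above, the DVF would come from the trained network for each incoming coronal cine frame, so this single resampling call is what turns a 2D observation plus the static reference into a full 3D volume.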
Affiliation(s)
- Ran Wei
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; National Cancer Center/National Clinical Research Center for Cancer/Hebei Cancer Hospital, Chinese Academy of Medical Sciences, Langfang, China
- Jiayun Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Bin Liang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; National Cancer Center/National Clinical Research Center for Cancer/Hebei Cancer Hospital, Chinese Academy of Medical Sciences, Langfang, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
|
12
|
Mansour R, Romaguera LV, Huet C, Bentridi A, Vu KN, Billiard JS, Gilbert G, Tang A, Kadoury S. Abdominal motion tracking with free-breathing XD-GRASP acquisitions using spatio-temporal geodesic trajectories. Med Biol Eng Comput 2022; 60:583-598. [PMID: 35029812 DOI: 10.1007/s11517-021-02477-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Accepted: 11/23/2021] [Indexed: 11/25/2022]
Abstract
Free-breathing external beam radiotherapy remains challenging due to the complex elastic or irregular motion of abdominal organs: imaging moving organs creates motion-blurring artifacts. In this paper, we propose a radial MRI reconstruction method for 3D free-breathing abdominal data using spatio-temporal geodesic trajectories to quantify motion during radiotherapy. The prospective study was approved by the institutional review board, and consent was obtained from all participants. A total of 25 healthy volunteers, 12 women and 13 men (38 years ± 12 [standard deviation]), and 11 liver cancer patients underwent imaging on a 3.0 T clinical MRI system. The radial acquisition, based on golden-angle sparse sampling, was performed with a 3D stack-of-stars gradient-echo sequence and reconstructed using a discretized piecewise spatio-temporal trajectory defined in a low-dimensional embedding that tracks the inhale and exhale phases, allowing separation between distinct motion phases. Liver displacement between phases, measured with the proposed radial approach from the deformation vector fields, was compared to a navigator-based approach. Images reconstructed with the proposed technique with 20 motion states and registered with the multiscale B-spline approach received the highest average Likert scores for overall image quality, with a visual SNR score of 3.2 ± 0.3 (mean ± standard deviation) and liver displacement errors between 0.1 and 2.0 mm (mean 0.8 ± 0.6 mm). Compared to navigator-based approaches, the proposed method yields similar deformation vector field magnitudes and angle distributions, with improved reconstruction accuracy in terms of mean squared error. Graphical abstract: schematic illustration of the proposed 4D-MRI reconstruction method based on radial golden-angle acquisitions and a respiratory motion model from a manifold embedding used for motion tracking. First, data are extracted from the center of k-space using golden-angle sampling and mapped onto a low-dimensional embedding describing the relationship between neighboring samples in the breathing cycle. The trained model is then used to extract the respiratory motion signal for slice re-ordering, and image quality is then improved through deformable image registration: using a reference volume, the deformation vector fields (DVFs) of sequential motion states are extracted, followed by deformable registrations. The output is a 4D-MRI that allows motion during free breathing to be visualized and quantified.
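The slice re-ordering idea described above — projecting k-space-center samples onto a low-dimensional embedding whose leading coordinate tracks respiration — can be sketched with a linear PCA stand-in for the paper's manifold embedding; the simulated signal and `respiratory_embedding` are illustrative assumptions, not the authors' method:

```python
import numpy as np

def respiratory_embedding(kspace_center, n_dims=1):
    """Project per-spoke k-space-center samples (spokes x channels) onto a
    low-dimensional embedding via PCA; the leading coordinate follows the
    dominant periodic variation, i.e. the breathing cycle."""
    x = kspace_center - kspace_center.mean(axis=0)     # center each channel
    _, _, vt = np.linalg.svd(x, full_matrices=False)   # PCA via SVD
    return x @ vt[:n_dims].T

# Simulate 100 spokes whose center signal oscillates with respiration
t = np.linspace(0, 4 * np.pi, 100)
rng = np.random.default_rng(1)
signal = np.outer(np.sin(t), np.ones(16)) + 0.01 * rng.random((100, 16))
phase = respiratory_embedding(signal)[:, 0]
order = np.argsort(phase)  # groups spokes from one breathing extreme to the other
```

Sorting spokes by `phase` and binning the result is what yields discrete motion states (20 in the study above) for the subsequent deformable registrations.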
Affiliation(s)
- Rihab Mansour
- Centre hospitalier de l'Université de Montréal (CHUM) Research Center, Montreal, QC, Canada
- Liset Vazquez Romaguera
- Department of Computer and Software Engineering, Polytechnique Montreal, PO Box 6079, Montreal, QC, Canada
- Catherine Huet
- Department of Radiology, Centre hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada
- Ahmed Bentridi
- Department of Radiology, Centre hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada
- Kim-Nhien Vu
- Department of Radiology, Centre hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada
- Jean-Sébastien Billiard
- Department of Radiology, Centre hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada
- An Tang
- Centre hospitalier de l'Université de Montréal (CHUM) Research Center, Montreal, QC, Canada
- Department of Radiology, Centre hospitalier de l'Université de Montréal (CHUM), Montreal, QC, Canada
- Samuel Kadoury
- Centre hospitalier de l'Université de Montréal (CHUM) Research Center, Montreal, QC, Canada
- Department of Computer and Software Engineering, Polytechnique Montreal, PO Box 6079, Montreal, QC, Canada
|