1
Yoon YH, Chun J, Kiser K, Marasini S, Curcuru A, Gach HM, Kim JS, Kim T. Inter-scanner super-resolution of 3D cine MRI using a transfer-learning network for MRgRT. Phys Med Biol 2024; 69:115038. [PMID: 38663411] [DOI: 10.1088/1361-6560/ad43ab] [Received: 01/04/2024] [Accepted: 04/25/2024] [Indexed: 05/30/2024]
Abstract
Objective. Deep-learning networks for super-resolution (SR) reconstruction enhance the spatial resolution of 3D magnetic resonance imaging (MRI) for MR-guided radiotherapy (MRgRT). However, variations between MRI scanners and patients degrade SR quality for real-time 3D low-resolution (LR) cine MRI. In this study, we present a personalized super-resolution (psSR) network that incorporates transfer learning to overcome the challenges of inter-scanner SR of 3D cine MRI.
Approach. Development of the proposed psSR network comprises two stages: (1) a cohort-specific SR (csSR) network trained on clinical patient datasets, and (2) a psSR network obtained by transfer learning to the target datasets. The csSR network was developed by training on breath-hold and respiratory-gated high-resolution (HR) 3D MRIs and their k-space down-sampled LR counterparts from 53 thoracoabdominal patients scanned at 1.5 T. The psSR network was developed by transfer learning, retraining the csSR network using a single breath-hold HR MRI and a corresponding 3D cine MRI from five healthy volunteers scanned at 0.55 T. Image quality was evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Clinical feasibility was assessed by contouring the liver on the psSR MRI with an auto-segmentation network and quantified using the Dice similarity coefficient (DSC).
Results. Mean PSNR and SSIM of the psSR MRIs increased by 57.2% (13.8 to 21.7) and 94.7% (0.38 to 0.74) relative to the cine MRIs, with the 0.55 T breath-hold HR MRI as reference. In the contour evaluation, DSC increased by 15% (0.79 to 0.91). On average, transfer learning took 90 s, psSR reconstruction took 4.51 ms per volume, and auto-segmentation took 210 ms.
Significance. The proposed psSR reconstruction substantially increased image and segmentation quality of cine MRI in an average of 215 ms across scanners and patients, with less than 2 min of prerequisite transfer learning. This approach should be effective in overcoming the cohort and scanner dependency of deep learning for MRgRT.
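The PSNR figures quoted in these results follow directly from the mean squared error between two images; a minimal sketch of the metric (pure Python, illustrative only — function and argument names are not from the paper):

```python
import math

def psnr(reference, test, max_value=1.0):
    """Peak signal-to-noise ratio (dB) between two same-sized 2D images."""
    ref = [v for row in reference for v in row]
    out = [v for row in test for v in row]
    mse = sum((a - b) ** 2 for a, b in zip(ref, out)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)
```

A uniform error of 0.1 on a unit-range image gives an MSE of 0.01 and hence a PSNR of 20 dB; the jump from 13.8 to 21.7 dB reported above corresponds to roughly a six-fold reduction in MSE.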
Affiliation(s)
- Young Hun Yoon
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Kendall Kiser
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Shanti Marasini
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Austen Curcuru
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- H Michael Gach
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Departments of Radiology and Biomedical Engineering, Washington University in St. Louis, St Louis, MO, United States of America
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Oncosoft Inc., Seoul, Republic of Korea
- Taeho Kim
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
2
Wang X, Chang Y, Pei X, Xu XG. A prior-information-based automatic segmentation method for the clinical target volume in adaptive radiotherapy of cervical cancer. J Appl Clin Med Phys 2024; 25:e14350. [PMID: 38546277] [PMCID: PMC11087177] [DOI: 10.1002/acm2.14350] [Received: 11/07/2023] [Revised: 01/09/2024] [Accepted: 03/18/2024] [Indexed: 05/12/2024]
Abstract
OBJECTIVE Adaptive planning to accommodate anatomic changes during treatment often requires repeated segmentation. In this study, prior patient-specific data were integrated into a registration-guided multi-channel multi-path (Rg-MCMP) segmentation framework to improve the accuracy of repeated clinical target volume (CTV) segmentation. METHODS This study was based on CT image datasets from 90 cervical cancer patients who received two courses of radiotherapy; 15 patients were selected at random as the test set. In the Rg-MCMP framework, the first-course CT images (CT1) were registered to the second-course CT images (CT2) to yield aligned CT images (aCT1), and the first-course CTV (CTV1) was propagated to yield aligned CTV contours (aCTV1). Then, aCT1, aCTV1, and CT2 were combined as inputs to a 3D U-Net consisting of a channel-based multi-path feature extraction network. The performance of the Rg-MCMP framework was evaluated against the single-channel single-path (SCSP) model, the standalone registration methods, and the registration-guided multi-channel single-path (Rg-MCSP) model, using the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average surface distance (ASD) as metrics. RESULTS The average CTV DSC for the Rg-MCMP model based on deformable image registration (DIR) was 0.892, exceeding standalone DIR (0.856), SCSP (0.837), and DIR-based Rg-MCSP (0.877) by 4.2%, 6.6%, and 1.7%, respectively. Similarly, the Rg-MCMP model based on rigid-body (RB) registration yielded an average DSC of 0.875, exceeding standalone RB registration (0.787), SCSP (0.837), and RB-based Rg-MCSP (0.848) by 11.2%, 4.5%, and 3.2%, respectively. These improvements in DSC were statistically significant (p < 0.05). CONCLUSION The proposed Rg-MCMP framework achieved excellent accuracy in CTV segmentation as part of the adaptive radiotherapy workflow.
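The DSC values reported above measure volume overlap between two contours; a minimal sketch of the metric on binary masks (illustrative Python, not the authors' implementation):

```python
def dice_similarity(mask_a, mask_b):
    """Dice similarity coefficient, 2|A∩B| / (|A| + |B|), for flat binary masks."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / total
```

A DSC of 1.0 means the propagated and reference contours coincide voxel-for-voxel; the 0.856 → 0.892 improvement above reflects the extra overlap recovered by the multi-channel multi-path network on top of registration alone.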
Affiliation(s)
- Xuanhe Wang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Yankui Chang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Xi Pei
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Anhui Wisdom Technology Company Limited, Hefei, China
- Xie George Xu
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Department of Radiation Oncology, The First Affiliated Hospital of University of Science and Technology of China, Hefei, China
3
Fransson S. Comparing multi-image and image augmentation strategies for deep learning-based prostate segmentation. Phys Imaging Radiat Oncol 2024; 29:100551. [PMID: 38444888] [PMCID: PMC10912785] [DOI: 10.1016/j.phro.2024.100551] [Received: 11/01/2023] [Revised: 01/29/2024] [Accepted: 02/06/2024] [Indexed: 03/07/2024]
Abstract
During MR-Linac-based adaptive radiotherapy, multiple images are acquired per patient. These can be used to train deep learning networks while reducing annotation effort. This study examined the advantage of using multiple images versus a single image per patient for prostate treatment segmentation. Findings indicate minimal improvement in Dice and 95% Hausdorff distance metrics with multiple images. The largest difference was seen for the rectum in the low-data regime, training with images from five patients. A 2D U-Net yielded Dice values of 0.80 and 0.83 when including one and five images per patient, respectively. Including more patients in training reduced the difference. Standard image augmentation methods remained more effective.
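The standard image augmentation the study refers to typically combines random flips and small translations of each training image; a minimal sketch under that assumption (illustrative only, not the study's actual pipeline):

```python
import random

def augment(image, rng):
    """Randomly flip and shift a 2D image (list of rows), zero-padding shifted-in columns."""
    out = [row[:] for row in image]
    if rng.random() < 0.5:  # horizontal flip with probability 0.5
        out = [row[::-1] for row in out]
    shift = rng.randint(-1, 1)  # shift columns by -1, 0, or +1
    if shift > 0:
        out = [[0] * shift + row[:-shift] for row in out]
    elif shift < 0:
        out = [row[-shift:] + [0] * (-shift) for row in out]
    return out
```

Each call yields a slightly different view of the same annotated image, which is why augmentation can substitute for acquiring (and contouring) additional images per patient.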
Affiliation(s)
- Samuel Fransson
- Department of Medical Physics, Uppsala University Hospital, Uppsala, Sweden
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
4
Maniscalco A, Liang X, Lin MH, Jiang S, Nguyen D. Single patient learning for adaptive radiotherapy dose prediction. Med Phys 2023; 50:7324-7337. [PMID: 37861055] [PMCID: PMC10843391] [DOI: 10.1002/mp.16799] [Received: 04/25/2023] [Revised: 09/30/2023] [Accepted: 10/08/2023] [Indexed: 10/21/2023]
Abstract
BACKGROUND Throughout a patient's course of radiation therapy (RT), maintaining the accuracy of the initial treatment plan over time is challenging due to anatomical changes, for example from patient weight loss or tumor shrinkage. Online adaptation of the RT plan to these changes is crucial but hindered by manual, time-consuming processes. While deep learning (DL) based solutions have shown promise in streamlining adaptive radiation therapy (ART) workflows, they often require large, extensive datasets to train population-based models. PURPOSE This study extends our prior research by introducing a minimalist approach to patient-specific adaptive dose prediction. In contrast to our prior method, which fine-tuned a pre-trained population model, this method trains a model from scratch using only a patient's initial treatment data. This patient-specific dose predictor aims to enhance clinical accessibility, empowering physicians and treatment planners to make more informed, quantitative decisions in ART. We hypothesize that patient-specific DL models will provide more accurate adaptive dose predictions for their respective patients than a population-based DL model. METHODS We selected 33 patients to train an adaptive population-based (AP) model. Ten additional patients were selected, and their respective initial RT data served as single samples for training patient-specific (PS) models. These 10 patients had an additional 26 ART plans, which were withheld as the test dataset to evaluate AP versus PS model dose prediction performance. We assessed model performance using the mean absolute percent error (MAPE), comparing predicted doses to the originally delivered ground truth doses, and used the Wilcoxon signed-rank test to determine statistically significant differences in MAPE between the AP and PS model results across the test dataset.
Furthermore, we calculated differences between predicted and ground truth mean doses for segmented structures and determined the statistical significance of the differences for each of them. RESULTS The average MAPE across the AP and PS model dose predictions was 5.759% and 4.069%, respectively. The Wilcoxon signed-rank test yielded a two-tailed p-value of 2.9802 × 10⁻⁸, indicating that the MAPE differences between the AP and PS model dose predictions are statistically significant, with a 95% confidence interval of [-2.1610, -1.0130] for the population-level MAPE difference between the AP and PS models. Out of 24 segmented structures, the comparison of mean dose differences was statistically significant for 12 structures, with two-tailed p-values < 0.05. CONCLUSION Our study demonstrates the potential of patient-specific deep learning models applied to ART. Notably, our method streamlines the training process by minimizing the required training dataset: only a single patient's initial treatment data is needed. External institutions considering such a technology could package the model so that it requires only the upload of a reference treatment plan for model training and deployment. Our single-patient learning strategy shows promise in ART due to its minimal dataset requirement and its utility in personalizing cancer treatment.
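The MAPE used to compare the AP and PS models above has a standard form; a minimal sketch under the common definition (the paper's exact normalization, e.g. against prescription dose, may differ):

```python
def mape(predicted, ground_truth):
    """Mean absolute percent error over paired dose values, skipping zero-dose points."""
    errors = [abs(p - g) / g * 100.0 for p, g in zip(predicted, ground_truth) if g != 0]
    return sum(errors) / len(errors)
```

On this definition, the drop from 5.759% (AP) to 4.069% (PS) means the patient-specific model's dose predictions deviate from the delivered doses by about 1.7 percentage points less on average.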
Affiliation(s)
- Austen Maniscalco
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Xiao Liang
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Mu-Han Lin
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Steve Jiang
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
5
Fransson S, Tilly D, Strand R. Patient specific deep learning based segmentation for magnetic resonance guided prostate radiotherapy. Phys Imaging Radiat Oncol 2022; 23:38-42. [PMID: 35769110] [PMCID: PMC9234226] [DOI: 10.1016/j.phro.2022.06.001] [Received: 11/22/2021] [Revised: 05/06/2022] [Accepted: 06/01/2022] [Indexed: 11/28/2022]
Affiliation(s)
- Samuel Fransson
- Department of Medical Physics, Uppsala University Hospital, Uppsala, Sweden
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Corresponding author at: Department of Medical Physics, Uppsala University Hospital, Uppsala, Sweden.
- David Tilly
- Department of Medical Physics, Uppsala University Hospital, Uppsala, Sweden
- Department of Immunology, Genetics and Pathology, Uppsala University, Uppsala, Sweden
- Robin Strand
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Department of Information Technology, Uppsala University, Uppsala, Sweden
6
Chun J, Park JC, Olberg S, Zhang Y, Nguyen D, Wang J, Kim JS, Jiang S. Intentional deep overfit learning (IDOL): A novel deep learning strategy for adaptive radiation therapy. Med Phys 2021; 49:488-496. [PMID: 34791672] [DOI: 10.1002/mp.15352] [Received: 04/24/2021] [Revised: 09/28/2021] [Accepted: 11/03/2021] [Indexed: 11/06/2022]
Abstract
PURPOSE Applications of deep learning (DL) are essential to realizing an effective adaptive radiotherapy (ART) workflow. Despite the promise demonstrated by DL approaches in several critical ART tasks, unsolved challenges remain in achieving satisfactory generalizability of a trained model in a clinical setting. Foremost among these is the difficulty of collecting a task-specific training dataset with high-quality, consistent annotations for supervised learning. In this study, we propose a DL framework tailored for patient-specific performance that leverages the behavior of a model intentionally overfitted to a patient-specific training dataset augmented from the prior information available in an ART workflow, an approach we term Intentional Deep Overfit Learning (IDOL). METHODS Implementing the IDOL framework for any task in radiotherapy consists of two training stages: (1) training a generalized model with a diverse dataset of N patients, just as in the conventional DL approach, and (2) intentionally overfitting this general model to a small training dataset specific to the patient of interest (N + 1), generated through perturbations and augmentations of the available task- and patient-specific prior information, to establish a personalized IDOL model. The IDOL framework itself is task-agnostic and thus widely applicable to many components of the ART workflow, three of which we use as proof of concept here: autocontouring on replanning CTs for traditional ART, MRI super-resolution (SR) for MRI-guided ART, and synthetic CT (sCT) reconstruction for MRI-only ART. RESULTS In the replanning CT autocontouring task, accuracy measured by the Dice similarity coefficient improved from 0.847 with the general model to 0.935 with the IDOL model. For MRI SR, the mean absolute error (MAE) improved by 40% using the IDOL framework over the conventional model. Finally, in the sCT reconstruction task, the MAE was reduced from 68 to 22 HU using the IDOL framework. CONCLUSIONS In this study, we propose the novel IDOL framework for ART and demonstrate its feasibility using three ART tasks. We expect the IDOL framework to be especially useful for creating personally tailored models when training data are limited but prior information exists, which is usually true in the medical setting in general and especially true in ART.
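The two-stage IDOL recipe, first fitting a general model on a cohort and then deliberately overfitting it to augmented copies of one patient's prior data, can be illustrated with a toy one-parameter model (data, names, and hyperparameters below are invented for illustration and are not from the paper):

```python
import random

def train(samples, w=0.0, lr=0.01, epochs=200):
    """Fit y = w * x by stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in samples:
            w -= lr * 2 * (w * x - y) * x
    return w

# Stage 1: general model trained on a diverse cohort (toy data).
cohort = [(1.0, 2.1), (2.0, 3.8), (3.0, 6.2)]
w_general = train(cohort)

# Stage 2: intentionally overfit the general model to one patient's
# prior data, augmented by small perturbations (the IDOL idea).
rng = random.Random(0)
patient = (1.0, 3.0)  # this patient deviates from the cohort trend
augmented = [(patient[0] + rng.gauss(0, 0.01), patient[1]) for _ in range(20)]
w_personal = train(augmented, w=w_general)
```

Starting from the cohort model, the second stage pulls the parameter toward the individual patient, trading generality for patient-specific accuracy, which is acceptable here because the personalized model is only ever applied to that one patient.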
Affiliation(s)
- Jaehee Chun
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Justin C Park
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Sven Olberg
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- You Zhang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jing Wang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Steve Jiang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA