1
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024; 25:e14155. [PMID: 37712893] [PMCID: PMC10860468] [DOI: 10.1002/acm2.14155]
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies published on or before December 31, 2022, and categorize the studies into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and current challenges in facilitating small tumor segmentation, accurate x-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight recent trends in deep learning such as the emergence of multi-modal models, vision transformers, and diffusion models.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
2
Yan W, Chiu B, Shen Z, Yang Q, Syer T, Min Z, Punwani S, Emberton M, Atkinson D, Barratt DC, Hu Y. Combiner and HyperCombiner networks: Rules to combine multimodality MR images for prostate cancer localisation. Med Image Anal 2024; 91:103030. [PMID: 37995627] [DOI: 10.1016/j.media.2023.103030]
Abstract
One of the distinct characteristics of radiologists reading multiparametric prostate MR scans, using reporting systems like PI-RADS v2.1, is to score individual types of MR modalities, including T2-weighted, diffusion-weighted, and dynamic contrast-enhanced, and then combine these image-modality-specific scores using standardised decision rules to predict the likelihood of clinically significant cancer. This work aims to demonstrate that it is feasible for low-dimensional parametric models to model such decision rules in the proposed Combiner networks, without compromising the accuracy of predicting radiologic labels. First, we demonstrate that either a linear mixture model or a nonlinear stacking model is sufficient to model PI-RADS decision rules for localising prostate cancer. Second, parameters of these combining models are proposed as hyperparameters, weighing independent representations of individual image modalities in the Combiner network training, as opposed to end-to-end modality ensemble. A HyperCombiner network is developed to train a single image segmentation network that can be conditioned on these hyperparameters during inference for much-improved efficiency. Experimental results based on 751 cases from 651 patients compare the proposed rule-modelling approaches with other commonly-adopted end-to-end networks, in this downstream application of automating radiologist labelling on multiparametric MR. By acquiring and interpreting the modality combining rules, specifically the linear-weights or odds ratios associated with individual image modalities, three clinical applications are quantitatively presented and contextualised in the prostate cancer segmentation application, including modality availability assessment, importance quantification and rule discovery.
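The linear mixture rule described in this abstract amounts to a weighted sum of per-modality cancer-probability maps. As a minimal sketch (the toy maps and weights below are illustrative stand-ins, not the fitted Combiner parameters or the paper's PI-RADS-derived rules):

```python
import numpy as np

def combine_linear(prob_maps: np.ndarray, weights) -> np.ndarray:
    """Linear mixture of per-modality cancer-probability maps.

    prob_maps: array of shape (M, ...) with one probability map per modality.
    weights:   nonnegative weights of length M, normalized here to sum to 1,
               so the combined map stays a valid probability map.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # Contract the modality axis: result has the spatial shape of one map.
    return np.tensordot(w, prob_maps, axes=1)

# Hypothetical 2x2 probability maps for T2-weighted, diffusion-weighted,
# and dynamic contrast-enhanced images (values invented for illustration).
t2w = np.array([[0.2, 0.8], [0.1, 0.6]])
dwi = np.array([[0.3, 0.9], [0.2, 0.7]])
dce = np.array([[0.1, 0.7], [0.0, 0.5]])
combined = combine_linear(np.stack([t2w, dwi, dce]), weights=[0.5, 0.3, 0.2])
```

Treating these weights as hyperparameters, as the HyperCombiner does, lets a single trained network be conditioned on different combining rules at inference time instead of retraining per rule.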
Affiliation(s)
- Wen Yan
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Hong Kong, China; Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK.
- Bernard Chiu
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Hong Kong, China; Department of Physics & Computer Science, Wilfrid Laurier University, 75 University Avenue West, Waterloo, Ontario N2L 3C5, Canada.
- Ziyi Shen
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK.
- Qianye Yang
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK.
- Tom Syer
- Centre for Medical Imaging, Division of Medicine, University College London, London W1W 7TS, UK.
- Zhe Min
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK.
- Shonit Punwani
- Centre for Medical Imaging, Division of Medicine, University College London, London W1W 7TS, UK.
- Mark Emberton
- Division of Surgery & Interventional Science, University College London, Gower St, WC1E 6BT, London, UK.
- David Atkinson
- Centre for Medical Imaging, Division of Medicine, University College London, London W1W 7TS, UK.
- Dean C Barratt
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK.
- Yipeng Hu
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK.
3
Tsui JMG, Kehayias CE, Leeman JE, Nguyen PL, Peng L, Yang DD, Moningi S, Martin N, Orio PF, D'Amico AV, Bredfeldt JS, Lee LK, Guthier CV, King MT. Assessing the Feasibility of Using Artificial Intelligence-Segmented Dominant Intraprostatic Lesion for Focal Intraprostatic Boost With External Beam Radiation Therapy. Int J Radiat Oncol Biol Phys 2024; 118:74-84. [PMID: 37517600] [DOI: 10.1016/j.ijrobp.2023.07.029]
Abstract
PURPOSE The delineation of dominant intraprostatic gross tumor volumes (GTVs) on multiparametric magnetic resonance imaging (mpMRI) can be subject to interobserver variability. We evaluated whether deep learning artificial intelligence (AI)-segmented GTVs can provide a similar degree of intraprostatic boosting with external beam radiation therapy (EBRT) as radiation oncologist (RO)-delineated GTVs. METHODS AND MATERIALS We identified 124 patients who underwent mpMRI followed by EBRT between 2010 and 2013. A reference GTV was delineated by an RO and approved by a board-certified radiologist. We trained an AI algorithm for GTV delineation on 89 patients, and tested the algorithm on 35 patients, each with at least 1 PI-RADS (Prostate Imaging Reporting and Data System) 4 or 5 lesion (46 total lesions). We then asked 5 additional ROs to independently delineate GTVs on the test set. We compared lesion detectability and geometric accuracy of the GTVs from AI and 5 ROs against the reference GTV. Then, we generated EBRT plans (77 Gy prostate) that boosted each observer-specific GTV to 95 Gy. We compared reference GTV dose (D98%) across observers using a mixed-effects model. RESULTS On a lesion level, AI GTV exhibited a sensitivity of 82.6% and positive predictive value of 86.4%. Respective ranges among the 5 RO GTVs were 84.8% to 95.7% and 95.1% to 100.0%. Among 30 GTVs mutually identified by all observers, no significant differences in Dice coefficient were detected between AI and any of the 5 ROs. Across all patients, only 2 of 5 ROs had a reference GTV D98% that significantly differed from that of AI by 2.56 Gy (P = .02) and 3.20 Gy (P = .003). The presence of false-negative (-5.97 Gy; P < .001) but not false-positive (P = .24) lesions was associated with reference GTV D98%. CONCLUSIONS AI-segmented GTVs demonstrate potential for intraprostatic boosting, although the degree of boosting may be adversely affected by false-negative lesions. Prospective review of AI-segmented GTVs remains essential.
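The lesion-level sensitivity and positive predictive value quoted above follow the standard definitions. The counts below are not given in the paper; they are back-calculated here purely to illustrate how the reported percentages arise from true-positive, false-negative, and false-positive detections:

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of reference lesions detected: TP / (TP + FN)."""
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    """Fraction of detections that match a reference lesion: TP / (TP + FP)."""
    return tp / (tp + fp)

# Hypothetical counts consistent with the reported AI metrics over
# 46 reference lesions (82.6% sensitivity, 86.4% PPV).
tp, fn, fp = 38, 8, 6
sens = round(100 * sensitivity(tp, fn), 1)  # 82.6
prec = round(100 * ppv(tp, fp), 1)          # 86.4
```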
Affiliation(s)
- James M G Tsui
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts; Department of Radiation Oncology, McGill University Health Centre, Montreal, Quebec, Canada
- Christopher E Kehayias
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Jonathan E Leeman
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Paul L Nguyen
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Luke Peng
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- David D Yang
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Shalini Moningi
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Neil Martin
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Peter F Orio
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Anthony V D'Amico
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Jeremy S Bredfeldt
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Leslie K Lee
- Department of Radiology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Christian V Guthier
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Martin T King
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
4
Simeth J, Jiang J, Nosov A, Wibmer A, Zelefsky M, Tyagi N, Veeraraghavan H. Deep learning-based dominant index lesion segmentation for MR-guided radiation therapy of prostate cancer. Med Phys 2023; 50:4854-4870. [PMID: 36856092] [PMCID: PMC11098147] [DOI: 10.1002/mp.16320]
Abstract
BACKGROUND Dose escalation radiotherapy enables increased control of prostate cancer (PCa) but requires segmentation of dominant index lesions (DIL). This motivates the development of automated methods for fast, accurate, and consistent segmentation of PCa DIL. PURPOSE To construct and validate a model for deep-learning-based automatic segmentation of PCa DIL defined by Gleason score (GS) ≥3+4 from MR images applied to MR-guided radiation therapy, and to validate the generalizability of the constructed models across scanner and acquisition differences. METHODS Five deep-learning networks were evaluated on apparent diffusion coefficient (ADC) MRI from 500 lesions in 365 patients arising from internal training Dataset 1 (156 lesions in 125 patients, 1.5 Tesla GE MR with endorectal coil), testing using Dataset 1 (35 lesions in 26 patients), external ProstateX Dataset 2 (299 lesions in 204 patients, 3 Tesla Siemens MR), and internal inter-rater Dataset 3 (10 lesions in 10 patients, 3 Tesla Philips MR). The five networks include: multiple resolution residually connected network (MRRN) and MRRN regularized in training with deep supervision implemented into the last convolutional block (MRRN-DS), Unet, Unet++, ResUnet, and fast panoptic segmentation (FPSnet), as well as fast panoptic segmentation with smoothed labels (FPSnet-SL). Models were evaluated by volumetric DIL segmentation accuracy using the Dice similarity coefficient (DSC) and the balanced F1 measure of detection accuracy, as a function of lesion aggressiveness and size (Datasets 1 and 2), and by accuracy with respect to two raters (Dataset 3). Upon acceptance for publication, segmentation models will be made available in an open-source GitHub repository. RESULTS In general, MRRN-DS segmented tumors more accurately than the other methods on the testing datasets. MRRN-DS significantly outperformed ResUnet in Dataset 2 (DSC of 0.54 vs. 0.44, p < 0.001) and Unet++ in Dataset 3 (DSC of 0.45, p = 0.04). FPSnet-SL was similarly accurate to MRRN-DS in Dataset 2 (p = 0.30), but MRRN-DS significantly outperformed FPSnet and FPSnet-SL in both Dataset 1 (0.60 vs. 0.51 [p = 0.01] and 0.54 [p = 0.049], respectively) and Dataset 3 (0.45 vs. 0.06 [p = 0.002] and 0.24 [p = 0.004], respectively). Finally, MRRN-DS produced slightly higher agreement with an experienced radiologist than that between two radiologists in Dataset 3 (DSC of 0.45 vs. 0.41). CONCLUSIONS MRRN-DS was generalizable to different MR testing datasets acquired using different scanners. It produced slightly higher agreement with an experienced radiologist than that between two radiologists. Finally, MRRN-DS more accurately segmented aggressive lesions, which are generally candidates for radiation dose ablation.
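The Dice similarity coefficient (DSC) used throughout this abstract to compare segmentations is, in a minimal sketch (the toy masks below are invented for illustration):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = int(a.sum()) + int(b.sum())
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * int(np.logical_and(a, b).sum()) / denom

# Hypothetical predicted and reference lesion masks on a tiny 2x3 grid.
pred = np.array([[0, 1, 1], [0, 1, 0]])
ref = np.array([[0, 1, 0], [1, 1, 0]])
score = dice(pred, ref)  # intersection 2, sizes 3 + 3 -> 4/6 ≈ 0.667
```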
Affiliation(s)
- Josiah Simeth
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Anton Nosov
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Andreas Wibmer
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Michael Zelefsky
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Neelam Tyagi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
5
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. arXiv 2023: arXiv:2303.11378v2. [PMID: 36994167] [PMCID: PMC10055493]
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise, adaptive approach to treatment planning. Deep learning applications which augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on the underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA