1
Chen J, Ye Z, Zhang R, Li H, Fang B, Zhang LB, Wang W. Medical image translation with deep learning: Advances, datasets and perspectives. Med Image Anal 2025; 103:103605. [PMID: 40311301] [DOI: 10.1016/j.media.2025.103605]
Abstract
Traditional medical image generation often lacks patient-specific clinical information, limiting its clinical utility despite enhancing downstream task performance. In contrast, medical image translation precisely converts images from one modality to another, preserving both anatomical structures and cross-modal features, thus enabling efficient and accurate modality transfer and offering unique advantages for model development and clinical practice. This paper reviews the latest advances in deep learning (DL)-based medical image translation. It first elaborates on the diverse tasks and practical applications of medical image translation, then provides an overview of fundamental models, including convolutional neural networks (CNNs), transformers, and state space models (SSMs). It also examines generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), autoregressive models (ARs), diffusion models, and flow models. Evaluation metrics for assessing translation quality are discussed, emphasizing their importance, and commonly used datasets are analyzed, highlighting their characteristics and applications. Finally, the paper identifies future trends and challenges and proposes research directions and solutions in medical image translation, aiming to serve as a reference and inspiration for researchers, driving continued progress and innovation in this area.
Affiliation(s)
- Junxin Chen
  - School of Software, Dalian University of Technology, Dalian 116621, China
- Zhiheng Ye
  - School of Software, Dalian University of Technology, Dalian 116621, China
- Renlong Zhang
  - Institute of Research and Clinical Innovations, Neusoft Medical Systems Co., Ltd., Beijing, China
- Hao Li
  - School of Computing Science, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- Bo Fang
  - School of Computer Science, The University of Sydney, Sydney, NSW 2006, Australia
- Li-Bo Zhang
  - Department of Radiology, General Hospital of Northern Theater Command, Shenyang 110840, China
- Wei Wang
  - Guangdong-Hong Kong-Macao Joint Laboratory for Emotion Intelligence and Pervasive Computing, Artificial Intelligence Research Institute, Shenzhen MSU-BIT University, Shenzhen 518172, China; School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
2
Zhao Y, Cozma A, Ding Y, Perles LA, Reiazi R, Chen X, Kang A, Prajapati S, Yu H, Subashi ED, Brock K, Wang J, Beddar S, Lee B, Mohammedsaid M, Cooper S, Westley R, Tree A, Mohamad O, Hassanzadeh C, Mok H, Choi S, Tang C, Yang J. Upper Urinary Tract Stereotactic Body Radiotherapy Using a 1.5 Tesla Magnetic Resonance Imaging-Guided Linear Accelerator: Workflow and Physics Considerations. Cancers (Basel) 2024; 16:3987. [PMID: 39682173] [PMCID: PMC11640540] [DOI: 10.3390/cancers16233987]
Abstract
Background/Objectives: Advancements in radiotherapy technology now enable the delivery of ablative doses to targets in the upper urinary tract, including primary renal cell carcinoma (RCC) or upper tract urothelial carcinoma (UTUC), and secondary involvement by other histologies. Magnetic resonance imaging-guided linear accelerators (MR-Linacs) have shown promise to further improve the precision and adaptability of stereotactic body radiotherapy (SBRT). Methods: This single-institution retrospective study analyzed 34 patients (31 with upper urinary tract non-metastatic primaries [RCC or UTUC] and 3 with metastases of non-genitourinary histology) who received SBRT from August 2020 through September 2024 using a 1.5 Tesla MR-Linac system. Treatment plans were adapted online using "adapt-to-position" (ATP) and "adapt-to-shape" (ATS) strategies to account for anatomic changes that developed during treatment; compression belts were used for motion management. Results: The median duration of treatment was 56 min overall and was significantly shorter with the ATP strategy (median 54 min, range 38-97 min) than with the ATS strategy (median 80 min, range 53-235 min). Most patients (77%) experienced self-resolving grade 1-2 acute radiation-induced toxicity; none had grade ≥ 3. Three participants (9%) experienced late grade 1-2 toxicity potentially attributable to SBRT, with one (3%) experiencing grade 3. Conclusions: MR-Linac-based SBRT, supported by online plan adaptation, is a feasible, safe, and highly precise treatment modality for the definitive management of select upper urinary tract lesions.
Affiliation(s)
- Yao Zhao
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Adrian Cozma
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Yao Ding
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Luis Augusto Perles
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Reza Reiazi
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Xinru Chen
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Anthony Kang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Surendra Prajapati
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Henry Yu
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Ergys David Subashi
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Kristy Brock
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
  - Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jihong Wang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Sam Beddar
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Belinda Lee
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Mustefa Mohammedsaid
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Sian Cooper
  - The Royal Marsden Hospital, Institute of Cancer Research, London SW3 6JJ, UK
- Rosalyne Westley
  - The Royal Marsden Hospital, Institute of Cancer Research, London SW3 6JJ, UK
- Alison Tree
  - The Royal Marsden Hospital, Institute of Cancer Research, London SW3 6JJ, UK
- Osama Mohamad
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Comron Hassanzadeh
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Henry Mok
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Seungtaek Choi
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Chad Tang
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jinzhong Yang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
3
Li X, Bellotti R, Bachtiary B, Hrbacek J, Weber DC, Lomax AJ, Buhmann JM, Zhang Y. A unified generation-registration framework for improved MR-based CT synthesis in proton therapy. Med Phys 2024; 51:8302-8316. [PMID: 39137294] [DOI: 10.1002/mp.17338]
Abstract
BACKGROUND The use of magnetic resonance (MR) imaging for proton therapy treatment planning is gaining attention as a highly effective method for guidance. At the core of this approach is the generation of computed tomography (CT) images from MR scans. The critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas such as the head-and-neck. Misalignments can result in blurred synthetic CT (sCT) images, adversely affecting the precision and effectiveness of treatment planning. PURPOSE This study introduces a novel network that unifies the image generation and registration processes to enhance the quality and anatomical fidelity of sCTs derived from better-aligned MR images. METHODS The approach combines a generation network (G) with a deformable registration network (R), optimizing them jointly for MR-to-CT synthesis by alternately minimizing the discrepancies between the generated/registered CT images and their corresponding reference CT counterparts. The generation network employs a UNet architecture, while the registration network leverages an implicit neural representation (INR) of the displacement vector fields (DVFs). The method was validated on a dataset of 60 head-and-neck patients, with 12 cases reserved for holdout testing. RESULTS Compared to the baseline Pix2Pix method (MAE 124.95 ± 30.74 HU), the proposed technique achieved an MAE of 80.98 ± 7.55 HU. The unified translation-registration network produced sharper and more anatomically congruent outputs, showing superior efficacy in converting MR images to sCTs. From a dosimetric perspective, plans recalculated on the resulting sCTs showed a markedly reduced discrepancy relative to the reference proton plans.
CONCLUSIONS This study demonstrates that a holistic MR-based CT synthesis approach, integrating both image-to-image translation and deformable registration, significantly improves the precision and quality of sCT generation, particularly for challenging body regions with varied anatomic changes between corresponding MR and CT.
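The alternating optimization schedule the abstract describes (G-step with R frozen, R-step with G frozen, both minimizing a shared discrepancy) can be illustrated with a deliberately tiny stand-in: here the "generator" is a single gain `a` and the "registration" a single offset `s`, both hypothetical scalars, not the paper's UNet/INR networks.

```python
import numpy as np

rng = np.random.default_rng(0)
mr = rng.normal(size=500)        # toy "MR" intensities
ct = 2.0 * mr + 0.5              # toy "CT": gain 2.0 plus a fixed misalignment offset

a, s = 0.0, 0.0                  # generator gain, registration offset (to be learned)
lr = 0.1
for step in range(300):
    resid = a * mr + s - ct      # shared discrepancy between translated+registered and reference
    if step % 2 == 0:            # G-step: update the generator with R frozen
        a -= lr * 2.0 * np.mean(resid * mr)
    else:                        # R-step: update the registration with G frozen
        s -= lr * 2.0 * np.mean(resid)
```

After a few hundred alternating steps, `a` and `s` jointly recover the true gain and offset, mirroring how the joint scheme avoids baking misalignment into the synthesis network.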
Affiliation(s)
- Xia Li
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
  - Department of Computer Science, ETH Zürich, Zürich, Switzerland
- Renato Bellotti
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
  - Department of Physics, ETH Zürich, Zürich, Switzerland
- Barbara Bachtiary
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Jan Hrbacek
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Damien C Weber
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
  - Department of Radiation Oncology, University Hospital of Zürich, Zürich, Switzerland
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Antony J Lomax
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
  - Department of Physics, ETH Zürich, Zürich, Switzerland
- Ye Zhang
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
4
Zhao Y, Wang X, Phan J, Chen X, Lee A, Yu C, Huang K, Court LE, Pan T, Wang H, Wahid KA, Mohamed ASR, Naser M, Fuller CD, Yang J. Multi-modal segmentation with missing image data for automatic delineation of gross tumor volumes in head and neck cancers. Med Phys 2024; 51:7295-7307. [PMID: 38896829] [PMCID: PMC11479854] [DOI: 10.1002/mp.17260]
Abstract
BACKGROUND Head and neck (HN) gross tumor volume (GTV) auto-segmentation is challenging due to the morphological complexity and low image contrast of targets. Multi-modality images, including computed tomography (CT) and positron emission tomography (PET), are used in routine clinical practice to assist radiation oncologists in accurate GTV delineation. However, the availability of PET imaging may not always be guaranteed. PURPOSE To develop a deep learning segmentation framework for automated GTV delineation of HN cancers using combined PET/CT images, while addressing the challenge of missing PET data. METHODS Two datasets were included in this study: Dataset I, 524 (training) and 359 (testing) oropharyngeal cancer patients from different institutions with PET/CT pairs provided by the HECKTOR Challenge; Dataset II, 90 HN patients (testing) from a local institution with planning CT and PET/CT pairs. To handle potentially missing PET images, a model training strategy named the "Blank Channel" method was implemented. To simulate the absence of a PET image, a blank array with the same dimensions as the CT image was generated to meet the dual-channel input requirement of the deep learning model. During training, the model was randomly presented with either a real PET/CT pair or a blank/CT pair, allowing it to learn the relationship between the CT image and the corresponding GTV delineation from whichever modalities were available. As a result, our model can handle flexible inputs during prediction, making it suitable for cases where PET images are missing. To evaluate the proposed model, we trained it on the training patients from Dataset I and tested it with Dataset II. We compared our model (Model 1) with two other models trained for specific modality segmentations: Model 2, trained with only CT images, and Model 3, trained with real PET/CT pairs.
The performance of the models was evaluated using quantitative metrics, including the Dice similarity coefficient (DSC), mean surface distance (MSD), and 95% Hausdorff distance (HD95). In addition, we evaluated Model 1 and Model 3 using the 359 test cases in Dataset I. RESULTS Our proposed model (Model 1) achieved promising results for GTV auto-segmentation using PET/CT images, with the flexibility of missing PET images. Specifically, when assessed with only CT images in Dataset II, Model 1 achieved a DSC of 0.56 ± 0.16, MSD of 3.4 ± 2.1 mm, and HD95 of 13.9 ± 7.6 mm. When PET images were included, performance improved to a DSC of 0.62 ± 0.14, MSD of 2.8 ± 1.7 mm, and HD95 of 10.5 ± 6.5 mm. These results are comparable to those achieved by Model 2 and Model 3, illustrating Model 1's effectiveness in utilizing flexible input modalities. Further analysis on the test dataset from Dataset I showed that Model 1 achieved an average DSC of 0.77, surpassing the overall average DSC of 0.72 among all participants in the HECKTOR Challenge. CONCLUSIONS We successfully refined a multi-modal segmentation tool for accurate GTV delineation in HN cancer. Our method addresses missing PET images by allowing flexible data input, providing a practical solution for clinical settings where access to PET imaging may be limited.
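The "Blank Channel" input handling described above is simple to sketch. The function names below (`build_input`, `training_sample`) are illustrative, not from the paper; the key ideas are the zero-filled substitute channel and the random PET drop during training.

```python
import numpy as np

def build_input(ct, pet=None):
    """Stack CT and PET into a dual-channel array; when PET is missing,
    substitute a zero-filled array of the same shape (the "blank channel")."""
    if pet is None:
        pet = np.zeros_like(ct)
    return np.stack([ct, pet], axis=0)

def training_sample(ct, pet, p_blank=0.5, rng=None):
    """Randomly present either a real PET/CT pair or a blank/CT pair so the
    model learns to segment from whichever modalities are available."""
    rng = rng or np.random.default_rng()
    return build_input(ct, None if rng.random() < p_blank else pet)
```

At prediction time the same `build_input` accepts `pet=None`, which is what lets one trained model serve both CT-only and PET/CT cases.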
Affiliation(s)
- Yao Zhao
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
  - UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Xin Wang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
  - UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Jack Phan
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Xinru Chen
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
  - UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Anna Lee
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Cenji Yu
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
  - UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kai Huang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
  - UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Laurence E. Court
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
  - UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Tinsu Pan
  - UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
  - Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- He Wang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
  - UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kareem Abdul Wahid
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Abdalah S R Mohamed
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Mohamed Naser
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Clifton D Fuller
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Jinzhong Yang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
  - UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
5
Villegas F, Dal Bello R, Alvarez-Andres E, Dhont J, Janssen T, Milan L, Robert C, Salagean GAM, Tejedor N, Trnková P, Fusella M, Placidi L, Cusumano D. Challenges and opportunities in the development and clinical implementation of artificial intelligence based synthetic computed tomography for magnetic resonance only radiotherapy. Radiother Oncol 2024; 198:110387. [PMID: 38885905] [DOI: 10.1016/j.radonc.2024.110387]
Abstract
Synthetic computed tomography (sCT) generated from magnetic resonance imaging (MRI) can serve as a substitute for planning CT in radiation therapy (RT), thereby removing the registration uncertainties associated with multi-modality image pairing and reducing costs and patient radiation exposure. CE-/FDA-approved sCT solutions are now available for the pelvis, brain, and head and neck, while more complex deep learning (DL) algorithms are under investigation for other anatomic sites. The main challenge to widespread clinical implementation of sCT lies in the absence of consensus on sCT commissioning and quality assurance (QA), resulting in variation in sCT approaches across hospitals. To address this issue, a group of experts gathered at the ESTRO Physics Workshop 2022 to discuss the integration of sCT solutions into the clinic; this position paper reports the process and its outcomes, focusing on sCT development and commissioning and outlining key elements crucial for the safe implementation of an MRI-only RT workflow.
Affiliation(s)
- Fernanda Villegas
  - Department of Oncology-Pathology, Karolinska Institute, Solna, Sweden; Radiotherapy Physics and Engineering, Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Solna, Sweden
- Riccardo Dal Bello
  - Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Emilie Alvarez-Andres
  - OncoRay - National Center for Radiation Research in Oncology, Medical Faculty and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany; Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Jennifer Dhont
  - Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (H.U.B), Institut Jules Bordet, Department of Medical Physics, Brussels, Belgium; Université Libre De Bruxelles (ULB), Radiophysics and MRI Physics Laboratory, Brussels, Belgium
- Tomas Janssen
  - Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Lisa Milan
  - Medical Physics Unit, Imaging Institute of Southern Switzerland (IIMSI), Ente Ospedaliero Cantonale, Bellinzona, Switzerland
- Charlotte Robert
  - UMR 1030 Molecular Radiotherapy and Therapeutic Innovations, ImmunoRadAI, Paris-Saclay University, Institut Gustave Roussy, Inserm, Villejuif, France; Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Ghizela-Ana-Maria Salagean
  - Faculty of Physics, Babes-Bolyai University, Cluj-Napoca, Romania; Department of Radiation Oncology, TopMed Medical Centre, Targu Mures, Romania
- Natalia Tejedor
  - Department of Medical Physics and Radiation Protection, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Petra Trnková
  - Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Marco Fusella
  - Department of Radiation Oncology, Abano Terme Hospital, Italy
- Lorenzo Placidi
  - Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Rome, Italy
- Davide Cusumano
  - Mater Olbia Hospital, Strada Statale Orientale Sarda 125, Olbia, Sassari, Italy
6
Chen X, Zhao Y, Court LE, Wang H, Pan T, Phan J, Wang X, Ding Y, Yang J. SC-GAN: Structure-completion generative adversarial network for synthetic CT generation from MR images with truncated anatomy. Comput Med Imaging Graph 2024; 113:102353. [PMID: 38387114] [DOI: 10.1016/j.compmedimag.2024.102353]
Abstract
Creating synthetic CT (sCT) from magnetic resonance (MR) images enables MR-based treatment planning in radiation therapy. However, the MR images used for MR-guided adaptive planning are often truncated in the boundary regions due to the limited field of view and the need for sequence optimization. Consequently, the sCT generated from these truncated MR images lacks complete anatomic information, leading to dose calculation errors in MR-based adaptive planning. We propose a novel structure-completion generative adversarial network (SC-GAN) to generate sCT with full anatomic detail from truncated MR images. To enable anatomy compensation, we expand the input channels of the CT generator to include a body mask and introduce a truncation loss between sCT and real CT. The body mask for each patient was automatically created from the simulation CT scans and transferred to the daily MR images by rigid registration as an additional input to SC-GAN alongside the MR images. The truncation loss was constructed by implementing either an auto-segmentor or an edge detector to penalize the difference in body outlines between sCT and real CT. Experimental results show that SC-GAN achieved much improved sCT accuracy in both truncated and untruncated regions compared with the original cycleGAN and conditional GAN methods.
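The truncation-loss idea (penalizing disagreement between the body outlines of synthetic and real CT) can be sketched with a crude HU-threshold body mask standing in for the paper's auto-segmentor or edge detector; the threshold value and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def body_mask(img, thresh=-400.0):
    """Crude body mask for a CT-like image in HU: everything denser than an
    air/tissue threshold counts as body (stand-in for an auto-segmentor)."""
    return (img > thresh).astype(np.float64)

def truncation_loss(sct, real_ct, thresh=-400.0):
    """1 - Dice overlap of the two body masks: 0 when the outlines agree,
    growing toward 1 as the synthetic body outline diverges from the real one."""
    m_s, m_r = body_mask(sct, thresh), body_mask(real_ct, thresh)
    inter = float((m_s * m_r).sum())
    return 1.0 - 2.0 * inter / (float(m_s.sum() + m_r.sum()) + 1e-8)
```

Added to the usual adversarial and reconstruction terms, such a loss explicitly rewards the generator for completing anatomy the truncated MR input never showed.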
Affiliation(s)
- Xinru Chen
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Yao Zhao
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Laurence E Court
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- He Wang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Tinsu Pan
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Jack Phan
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Xin Wang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Yao Ding
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jinzhong Yang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
7
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. [PMID: 38601888] [PMCID: PMC11004271] [DOI: 10.3389/fradi.2024.1385742]
Abstract
The aim of this systematic review is to determine whether deep learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic computed tomography (sCT). The following categories are covered:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images from cone beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
We reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of DL-based approaches to inter-modality and intra-modality image synthesis, contrasting the presented methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges identified, and accomplishments summarized. Finally, statistics of the cited works were analyzed from various aspects, revealing that DL-based sCT has achieved considerable popularity while also demonstrating the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
  - Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
Collapse
|
8
|
Gong C, Huang Y, Luo M, Cao S, Gong X, Ding S, Yuan X, Zheng W, Zhang Y. Channel-wise attention enhanced and structural similarity constrained cycleGAN for effective synthetic CT generation from head and neck MRI images. Radiat Oncol 2024; 19:37. [PMID: 38486193] [PMCID: PMC10938692] [DOI: 10.1186/s13014-024-02429-2]
Abstract
BACKGROUND Magnetic resonance imaging (MRI) plays an increasingly important role in radiotherapy, enhancing the accuracy of target and organ-at-risk delineation, but the absence of electron density information limits its further clinical application. The aim of this study was therefore to develop and evaluate a novel unsupervised network (cycleSimulationGAN) for unpaired MR-to-CT synthesis. METHODS The proposed cycleSimulationGAN integrates a contour consistency loss function and a channel-wise attention mechanism to synthesize high-quality CT-like images. Specifically, cycleSimulationGAN constrains the structural similarity between the synthetic and input images for better structural retention. Additionally, we equip the traditional GAN generator with a novel channel-wise attention mechanism to enhance the feature representation capability of the deep network and extract more effective features. The mean absolute error (MAE) in Hounsfield units (HU), peak signal-to-noise ratio (PSNR), root-mean-square error (RMSE), and structural similarity index (SSIM) were calculated between synthetic CT (sCT) and ground truth (GT) CT images to quantify overall sCT performance. RESULTS One hundred and sixty nasopharyngeal carcinoma (NPC) patients who underwent volumetric-modulated arc radiotherapy (VMAT) were enrolled in this study. On visual inspection, the sCTs generated by our method were more consistent with the GT than those of other methods. The average MAE, RMSE, PSNR, and SSIM calculated over twenty patients were 61.88 ± 1.42 HU, 116.85 ± 3.42 HU, 36.23 ± 0.52, and 0.985 ± 0.002 for the proposed method. All four image quality assessment metrics were significantly improved by our approach compared to conventional cycleGAN; the proposed cycleSimulationGAN produced significantly better synthetic results except for SSIM in bone.
CONCLUSIONS We developed a novel cycleSimulationGAN model that can effectively generate sCT images comparable to GT images, which could potentially benefit MRI-based treatment planning.
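As a rough illustration of the four quality metrics reported above (MAE, RMSE, PSNR, SSIM), here is a minimal NumPy sketch. This is not the authors' code: the 2000 HU `data_range` is an assumed dynamic range, and the SSIM is a simplified single-window version (published values normally come from a sliding-window implementation such as scikit-image's `structural_similarity`):

```python
import numpy as np

def sct_quality(sct, gt, data_range=2000.0):
    """MAE, RMSE, PSNR, and global SSIM between an sCT and a GT CT.

    `sct` and `gt` are HU arrays of equal shape; `data_range` is the
    assumed intensity range used for PSNR and the SSIM constants.
    """
    sct = np.asarray(sct, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    diff = sct - gt
    mae = np.abs(diff).mean()
    rmse = np.sqrt((diff ** 2).mean())
    psnr = 20.0 * np.log10(data_range / rmse)  # dB; undefined for identical images
    # Simplified global SSIM: one window over the whole image, not the
    # usual 11x11 sliding window of the original SSIM formulation.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = sct.mean(), gt.mean()
    var_x, var_y = sct.var(), gt.var()
    cov = ((sct - mu_x) * (gt - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return mae, rmse, psnr, ssim
```

A constant +10 HU bias, for example, yields MAE = RMSE = 10 and an SSIM just below 1, which matches the intuition that a uniform shift barely disturbs structure.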
Affiliation(s)
- Changfei Gong
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Yuling Huang
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Mingming Luo
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Shunxiang Cao
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Xiaochang Gong
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma, Nanchang, Jiangxi, PR China
- Shenggou Ding
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Xingxing Yuan
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- Wenheng Zheng
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Yun Zhang
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma, Nanchang, Jiangxi, PR China

9
Podobnik G, Ibragimov B, Peterlin P, Strojan P, Vrtovec T. vOARiability: Interobserver and intermodality variability analysis in OAR contouring from head and neck CT and MR images. Med Phys 2024; 51:2175-2186. [PMID: 38230752] [DOI: 10.1002/mp.16924]
Abstract
BACKGROUND Accurate and consistent contouring of organs-at-risk (OARs) from medical images is a key step of radiotherapy (RT) cancer treatment planning. Most contouring approaches rely on computed tomography (CT) images, but the integration of the complementary magnetic resonance (MR) modality is highly recommended, especially from the perspective of OAR contouring, synthetic CT and MR image generation for MR-only RT, and MR-guided RT. Although MR has been recognized as valuable for contouring OARs in the head and neck (HaN) region, the accuracy and consistency of the resulting contours have not yet been objectively evaluated. PURPOSE To analyze the interobserver and intermodality variability in contouring OARs in the HaN region, performed by observers with different levels of experience from CT and MR images of the same patients. METHODS In the final cohort of 27 CT and MR images of the same patients, contours of up to 31 OARs were obtained by a radiation oncology resident (junior observer, JO) and a board-certified radiation oncologist (senior observer, SO). The resulting contours were then evaluated in terms of interobserver variability, characterized as the agreement among different observers (JO and SO) when contouring OARs in a selected modality (CT or MR), and intermodality variability, characterized as the agreement among different modalities (CT and MR) when OARs were contoured by a selected observer (JO or SO), both by the Dice coefficient (DC) and the 95th-percentile Hausdorff distance (HD95). RESULTS The mean (± standard deviation) interobserver variability was 69.0 ± 20.2% and 5.1 ± 4.1 mm, while the mean intermodality variability was 61.6 ± 19.0% and 6.1 ± 4.3 mm in terms of DC and HD95, respectively, across all OARs. Statistically significant differences were only found for specific OARs.
The performed MR to CT image registration resulted in a mean target registration error of 1.7 ± 0.5 mm, which was considered valid for the analysis of intermodality variability. CONCLUSIONS The contouring variability was, in general, similar for both image modalities, and experience did not considerably affect contouring performance. However, the results indicate that an OAR that is difficult to contour is difficult regardless of whether it is contoured in the CT or MR image, and that observer experience may be an important factor for OARs that are deemed difficult to contour. Several of the differences in the resulting variability can also be attributed to adherence to guidelines, especially for OARs with poor visibility or without distinctive boundaries in either CT or MR images. Although considerable contouring differences were observed for specific OARs, it can be concluded that almost all OARs can be contoured with a similar degree of variability in either the CT or MR modality, which works in favor of MR images from the perspective of MR-only and MR-guided RT.
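For reference, the two agreement measures used above (DC and HD95) can be sketched for binary masks as follows. This is a hypothetical brute-force NumPy illustration, not the authors' implementation: it measures distances over all foreground voxels rather than extracted surfaces, so it is only suitable for small masks (production toolkits use distance transforms instead):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean masks of equal shape."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b, spacing=1.0):
    """Symmetric 95th-percentile Hausdorff distance between two masks.

    Brute force over all foreground voxel coordinates (O(n*m) memory);
    `spacing` converts voxel units to mm for isotropic grids.
    """
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(axis=-1))
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

Shifting a 5x5 square mask by one voxel, for instance, gives DC = 0.8 and HD95 = 1 voxel, illustrating why the two metrics are usually reported together: one captures volumetric overlap, the other boundary disagreement.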
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia

10
Gay SS, Cardenas CE, Nguyen C, Netherton TJ, Yu C, Zhao Y, Skett S, Patel T, Adjogatse D, Guerrero Urbano T, Naidoo K, Beadle BM, Yang J, Aggarwal A, Court LE. Fully-automated, CT-only GTV contouring for palliative head and neck radiotherapy. Sci Rep 2023; 13:21797. [PMID: 38066074] [PMCID: PMC10709623] [DOI: 10.1038/s41598-023-48944-2]
Abstract
Planning for palliative radiotherapy is performed without the advantage of MR or PET imaging in many clinics. Here, we investigated CT-only GTV delineation for palliative treatment of head and neck cancer. Two multi-institutional datasets of palliative-intent treatment plans were retrospectively acquired: a set of 102 non-contrast-enhanced CTs and a set of 96 contrast-enhanced CTs. The nnU-Net auto-segmentation network was chosen for its strength in medical image segmentation, and five approaches were trained separately: (1) heuristically cropped non-contrast images with a single GTV channel, (2) non-contrast images cropped around a manually placed point in the tumor center, with a single GTV channel, (3) contrast-enhanced images with a single GTV channel, (4) contrast-enhanced images with separate primary and nodal GTV channels, and (5) contrast-enhanced images along with synthetic MR images, with separate primary and nodal GTV channels. Median Dice similarity coefficient ranged from 0.6 to 0.7, surface Dice from 0.30 to 0.56, and 95th-percentile Hausdorff distance from 14.7 to 19.7 mm across the five approaches. Only surface Dice exhibited a statistically significant difference across these five approaches using a two-tailed Wilcoxon rank-sum test (p ≤ 0.05). Our CT-only results met or exceeded published values for head and neck GTV autocontouring using multi-modality images. However, significant edits would be necessary before clinical use in palliative radiotherapy.
Affiliation(s)
- Skylar S Gay
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Carlos E Cardenas
- Department of Radiation Oncology, The University of Alabama at Birmingham, Birmingham, AL, USA
- Callistus Nguyen
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Tucker J Netherton
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Cenji Yu
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Yao Zhao
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Jinzhong Yang
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Laurence E Court
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA

11
Baroudi H, Chen X, Cao W, El Basha MD, Gay S, Gronberg MP, Hernandez S, Huang K, Kaffey Z, Melancon AD, Mumme RP, Sjogreen C, Tsai JY, Yu C, Court LE, Pino R, Zhao Y. Synthetic Megavoltage Cone Beam Computed Tomography Image Generation for Improved Contouring Accuracy of Cardiac Pacemakers. J Imaging 2023; 9:245. [PMID: 37998092] [PMCID: PMC10672228] [DOI: 10.3390/jimaging9110245]
Abstract
In this study, we aimed to enhance the contouring accuracy of cardiac pacemakers by improving their visualization, using deep learning models to predict MV CBCT images from kV CT or CBCT images. Ten pacemakers and four thorax phantoms were included, creating a total of 35 combinations. Each combination was imaged on a Varian Halcyon (kV/MV CBCT images) and a Siemens SOMATOM CT scanner (kV CT images). Two generative adversarial network (GAN)-based models, cycleGAN and conditional GAN (cGAN), were trained to generate synthetic MV (sMV) CBCT images from kV CT/CBCT images using twenty-eight datasets (80%). The pacemakers in the sMV CBCT images and the original MV CBCT images were manually delineated and reviewed by three users. The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and mean surface distance (MSD) were used to compare contour accuracy. Visual inspection showed improved visualization of pacemakers on sMV CBCT images compared to the original kV CT/CBCT images. Moreover, cGAN demonstrated superior performance in enhancing pacemaker visualization compared to cycleGAN. With the cGAN model, the mean DSC, HD95, and MSD for contours on sMV CBCT images generated from kV CT/CBCT images were 0.91 ± 0.02/0.92 ± 0.01, 1.38 ± 0.31 mm/1.18 ± 0.20 mm, and 0.42 ± 0.07 mm/0.36 ± 0.06 mm. Deep learning-based methods, specifically cycleGAN and cGAN, can effectively enhance the visualization of pacemakers in thorax kV CT/CBCT images, thereby improving the contouring precision of these devices.
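The third metric above, mean surface distance (MSD), complements DSC and HD95 by averaging boundary disagreement instead of taking a percentile. A hypothetical NumPy sketch, not the authors' implementation, which approximates the surface by all foreground voxels and therefore differs from a true surface-based MSD:

```python
import numpy as np

def msd(a, b, spacing=1.0):
    """Symmetric mean surface distance between two boolean masks.

    Brute force over all foreground voxel coordinates; `spacing`
    converts voxel units to mm for isotropic grids.
    """
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(axis=-1))
    # Average nearest-neighbor distance in both directions, then average.
    return (d.min(axis=1).mean() + d.min(axis=0).mean()) / 2.0
```

For the same one-voxel shift of a 5x5 square mask that gives HD95 = 1, the MSD is only 0.2 voxels, showing how the mean smooths out localized boundary errors that a percentile distance flags.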
Affiliation(s)
- Hana Baroudi
- MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas, Houston, TX 77030, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Xinru Chen
- MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas, Houston, TX 77030, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Wenhua Cao
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Mohammad D. El Basha
- MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas, Houston, TX 77030, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Skylar Gay
- MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas, Houston, TX 77030, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Mary Peters Gronberg
- MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas, Houston, TX 77030, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Soleil Hernandez
- MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas, Houston, TX 77030, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Kai Huang
- MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas, Houston, TX 77030, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Zaphanlene Kaffey
- MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas, Houston, TX 77030, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Adam D. Melancon
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Raymond P. Mumme
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Carlos Sjogreen
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- January Y. Tsai
- Department of Anesthesiology and Perioperative Medicine, Division of Anesthesiology, Critical Care Medicine and Pain Medicine, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Cenji Yu
- MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas, Houston, TX 77030, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Laurence E. Court
- MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas, Houston, TX 77030, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Ramiro Pino
- Department of Radiation Oncology, Houston Methodist Hospital, Houston, TX 77030, USA
- Yao Zhao
- MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, The University of Texas, Houston, TX 77030, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA

12
Liu C, Liu Z, Holmes J, Zhang L, Zhang L, Ding Y, Shu P, Wu Z, Dai H, Li Y, Shen D, Liu N, Li Q, Li X, Zhu D, Liu T, Liu W. Artificial general intelligence for radiation oncology. Meta-Radiology 2023; 1:100045. [PMID: 38344271] [PMCID: PMC10857824] [DOI: 10.1016/j.metrad.2023.100045]
Abstract
The emergence of artificial general intelligence (AGI) is transforming radiation oncology. As prominent vanguards of AGI, large language models (LLMs) such as GPT-4 and PaLM 2 can process extensive texts and large vision models (LVMs) such as the Segment Anything Model (SAM) can process extensive imaging data to enhance the efficiency and precision of radiation therapy. This paper explores full-spectrum applications of AGI across radiation oncology including initial consultation, simulation, treatment planning, treatment delivery, treatment verification, and patient follow-up. The fusion of vision data with LLMs also creates powerful multimodal models that elucidate nuanced clinical patterns. Together, AGI promises to catalyze a shift towards data-driven, personalized radiation therapy. However, these models should complement human expertise and care. This paper provides an overview of how AGI can transform radiation oncology to elevate the standard of patient care in radiation oncology, with the key insight being AGI's ability to exploit multimodal clinical data at scale.
Affiliation(s)
- Chenbin Liu
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, Guangdong, China
- Jason Holmes
- Department of Radiation Oncology, Mayo Clinic, USA
- Lu Zhang
- Department of Computer Science and Engineering, The University of Texas at Arlington, USA
- Lian Zhang
- Department of Radiation Oncology, Mayo Clinic, USA
- Yuzhen Ding
- Department of Radiation Oncology, Mayo Clinic, USA
- Peng Shu
- School of Computing, University of Georgia, USA
- Zihao Wu
- School of Computing, University of Georgia, USA
- Haixing Dai
- School of Computing, University of Georgia, USA
- Yiwei Li
- School of Computing, University of Georgia, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, China
- Shanghai United Imaging Intelligence Co., Ltd, China
- Shanghai Clinical Research and Trial Center, China
- Ninghao Liu
- School of Computing, University of Georgia, USA
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Xiang Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Dajiang Zhu
- Department of Computer Science and Engineering, The University of Texas at Arlington, USA
- Wei Liu
- Department of Radiation Oncology, Mayo Clinic, USA

13
McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023; 10:1078. [PMID: 37760180] [PMCID: PMC10525905] [DOI: 10.3390/bioengineering10091078]
Abstract
BACKGROUND CT scans are often the first and only form of brain imaging performed to inform treatment plans for neurological patients, owing to their time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies that use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all published since 2017. Of these, 74% investigated MRI-to-CT synthesis, and the remaining studies investigated CT-to-MRI, cross-MRI, PET-to-CT, and MRI-to-PET synthesis. Additionally, 58% of the studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI-to-CT synthesis, despite CT-to-MRI synthesis yielding specific benefits. A limitation on medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and make available more datasets. Finally, it is recommended that work be carried out to establish all uses of synthesized medical scans in clinical practice and to discover which evaluation methods are suitable for assessing the synthesized images for these needs.
Affiliation(s)
- Jake McNaughton
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand