1. Salari E, Wang J, Wynne JF, Chang C, Wu Y, Yang X. Artificial intelligence-based motion tracking in cancer radiotherapy: A review. J Appl Clin Med Phys 2024; 25:e14500. [PMID: 39194360] [PMCID: PMC11540048] [DOI: 10.1002/acm2.14500]
Abstract
Radiotherapy aims to deliver a prescribed dose to the tumor while sparing neighboring organs at risk (OARs). Increasingly complex treatment techniques such as volumetric modulated arc therapy (VMAT), stereotactic radiosurgery (SRS), stereotactic body radiotherapy (SBRT), and proton therapy have been developed to deliver doses more precisely to the target. While such technologies have improved dose delivery, the implementation of intra-fraction motion management to verify tumor position at the time of treatment has become increasingly relevant. Artificial intelligence (AI) has recently demonstrated great potential for real-time tracking of tumors during treatment. However, AI-based motion management faces several challenges, including bias in training data, poor transparency, difficult data collection, complex workflows and quality assurance, and limited sample sizes. This review presents the AI algorithms used for chest, abdomen, and pelvic tumor motion management/tracking for radiotherapy and provides a literature summary on the topic. We will also discuss the limitations of these AI-based studies and propose potential improvements.
Affiliation(s)
- Elahheh Salari
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Chih‐Wei Chang
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Yizhou Wu
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
2. Mori S, Hirai R, Sakata Y, Tachibana Y, Koto M, Ishikawa H. Deep neural network-based synthetic image digital fluoroscopy using digitally reconstructed tomography. Phys Eng Sci Med 2023; 46:1227-1237. [PMID: 37349631] [DOI: 10.1007/s13246-023-01290-z]
Abstract
We developed a deep neural network (DNN) to generate X-ray flat panel detector (FPD) images from digitally reconstructed radiographic (DRR) images. FPD and treatment planning CT images were acquired from patients with prostate and head and neck (H&N) malignancies. The DNN parameters were optimized for FPD image synthesis. The synthetic FPD images were evaluated against the corresponding ground-truth FPD images using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). The image quality of the synthetic FPD images was also compared with that of the DRR images to assess the performance of our DNN. For the prostate cases, the MAE of the synthetic FPD images (0.12 ± 0.02) was improved over that of the input DRR images (0.35 ± 0.08). The synthetic FPD images showed higher PSNRs (16.81 ± 1.54 dB) than the DRR images (8.74 ± 1.56 dB), while the SSIMs of both images (0.69) were almost the same. All metrics for the synthetic FPD images of the H&N cases (MAE 0.08 ± 0.03, PSNR 19.40 ± 2.83 dB, SSIM 0.80 ± 0.04) were improved over those of the DRR images (MAE 0.48 ± 0.11, PSNR 5.74 ± 1.63 dB, SSIM 0.52 ± 0.09). Our DNN successfully generated FPD images from DRR images. This technique could increase throughput when images from two different modalities are compared by visual inspection.
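The MAE and PSNR comparisons reported in this abstract are straightforward to reproduce. A minimal pure-Python sketch follows; the toy flattened "images" and the assumption of intensities normalized to [0, 1] are illustrative, not the authors' implementation:

```python
import math

def mae(a, b):
    """Mean absolute error between two equal-size images flattened to lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to ground truth."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 2x2 "images" flattened to lists; real FPD/DRR comparisons run on full detector arrays.
truth = [0.2, 0.4, 0.6, 0.8]      # ground-truth FPD image
synth = [0.25, 0.35, 0.65, 0.75]  # synthetic FPD image from the DNN
print(round(mae(truth, synth), 3))    # → 0.05 (lower is better)
print(round(psnr(truth, synth), 2))   # → 26.02 dB (higher is better)
```

SSIM additionally compares local means, variances, and covariance, so in practice all three metrics would come from an image-processing library rather than hand-rolled code.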
Affiliation(s)
- Shinichiro Mori
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan
- Ryusuke Hirai
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yukinobu Sakata
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yasuhiko Tachibana
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan
- Masashi Koto
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
- Hitoshi Ishikawa
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
3. Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599] [DOI: 10.1016/j.compbiomed.2022.106496]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are built for a single specific task, multi-task deep learning (MTDL), which can accomplish two or more tasks simultaneously, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits: improved performance, enhanced generalizability, and reduced overall computational cost. This review focuses on advanced applications of MTDL for medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has flourished and demonstrated outstanding performance in many tasks, performance gaps remain in others, and accordingly we identify the open challenges and prospective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the top Dice score of 0.51 and top recall of 0.55 reported for the cascaded MTDL model indicate that further research is needed to improve the performance of current models.
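The segmentation metrics quoted above (Dice score and recall) have standard definitions that can be computed directly from binary masks. A minimal sketch, not tied to any particular MTDL model; the six-voxel masks are toy data:

```python
def dice_and_recall(pred, truth):
    """Dice coefficient and recall (sensitivity) for flat binary masks (0/1 values)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)  # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)  # false positives
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)  # false negatives
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return dice, recall

# Toy 6-voxel masks: predicted lesion vs. ground-truth lesion.
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
d, r = dice_and_recall(pred, truth)
print(round(d, 3), round(r, 3))  # → 0.667 0.667
```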
Affiliation(s)
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
4. Zhang X, He P, Li Y, Liu X, Ma Y, Shen G, Dai Z, Zhang H, Chen W, Li Q. DR-only Carbon-ion radiotherapy treatment planning via deep learning. Phys Med 2022; 100:120-128. [DOI: 10.1016/j.ejmp.2022.06.016]
5. Dasnoy-Sumell D, Aspeel A, Souris K, Macq B. Locally tuned deformation fields combination for 2D cine-MRI-based driving of 3D motion models. Phys Med 2021; 94:8-16. [PMID: 34968950] [DOI: 10.1016/j.ejmp.2021.12.010]
Abstract
PURPOSE To target mobile tumors in radiotherapy with the recent MR-Linac hardware solutions, research is being conducted to drive a 3D motion model with 2D cine-MRI to reproduce the breathing motion in 4D. This work presents a method that combines several deformation fields using local measures to better drive 3D motion models. METHODS The method uses weight maps, each representing the proximity to a specific area of interest. The breathing state is evaluated on cine-MRI frames in these areas, and a different deformation field is estimated for each using a 2D-to-3D motion model. The deformation fields are multiplied by their respective weight maps and combined to form the final field applied to a reference image. A global motion model is thus adjusted locally on the selected areas and creates a 3DCT for each cine-MRI frame. RESULTS Across the 13 patients on which it was tested, the method improved the accuracy of our model by 0.71 mm on average for areas selected to drive the model and by 0.5 mm for other areas, compared with our previous method without local adjustment. The additional computation time for each region was around 40 ms on a modern laptop. CONCLUSION The method improves the accuracy of the 2D-based driving of 3D motion models. It can be used on top of existing methods relying on deformation fields. It adds some computation time but, depending on the area to deform and the number of regions of interest, offers the potential for online use.
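The core combination step described in METHODS (per-region deformation fields blended by per-voxel weight maps, then applied to a reference image) can be sketched in a simplified form. This sketch uses scalar 1D displacements per voxel for readability; the field values and proximity weights are illustrative assumptions, not the authors' data:

```python
def combine_fields(fields, weight_maps):
    """Blend per-region deformation fields with per-voxel weight maps.

    fields: one displacement list per region of interest (one value per voxel);
    weight_maps: matching non-negative weights, normalized per voxel."""
    combined = []
    for v in range(len(fields[0])):
        w_sum = sum(w[v] for w in weight_maps)
        if w_sum == 0:
            combined.append(0.0)  # no region influences this voxel
        else:
            combined.append(sum(f[v] * w[v] for f, w in zip(fields, weight_maps)) / w_sum)
    return combined

# Two regions over three voxels: voxel 0 follows field A, voxel 2 follows field B,
# and voxel 1 is an even blend of the two.
field_a = [2.0, 2.0, 2.0]  # displacement (mm) estimated for region A
field_b = [0.0, 0.0, 0.0]  # displacement (mm) estimated for region B
w_a = [1.0, 0.5, 0.0]      # proximity weights for region A
w_b = [0.0, 0.5, 1.0]      # proximity weights for region B
print(combine_fields([field_a, field_b], [w_a, w_b]))  # → [2.0, 1.0, 0.0]
```

In the actual method each field is a full 3D vector field estimated from a cine-MRI frame, but the per-voxel weighted average is the same operation.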
Affiliation(s)
- D Dasnoy-Sumell
- Universite Catholique de Louvain, Institute of Information and Communication Technologies, Electronics and Applied Mathematics, ImagX-R Lab, Place du Levant 3 Box L5.03.02, 1348 Louvain-la-Neuve, Belgium
- A Aspeel
- Universite Catholique de Louvain, Institute of Information and Communication Technologies, Electronics and Applied Mathematics, ImagX-R Lab, Place du Levant 3 Box L5.03.02, 1348 Louvain-la-Neuve, Belgium
- K Souris
- Universite Catholique de Louvain, Institut de Recherche Experimentale et Clinique (IREC), Molecular Imaging, Radiotherapy and Oncology (MIRO), Avenue Hippocrate 54 Box B1.54.07, 1200 Brussels, Belgium
- B Macq
- Universite Catholique de Louvain, Institute of Information and Communication Technologies, Electronics and Applied Mathematics, ImagX-R Lab, Place du Levant 3 Box L5.03.02, 1348 Louvain-la-Neuve, Belgium
6. Mylonas A, Booth J, Nguyen DT. A review of artificial intelligence applications for motion tracking in radiotherapy. J Med Imaging Radiat Oncol 2021; 65:596-611. [PMID: 34288501] [DOI: 10.1111/1754-9485.13285]
Abstract
During radiotherapy, the organs and tumour move as a result of the dynamic nature of the body; this is known as intrafraction motion. Intrafraction motion can result in tumour underdose and healthy tissue overdose, reducing the effectiveness of the treatment while increasing toxicity to the patient. There is a growing appreciation of intrafraction target motion management in the radiation oncology community. Real-time image-guided radiation therapy (IGRT) can track the target and account for the motion, improving the radiation dose to the tumour and reducing the dose to healthy tissue. Recently, artificial intelligence (AI)-based approaches have been applied to motion management and have shown great potential. In this review, four main categories of AI-based motion management are summarised: marker-based tracking, markerless tracking, full-anatomy monitoring and motion prediction. Marker-based and markerless tracking approaches follow the individual target throughout the treatment. Full-anatomy algorithms monitor for intrafraction changes across the full anatomy within the field of view. Motion prediction algorithms account for the latency introduced by the time the system needs to localise, process and act.
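As an illustration of the last category, motion prediction in its simplest form extrapolates the target trajectory ahead by the known system latency. A minimal sketch assuming linear extrapolation from the last two samples; the sampling rate, latency, and 1D trace are hypothetical values, and real trackers use more sophisticated (often AI-based) predictors:

```python
def predict_position(history, latency, dt):
    """Linearly extrapolate the target position `latency` seconds ahead,
    using the velocity estimated from the last two samples (spaced `dt` seconds)."""
    if len(history) < 2:
        return history[-1]  # not enough samples to estimate velocity
    velocity = (history[-1] - history[-2]) / dt
    return history[-1] + velocity * latency

# Hypothetical 1D target trace (mm) sampled at 10 Hz, drifting 1 mm per sample;
# compensate a 0.3 s localise-process-act latency.
samples = [0.0, 1.0, 2.0, 3.0]
print(predict_position(samples, latency=0.3, dt=0.1))  # → 6.0
```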
Affiliation(s)
- Adam Mylonas
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia; School of Biomedical Engineering, University of Technology Sydney, Sydney, New South Wales, Australia
- Jeremy Booth
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia; Institute of Medical Physics, School of Physics, The University of Sydney, Sydney, New South Wales, Australia
- Doan Trang Nguyen
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia; School of Biomedical Engineering, University of Technology Sydney, Sydney, New South Wales, Australia; Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia
7. Barragán-Montero A, Javaid U, Valdés G, Nguyen D, Desbordes P, Macq B, Willems S, Vandewinckele L, Holmström M, Löfman F, Michiels S, Souris K, Sterpin E, Lee JA. Artificial intelligence and machine learning for medical imaging: A technology review. Phys Med 2021; 83:242-256. [PMID: 33979715] [PMCID: PMC8184621] [DOI: 10.1016/j.ejmp.2021.04.016]
Abstract
Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology or oncology, have seized the opportunity, and considerable research and development efforts have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to safe and efficient use of clinical AI applications lies, in part, with informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss new trends and future research directions. This will help the reader understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow, and pave the way for the clinical implementation of AI-based solutions.
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Umair Javaid
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, USA
- Paul Desbordes
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Benoit Macq
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Siri Willems
- ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
- Steven Michiels
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium; KU Leuven, Department of Oncology, Laboratory of Experimental Radiotherapy, Belgium
- John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium