1. Chen J, Qiu RLJ, Wang T, Momin S, Yang X. A review of artificial intelligence in brachytherapy. J Appl Clin Med Phys 2025; 26:e70034. PMID: 40014044. DOI: 10.1002/acm2.70034.
Abstract
Artificial intelligence (AI) has the potential to revolutionize brachytherapy's clinical workflow. This review comprehensively examines the application of AI, focusing on machine learning and deep learning, in various aspects of brachytherapy. We analyze AI's role in making brachytherapy treatments more personalized, efficient, and effective. The applications are systematically categorized into seven categories: imaging, preplanning, treatment planning, applicator reconstruction, quality assurance, outcome prediction, and real-time monitoring. Each major category is further subdivided based on cancer type or specific tasks, with detailed summaries of models, data sizes, and results presented in corresponding tables. Additionally, we discuss the limitations, challenges, and ethical concerns of current AI applications, along with perspectives on future directions. This review offers insights into the current advancements, challenges, and the impact of AI on treatment paradigms, encouraging further research to expand its clinical utility.
Affiliation(s)
- Jingchu Chen
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Shadab Momin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
2. Goulet M, Duguay-Drouin P, Mascolo-Fortin J, Mégrourèche J, Octave N, Tsui JMG. Clinical Application of Deep Learning-Assisted Needles Reconstruction in Prostate Ultrasound Brachytherapy. Int J Radiat Oncol Biol Phys 2025; 122:199-207. PMID: 39800329. DOI: 10.1016/j.ijrobp.2024.12.026.
Abstract
PURPOSE The high dose rate (HDR) prostate brachytherapy (BT) procedure requires image-guided needle insertion. Given that general anesthesia is often employed during the procedure, minimizing overall planning time is crucial. In this study, we explore the clinical feasibility and time-saving potential of artificial intelligence (AI)-driven auto-reconstruction of transperineal needles in the context of ultrasound (US)-guided prostate BT planning. METHODS AND MATERIALS This study included a total of 102 US-planned BT images from a single institution, split into 3 groups: 50 for model training and validation, 11 to evaluate reconstruction accuracy (test set), and 41 to evaluate the AI tool in a clinical implementation (clinical set). Reconstruction accuracy for the test set was evaluated by comparing AI-derived needles against needles manually reconstructed by 5 medical physicists on the post-treatment 3D-US scans. The total needle reconstruction time for the clinical set was defined as the timestamp difference from scan acquisition to the start of dose calculations and was compared with values recorded before the clinical implementation of the AI-assisted tool. RESULTS A mean error of (0.44 ± 0.32) mm was found between the AI-reconstructed and the human-consensus needle positions in the test set, with 95.0% of AI needle points falling within 1 mm of their human-made counterparts. Post-hoc analysis showed that only one of the human observers' reconstructions was significantly different from the others, including the AI's. In the clinical set, the AI algorithm achieved a true-positive reconstruction rate of 93.4%, with only 4.5% of these needles requiring manual corrections from the planner before dosimetry. The total time required to perform AI-assisted catheter reconstruction on clinical cases was on average 15.2 min lower (P < .01) compared with the procedure without AI assistance.
CONCLUSIONS This study demonstrates the feasibility of an AI-assisted needle reconstruction tool for 3D-US-based HDR prostate BT. This is a step toward treatment planning automation and increased efficiency in HDR prostate BT.
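The accuracy analysis reported above (mean point error and the fraction of AI needle points within 1 mm of the human consensus) can be sketched in a few lines of NumPy. The coordinates below are synthetic stand-ins, not study data, and `reconstruction_error` is an illustrative helper, not the authors' code.

```python
import numpy as np

def reconstruction_error(ai_points, ref_points):
    """Per-point Euclidean distance (mm) between AI-reconstructed and
    reference (consensus) needle points, both arrays of shape (N, 3).
    Returns the mean error, its SD, and the fraction of points < 1 mm."""
    d = np.linalg.norm(ai_points - ref_points, axis=1)
    return d.mean(), d.std(), float((d < 1.0).mean())

# Toy example: 100 consensus points perturbed by ~0.3 mm Gaussian noise.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 50.0, size=(100, 3))
ai = ref + rng.normal(0.0, 0.3, size=(100, 3))
mean_err, sd_err, frac_sub_mm = reconstruction_error(ai, ref)
```

With sub-millimeter noise, the mean error stays well below 1 mm and nearly all points fall inside the 1 mm tolerance, mirroring the structure (though not the values) of the paper's test-set analysis.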
Affiliation(s)
- Mathieu Goulet
- Département de radio-oncologie, CISSS de Chaudière-Appalaches, Lévis, Québec, Canada.
- Julia Mascolo-Fortin
- Département de radio-oncologie, CISSS de Chaudière-Appalaches, Lévis, Québec, Canada
- Julien Mégrourèche
- Département de radio-oncologie, CISSS de Chaudière-Appalaches, Lévis, Québec, Canada
- Nadia Octave
- Département de radio-oncologie, CISSS de Chaudière-Appalaches, Lévis, Québec, Canada
- James Man Git Tsui
- Department of Radiation Oncology, McGill University Health Center, Montreal, Québec, Canada
3. Geiger A, Bernhard L, Gassert F, Feußner H, Wilhelm D, Friess H, Jell A. Towards multimodal visualization of esophageal motility: fusion of manometry, impedance, and videofluoroscopic image sequences. Int J Comput Assist Radiol Surg 2025; 20:713-721. PMID: 39379641. PMCID: PMC12034594. DOI: 10.1007/s11548-024-03265-1.
Abstract
PURPOSE Dysphagia is difficulty or inability to swallow normally. Standard procedures for diagnosing the exact disease include X-ray videofluoroscopy, manometry, and impedance examinations, usually performed consecutively. To gain more insight, ongoing research aims to collect these different modalities at the same time, with the goal of presenting them in a joint visualization. One idea for creating a combined view is to project the manometry and impedance values onto the correct locations in the X-ray images, which requires identifying the exact sensor locations in the images. METHODS This work gives an overview of the challenges associated with the sensor detection task and proposes a robust approach to detecting the sensors in X-ray image sequences, ultimately allowing the manometry and impedance values to be projected onto the correct locations in the images. RESULTS The developed sensor detection approach is evaluated on a total of 14 sequences from different patients, achieving an F1-score of 86.36%. To demonstrate the robustness of the approach, a further study is performed by adding different levels of noise to the images; the performance of our sensor detection method decreases only slightly in these scenarios. This robust sensor detection provides the basis for accurately projecting manometry and impedance values onto the images, making it possible to create a multimodal visualization of the swallowing process. The resulting visualizations were evaluated qualitatively by domain experts, indicating a great benefit of the proposed fused visualization approach. CONCLUSION Using our preprocessing and sensor detection method, we show that the sensor detection task can be approached successfully with high accuracy. This allows the creation of a novel, multimodal visualization of esophageal motility, helping to provide more insight into patients' swallowing disorders.
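For reference, the F1-score reported above is the harmonic mean of detection precision and recall. A minimal sketch (the TP/FP/FN counts below are illustrative, not the paper's):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 is the harmonic mean of precision and recall for a detector."""
    precision = tp / (tp + fp)  # fraction of detections that are real sensors
    recall = tp / (tp + fn)     # fraction of real sensors that were detected
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 95 sensors detected correctly, 10 false alarms,
# 20 missed detections.
score = f1_score(95, 10, 20)  # ≈ 0.8636, i.e. an F1-score near 86.4%
```

An equivalent closed form is F1 = 2·TP / (2·TP + FP + FN), which makes clear that F1 ignores true negatives; that is why it suits detection tasks where "background" voxels or pixels vastly outnumber sensors.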
Affiliation(s)
- Alexander Geiger
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Research Group MITI, Munich, Germany.
- Lukas Bernhard
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Research Group MITI, Munich, Germany
- Florian Gassert
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Department of Radiology, Munich, Germany
- Hubertus Feußner
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Research Group MITI, Munich, Germany
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Department of Surgery, Munich, Germany
- Dirk Wilhelm
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Research Group MITI, Munich, Germany
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Department of Surgery, Munich, Germany
- Helmut Friess
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Department of Surgery, Munich, Germany
- Alissa Jell
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Research Group MITI, Munich, Germany
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Department of Surgery, Munich, Germany
4. Shi J, Chen J, He G, Peng Q. Artificial intelligence in high-dose-rate brachytherapy treatment planning for cervical cancer: a review. Front Oncol 2025; 15:1507592. PMID: 39931087. PMCID: PMC11808022. DOI: 10.3389/fonc.2025.1507592.
Abstract
Cervical cancer remains a significant global health concern, characterized by high morbidity and mortality rates. High-dose-rate brachytherapy (HDR-BT) is a critical component of cervical cancer treatment, requiring precise and efficient treatment planning. However, the process is labor-intensive, heavily reliant on operator expertise, and prone to variability due to factors such as applicator shifts and changes in organ filling. Recent advancements in artificial intelligence (AI), particularly in medical image processing, offer significant potential for automating and standardizing treatment planning in HDR-BT. This review examines the progress and challenges of AI applications in HDR-BT treatment planning, focusing on automatic segmentation, applicator reconstruction, dose calculation, and plan optimization. By addressing current limitations and exploring future directions, this paper aims to guide the integration of AI into clinical practice, ultimately improving treatment accuracy, reducing preparation time, and enhancing patient outcomes.
Affiliation(s)
- Junyue Shi
- Department of Nuclear Technology Application, China Institute of Atomic Energy, Beijing, China
- Department of Radiation Oncology, Foresea Life Insurance Guangzhou General Hospital, Guangzhou, China
- Jun Chen
- Department of Radiation Oncology, Foresea Life Insurance Guangzhou General Hospital, Guangzhou, China
- Gaokui He
- Department of Nuclear Technology Application, China Institute of Atomic Energy, Beijing, China
- Qinghe Peng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, China
5. Fionda B, Placidi E, de Ridder M, Strigari L, Patarnello S, Tanderup K, Hannoun-Levi JM, Siebert FA, Boldrini L, Gambacorta MA, De Spirito M, Sala E, Tagliaferri L. Artificial intelligence in interventional radiotherapy (brachytherapy): Enhancing patient-centered care and addressing patients' needs. Clin Transl Radiat Oncol 2024; 49:100865. PMID: 39381628. PMCID: PMC11459626. DOI: 10.1016/j.ctro.2024.100865.
Abstract
This review explores the integration of artificial intelligence (AI) in interventional radiotherapy (IRT), emphasizing its potential to streamline workflows and enhance patient care. Through a systematic analysis of 78 relevant papers spanning from 2002 to 2024, we identified significant advancements in contouring, treatment planning, outcome prediction, and quality assurance. AI-driven approaches offer promise in reducing procedural times, personalizing treatments, and improving treatment outcomes for oncological patients. However, challenges such as clinical validation and quality assurance protocols persist. Nonetheless, AI presents a transformative opportunity to optimize IRT and meet evolving patient needs.
Affiliation(s)
- Bruno Fionda
- Dipartimento di Diagnostica per Immagini e Radioterapia Oncologica, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Elisa Placidi
- Dipartimento di Diagnostica per Immagini e Radioterapia Oncologica, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Mischa de Ridder
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Lidia Strigari
- Department of Medical Physics, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Bologna, Italy
- Stefano Patarnello
- Real World Data Facility, Gemelli Generator, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Kari Tanderup
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Jean-Michel Hannoun-Levi
- Department of Radiation Oncology, Antoine Lacassagne Cancer Centre, University of Côte d’Azur, Nice, France
- Frank-André Siebert
- Clinic of Radiotherapy (Radiooncology), University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Luca Boldrini
- Dipartimento di Diagnostica per Immagini e Radioterapia Oncologica, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Maria Antonietta Gambacorta
- Dipartimento di Diagnostica per Immagini e Radioterapia Oncologica, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, Rome, Italy
- Marco De Spirito
- Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Dipartimento di Neuroscienze, Sezione di Fisica, Università Cattolica del Sacro Cuore, Rome, Italy
- Evis Sala
- Dipartimento di Diagnostica per Immagini e Radioterapia Oncologica, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, Rome, Italy
- Luca Tagliaferri
- Dipartimento di Diagnostica per Immagini e Radioterapia Oncologica, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Rome, Italy
- Istituto di Radiologia, Università Cattolica del Sacro Cuore, Rome, Italy
6. Chen J, Qiu RL, Wang T, Momin S, Yang X. A Review of Artificial Intelligence in Brachytherapy. arXiv 2024; arXiv:2409.16543v1. PMID: 39398213. PMCID: PMC11469420.
Abstract
Artificial intelligence (AI) has the potential to revolutionize brachytherapy's clinical workflow. This review comprehensively examines the application of AI, focusing on machine learning and deep learning, in facilitating various aspects of brachytherapy. We analyze AI's role in making brachytherapy treatments more personalized, efficient, and effective. The applications are systematically categorized into seven categories: imaging, preplanning, treatment planning, applicator reconstruction, quality assurance, outcome prediction, and real-time monitoring. Each major category is further subdivided based on cancer type or specific tasks, with detailed summaries of models, data sizes, and results presented in corresponding tables. This review offers insights into the current advancements, challenges, and the impact of AI on treatment paradigms, encouraging further research to expand its clinical utility.
Affiliation(s)
- Jingchu Chen
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065
- Shadab Momin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
7. Aleong AM, Berlin A, Borg J, Helou J, Beiki-Ardakani A, Rink A, Raman S, Chung P, Weersink RA. Rapid multi-catheter segmentation for magnetic resonance image-guided catheter-based interventions. Med Phys 2024; 51:5361-5373. PMID: 38713919. DOI: 10.1002/mp.17117.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) is the gold standard for delineating cancerous lesions in soft tissue. Catheter-based interventions require the accurate placement of multiple long, flexible catheters at the target site. The manual segmentation of catheters in MR images is a challenging and time-consuming task. There is a need for automated catheter segmentation to improve the efficiency of MR-guided procedures. PURPOSE To develop and assess a machine learning algorithm for the detection of multiple catheters in magnetic resonance images used during catheter-based interventions. METHODS In this work, a 3D U-Net was trained to retrospectively segment catheters in scans acquired during clinical MR-guided high dose rate (HDR) prostate brachytherapy cases. To assess confidence in segmentation, multiple AI models were trained. On clinical test cases, averaged segmentation results were used to plan the brachytherapy delivery, and dosimetric parameters were compared to the original clinical plan. Data were obtained from 35 patients who underwent HDR prostate brachytherapy for focal disease, with a total of 214 image volumes. Of these, 185 image volumes from 30 patients were used for training, with a five-fold cross-validation split dividing the data into training and validation sets; five trained models were generated to provide confidence measures of segmentation accuracy. The remaining five patients (29 volumes) were used to test the performance of the trained model by comparison to manual segmentations from three independent observers and by assessment of the dosimetric impact on the final clinical brachytherapy plans. RESULTS The network successfully identified 95% of catheters in the test set at a rate of 0.89 s per volume. The multi-model method identified the small number of cases where AI segmentation of individual catheters was poor, flagging the need for user input. AI-based segmentation performed as well as segmentations by the independent observers, and plan dosimetry using AI-segmented catheters was comparable to the original plan. CONCLUSION The vast majority of catheters were accurately identified by AI segmentation, with minimal impact on plan outcomes. The use of multiple AI models provided confidence in the segmentation accuracy and identified catheter segmentations that required further manual assessment. Real-time AI catheter segmentation can be used during MR-guided insertions to assess deflections and for rapid planning of prostate brachytherapy.
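The multi-model confidence idea described above can be sketched as a small ensemble vote: several independently trained models each predict a binary catheter mask, and low agreement among them flags the case for manual review. This is a schematic reconstruction under assumed shapes; `ensemble_confidence`, its 0.8 threshold, and the toy masks are all hypothetical, not the authors' implementation.

```python
import numpy as np

def ensemble_confidence(masks, agree_thresh=0.8):
    """masks: (n_models, *volume) boolean predictions from independently
    trained models. Returns the majority-vote mask and a flag that is
    True when inter-model agreement on the voted voxels is low."""
    vote = masks.mean(axis=0)          # fraction of models voting "catheter"
    consensus = vote >= 0.5            # majority-vote segmentation
    if not consensus.any():
        return consensus, True         # nothing found: needs review
    agreement = vote[consensus].mean() # mean vote strength inside consensus
    return consensus, bool(agreement < agree_thresh)

# Five toy model outputs over a small volume (synthetic, not study data):
rng = np.random.default_rng(1)
base = np.zeros((8, 8, 8), dtype=bool)
base[2:6, 3:5, 3:5] = True                     # a "catheter" block
masks = np.stack([base ^ (rng.random(base.shape) < 0.02) for _ in range(5)])
seg, needs_review = ensemble_confidence(masks)
```

With only 2% random disagreement per model, the vote strength inside the consensus stays high and the case is not flagged; raising the disagreement rate would trip the review flag, which is the behavior the paper exploits to catch poorly segmented catheters.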
Affiliation(s)
- Amanda M Aleong
- Institute of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada
- Alejandro Berlin
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Jette Borg
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Joelle Helou
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Akbar Beiki-Ardakani
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Alexandra Rink
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Srinivas Raman
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Peter Chung
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Robert A Weersink
- Institute of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada
- Department of Radiation Medicine, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
8. Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024; 34:180-196. PMID: 36376203. PMCID: PMC11156786. DOI: 10.1016/j.zemedi.2022.10.005.
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. In interventional radiotherapy (brachytherapy), however, deep learning is still in an early phase. In this review, we first investigated and scrutinised the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarised the most recent developments. For better understanding, we provide explanations of key terms and approaches to solving common deep learning problems. To reproduce the results of deep learning algorithms, both source code and training data must be available; therefore, a second focus of this work is the analysis of the availability of open source code, open data and open models. In our analysis, we were able to show that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing with the years, partly self-propelled but also influenced by closely related fields. Open source code, data and models are growing in number but are still scarce and unevenly distributed among different research groups. The reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. The conclusion of our analysis is that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement when it comes to reproducible results and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany.
- Ilias Sachpazidis
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
9. Ye Y, Liu Z, Zhu J, Wu J, Sun K, Peng Y, Qiu J, Gong L. Development trends and knowledge framework in the application of magnetic resonance imaging in prostate cancer: a bibliometric analysis from 1984 to 2022. Quant Imaging Med Surg 2023; 13:6761-6777. PMID: 37869318. PMCID: PMC10585509. DOI: 10.21037/qims-23-446.
Abstract
Background Prostate cancer (PCa) is the most common tumor of the male genitourinary system. With the development of imaging technology, the role of magnetic resonance imaging (MRI) in the management of PCa is increasing. The present study summarizes research on the application of MRI in the field of PCa using bibliometric analysis and predicts future research hotspots. Methods Articles regarding the application of MRI in PCa between January 1, 1984 and June 30, 2022 were selected from the Web of Science Core Collection (WoSCC) on November 6, 2022. Microsoft Excel 2016 and the Bibliometrix Biblioshiny R-package software were used for data analysis and bibliometric indicator extraction. CiteSpace (version 6.1.R3) was used to visualize literature feature clustering, including co-occurrence analysis of countries, institutions, authors, references, and burst keywords analysis. Results A total of 10,230 articles were included in the study. Turkbey was the most prolific author. The USA was the most productive country and had strong partnerships with other countries. The most productive institution was Memorial Sloan Kettering Cancer Center. Journal of Magnetic Resonance Imaging and Radiology were the most productive and highest impact factor (IF) journals in the field, respectively. Timeline views showed that "#1 multiparametric magnetic resonance imaging", "#4 pi-rads", and "#8 psma" were currently the latest research hotspots. Keywords burst analysis showed that "machine learning", "psa density", "multi parametric mri", "deep learning", and "artificial intelligence" were the most frequently used keywords in the past 3 years. Conclusions MRI has a wide range of applications in PCa. The USA is the leading country in this field, with a concentration of highly productive and high-level institutions. Meanwhile, it can be projected that "deep learning", "radiomics", and "artificial intelligence" will be research hotspots in the future.
Affiliation(s)
- Yinquan Ye
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Zhixuan Liu
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Jianghua Zhu
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Jialong Wu
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Ke Sun
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Yun Peng
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Jia Qiu
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Lianggeng Gong
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
10. Zhao JZ, Ni R, Chow R, Rink A, Weersink R, Croke J, Raman S. Artificial intelligence applications in brachytherapy: A literature review. Brachytherapy 2023; 22:429-445. PMID: 37248158. DOI: 10.1016/j.brachy.2023.04.003.
Abstract
PURPOSE Artificial intelligence (AI) has the potential to simplify and optimize various steps of the brachytherapy workflow, and this literature review aims to provide an overview of the work done in this field. METHODS AND MATERIALS We conducted a literature search in June 2022 on PubMed, Embase, and Cochrane for papers that proposed AI applications in brachytherapy. RESULTS A total of 80 papers satisfied inclusion/exclusion criteria. These papers were categorized as follows: segmentation (24), registration and image processing (6), preplanning (13), dose prediction and treatment planning (11), applicator/catheter/needle reconstruction (16), and quality assurance (10). AI techniques ranged from classical models such as support vector machines and decision tree-based learning to newer techniques such as U-Net and deep reinforcement learning, and were applied to facilitate small steps of a process (e.g., optimizing applicator selection) or even automate the entire step of the workflow (e.g., end-to-end preplanning). Many of these algorithms demonstrated human-level performance and offer significant improvements in speed. CONCLUSIONS AI has potential to augment, automate, and/or accelerate many steps of the brachytherapy workflow. We recommend that future studies adhere to standard reporting guidelines. We also stress the importance of using larger sample sizes and reporting results using clinically interpretable measures.
Affiliation(s)
- Jonathan Zl Zhao
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
| | - Ruiyan Ni
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
| | - Ronald Chow
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
| | - Alexandra Rink
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
| | - Robert Weersink
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
| | - Jennifer Croke
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
| | - Srinivas Raman
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada.
| |
|
11
|
Kallis K, Moore LC, Cortes KG, Brown D, Mayadev J, Moore KL, Meyers SM. Automated treatment planning framework for brachytherapy of cervical cancer using 3D dose predictions. Phys Med Biol 2023; 68:10.1088/1361-6560/acc37c. [PMID: 36898161 PMCID: PMC10101723 DOI: 10.1088/1361-6560/acc37c] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Accepted: 03/10/2023] [Indexed: 03/12/2023]
Abstract
Objective. To lay the foundation for automated knowledge-based brachytherapy treatment planning using 3D dose estimations, we describe an optimization framework to convert brachytherapy dose distributions directly into dwell times (DTs). Approach. A dose-rate kernel ḋ(r, θ, φ) was produced by exporting the 3D dose for one dwell position from the treatment planning system and normalizing by DT. By translating and rotating this kernel to each dwell position, scaling by DT, and summing over all dwell positions, dose was computed (Dcalc). We used a Python-coded COBYLA optimizer to iteratively determine the DTs that minimize the mean squared error between Dcalc and a reference dose Dref, computed using voxels with Dref between 80% and 120% of prescription. As validation of the optimization, we showed that the optimizer replicates clinical plans when Dref = clinical dose in 40 patients treated with tandem-and-ovoid (T&O) or tandem-and-ring (T&R) applicators and 0-3 needles. We then demonstrated automated planning in 10 T&O patients using Dref = dose predicted by a convolutional neural network developed in past work. Validation and automated plans were compared to clinical plans using mean absolute differences, MAD = (1/N) Σ_{n=1}^{N} |x_n - x_n'|, over all voxels (x_n = dose, N = number of voxels) and DTs (x_n = DT, N = number of dwell positions); mean differences (MD) in organ D2cc and high-risk CTV D90 over all patients (where positive indicates higher clinical dose); and mean Dice similarity coefficients (DSC) for 100% isodose contours. Main results. Validation plans agreed well with clinical plans (MAD_dose = 1.1%, MAD_DT = 4 s or 0.8% of total plan time, D2cc MD = -0.2% to 0.2%, D90 MD = -0.6%, DSC = 0.99). For automated plans, MAD_dose = 6.5% and MAD_DT = 10.3 s (2.1%). The slightly higher clinical metrics in automated plans (D2cc MD = -3.8% to 1.3%, D90 MD = -5.1%) were due to higher neural network dose predictions. The overall shape of the automated dose distributions was similar to the clinical doses (DSC = 0.91). Significance.
Automated planning with 3D dose predictions could provide significant time savings and standardize treatment planning across practitioners, regardless of experience.
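The optimization described above, iteratively choosing dwell times so that a kernel-superposition dose matches a reference dose, can be sketched with SciPy's COBYLA interface. This is a toy illustration with a random synthetic kernel, not the authors' implementation; all array shapes, values, and the non-negativity constraints are hypothetical assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: dose_per_unit_time[i] is the flattened dose-rate
# contribution of dwell position i (per second of dwell time) over the voxels
# kept for optimization (reference dose within 80%-120% of prescription).
rng = np.random.default_rng(0)
n_dwell, n_vox = 10, 500
dose_per_unit_time = rng.uniform(0.01, 1.0, size=(n_dwell, n_vox))
true_dt = rng.uniform(5, 30, size=n_dwell)      # "clinical" dwell times (s)
d_ref = dose_per_unit_time.T @ true_dt          # reference dose D_ref

def mse(dt):
    """Mean squared error between computed dose D_calc and reference D_ref."""
    d_calc = dose_per_unit_time.T @ dt
    return np.mean((d_calc - d_ref) ** 2)

# COBYLA supports only inequality constraints; keep each dwell time >= 0.
cons = [{"type": "ineq", "fun": lambda dt, i=i: dt[i]} for i in range(n_dwell)]
res = minimize(mse, x0=np.full(n_dwell, 10.0), method="COBYLA",
               constraints=cons, options={"maxiter": 20000, "rhobeg": 5.0})
print(res.fun)  # final MSE, far below the value at the starting guess
```

In the paper the reference dose comes either from a clinical plan (validation) or from a neural network prediction (automated planning); here it is simulated from known dwell times so the optimizer has an attainable target.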
Affiliation(s)
- Karoline Kallis
- Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, CA, United States of America
| | - Lance C Moore
- Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, CA, United States of America
| | - Katherina G Cortes
- Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, CA, United States of America
| | - Derek Brown
- Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, CA, United States of America
| | - Jyoti Mayadev
- Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, CA, United States of America
| | - Kevin L Moore
- Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, CA, United States of America
| | - Sandra M Meyers
- Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, CA, United States of America
| |
|
12
|
Xie H, Wang J, Chen Y, Tu Y, Chen Y, Zhao Y, Zhou P, Wang S, Bai Z, Tang Q. Automatic reconstruction of interstitial needles using CT images in post-operative cervical cancer brachytherapy based on deep learning. J Contemp Brachytherapy 2023; 15:134-140. [PMID: 37215613 PMCID: PMC10196730 DOI: 10.5114/jcb.2023.126514] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Accepted: 03/27/2023] [Indexed: 05/24/2023] Open
Abstract
Purpose The purpose of this study was to investigate the precision of deep learning (DL)-based auto-reconstruction in localizing interstitial needles in post-operative cervical cancer brachytherapy (BT) using three-dimensional (3D) computed tomography (CT) images. Material and methods A convolutional neural network (CNN) was developed and presented for automatic reconstruction of interstitial needles. Data from 70 post-operative cervical cancer patients who received CT-based BT were used to train and test this DL model. All patients were treated with three metallic needles. The Dice similarity coefficient (DSC), 95% Hausdorff distance (95% HD), and Jaccard coefficient (JC) were applied to evaluate the geometric accuracy of auto-reconstruction for each needle. Dose-volume indices (DVI) between the manual and automatic methods were used to analyze the dosimetric difference. The correlation between geometric metrics and dosimetric differences was evaluated using Spearman correlation analysis. Results The mean DSC values of the DL-based model were 0.88, 0.89, and 0.90 for the three metallic needles. The Wilcoxon signed-rank test indicated no significant dosimetric differences in any BT planning structures between the manual and automatic reconstruction methods (p > 0.05). Spearman correlation analysis demonstrated a weak correlation between geometric metrics and dosimetric differences. Conclusions The DL-based reconstruction method can precisely localize interstitial needles in 3D-CT images. The proposed automatic approach could improve the consistency of treatment planning for post-operative cervical cancer brachytherapy.
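The DSC and Jaccard coefficient used to score each reconstructed needle are standard overlap measures computable directly from binary masks. A minimal sketch (the toy needle geometry is illustrative, not from the study):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard coefficient (intersection over union)."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

# Toy "needle" masks on a 2D grid: 10 voxels each, 8 voxels overlapping.
a = np.zeros((20, 20), bool); a[5:15, 10] = True
b = np.zeros((20, 20), bool); b[7:17, 10] = True
print(round(dice(a, b), 2), round(jaccard(a, b), 2))  # 0.8 0.67
```

The two metrics are related by DSC = 2J / (1 + J), so reporting both, as the study does, is mostly a convention of the segmentation literature.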
Affiliation(s)
- Hongling Xie
- Department of Radiation Oncology, Women’s Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Jiahao Wang
- Department of Radiation Oncology, Women’s Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Yuanyuan Chen
- Department of Radiation Oncology, Women’s Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Yeqiang Tu
- Department of Radiation Oncology, Women’s Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Yukai Chen
- Department of Radiation Oncology, Women’s Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Yadong Zhao
- Department of Radiation Oncology, Women’s Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Pengfei Zhou
- Department of Radiation Oncology, Women’s Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| | - Shichun Wang
- Hangzhou Ruicare MedTech Co., Ltd., Hangzhou, Zhejiang, China
| | - Zhixin Bai
- Hangzhou Ruicare MedTech Co., Ltd., Hangzhou, Zhejiang, China
| | - Qiu Tang
- Department of Radiation Oncology, Women’s Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
| |
|
13
|
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. ARXIV 2023:arXiv:2303.11378v2. [PMID: 36994167 PMCID: PMC10055493] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 03/31/2023]
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise and adaptive approach to treatment planning. Deep learning applications that augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on the underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
| | - Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
| | - Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
| | - Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
| | - Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
| | - Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
| | - Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
| |
|
14
|
An advanced W-shaped network with adaptive multi-scale supervision for osteosarcoma segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104243] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
15
|
|
16
|
Li Y, Yang C, Bahl A, Persad R, Melhuish C. A review on the techniques used in prostate brachytherapy. COGNITIVE COMPUTATION AND SYSTEMS 2022. [DOI: 10.1049/ccs2.12067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Affiliation(s)
- Yanlei Li
- Bristol Robotics Laboratory University of the West of England Bristol UK
| | - Chenguang Yang
- Bristol Robotics Laboratory University of the West of England Bristol UK
| | - Amit Bahl
- University Hospitals Bristol and Weston NHS Trust and Bristol Robotics Laboratory University of the West of England Bristol UK
| | - Raj Persad
- University Hospitals Bristol and Weston NHS Trust and Bristol Robotics Laboratory University of the West of England Bristol UK
| | - Chris Melhuish
- Bristol Robotics Laboratory University of the West of England Bristol UK
| |
|
17
|
Eidex Z, Wang T, Lei Y, Axente M, Akin-Akintayo OO, Ojo OAA, Akintayo AA, Roper J, Bradley JD, Liu T, Schuster DM, Yang X. MRI-based prostate and dominant lesion segmentation using cascaded scoring convolutional neural network. Med Phys 2022; 49:5216-5224. [PMID: 35533237 PMCID: PMC9388615 DOI: 10.1002/mp.15687] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 03/18/2022] [Accepted: 04/16/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Dose escalation to dominant intraprostatic lesions (DILs) is a novel treatment strategy to improve the treatment outcome of prostate radiation therapy. Treatment planning requires accurate and fast delineation of the prostate and DILs. In this study, a 3D cascaded scoring convolutional neural network is proposed to automatically segment the prostate and DILs from MRI. METHODS AND MATERIALS The proposed cascaded scoring convolutional neural network performs end-to-end segmentation by locating a region-of-interest (ROI), identifying the object within the ROI, and defining the target. A scoring strategy, learned to judge the segmentation quality of the DIL, is integrated into the cascaded convolutional neural network to address the challenge of segmenting the irregular shapes of the DIL. To evaluate the proposed method, 77 patients who underwent MRI and PET/CT were retrospectively investigated. The prostate and DIL ground truth contours were delineated by experienced radiologists. The proposed method was evaluated with five-fold cross-validation and holdout testing. RESULTS The average centroid distance, volume difference, and Dice similarity coefficient (DSC) values for the prostate/DIL were 4.3 ± 7.5 mm/3.73 ± 3.78 mm, 4.5 ± 7.9 cc/0.41 ± 0.59 cc, and 89.6 ± 8.9%/84.3 ± 11.9%, respectively. Comparable results were obtained in the holdout test. Similar or superior segmentation outcomes were seen when comparing the results of the proposed method to those of competing segmentation approaches. CONCLUSIONS The proposed automatic segmentation method can accurately and simultaneously segment both the prostate and DILs. The intended future use for this algorithm is focal boost prostate radiation therapy.
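The centroid distance and volume difference reported above are simple to compute from binary masks once the voxel spacing is known. A minimal sketch with toy masks (the spacing, mask shapes, and helper names are hypothetical):

```python
import numpy as np

def centroid_distance(a, b, spacing):
    """Euclidean distance (mm) between mask centroids, given voxel spacing (mm)."""
    ca = np.mean(np.argwhere(a), axis=0) * spacing
    cb = np.mean(np.argwhere(b), axis=0) * spacing
    return float(np.linalg.norm(ca - cb))

def volume_cc(mask, spacing):
    """Mask volume in cc: voxel count times voxel volume (mm^3), over 1000."""
    return mask.sum() * float(np.prod(spacing)) / 1000.0

spacing = np.array([1.0, 1.0, 1.0])  # mm, hypothetical isotropic grid
a = np.zeros((30, 30, 30), bool); a[10:20, 10:20, 10:20] = True
b = np.zeros((30, 30, 30), bool); b[12:22, 10:20, 10:20] = True  # shifted copy
print(centroid_distance(a, b, spacing))               # 2.0
print(volume_cc(a, spacing) - volume_cc(b, spacing))  # 0.0 (equal volumes)
```

Note that centroid distance and volume difference are complementary to DSC: two masks can overlap poorly yet share a centroid and volume, which is why the study reports all three.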
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology, Emory University, Atlanta, GA.,School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
| | - Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA.,Winship Cancer Institute, Emory University, Atlanta, GA
| | - Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA
| | - Marian Axente
- Department of Radiation Oncology, Emory University, Atlanta, GA.,Winship Cancer Institute, Emory University, Atlanta, GA
| | | | | | | | - Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, GA.,School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA.,Winship Cancer Institute, Emory University, Atlanta, GA
| | - Jeffery D Bradley
- Department of Radiation Oncology, Emory University, Atlanta, GA.,Winship Cancer Institute, Emory University, Atlanta, GA
| | - Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA.,Winship Cancer Institute, Emory University, Atlanta, GA
| | - David M Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA.,Winship Cancer Institute, Emory University, Atlanta, GA
| | - Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA.,School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA.,Winship Cancer Institute, Emory University, Atlanta, GA
| |
|
18
|
Dai X, Lei Y, Wang T, Zhou J, Rudra S, McDonald M, Curran WJ, Liu T, Yang X. Multi-organ auto-delineation in head-and-neck MRI for radiation therapy using regional convolutional neural network. Phys Med Biol 2022; 67:10.1088/1361-6560/ac3b34. [PMID: 34794138 PMCID: PMC8811683 DOI: 10.1088/1361-6560/ac3b34] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Accepted: 11/18/2021] [Indexed: 01/23/2023]
Abstract
Magnetic resonance imaging (MRI) allows accurate and reliable organ delineation for many disease sites in radiation therapy because it offers superb soft-tissue contrast. Manual organ-at-risk delineation is labor-intensive and time-consuming. This study aims to develop a deep-learning-based automated multi-organ segmentation method to reduce the labor and accelerate the treatment planning process for head-and-neck (HN) cancer radiotherapy. A novel regional convolutional neural network (R-CNN) architecture, namely mask scoring R-CNN, has been developed in this study. In the proposed model, a deep attention feature pyramid network is used as a backbone to extract coarse features from MRI, followed by feature refinement using R-CNN. The final segmentation is obtained through mask and mask scoring networks that take the refined feature maps as input. With the mask scoring mechanism incorporated into conventional mask supervision, the classification error of the conventional mask R-CNN architecture can be greatly reduced. A cohort of 60 HN cancer patients receiving external beam radiation therapy was used for experimental validation. Five-fold cross-validation was performed for the assessment of our proposed method. The Dice similarity coefficients of brain stem, left/right cochlea, left/right eye, larynx, left/right lens, mandible, optic chiasm, left/right optic nerve, oral cavity, left/right parotid, pharynx, and spinal cord were 0.89 ± 0.06, 0.68 ± 0.14/0.68 ± 0.18, 0.89 ± 0.07/0.89 ± 0.05, 0.90 ± 0.07, 0.67 ± 0.18/0.67 ± 0.10, 0.82 ± 0.10, 0.61 ± 0.14, 0.67 ± 0.11/0.68 ± 0.11, 0.92 ± 0.07, 0.85 ± 0.06/0.86 ± 0.05, 0.80 ± 0.13, and 0.77 ± 0.15, respectively. After the model training, all OARs can be segmented within 1 min.
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
| | - Jun Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
| | - Soumon Rudra
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
| | - Mark McDonald
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
| |
|
19
|
Shaaer A, Paudel M, Smith M, Tonolete F, Ravi A. Deep-learning-assisted algorithm for catheter reconstruction during MR-only gynecological interstitial brachytherapy. J Appl Clin Med Phys 2021; 23:e13494. [PMID: 34889509 PMCID: PMC8833281 DOI: 10.1002/acm2.13494] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 11/10/2021] [Accepted: 11/15/2021] [Indexed: 11/18/2022] Open
Abstract
Magnetic resonance imaging (MRI) offers excellent soft-tissue contrast, enabling the contouring of targets and organs at risk during gynecological interstitial brachytherapy procedures. Despite this advantage, one of the main obstacles preventing a transition to an MRI-only workflow is that implanted plastic catheters are not reliably visualized on MR images. This study aims to evaluate the feasibility of a deep-learning-based algorithm for semiautomatic reconstruction of interstitial catheters during an MR-only workflow. MR images of 20 gynecological patients were used in this study. A total of 360 catheters were reconstructed using T1- and T2-weighted images by five experienced brachytherapy planners. The mean of the five reconstructed paths was used for training (257 catheters), validation (15 catheters), and testing/evaluation (88 catheters). To automatically identify and localize the catheters, a two-dimensional (2D) U-net algorithm was used to find their approximate location in each image slice. Once localized, thresholding was applied to those regions to find the extrema, as catheters appear as bright and dark regions in T1- and T2-weighted images, respectively. The localized dwell positions of the proposed algorithm were compared to the ground truth reconstruction. Reconstruction time was also evaluated. A total of 34,009 catheter dwell positions were evaluated between the algorithm and all planners to estimate the reconstruction variability. The average variation was 0.97 ± 0.66 mm. The average reconstruction time for this approach was 11 ± 1 min, compared with 46 ± 10 min for the expert planners. This study suggests that the proposed deep learning, MR-based framework has the potential to replace conventional manual catheter reconstruction. The adoption of this approach in the brachytherapy workflow is expected to improve treatment efficiency while reducing planning time, resources, and human errors.
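The second stage of the pipeline (after a 2D U-Net proposes a region, take the intensity extremum within it: bright on T1-weighted images, dark on T2-weighted) can be illustrated without the network itself. A minimal sketch, where the `catheter_point_in_roi` helper and the precomputed ROI mask standing in for the U-Net output are assumptions of this example:

```python
import numpy as np

def catheter_point_in_roi(image, roi_mask, bright=True):
    """Within a predicted ROI, return the (row, col) of the intensity extremum:
    the maximum for bright catheters (T1-like), the minimum for dark (T2-like).
    Simplified sketch of the localize-then-threshold idea, not the paper's code."""
    vals = np.where(roi_mask, image, -np.inf if bright else np.inf)
    idx = np.argmax(vals) if bright else np.argmin(vals)
    return np.unravel_index(idx, image.shape)

rng = np.random.default_rng(1)
img = rng.normal(100, 5, size=(64, 64))   # background tissue intensities
img[30, 40] = 250                         # bright catheter voxel (T1-like)
roi = np.zeros((64, 64), bool); roi[25:35, 35:45] = True  # mock U-Net ROI
r, c = catheter_point_in_roi(img, roi, bright=True)
print(int(r), int(c))  # 30 40
```

Restricting the search to the U-Net's ROI is what makes the naive extremum robust: without it, any bright artifact elsewhere in the slice would be picked up.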
Affiliation(s)
- Amani Shaaer
- Department of Physics, Ryerson University, Toronto, Ontario, Canada.,Department of Biomedical Physics, King Faisal Specialist Hospital and Research Centre, Riyadh, Saudi Arabia
| | - Moti Paudel
- Department of Medical Physics, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada.,Department of Medical Physics, University of Toronto, Toronto, Ontario, Canada
| | - Mackenzie Smith
- Department of Radiation Therapy, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
| | - Frances Tonolete
- Department of Radiation Therapy, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
| | - Ananth Ravi
- Department of Medical Physics, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada.,Department of Medical Physics, University of Toronto, Toronto, Ontario, Canada
| |
|
20
|
Matkovic LA, Wang T, Lei Y, Akin-Akintayo OO, Ojo OAA, Akintayo AA, Roper J, Bradley JD, Liu T, Schuster DM, Yang X. Prostate and dominant intraprostatic lesion segmentation on PET/CT using cascaded regional-net. Phys Med Biol 2021; 66:10.1088/1361-6560/ac3c13. [PMID: 34808603 PMCID: PMC8725511 DOI: 10.1088/1361-6560/ac3c13] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2021] [Accepted: 11/22/2021] [Indexed: 12/22/2022]
Abstract
Focal boost to dominant intraprostatic lesions (DILs) has recently been proposed for prostate radiation therapy. Accurate and fast delineation of the prostate and DILs is thus required during treatment planning. In this paper, we develop a learning-based method using positron emission tomography (PET)/computed tomography (CT) images to automatically segment the prostate and its DILs. To enable end-to-end segmentation, a deep learning-based method, called cascaded regional-Net, is utilized. The first network, referred to as the dual attention network, is used to segment the prostate by extracting comprehensive features from both PET and CT images. A second network, referred to as the mask scoring regional convolutional neural network (MSR-CNN), is used to segment the DILs from the PET and CT within the prostate region. A scoring strategy is used to reduce misclassification of the DILs. For DIL segmentation, the proposed cascaded regional-Net uses two steps to remove normal tissue regions: the first step crops images based on the prostate segmentation, and the second step uses MSR-CNN to further locate the DILs. The binary masks of the DILs and prostates of testing patients are generated on the PET/CT images by the trained model. For evaluation, we retrospectively investigated 49 prostate cancer patients with acquired PET/CT images. The prostate and DILs of each patient were contoured by radiation oncologists and set as the ground truths and targets. We used five-fold cross-validation and a hold-out test to train and evaluate our method. The mean surface distance and DSC values were 0.666 ± 0.696 mm and 0.932 ± 0.059 for the prostate and 0.814 ± 1.002 mm and 0.801 ± 0.178 for the DILs among all 49 patients. The proposed method has shown promise for facilitating prostate and DIL delineation for DIL focal boost prostate radiation therapy.
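The cascade's first step, cropping the volume to the first-stage prostate segmentation before second-stage DIL segmentation, shrinks the search space for the second network. A minimal sketch of that cropping step (the `crop_to_mask` helper and the 5-voxel margin are illustrative assumptions, not details from the paper):

```python
import numpy as np

def crop_to_mask(volume, mask, margin=5):
    """Crop a volume to the bounding box of a first-stage segmentation mask,
    expanded by a margin and clipped to the volume bounds. Returns the cropped
    subvolume and the slices used, so predictions can be pasted back later."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    slices = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return volume[slices], slices

vol = np.zeros((50, 50, 50))                       # mock PET or CT volume
prostate = np.zeros((50, 50, 50), bool)
prostate[20:30, 20:30, 20:30] = True               # mock first-stage mask
cropped, slices = crop_to_mask(vol, prostate, margin=5)
print(cropped.shape)  # (20, 20, 20)
```

Returning the crop slices alongside the subvolume is the detail that matters in practice: the second network's DIL mask must be mapped back into the original image grid for dosimetry.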
Affiliation(s)
- Luke A. Matkovic
- Department of Radiation Oncology, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
| | - Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
| | - Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA
| | | | | | | | - Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
| | - Jeffery D. Bradley
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
| | - Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
| | - David M. Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
| | - Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
| |
|
21
|
Dai X, Lei Y, Roper J, Chen Y, Bradley JD, Curran WJ, Liu T, Yang X. Deep learning-based motion tracking using ultrasound images. Med Phys 2021; 48:7747-7756. [PMID: 34724712 PMCID: PMC11742242 DOI: 10.1002/mp.15321] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Revised: 10/13/2021] [Accepted: 10/22/2021] [Indexed: 12/25/2022] Open
Abstract
PURPOSE Ultrasound (US) imaging is an established imaging modality capable of offering video-rate volumetric images without ionizing radiation. It has the potential for intra-fraction motion tracking in radiation therapy. In this study, a deep learning-based method has been developed to tackle the challenges of motion tracking using US imaging. METHODS We present a Markov-like network, implemented via generative adversarial networks, to extract features from sequential US frames (one tracked frame followed by untracked frames) and thereby estimate a set of deformation vector fields (DVFs) through the registration of the tracked frame and the untracked frames. The positions of the landmarks in the untracked frames are finally determined by shifting the landmarks in the tracked frame according to the estimated DVFs. The performance of the proposed method was evaluated on the testing dataset by calculating the tracking error (TE) between the predicted and ground truth landmarks on each frame. RESULTS The proposed method was evaluated using the MICCAI CLUST 2015 dataset, which was collected using seven US scanners with eight types of transducers, and the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, which was acquired using GE Vivid E95 ultrasound scanners. The CLUST dataset contains 63 2D and 22 3D US image sequences from 42 and 18 subjects, respectively, and the CAMUS dataset includes 2D US images from 450 patients. On the CLUST dataset, our proposed method achieved a mean tracking error of 0.70 ± 0.38 mm for the 2D sequences and 1.71 ± 0.84 mm for the 3D sequences using the publicly available annotations. On the CAMUS dataset, a mean tracking error of 0.54 ± 1.24 mm for the landmarks in the left atrium was achieved. CONCLUSIONS A novel motion tracking algorithm using US images based on modern deep learning techniques has been demonstrated in this study.
The proposed method can offer millimeter-level tumor motion prediction in real time, which has the potential to be adopted into routine tumor motion management in radiation therapy.
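The final step described above, shifting landmarks from the tracked frame according to the estimated DVFs, amounts to sampling a dense displacement field at each landmark position. A minimal 2D sketch with a synthetic uniform DVF (the `dvf[c, y, x]` array layout and the helper name are assumptions of this example, not specified by the paper):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_landmarks(landmarks, dvf):
    """Shift landmarks from the tracked frame into an untracked frame.
    dvf[c] holds the displacement along axis c on a dense grid; the field is
    sampled at each (possibly fractional) landmark with linear interpolation."""
    pts = np.asarray(landmarks, float).T                       # shape (2, n)
    disp = np.stack([map_coordinates(dvf[c], pts, order=1)     # sample field
                     for c in range(2)])
    return (pts + disp).T                                      # shifted points

# Toy DVF: uniform 3-pixel shift along y and 1-pixel shift along x.
dvf = np.zeros((2, 100, 100))
dvf[0] += 3.0
dvf[1] += 1.0
moved = propagate_landmarks([(10.0, 20.0), (50.0, 60.0)], dvf)
print(moved)  # [[13. 21.] [53. 61.]]
```

A real estimated DVF varies spatially, which is why the field is interpolated at each landmark rather than applied as a single global offset.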
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Yue Chen
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University School of Medicine, Atlanta, Georgia, USA
| | - Jeffrey D. Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University School of Medicine, Atlanta, Georgia, USA
| |
|
22
|
Song WY, Robar JL, Morén B, Larsson T, Carlsson Tedgren Å, Jia X. Emerging technologies in brachytherapy. Phys Med Biol 2021; 66. [PMID: 34710856 DOI: 10.1088/1361-6560/ac344d] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Accepted: 10/28/2021] [Indexed: 01/15/2023]
Abstract
Brachytherapy is a mature treatment modality. The literature is abundant in terms of review articles and comprehensive books on the latest established as well as evolving clinical practices. The intent of this article is to part ways and look beyond the current state of the art, reviewing emerging technologies that are noteworthy and may drive future innovations in the field. There are plenty of candidate topics that deserve a deeper look, of course, but within the practical limits of this communicative platform we explore four topics that are worthwhile to review in detail at this time. First, intensity modulated brachytherapy (IMBT) is reviewed. IMBT takes advantage of anisotropic radiation profiles, generated through intelligent high-density shielding designs incorporated into sources and applicators, to achieve high-quality plans. Second, emerging applications of 3D printing (i.e., additive manufacturing) in brachytherapy are reviewed. With the advent of 3D printing, interest in this technology in brachytherapy has been immense and translation swift, owing to its potential to tailor applicators and treatments to each individual patient. Third, innovations in treatment planning concerning catheter placement and dwell times are reviewed, covering new modelling approaches, solution algorithms, and technological advances. Fourth and lastly, applications of a newer machine learning technique, deep learning, which has the potential to improve and automate all aspects of the brachytherapy workflow, are reviewed. We do not expect that all the ideas and innovations reviewed in this article will ultimately reach the clinic but, nonetheless, this review provides a glimpse of what is to come.
It will be exciting to watch as IMBT, 3D printing, novel optimization algorithms, and deep learning technologies evolve, translate into pilot testing and sensibly phased clinical trials, and ultimately make a difference for cancer patients. Today's fancy is tomorrow's reality. The future is bright for brachytherapy.
Collapse
Affiliation(s)
- William Y Song
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia, United States of America
| | - James L Robar
- Department of Radiation Oncology, Dalhousie University, Halifax, Nova Scotia, Canada
| | - Björn Morén
- Department of Mathematics, Linköping University, Linköping, Sweden
| | - Torbjörn Larsson
- Department of Mathematics, Linköping University, Linköping, Sweden
| | - Åsa Carlsson Tedgren
- Radiation Physics, Department of Medical and Health Sciences, Linköping University, Linköping, Sweden.,Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Stockholm, Sweden.,Department of Oncology Pathology, Karolinska Institute, Stockholm, Sweden
| | - Xun Jia
- Innovative Technology Of Radiotherapy Computations and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, United States of America
| |
Collapse
|
23
|
Dai X, Lei Y, Wynne J, Janopaul-Naylor J, Wang T, Roper J, Curran WJ, Liu T, Patel P, Yang X. Synthetic CT-aided multiorgan segmentation for CBCT-guided adaptive pancreatic radiotherapy. Med Phys 2021; 48:7063-7073. [PMID: 34609745 PMCID: PMC8595847 DOI: 10.1002/mp.15264] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2020] [Revised: 09/15/2021] [Accepted: 09/17/2021] [Indexed: 12/19/2022] Open
Abstract
PURPOSE The delineation of organs at risk (OARs) is fundamental to cone-beam CT (CBCT)-based adaptive radiotherapy treatment planning, but is time consuming, labor intensive, and subject to interoperator variability. We investigated a deep learning-based rapid multiorgan delineation method for use in CBCT-guided adaptive pancreatic radiotherapy. METHODS To improve the accuracy of OAR delineation, two innovative solutions were proposed in this study. First, instead of directly segmenting organs on CBCT images, a pretrained cycle-consistent generative adversarial network (cycleGAN) was applied to generate synthetic CT images from CBCT images. Second, an advanced deep learning model called mask-scoring regional convolutional neural network (MS R-CNN) was applied to those synthetic CT images to detect the positions and shapes of multiple organs simultaneously for final segmentation. The OAR contours delineated by the proposed method were validated and compared with expert-drawn contours for geometric agreement using the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS). RESULTS Across eight abdominal OARs (duodenum, large bowel, small bowel, left and right kidneys, liver, spinal cord, and stomach), the geometric comparisons between automated and expert contours were as follows: 0.92 (0.89-0.97) mean DSC, 2.90 mm (1.63-4.19 mm) mean HD95, 0.89 mm (0.61-1.36 mm) mean MSD, and 1.43 mm (0.90-2.10 mm) mean RMS. Compared to the competing methods, our proposed method showed significant improvements (p < 0.05) in all metrics for all eight organs. Once the model was trained, the contours of the eight OARs could be obtained on the order of seconds. CONCLUSIONS We demonstrated the feasibility of a synthetic CT-aided deep learning framework for automated delineation of multiple OARs on CBCT.
The proposed method could be implemented in the setting of pancreatic adaptive radiotherapy to rapidly contour OARs with high accuracy.
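The geometric agreement metrics used above, DSC and HD95, are standard. A minimal numpy sketch of both follows; it assumes binary voxel masks and, for simplicity, computes HD95 over all foreground voxels (brute force) rather than extracted surface points, so it is a small-example illustration rather than a production implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing=1.0):
    """95th percentile symmetric Hausdorff distance, in units of `spacing`.

    Simplification: distances are taken over all foreground voxels, not the
    mask surfaces, and computed by brute force -- fine for small examples.
    """
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return np.percentile(np.concatenate([d.min(axis=1), d.min(axis=0)]), 95)

# Two overlapping 4x4 squares, one shifted diagonally by one voxel.
a = np.zeros((10, 10), bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), bool); b[3:7, 3:7] = True
```

For the example masks, `dice(a, b)` is 2·9/(16+16) = 0.5625, and identical masks give DSC 1.0 and HD95 0.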
Collapse
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Jacob Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - James Janopaul-Naylor
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| |
Collapse
|
24
|
Dai X, Lei Y, Wang T, Zhou J, Roper J, McDonald M, Beitler JJ, Curran WJ, Liu T, Yang X. Automated delineation of head and neck organs at risk using synthetic MRI-aided mask scoring regional convolutional neural network. Med Phys 2021; 48:5862-5873. [PMID: 34342878 PMCID: PMC11700377 DOI: 10.1002/mp.15146] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Revised: 06/30/2021] [Accepted: 07/25/2021] [Indexed: 01/10/2023] Open
Abstract
PURPOSE Auto-segmentation algorithms offer a potential solution to eliminate the labor-intensive, time-consuming, and observer-dependent manual delineation of organs-at-risk (OARs) in radiotherapy treatment planning. This study aimed to develop a deep learning-based automated OAR delineation method to tackle the challenges that remain in achieving reliable expert performance with state-of-the-art auto-delineation algorithms. METHODS The accuracy of OAR delineation is expected to improve by utilizing the complementary contrasts provided by computed tomography (CT) (bony-structure contrast) and magnetic resonance imaging (MRI) (soft-tissue contrast). Given CT images, synthetic MR images were first generated by a pretrained cycle-consistent generative adversarial network. The features of CT and synthetic MRI were then extracted and combined for the final delineation of organs using a mask scoring regional convolutional neural network. Both in-house and public datasets containing CT scans from head-and-neck (HN) cancer patients were adopted to quantitatively evaluate the performance of the proposed method against current state-of-the-art algorithms using metrics including Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS). RESULTS Across all 18 OARs in our in-house dataset, the proposed method achieved an average DSC, HD95, MSD, and RMS of 0.77 (0.58-0.90), 2.90 mm (1.32-7.63 mm), 0.89 mm (0.42-1.85 mm), and 1.44 mm (0.71-3.15 mm), respectively, outperforming the current state-of-the-art algorithms by 6%, 16%, 25%, and 36%, respectively. On the public datasets, an average DSC of 0.86 (0.73-0.97) was achieved across all nine OARs, 6% better than the competing methods. CONCLUSION We demonstrated the feasibility of a synthetic MRI-aided deep learning framework for automated delineation of OARs in HN radiotherapy treatment planning.
The proposed method could be adopted into routine HN cancer radiotherapy treatment planning to rapidly contour OARs with high accuracy.
Collapse
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Jun Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Mark McDonald
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Jonathan J Beitler
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| |
Collapse
|
25
|
Dai X, Lei Y, Wang T, Axente M, Xu D, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Self-supervised learning for accelerated 3D high-resolution ultrasound imaging. Med Phys 2021; 48:3916-3926. [PMID: 33993508 PMCID: PMC11699523 DOI: 10.1002/mp.14946] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 05/03/2021] [Accepted: 05/10/2021] [Indexed: 12/23/2022] Open
Abstract
PURPOSE Ultrasound (US) imaging has been widely used in diagnosis, image-guided intervention, and therapy, where high-quality three-dimensional (3D) images are highly desired from sparsely acquired two-dimensional (2D) images. This study aims to develop a deep learning-based algorithm to reconstruct high-resolution (HR) 3D US images reliant only on the acquired sparsely distributed 2D images. METHODS We propose a self-supervised learning framework using a cycle-consistent generative adversarial network (cycleGAN), where two independent cycleGAN models are trained with paired original US images and two sets of low-resolution (LR) US images, respectively. The two sets of LR US images are obtained by down-sampling the original US images along each of the two in-plane axes. In US imaging, in-plane spatial resolution is generally much higher than through-plane resolution. By learning the mapping from down-sampled in-plane LR images to original HR US images, cycleGAN can generate through-plane HR images from the original sparsely distributed 2D images. Finally, HR 3D US images are reconstructed by combining the generated 2D images from the two cycleGAN models. RESULTS The proposed method was assessed on two datasets: one comprises automatic breast ultrasound (ABUS) images from 70 breast cancer patients, and the other was collected from 45 prostate cancer patients. Applying a spatial resolution enhancement factor of 3 to the breast cases, our proposed method achieved a mean absolute error (MAE) of 0.90 ± 0.15, a peak signal-to-noise ratio (PSNR) of 37.88 ± 0.88 dB, and a visual information fidelity (VIF) of 0.69 ± 0.01, significantly outperforming bicubic interpolation. Similar performance was achieved using an enhancement factor of 5 in the breast cases and enhancement factors of 5 and 10 in the prostate cases.
CONCLUSIONS We have proposed and investigated a new deep learning-based algorithm for reconstructing HR 3D US images from sparsely acquired 2D images. Significant improvement in through-plane resolution was achieved using only the acquired 2D images, without any external atlas images. Its self-supervision capability could accelerate HR US imaging.
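The self-supervised pairing step described above — deriving two low-resolution training sets by down-sampling the acquired volume along each in-plane axis — can be sketched as follows. The volume shape and enhancement factor are illustrative placeholders, not the study's data:

```python
import numpy as np

def make_lr_pairs(volume, factor):
    """Build the two self-supervised LR/HR training pairs.

    volume: (z, y, x) array where z is the sparsely sampled through-plane
    axis. Each in-plane axis is down-sampled in turn, giving LR inputs whose
    HR targets are the original slices; a model (cycleGAN in the paper)
    trained to invert this mapping is then applied along the sparse axis.
    """
    lr_y = volume[:, ::factor, :]  # model 1 learns y-axis resolution recovery
    lr_x = volume[:, :, ::factor]  # model 2 learns x-axis resolution recovery
    return lr_y, lr_x

hr = np.random.default_rng(1).random((8, 96, 96))  # 8 acquired 2D slices (synthetic)
lr_y, lr_x = make_lr_pairs(hr, factor=3)
```

The key design choice is that no external HR atlas is needed: the supervision signal comes entirely from the high in-plane resolution of the acquired slices themselves.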
Collapse
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Marian Axente
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Dong Xu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Ashesh B. Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| |
Collapse
|
26
|
Dickhoff LR, Vrancken Peeters MJ, Bosman PA, Alderliesten T. Therapeutic applications of radioactive sources: from image-guided brachytherapy to radio-guided surgical resection. Q J Nucl Med Mol Imaging 2021; 65:190-201. [PMID: 34105339 DOI: 10.23736/s1824-4785.21.03370-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
It is well known nowadays that radioactivity can destroy the living cells it interacts with. It is therefore unsurprising that radioactive sources, such as iodine-125, were historically developed for treatment purposes within radiation oncology, with the goal of damaging malignant cells. Since then, however, new techniques have been invented that make creative use of the same radioactive properties of these sources for medical applications. Here, we review two distinct kinds of therapeutic uses of radioactive sources with applications to prostate, cervical, and breast cancer: brachytherapy and radioactive seed localization. In brachytherapy (BT), the radioactive sources are used for internal radiation treatment. Current approaches make use of real-time image guidance, for instance by means of magnetic resonance imaging, ultrasound, computed tomography, and sometimes positron emission tomography, depending on clinical availability and cancer type. Such image-guided BT for prostate and cervical cancer presents a promising alternative and/or addition to external beam radiation treatment or surgical resection. Radioactive sources can also be used for radio-guided tumor localization during surgery, for which the example of iodine-125 seed use in breast cancer is given. Radioactive seed localization (RSL) is increasingly popular as an alternative tumor localization technique during breast cancer surgery. Advantages of applying RSL include added flexibility in clinical scheduling logistics, an increase in tumor localization accuracy, and higher patient satisfaction; safety measures do, however, have to be employed. We exemplify the implementation of RSL in a clinic through experiences at the Netherlands Cancer Institute.
Collapse
Affiliation(s)
- Leah R Dickhoff
- Department of Radiation Oncology, Leiden University Medical Center, Leiden, The Netherlands -
| | - Marie-Jeanne Vrancken Peeters
- Department of Surgical Oncology, Netherlands Cancer Institute - Antoni van Leeuwenhoek, Amsterdam, The Netherlands.,Department of Surgery, Amsterdam University Medical Centers, Amsterdam, The Netherlands
| | - Peter A Bosman
- Life Sciences and Health group, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
| | - Tanja Alderliesten
- Department of Radiation Oncology, Leiden University Medical Center, Leiden, The Netherlands
| |
Collapse
|
27
|
Deep learning applications in automatic segmentation and reconstruction in CT-based cervix brachytherapy. J Contemp Brachytherapy 2021; 13:325-330. [PMID: 34122573 PMCID: PMC8170523 DOI: 10.5114/jcb.2021.106118] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 04/06/2021] [Indexed: 12/09/2022] Open
Abstract
Purpose Motivated by recent advances in deep learning, the purpose of this study was to investigate a deep learning method to automatically segment and reconstruct applicators in computed tomography (CT) images for cervix brachytherapy treatment planning. Material and methods A U-Net model was developed for applicator segmentation in CT images. Sixty cervical cancer patients with Fletcher applicators were divided into training and validation data in a ratio of 50:10, and another 10 patients with Fletcher applicators were used to test the model. Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD95) were used to evaluate the model. Segmented applicator coordinates were calculated and written into the RT structure file. Tip error and shaft error of the applicators were evaluated. Dosimetric differences between manual reconstruction and deep learning-based reconstruction were compared. Results Averaged over the 10 test patients, the DSC, HD95, and reconstruction time were 0.89, 1.66 mm, and 17.12 s, respectively. The average tip error was 0.80 mm, and the average shaft error was less than 0.50 mm. The dosimetric differences between manual and automatic reconstruction were 0.29% for high-risk clinical target volume (HR-CTV) D90%, and less than 2.64% for organs-at-risk D2cc in a scenario of doubled maximum shaft error. Conclusions We proposed a deep learning-based reconstruction method to localize Fletcher applicators in three-dimensional CT images. The achieved accuracy and efficiency confirm that our method is clinically attractive. It paves the way for the automation of brachytherapy treatment planning.
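A minimal sketch of the reconstruction step described above — turning a binary applicator segmentation into channel coordinates and scoring tip error — might look like the following. The per-slice-centroid path and the toy geometry are our own assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def channel_path(mask, spacing=(1.0, 1.0, 1.0)):
    """Per-slice centroids of a binary applicator mask (z, y, x), in mm.

    Returns an (n, 3) array of points ordered by slice index; the first
    point is taken as the applicator tip.
    """
    pts = []
    for z in range(mask.shape[0]):
        ys, xs = np.nonzero(mask[z])
        if ys.size:
            pts.append([z * spacing[0], ys.mean() * spacing[1], xs.mean() * spacing[2]])
    return np.array(pts)

def tip_error(auto_path, manual_path):
    """Distance between the tip (first) points of two reconstructed channels."""
    return float(np.linalg.norm(auto_path[0] - manual_path[0]))

# Toy geometry: a straight channel, with the "manual" path offset by 1 mm in x.
mask = np.zeros((5, 10, 10), bool)
mask[1:, 5, 5] = True                                    # tip in slice 1
auto = channel_path(mask)
manual = channel_path(mask) + np.array([0.0, 0.0, 1.0])  # hypothetical manual reference
```

With 1 mm isotropic spacing, the example yields a tip error of exactly 1 mm, which is the scale at which the paper reports its results.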
Collapse
|
28
|
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 92] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022] Open
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications. It is necessary to summarize the current state of development of deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories, 'pixel-wise classification' and 'end-to-end segmentation', and divided each category into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions, and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings, and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Collapse
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA.
| |
Collapse
|
29
|
Dai X, Lei Y, Liu Y, Wang T, Ren L, Curran WJ, Patel P, Liu T, Yang X. Intensity non-uniformity correction in MR imaging using residual cycle generative adversarial network. Phys Med Biol 2020; 65:215025. [PMID: 33245059 PMCID: PMC7934018 DOI: 10.1088/1361-6560/abb31f] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Correcting or reducing the effects of voxel intensity non-uniformity (INU) within a given tissue type is a crucial issue for quantitative magnetic resonance (MR) image analysis in daily clinical practice. Although it has no severe impact on visual diagnosis, INU can highly degrade the performance of automatic quantitative analyses such as segmentation, registration, feature extraction, and radiomics. In this study, we present an advanced deep learning-based INU correction algorithm called residual cycle generative adversarial network (res-cycle GAN), which integrates the residual block concept into a cycle-consistent GAN (cycle-GAN). In cycle-GAN, an inverse transformation was implemented between the INU-uncorrected and corrected magnetic resonance imaging (MRI) images to constrain the model by forcing the calculation of both an INU-corrected MRI and a synthetic corrected MRI. A fully convolutional neural network integrating residual blocks was applied in the generator of cycle-GAN to enhance the end-to-end transformation from raw MRI to INU-corrected MRI. A cohort of 55 abdominal patients with T1-weighted MR INU images and their corrections with a clinically established and commonly used method, N4ITK, were used as paired data to evaluate the proposed res-cycle GAN-based INU correction algorithm. Quantitative comparisons of normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and spatial non-uniformity (SNU) were made among the proposed method and other approaches. Our res-cycle GAN-based method achieved an NMAE of 0.011 ± 0.002, a PSNR of 28.0 ± 1.9 dB, an NCC of 0.970 ± 0.017, and an SNU of 0.298 ± 0.085. Our proposed method showed significant improvements (p < 0.05) in NMAE, PSNR, NCC, and SNU over other algorithms, including conventional GAN and U-net.
Once the model is well trained, our approach can automatically generate the corrected MR images in a few minutes, eliminating the need for manual setting of parameters.
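The residual-block idea the paper integrates into its cycle-GAN generator can be illustrated in a few lines: the block computes x + F(x), so with near-zero weights it defaults to the identity mapping, which is what eases the training of deep generators. The toy single-channel convolution below is our own simplification, not the paper's architecture:

```python
import numpy as np

def conv3x3(x, w):
    """'Same' 3x3 cross-correlation of a 2D array with zero padding."""
    xp = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            out += w[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def residual_block(x, w1, w2):
    """Residual block: identity skip plus a small conv-ReLU-conv branch F(x)."""
    return x + conv3x3(np.maximum(conv3x3(x, w1), 0.0), w2)

x = np.random.default_rng(2).random((4, 4))
zero = np.zeros((3, 3))
y = residual_block(x, zero, zero)  # zero weights: the block reduces to the identity
```

Because the skip path carries the input through unchanged, the branch only has to learn the (typically small) correction — here, the INU field — rather than the whole image mapping.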
Collapse
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
| | - Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
| | - Lei Ren
- Department of Radiation Oncology, Duke University, Durham, NC, 27708, United States of America
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, United States of America
| |
Collapse
|
30
|
Gillies DJ, Rodgers JR, Gyacskov I, Roy P, Kakani N, Cool DW, Fenster A. Deep learning segmentation of general interventional tools in two‐dimensional ultrasound images. Med Phys 2020; 47:4956-4970. [DOI: 10.1002/mp.14427] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 07/05/2020] [Accepted: 07/21/2020] [Indexed: 12/18/2022] Open
Affiliation(s)
- Derek J. Gillies
- Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
| | - Jessica R. Rodgers
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- School of Biomedical Engineering, Western University, London, Ontario N6A 3K7, Canada
| | - Igor Gyacskov
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
| | - Priyanka Roy
- Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
| | - Nirmal Kakani
- Department of Radiology, Manchester Royal Infirmary, Manchester M13 9WL, UK
| | - Derek W. Cool
- Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
| | - Aaron Fenster
- Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- School of Biomedical Engineering, Western University, London, Ontario N6A 3K7, Canada
- Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
| |
Collapse
|