1. Zhao G, Chen X, Zhu M, Liu Y, Wang Y. Exploring the application and future outlook of Artificial intelligence in pancreatic cancer. Front Oncol 2024;14:1345810. [PMID: 38450187; PMCID: PMC10915754; DOI: 10.3389/fonc.2024.1345810]
Abstract
Pancreatic cancer, an exceptionally malignant tumor of the digestive system, presents a diagnostic challenge because it lacks typical early symptoms and is highly invasive. Most pancreatic cancer patients are diagnosed when curative surgical resection is no longer possible, resulting in a poor overall prognosis. In recent years, the rapid progress of artificial intelligence (AI) in the medical field has made machine learning and deep learning the prevailing approaches. Various AI-based models have been employed in the early screening, diagnosis, treatment, and prognostic prediction of pancreatic cancer patients, and three-dimensional visualization and augmented reality navigation techniques have also found their way into pancreatic cancer surgery. This article concisely summarizes the current state of AI technology in pancreatic cancer and offers an outlook on its future applications.
Affiliation(s)
- Guohua Zhao
  - Department of General Surgery, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Liaoning, China
- Xi Chen
  - Department of General Surgery, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Liaoning, China
  - Department of Clinical Integration of Traditional Chinese and Western Medicine, Liaoning University of Traditional Chinese Medicine, Liaoning, China
- Mengying Zhu
  - Department of General Surgery, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Liaoning, China
  - Department of Clinical Integration of Traditional Chinese and Western Medicine, Liaoning University of Traditional Chinese Medicine, Liaoning, China
- Yang Liu
  - Department of Ophthalmology, First Hospital of China Medical University, Liaoning, China
- Yue Wang
  - Department of General Surgery, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Liaoning, China
2. Boldrini L, D'Aviero A, De Felice F, Desideri I, Grassi R, Greco C, Iorio GC, Nardone V, Piras A, Salvestrini V. Artificial intelligence applied to image-guided radiation therapy (IGRT): a systematic review by the Young Group of the Italian Association of Radiotherapy and Clinical Oncology (yAIRO). Radiol Med 2024;129:133-151. [PMID: 37740838; DOI: 10.1007/s11547-023-01708-4]
Abstract
INTRODUCTION: The advent of image-guided radiation therapy (IGRT) has changed the workflow of radiation treatments by ensuring highly collimated delivery. Artificial intelligence (AI) and radiomics are tools that have shown promising results for diagnosis, treatment optimization, and outcome prediction. This review assesses the impact of AI and radiomics on modern IGRT modalities in RT.
METHODS: A PubMed/MEDLINE and Embase systematic review was conducted to investigate the impact of radiomics and AI on modern IGRT modalities. The search strategies were "Radiomics" AND "Cone Beam Computed Tomography"; "Radiomics" AND "Magnetic Resonance guided Radiotherapy"; "Radiomics" AND "on board Magnetic Resonance Radiotherapy"; "Artificial Intelligence" AND "Cone Beam Computed Tomography"; "Artificial Intelligence" AND "Magnetic Resonance guided Radiotherapy"; and "Artificial Intelligence" AND "on board Magnetic Resonance Radiotherapy". Only original articles published up to 01.11.2022 were considered.
RESULTS: A total of 402 studies were retrieved from PubMed and Embase using this search strategy. The analysis was performed on the 84 papers that remained after the complete selection process: 23 papers analyzed radiomics applied to IGRT, while 61 focused on the impact of AI on IGRT techniques.
DISCUSSION: AI and radiomics appear to have a significant impact on IGRT across all phases of the RT workflow, although the evidence in the literature is based on retrospective data. Further studies are needed to confirm these tools' potential and to establish stronger correlations with clinical outcomes and gold-standard treatment strategies.
Affiliation(s)
- Luca Boldrini
  - UOC Radioterapia Oncologica, Fondazione Policlinico Universitario IRCCS "A. Gemelli", Rome, Italy
  - Università Cattolica del Sacro Cuore, Rome, Italy
- Andrea D'Aviero
  - Radiation Oncology, Mater Olbia Hospital, Olbia, Sassari, Italy
- Francesca De Felice
  - Radiation Oncology, Department of Radiological, Oncological and Pathological Sciences, Policlinico Umberto I, "Sapienza" University of Rome, Rome, Italy
- Isacco Desideri
  - Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Roberta Grassi
  - Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Carlo Greco
  - Department of Radiation Oncology, Università Campus Bio-Medico di Roma, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Valerio Nardone
  - Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Antonio Piras
  - UO Radioterapia Oncologica, Villa Santa Teresa, Bagheria, Palermo, Italy
- Viola Salvestrini
  - Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
  - Cyberknife Center, Istituto Fiorentino di Cura e Assistenza (IFCA), 50139 Florence, Italy
3. Jiang J, Choi CMS, Deasy JO, Rimner A, Thor M, Veeraraghavan H. Artificial intelligence-based automated segmentation and radiotherapy dose mapping for thoracic normal tissues. Phys Imaging Radiat Oncol 2024;29:100542. [PMID: 38369989; PMCID: PMC10869275; DOI: 10.1016/j.phro.2024.100542]
Abstract
Background and purpose: Objective assessment of delivered radiotherapy (RT) to thoracic organs requires fast and accurate deformable dose mapping. The aim of this study was to implement and evaluate an artificial intelligence (AI) deformable image registration (DIR) and organ segmentation-based AI dose mapping approach (AIDA), applied to the esophagus and the heart.
Materials and methods: AIDA metrics were calculated in an automated pipeline for 72 locally advanced non-small cell lung cancer patients treated with concurrent chemo-RT to 60 Gy in 2 Gy fractions. The pipeline steps were: (i) automated rigid alignment and cropping of the planning CT to the week 1 and week 2 cone-beam CT (CBCT) fields of view, (ii) AI segmentation on the CBCTs, and (iii) AI-DIR-based dose mapping to compute dose metrics. AIDA dose metrics were compared to the planned dose and to manual contour dose mapping (manual DA).
Results: AIDA required ~2 min/patient. Esophagus and heart segmentations were generated with mean Dice similarity coefficients (DSC) of 0.80 ± 0.15 and 0.94 ± 0.05, and Hausdorff distances at the 95th percentile (HD95) of 3.9 ± 3.4 mm and 14.1 ± 8.3 mm, respectively. The AIDA heart dose was significantly lower than the planned heart dose (p = 0.04). Larger dose deviations (≥1 Gy) were observed more frequently between AIDA and the planned dose (N = 26) than with manual DA (N = 6).
Conclusions: Rapid estimation of the RT dose to thoracic tissues from CBCT is feasible with AIDA. AIDA-derived metrics and segmentations were similar to manual DA, motivating the use of AIDA for RT applications.
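The DSC and HD95 figures quoted above are standard segmentation metrics. As an illustrative sketch only (not the authors' pipeline), both can be computed for binary masks with NumPy and SciPy; the HD95 here is approximated over all mask voxels rather than extracted surfaces:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing=(1.0, 1.0)):
    """Symmetric 95th-percentile Hausdorff distance via distance transforms,
    approximated over all mask voxels rather than extracted surfaces."""
    a, b = a.astype(bool), b.astype(bool)
    d_ab = distance_transform_edt(~b, sampling=spacing)[a]  # A-voxels -> nearest B
    d_ba = distance_transform_edt(~a, sampling=spacing)[b]  # B-voxels -> nearest A
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Two overlapping toy squares on a 2D grid.
g1 = np.zeros((50, 50), dtype=bool); g1[10:30, 10:30] = True
g2 = np.zeros((50, 50), dtype=bool); g2[12:32, 10:30] = True
print(round(dice(g1, g2), 3))   # prints 0.9
```

The `sampling` argument carries the voxel spacing, so distances come out in millimeters when the masks are defined on a CT grid.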
Affiliation(s)
- Jue Jiang
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Chloe Min Seo Choi
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
  - Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
- Joseph O. Deasy
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Andreas Rimner
  - Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Maria Thor
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Harini Veeraraghavan
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
4. Kumar V, Gaddam M, Moustafa A, Iqbal R, Gala D, Shah M, Gayam VR, Bandaru P, Reddy M, Gadaputi V. The Utility of Artificial Intelligence in the Diagnosis and Management of Pancreatic Cancer. Cureus 2023;15:e49560. [PMID: 38156176; PMCID: PMC10754023; DOI: 10.7759/cureus.49560]
Abstract
Artificial intelligence (AI) has made significant advancements in medicine in recent years. AI, an expansive field comprising machine learning (ML) and, within it, deep learning (DL), seeks to emulate the intricate operations of the human brain: it examines vast amounts of data and plays a crucial role in decision-making, overcoming the limitations of human evaluation. ML and DL are subsets of AI that use statistical techniques to help machines improve at tasks with experience. Pancreatic cancer is more common in developed countries and is one of the leading causes of cancer-related mortality worldwide, and managing it remains a challenge despite significant advancements in diagnosis and treatment. AI now has an almost ubiquitous presence in oncological workup and management, especially for gastrointestinal malignancies. It is particularly useful in the workup of pancreatic carcinoma because the tumor has specific radiological features that enable diagnostic procedures without requiring a histological study. However, interpreting and evaluating the resulting images is not always simple, since they vary as the disease progresses, and a number of factors may affect prognosis and response to treatment. AI models have been created for diagnosing, grading, staging, and predicting prognosis and treatment response. This review presents the most up-to-date knowledge on the use of AI in the diagnosis and treatment of pancreatic carcinoma.
Affiliation(s)
- Vikash Kumar
  - Internal Medicine, The Brooklyn Hospital Center, Brooklyn, USA
- Amr Moustafa
  - Internal Medicine, The Brooklyn Hospital Center, Brooklyn, USA
- Rabia Iqbal
  - Internal Medicine, The Brooklyn Hospital Center, Brooklyn, USA
- Dhir Gala
  - Internal Medicine, American University of the Caribbean School of Medicine, Sint Maarten, SXM
- Mili Shah
  - Internal Medicine, American University of the Caribbean School of Medicine, Sint Maarten, SXM
- Vijay Reddy Gayam
  - Gastroenterology and Hepatology, The Brooklyn Hospital Center, Brooklyn, USA
- Praneeth Bandaru
  - Gastroenterology and Hepatology, The Brooklyn Hospital Center, Brooklyn, USA
- Madhavi Reddy
  - Gastroenterology and Hepatology, The Brooklyn Hospital Center, Brooklyn, USA
- Vinaya Gadaputi
  - Gastroenterology and Hepatology, Blanchard Valley Health System, Findlay, USA
5. Liu Y, Yang B, Chen X, Zhu J, Ji G, Liu Y, Chen B, Lu N, Yi J, Wang S, Li Y, Dai J, Men K. Efficient segmentation using domain adaptation for MRI-guided and CBCT-guided online adaptive radiotherapy. Radiother Oncol 2023;188:109871. [PMID: 37634767; DOI: 10.1016/j.radonc.2023.109871]
Abstract
BACKGROUND: Delineation of regions of interest (ROIs) is important for adaptive radiotherapy (ART), but it is time consuming and labor intensive.
AIM: This study aims to develop efficient segmentation methods for magnetic resonance imaging-guided ART (MRIgART) and cone-beam computed tomography-guided ART (CBCTgART).
MATERIALS AND METHODS: The MRIgART and CBCTgART studies enrolled 242 prostate cancer patients and 530 nasopharyngeal carcinoma patients, respectively. A public CBCT dataset of 35 pancreatic cancer patients was adopted to test the framework. Two domain adaptation methods were designed to learn and adapt features from the planning computed tomography (pCT) to the MRI or CBCT modality: pCT was transformed to synthetic MRI (sMRI) for MRIgART, while CBCT was transformed to synthetic CT (sCT) for CBCTgART. Generalized segmentation models were trained on large population data, with sMRI as input for MRIgART and pCT as input for CBCTgART. Finally, a personalized model for each patient was established by fine-tuning the generalized model with the contours on that patient's pCT. The proposed method was compared with deformable image registration (DIR), a regular deep learning model trained on the same modality (DL-regular), and the generalized model in the framework (DL-generalized).
RESULTS: The proposed method achieved better or comparable performance. For MRIgART of the prostate cancer patients, the mean DSC over four ROIs was 87.2%, 83.75%, 85.36%, and 92.20% for DIR, DL-regular, DL-generalized, and the proposed method, respectively. For CBCTgART of the nasopharyngeal carcinoma patients, the mean DSCs of the two target volumes were 90.81% and 91.18%, 75.17% and 58.30%, for DIR, DL-regular, DL-generalized, and the proposed method, respectively. For CBCTgART of the pancreatic cancer patients, the mean DSCs of the two ROIs were 61.94% and 61.44%, 63.94% and 81.56%, for DIR, DL-regular, DL-generalized, and the proposed method, respectively.
CONCLUSION: The proposed method, using personalized modeling, improved the segmentation accuracy of ART.
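The pCT-to-sMRI and CBCT-to-sCT translations in this framework are learned by domain adaptation networks. The code below is only a hypothetical, much cruder baseline for the same idea, aligning one modality's intensity distribution to another's by classic histogram matching (NumPy only, toy data):

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities so that their empirical CDF matches the
    reference image's CDF (classic histogram matching)."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)  # same quantile -> reference value
    return mapped[s_idx].reshape(source.shape)

rng = np.random.default_rng(0)
cbct = rng.normal(0.0, 30.0, (64, 64))   # toy scatter-degraded intensities
pct = rng.normal(100.0, 10.0, (64, 64))  # toy planning-CT-like reference
sct = match_histogram(cbct, pct)         # intensity distribution now matches pct
```

Unlike the learned translation, this mapping is purely global and cannot correct spatially varying artifacts, which is precisely why networks are used instead.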
Affiliation(s)
- Yuxiang Liu
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Bining Yang
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Xinyuan Chen
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Ji Zhu
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Guangqian Ji
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yueping Liu
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Bo Chen
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Ningning Lu
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Junlin Yi
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Shulian Wang
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yexiong Li
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Jianrong Dai
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Kuo Men
  - National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
6. Tian D, Sun G, Zheng H, Yu S, Jiang J. CT-CBCT deformable registration using weakly-supervised artifact-suppression transfer learning network. Phys Med Biol 2023;68:165011. [PMID: 37433303; DOI: 10.1088/1361-6560/ace675]
Abstract
Objective: Computed tomography-cone-beam computed tomography (CT-CBCT) deformable registration has great potential in adaptive radiotherapy. It plays an important role in tumor tracking, secondary planning, accurate irradiation, and the protection of organs at risk. Neural networks have been improving CT-CBCT deformable registration, and almost all registration algorithms based on neural networks rely on the gray values of both CT and CBCT. The gray value is a key factor in the loss function, parameter training, and final efficacy of the registration. Unfortunately, the scattering artifacts in CBCT affect the gray values of different pixels inconsistently, so direct registration of the original CT-CBCT introduces an artifact-superposition loss.
Approach: In this study, a histogram analysis of the gray values was used. Based on the gray-value distribution characteristics of different regions in CT and CBCT, the degree of artifact superposition in the region of disinterest was found to be much higher than in the region of interest, and the former was the main source of the artifact-superposition loss. Consequently, a new weakly supervised two-stage transfer-learning network based on artifact suppression was proposed: the first stage is a pre-training network designed to suppress artifacts in the region of disinterest; the second stage is a convolutional neural network that registers the suppressed CBCT to the CT.
Main results: In a comparative test of thoracic CT-CBCT deformable registration, on data collected from the Elekta XVI system, rationality and accuracy after artifact suppression were significantly improved compared with algorithms without artifact suppression.
Significance: This study proposed and verified a new deformable registration method with multi-stage neural networks, which effectively suppresses artifacts and further improves registration by incorporating a pre-training technique and an attention mechanism.
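The paper's first-stage network learns artifact suppression; as a loose, hypothetical illustration of the underlying idea (neutralizing the region of disinterest, where artifact superposition dominates, so it no longer drives the registration loss), one can simply flatten it:

```python
import numpy as np

def suppress_outside_roi(cbct, roi, fill=None):
    """Replace gray values outside the region of interest (the 'region of
    disinterest', where scatter-artifact superposition dominates) with a
    constant, so they no longer drive an intensity-based registration loss."""
    if fill is None:
        fill = float(np.median(cbct[roi]))  # neutral value from the ROI histogram
    out = cbct.astype(float).copy()
    out[~roi] = fill
    return out

img = np.arange(16.0).reshape(4, 4)        # toy CBCT slice
roi = np.zeros((4, 4), dtype=bool)
roi[1:3, 1:3] = True                       # toy region of interest
sup = suppress_outside_roi(img, roi)       # inside ROI unchanged, outside flattened
```

The learned network is of course far more nuanced than a constant fill, which is the point of training a suppression stage at all.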
Affiliation(s)
- Dingshu Tian
  - University of Science and Technology of China, Hefei 230026, People's Republic of China
  - Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, People's Republic of China
- Guangyao Sun
  - SuperSafety Science and Technology Co., Ltd, Hefei 230088, People's Republic of China
  - International Academy of Neutron Science, Qingdao 266199, People's Republic of China
- Huaqing Zheng
  - International Academy of Neutron Science, Qingdao 266199, People's Republic of China
  - Super Accuracy Science and Technology Co., Ltd, Nanjing 210044, People's Republic of China
- Shengpeng Yu
  - SuperSafety Science and Technology Co., Ltd, Hefei 230088, People's Republic of China
  - International Academy of Neutron Science, Qingdao 266199, People's Republic of China
- Jieqiong Jiang
  - Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, People's Republic of China
7. Jiang J, Hong J, Tringale K, Reyngold M, Crane C, Tyagi N, Veeraraghavan H. Progressively refined deep joint registration segmentation (ProRSeg) of gastrointestinal organs at risk: Application to MRI and cone-beam CT. Med Phys 2023;50:4758-4774. [PMID: 37265185; PMCID: PMC11009869; DOI: 10.1002/mp.16527]
Abstract
BACKGROUND: Adaptive radiation treatment (ART) for locally advanced pancreatic cancer (LAPC) requires consistently accurate segmentation of the extremely mobile gastrointestinal (GI) organs at risk (OARs), including the stomach, duodenum, and large and small bowel. In addition, for lack of sufficiently accurate and fast deformable image registration (DIR), accumulated dose to the GI OARs is currently only approximated, further limiting the ability to precisely adapt treatments.
PURPOSE: To develop a 3D progressively refined joint registration-segmentation (ProRSeg) deep network to deformably align and segment treatment-fraction magnetic resonance images (MRIs), then evaluate segmentation accuracy, registration consistency, and feasibility for OAR dose accumulation.
METHOD: ProRSeg was trained using five-fold cross-validation with 110 T2-weighted MRIs acquired at five treatment fractions from 10 different patients, taking care that scans of the same patient were not placed in both training and testing folds. Segmentation accuracy was measured using the Dice similarity coefficient (DSC) and Hausdorff distance at the 95th percentile (HD95). Registration consistency was measured using the coefficient of variation (CV) in OAR displacement. Statistical comparisons to other deep learning and iterative registration methods were made using the Kruskal-Wallis test, followed by pairwise comparisons with Bonferroni correction for multiple testing. Ablation tests and accuracy comparisons against multiple methods were performed. Finally, the applicability of ProRSeg to segmenting cone-beam CT (CBCT) scans was evaluated on a publicly available dataset of 80 scans using five-fold cross-validation.
RESULTS: ProRSeg processed 3D volumes (128 × 192 × 128) in 3 s on an NVIDIA Tesla V100 GPU. Its segmentations were significantly more accurate (p < 0.001) than the compared methods, achieving DSCs from MRI of 0.94 ± 0.02 for liver, 0.88 ± 0.04 for large bowel, 0.78 ± 0.03 for small bowel, and 0.82 ± 0.04 for stomach-duodenum, and DSCs on the public CBCT dataset of 0.72 ± 0.01 for small bowel and 0.76 ± 0.03 for stomach-duodenum. ProRSeg registrations resulted in the lowest CV in displacement (stomach-duodenum CVx: 0.75%, CVy: 0.73%, CVz: 0.81%; small bowel CVx: 0.80%, CVy: 0.80%, CVz: 0.68%; large bowel CVx: 0.71%, CVy: 0.81%, CVz: 0.75%). ProRSeg-based dose accumulation accounting for intra-fraction (pre-treatment to post-treatment MRI) and inter-fraction motion showed that organ dose constraints were violated in four patients for stomach-duodenum and in three patients for small bowel. Study limitations include the lack of independent testing and of ground-truth phantom datasets to measure dose accumulation accuracy.
CONCLUSIONS: ProRSeg produced more accurate and consistent GI OAR segmentations and DIR of MRIs and CBCTs compared with multiple methods. Preliminary results indicate feasibility for OAR dose accumulation using ProRSeg.
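Registration consistency above is reported as the per-axis coefficient of variation (CV) of OAR displacement across fractions. A minimal sketch of that metric, with toy numbers rather than the study's data:

```python
import numpy as np

def displacement_cv(displacements):
    """Coefficient of variation (%) of per-fraction OAR displacements,
    computed independently for each axis (x, y, z)."""
    d = np.asarray(displacements, dtype=float)   # shape: (n_fractions, 3)
    return 100.0 * d.std(axis=0, ddof=1) / d.mean(axis=0)

# Toy per-fraction displacements (mm) for one organ over five fractions.
disp = np.array([[2.0, 1.0, 3.0],
                 [2.1, 1.1, 3.1],
                 [1.9, 0.9, 2.9],
                 [2.0, 1.0, 3.0],
                 [2.0, 1.0, 3.0]])
print(np.round(displacement_cv(disp), 2))
```

A low CV means the network displaces the organ by nearly the same amount at every fraction, i.e., the registrations are mutually consistent.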
Affiliation(s)
- Jue Jiang
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Jun Hong
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Kathryn Tringale
  - Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Marsha Reyngold
  - Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Christopher Crane
  - Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Neelam Tyagi
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Harini Veeraraghavan
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
8. Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023;15:3608. [PMID: 37509272; PMCID: PMC10377683; DOI: 10.3390/cancers15143608]
Abstract
(1) Background: Applying deep learning to cancer diagnosis from medical images is a research hotspot in artificial intelligence and computer vision. Given the rapid development of deep learning methods, the high accuracy and timeliness that cancer diagnosis demands, and the inherent particularity and complexity of medical imaging, a comprehensive review of relevant studies is needed to help readers understand the current research status and ideas. (2) Methods: Five types of radiological images, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architectures of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural networks emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting-prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The applications of deep learning in medical image-based cancer analysis are then sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, reconstruction, detection, segmentation, registration, and synthesis. However, the lack of high-quality labeled datasets limits its role, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard cancer databases are needed. Pretrained deep neural network models can still be improved, and special attention should be paid to multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning are likely to bring further advances to cancer diagnosis based on medical images.
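Of the overfitting-prevention techniques the review lists, dropout is the simplest to show in code. A minimal NumPy sketch of inverted dropout (illustrative only, not taken from the review):

```python
import numpy as np

def dropout(x, p, rng, train=True):
    """Inverted dropout: zero each activation with probability p during
    training and rescale survivors by 1/(1-p); identity at inference."""
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p   # keep each unit with probability 1-p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(42)
acts = np.ones(1000)
out = dropout(acts, 0.5, rng)         # survivors scaled to 2.0, mean stays near 1.0
```

The 1/(1-p) rescaling keeps the expected activation unchanged, which is why no correction is needed at inference time.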
Grants
- RM32G0178B8 BBSRC, UK
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
  - School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
  - School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
  - School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
  - School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
9. Liang X, Morgan H, Bai T, Dohopolski M, Nguyen D, Jiang S. Deep learning based direct segmentation assisted by deformable image registration for cone-beam CT based auto-segmentation for adaptive radiotherapy. Phys Med Biol 2023;68. [PMID: 36657169; DOI: 10.1088/1361-6560/acb4d7]
Abstract
Cone-beam CT (CBCT)-based online adaptive radiotherapy calls for accurate auto-segmentation to reduce the time cost for physicians. However, deep learning (DL)-based direct segmentation of CBCT images is a challenging task, mainly because of poor image quality and the lack of large, well-labeled training datasets. Deformable image registration (DIR) is often used to propagate the manual contours on the planning CT (pCT) of the same patient to the CBCT. In this work, we address these problems with the assistance of DIR. Our method has three main components. First, deformed pCT contours derived from multiple DIR methods between pCT and CBCT serve as pseudo labels for initial training of the DL-based direct segmentation model. Second, deformed pCT contours from another DIR algorithm serve as influencer volumes that define the region of interest for DL-based direct segmentation. Third, the initially trained DL model is fine-tuned using a smaller set of true labels. Nine patients were used for model evaluation. We found that DL-based direct segmentation of CBCT without influencer volumes performs much worse than DIR-based segmentation. However, adding deformed pCT contours as influencer volumes in the direct segmentation network dramatically improves performance, reaching the accuracy level of DIR-based segmentation. The DL model with influencer volumes can be further improved by fine-tuning on a smaller set of true labels, achieving a mean Dice similarity coefficient of 0.86, a Hausdorff distance at the 95th percentile of 2.34 mm, and an average surface distance of 0.56 mm. In short, a DL-based direct CBCT segmentation model can be improved to outperform DIR-based segmentation by using deformed pCT contours as pseudo labels and influencer volumes for initial training, and a smaller set of true labels for fine-tuning.
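The first component, using contours deformed by multiple DIR methods as pseudo labels, is a label-fusion step. One simple hypothetical fusion rule (not necessarily the one used in the paper) is a per-voxel majority vote:

```python
import numpy as np

def majority_vote(contours):
    """Fuse binary contours propagated by several DIR methods into a single
    pseudo-label mask: a voxel is labeled foreground if most methods agree."""
    stack = np.stack([np.asarray(c, dtype=bool) for c in contours])
    return stack.sum(axis=0) > stack.shape[0] / 2.0

# Toy 2D "contours" from three hypothetical DIR algorithms.
a = np.array([[1, 1, 0], [0, 0, 0]], dtype=bool)
b = np.array([[1, 0, 0], [1, 0, 0]], dtype=bool)
c = np.array([[1, 1, 1], [0, 0, 0]], dtype=bool)
fused = majority_vote([a, b, c])   # foreground where at least 2 of 3 agree
```

Voting suppresses the idiosyncratic errors of any single DIR algorithm, which is what makes the fused masks usable as pseudo labels for initial training.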
Collapse
Affiliation(s)
- Xiao Liang
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
| | - Howard Morgan
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
| | - Ti Bai
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
| | - Michael Dohopolski
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
| | - Dan Nguyen
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
| | - Steve Jiang
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
| |
|
10
|
Lee D, Alam S, Jiang J, Cervino L, Hu YC, Zhang P. Seq2Morph: A deep learning deformable image registration algorithm for longitudinal imaging studies and adaptive radiotherapy. Med Phys 2023; 50:970-979. [PMID: 36303270 PMCID: PMC10388694 DOI: 10.1002/mp.16026] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 07/25/2022] [Accepted: 10/02/2022] [Indexed: 11/08/2022] Open
Abstract
PURPOSE To simultaneously register all the longitudinal images acquired in a radiotherapy course for analyzing patients' anatomy changes for adaptive radiotherapy (ART). METHODS To address the unique needs of ART, we designed Seq2Morph, a novel deep learning-based deformable image registration (DIR) network. Seq2Morph was built upon VoxelMorph, a general-purpose framework for learning-based image registration. The major upgrades are (1) expansion of the inputs to all weekly cone-beam computed tomography (CBCT) scans acquired for monitoring treatment responses throughout a radiotherapy course, for registration to the planning CT; (2) incorporation of 3D convolutional long short-term memory between the encoder and decoder of VoxelMorph, to parse the temporal patterns of anatomical changes; and (3) addition of bidirectional pathways to calculate and minimize inverse consistency errors (ICEs). Longitudinal image sets from 50 patients, each including a planning CT and 6 weekly CBCTs, were utilized for network training and cross-validation. The outputs were deformation vector fields for all the registration pairs. The loss function was composed of a normalized cross-correlation term for image intensity similarity, a Dice term for contour similarity, an ICE term, and a deformation regularization term. For performance evaluation, Dice and Hausdorff distance (HD) between the manual and predicted contours of tumor and esophagus were quantified on a weekly basis and compared with other state-of-the-art algorithms, including conventional VoxelMorph and large deformation diffeomorphic metric mapping (LDDMM). RESULTS Visualization of the hidden states of Seq2Morph revealed distinct spatiotemporal anatomy change patterns. Quantitatively, Seq2Morph performed similarly to LDDMM but significantly outperformed VoxelMorph as measured by gross tumor volume (GTV) Dice (0.799±0.078, 0.798±0.081, and 0.773±0.078) and 50% HD (mm) (0.80±0.57, 0.88±0.66, and 0.95±0.60). The per-patient inference of Seq2Morph took 22 s, much less than LDDMM (∼30 min). CONCLUSIONS Seq2Morph can provide accurate and fast DIR for longitudinal imaging studies by exploiting spatiotemporal patterns. It closely matches the clinical workflow and has the potential to serve both online and offline ART.
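The Seq2Morph loss described above combines normalized cross-correlation, Dice, inverse consistency error, and a regularizer. A minimal numpy sketch of two of those terms, under simplifying assumptions (global rather than locally windowed NCC, and 1-D displacement fields with nearest-neighbour lookup):

```python
import numpy as np

def ncc(fixed: np.ndarray, moving: np.ndarray, eps: float = 1e-8) -> float:
    """Global normalized cross-correlation; 1.0 means identical up to an affine intensity map."""
    f = fixed - fixed.mean()
    m = moving - moving.mean()
    return float((f * m).sum() / (np.sqrt((f ** 2).sum() * (m ** 2).sum()) + eps))

def inverse_consistency_error(fwd: np.ndarray, bwd: np.ndarray) -> float:
    """Mean ICE for 1-D displacement fields on a grid: |u_fwd(x) + u_bwd(x + u_fwd(x))|,
    using nearest-neighbour lookup of the backward field at the forward-warped position."""
    x = np.arange(len(fwd), dtype=float)
    idx = np.clip(np.round(x + fwd).astype(int), 0, len(bwd) - 1)
    return float(np.abs(fwd + bwd[idx]).mean())

rng = np.random.default_rng(0)
img = rng.random((16, 16))
score_same = ncc(img, img)                 # close to 1.0 for identical images
fwd = np.full(8, 1.0)                      # shift right by 1 voxel
bwd = np.full(8, -1.0)                     # shift left by 1 voxel: perfectly inverse-consistent
ice = inverse_consistency_error(fwd, bwd)  # 0.0
```

In a training loop these terms would be weighted and summed with a Dice loss and a smoothness regularizer on the deformation field.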
Affiliation(s)
- Donghoon Lee
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
| | - Sadegh Alam
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
| | - Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
| | - Laura Cervino
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
| | - Yu-Chi Hu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
| | - Pengpeng Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
| |
|
11
|
Mackay K, Bernstein D, Glocker B, Kamnitsas K, Taylor A. A Review of the Metrics Used to Assess Auto-Contouring Systems in Radiotherapy. Clin Oncol (R Coll Radiol) 2023; 35:354-369. [PMID: 36803407 DOI: 10.1016/j.clon.2023.01.016] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Revised: 12/05/2022] [Accepted: 01/23/2023] [Indexed: 02/01/2023]
Abstract
Auto-contouring could revolutionise future planning of radiotherapy treatment. The lack of consensus on how to assess and validate auto-contouring systems currently limits clinical use. This review formally quantifies the assessment metrics used in studies published during one calendar year and assesses the need for standardised practice. A PubMed literature search was undertaken for papers evaluating radiotherapy auto-contouring published during 2021. Papers were assessed for the types of metric used and the methodology used to generate ground-truth comparators. Our PubMed search identified 212 studies, of which 117 met the criteria for clinical review. Geometric assessment metrics were used in 116 of 117 studies (99.1%), including the Dice Similarity Coefficient, used in 113 (96.6%) studies. Clinically relevant metrics, such as qualitative, dosimetric and time-saving metrics, were used less frequently, in 22 (18.8%), 27 (23.1%) and 18 (15.4%) of 117 studies, respectively. There was heterogeneity within each category of metric: over 90 different names for geometric measures were used, and the methods for qualitative assessment differed in all but two papers. Variation also existed in the methods used to generate radiotherapy plans for dosimetric assessment, and editing time was considered in only 11 (9.4%) papers. A single manual contour was used as the ground-truth comparator in 65 (55.6%) studies; only 31 (26.5%) studies compared auto-contours to usual inter- and/or intra-observer variation. In conclusion, significant variation exists in how research papers currently assess the accuracy of automatically generated contours. Geometric measures are the most popular; however, their clinical utility is unknown. There is heterogeneity in the methods used to perform clinical assessment. Considering the different stages of system implementation may provide a framework for deciding the most appropriate metrics. This analysis supports the need for a consensus on the clinical implementation of auto-contouring.
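Among the geometric metrics the review tallies, the 95th-percentile Hausdorff distance is a representative example. A minimal sketch over point sets (real implementations operate on contour surfaces and account for voxel spacing):

```python
import numpy as np

def hd95(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance between two (N, d) point sets."""
    # pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # for each point in A, distance to its nearest point in B
    b_to_a = d.min(axis=0)
    return float(max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95)))

# Two unit squares (corner points) offset by 1 mm along x; the far corners are
# 1 mm from their nearest counterparts, and HD95 evaluates to 1.0
sq = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
shifted = sq + np.array([1.0, 0.0])
value = hd95(sq, shifted)
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier surface points, which is one reason it is preferred over the classic Hausdorff distance in contouring studies.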
Affiliation(s)
- K Mackay
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK.
| | - D Bernstein
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
| | - B Glocker
- Department of Computing, Imperial College London, South Kensington Campus, London, UK
| | - K Kamnitsas
- Department of Computing, Imperial College London, South Kensington Campus, London, UK; Department of Engineering Science, University of Oxford, Oxford, UK
| | - A Taylor
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
| |
|
12
|
Li N, Zhou X, Chen S, Dai J, Wang T, Zhang C, He W, Xie Y, Liang X. Incorporating the synthetic CT image for improving the performance of deformable image registration between planning CT and cone-beam CT. Front Oncol 2023; 13:1127866. [PMID: 36910636 PMCID: PMC9993856 DOI: 10.3389/fonc.2023.1127866] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 01/25/2023] [Indexed: 02/25/2023] Open
Abstract
Objective To develop a contrast learning-based generative (CLG) model that generates high-quality synthetic computed tomography (sCT) from low-quality cone-beam CT (CBCT), thereby improving the performance of deformable image registration (DIR). Methods This study included 100 patients treated with breast-conserving surgery, with planning CT (pCT) images, CBCT images, and target contours delineated by physicians. Synthetic CT images were generated from the CBCT images via the proposed CLG model. We used the sCT images, rather than the CBCT images, as the fixed images to achieve accurate multi-modality image registration. The resulting deformation vector field was applied to propagate the target contour from the pCT to the CBCT, realizing automatic target segmentation on CBCT images. We calculated the Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (HD95), and average surface distance (ASD) between the predicted and reference segmentations to evaluate the proposed method. Results The DSC, HD95, and ASD of the target contours with the proposed method were 0.87 ± 0.04, 4.55 ± 2.18, and 1.41 ± 0.56, respectively. The proposed method outperformed the traditional method without synthetic CT assistance (0.86 ± 0.05, 5.17 ± 2.60, and 1.55 ± 0.72), especially for soft-tissue targets such as the tumor bed region. Conclusion The CLG model proposed in this study can create high-quality sCT from low-quality CBCT and improve the performance of DIR between the CBCT and the pCT. The target segmentation accuracy is better than that of traditional DIR.
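The workflow above applies a deformation vector field (DVF) from DIR to propagate the target contour from the pCT grid onto the CBCT. A minimal 2-D backward-warping sketch with nearest-neighbour interpolation (clinical systems use 3-D fields and higher-order interpolation):

```python
import numpy as np

def propagate_contour(mask: np.ndarray, dvf: np.ndarray) -> np.ndarray:
    """Warp a 2-D binary mask with a displacement field (backward/pull warping,
    nearest-neighbour). dvf has shape (2, H, W): per-voxel (dy, dx) pointing
    from the fixed-image grid into the moving image."""
    h, w = mask.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(yy + dvf[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xx + dvf[1]).astype(int), 0, w - 1)
    return mask[src_y, src_x]

# Toy: a 3x3 contour mask, moved 2 voxels to the right by a constant field
m = np.zeros((8, 8), dtype=bool); m[2:5, 1:4] = True
dvf = np.zeros((2, 8, 8)); dvf[1] = -2.0  # output voxel (y, x) samples moving voxel (y, x - 2)
warped = propagate_contour(m, dvf)        # the 9 True voxels now occupy columns 3..5
```

The sign convention (field defined on the fixed grid, sampling the moving image) is an assumption for this sketch; registration toolkits differ on it.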
Affiliation(s)
- Na Li
- School of Biomedical Engineering, Guangdong Medical University, Dongguan, Guangdong, China.,Dongguan Key Laboratory of Medical Electronics and Medical Imaging Equipment, Dongguan, Guangdong, China.,Songshan Lake Innovation Center of Medicine & Engineering, Guangdong Medical University, Dongguan, Guangdong, China
| | - Xuanru Zhou
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China.,Department of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Shupeng Chen
- Department of Radiation Oncology, Beaumont Health, Royal Oak, MI, United States
| | - Jingjing Dai
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | - Tangsheng Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | - Chulong Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | - Wenfeng He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | - Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | - Xiaokun Liang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| |
|
13
|
Hong J, Reyngold M, Crane C, Cuaron J, Hajj C, Mann J, Zinovoy M, Yorke E, LoCastro E, Apte AP, Mageras G. CT and cone-beam CT of ablative radiation therapy for pancreatic cancer with expert organ-at-risk contours. Sci Data 2022; 9:637. [PMID: 36271000 PMCID: PMC9587208 DOI: 10.1038/s41597-022-01758-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Accepted: 10/04/2022] [Indexed: 11/15/2022] Open
Abstract
We describe a dataset from patients who received ablative radiation therapy for locally advanced pancreatic cancer (LAPC), consisting of computed tomography (CT) and cone-beam CT (CBCT) images with physician-drawn organ-at-risk (OAR) contours. The image datasets (one CT for treatment planning and two CBCT scans at the time of treatment per patient) were collected from 40 patients. All scans were acquired with the patient in the treatment position and in a deep inspiration breath-hold state. Six radiation oncologists delineated the gastrointestinal OARs consisting of small bowel, stomach and duodenum, such that the same physician delineated all image sets belonging to the same patient. Two trained medical physicists further edited the contours to ensure adherence to delineation guidelines. The image and contour files are available in DICOM format and are publicly available from The Cancer Imaging Archive (10.7937/TCIA.ESHQ-4D90, Version 2). The dataset can serve as a criterion standard for evaluating the accuracy and reliability of deformable image registration and auto-segmentation algorithms, as well as a training set for deep-learning-based methods.
Measurement(s): Image Segmentation • Stomach • Duodenum • Small Intestine
Technology Type(s): Computed Tomography • Kilovoltage Cone Beam Computed Tomography • Manual
Factor Type(s): Treatment planning dose
Sample Characteristic - Organism: Homo sapiens
Affiliation(s)
- Jun Hong
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Marsha Reyngold
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Christopher Crane
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - John Cuaron
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Carla Hajj
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Justin Mann
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Melissa Zinovoy
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Ellen Yorke
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Eve LoCastro
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Aditya P Apte
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
| | - Gig Mageras
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA.
| |
|
14
|
Teuwen J, Gouw ZA, Sonke JJ. Artificial Intelligence for Image Registration in Radiation Oncology. Semin Radiat Oncol 2022; 32:330-342. [DOI: 10.1016/j.semradonc.2022.06.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
15
|
Ma L, Chi W, Morgan HE, Lin MH, Chen M, Sher D, Moon D, Vo DT, Avkshtol V, Lu W, Gu X. Registration-guided deep learning image segmentation for cone beam CT-based online adaptive radiotherapy. Med Phys 2022; 49:5304-5316. [PMID: 35460584 DOI: 10.1002/mp.15677] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Revised: 03/23/2022] [Accepted: 04/14/2022] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Adaptive radiotherapy (ART), especially online ART, effectively accounts for positioning errors and anatomical changes. One key component of the online ART process is accurately and efficiently delineating organs at risk (OARs) and targets on online images, such as cone-beam computed tomography (CBCT). Direct application of deep learning (DL)-based segmentation to CBCT images suffers from issues such as low image quality and limited available contour labels for training. To overcome these obstacles to online CBCT segmentation, we propose a registration-guided DL (RgDL) segmentation framework that integrates image registration algorithms and DL segmentation models. METHODS The RgDL framework is composed of two components: image registration and registration-guided DL segmentation. The image registration algorithm transforms or deforms the planning contours, which are subsequently used as guidance by the DL model to obtain accurate final segmentations. We implemented the proposed framework in two ways: Rig-RgDL, using rigid body (RB) registration, and Def-RgDL, using deformable image registration (DIR), both with U-Net as the DL model architecture. The two implementations were trained and evaluated on seven OARs in an institutional clinical head and neck (HN) dataset. RESULTS Compared to the baseline approaches using registration or DL alone, the RgDL implementations achieved more accurate segmentation, as measured by higher mean Dice similarity coefficients (DSC) and other distance-based metrics. Rig-RgDL achieved an average DSC of 84.5% on the seven OARs, higher than RB or DL alone by 4.5% and 4.7%, respectively. The average DSC of Def-RgDL was 86.5%, higher than DIR or DL alone by 2.4% and 6.7%, respectively. The inference time required by the DL model component to generate the final segmentations of seven OARs was less than one second. By examining the contours from RgDL and DL case by case, we found that RgDL was less susceptible to image artifacts. We also studied how the performance of RgDL and DL varies with the size of the training dataset: the DSC of DL dropped by 12.1% as the number of training cases decreased from 22 to 5, while RgDL dropped by only 3.4%. CONCLUSION By incorporating patient-specific registration guidance into a population-based DL segmentation model, the RgDL framework overcame the obstacles associated with online CBCT segmentation, including low image quality and insufficient training data, and achieved better segmentation accuracy than the baseline methods. The resulting segmentation accuracy and efficiency show promise for applying the RgDL framework to online ART.
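The abstract states that registration-propagated planning contours are "used as guidance by the DL model". One plausible realization, assumed here purely for illustration, is to stack the propagated contour masks with the CBCT volume as extra input channels to the segmentation network; the paper's exact conditioning mechanism may differ:

```python
import numpy as np

def build_guided_input(cbct: np.ndarray, guidance_masks: list) -> np.ndarray:
    """Stack a CBCT volume with registration-propagated contour masks into a
    multi-channel network input of shape (1 + n_organs, D, H, W)."""
    channels = [cbct.astype(np.float32)]
    channels += [m.astype(np.float32) for m in guidance_masks]
    return np.stack(channels, axis=0)

# Toy volume plus two hypothetical organ guidance masks
cbct = np.random.default_rng(1).random((4, 8, 8)).astype(np.float32)
masks = [np.zeros((4, 8, 8)), np.ones((4, 8, 8))]
x = build_guided_input(cbct, masks)  # shape (3, 4, 8, 8)
```

The appeal of this pattern is that the network sees a patient-specific spatial prior at inference time, which is consistent with the reported robustness to small training sets.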
Affiliation(s)
- Lin Ma
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
| | - Weicheng Chi
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA.,School of Software Engineering, South China University of Technology, Guangzhou, Guangdong, 510006, China
| | - Howard E Morgan
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
| | - Mu-Han Lin
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
| | - Mingli Chen
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
| | - David Sher
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
| | - Dominic Moon
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
| | - Dat T Vo
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
| | - Vladimir Avkshtol
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
| | - Weiguo Lu
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
| | - Xuejun Gu
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA.,Department of Radiation Oncology, School of Medicine, Stanford University, 875 Blake Wilbur Drive, Stanford, CA, 95304, USA
| |
|
16
|
Alam S, Veeraraghavan H, Tringale K, Amoateng E, Subashi E, Wu AJ, Crane CH, Tyagi N. Inter- and intrafraction motion assessment and accumulated dose quantification of upper gastrointestinal organs during magnetic resonance-guided ablative radiation therapy of pancreas patients. Phys Imaging Radiat Oncol 2022; 21:54-61. [PMID: 35243032 PMCID: PMC8861831 DOI: 10.1016/j.phro.2022.02.007] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Revised: 02/02/2022] [Accepted: 02/11/2022] [Indexed: 12/24/2022] Open
Abstract
Background and purpose Stereotactic body radiation therapy (SBRT) of locally advanced pancreatic cancer (LAPC) is challenging due to significant motion of gastrointestinal (GI) organs. The goal of our study was to quantify inter- and intrafraction deformations and dose accumulation of upper GI organs in LAPC patients. Materials and methods Five LAPC patients undergoing five-fraction magnetic resonance-guided radiation therapy (MRgRT) to 50 Gy, using abdominal compression and daily online plan adaptation, were analyzed. Pre-treatment, verification, and post-treatment MR images (MRI) for each of the five fractions (75 in total) were used to calculate intra- and interfraction motion. The MRIs were registered using the large deformation diffeomorphic metric mapping (LDDMM) deformable image registration (DIR) method, and the total doses delivered to the stomach_duodenum, small bowel (SB), and large bowel (LB) were accumulated. Deformations were quantified using the gradient magnitude and Jacobian integral of the deformation vector fields (DVFs). Registration DVFs were geometrically assessed using the Dice coefficient and 95th-percentile Hausdorff distance (HD95) between the deformed and physician's contours. Accumulated doses were then calculated from the DVFs. Results Median Dice and HD95 were: stomach_duodenum (0.9, 1.0 mm), SB (0.9, 3.6 mm), and LB (0.9, 2.0 mm). Median (maximum) interfraction deformation for stomach_duodenum, SB, and LB was 6.4 (25.8) mm, 7.9 (40.5) mm, and 7.6 (35.9) mm; median intrafraction deformation was 5.5 (22.6) mm, 8.2 (37.8) mm, and 7.2 (26.5) mm. Accumulated doses for two patients exceeded institutional constraints for the stomach_duodenum, one of whom experienced Grade 1 acute and late abdominal toxicity. Conclusion The LDDMM method demonstrates the feasibility of measuring large GI motion and accumulating dose. Further validation on a larger cohort will allow quantitative dose accumulation to more reliably optimize online MRgRT.
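The Jacobian of the DVF, used above to quantify deformation, measures local volume change of the mapping phi(x) = x + u(x). A minimal sketch of the per-voxel Jacobian determinant for a 2-D displacement field:

```python
import numpy as np

def jacobian_determinant_2d(dvf: np.ndarray) -> np.ndarray:
    """Per-voxel Jacobian determinant of phi(x) = x + u(x) for a 2-D displacement
    field u of shape (2, H, W). det J = 1 means no local volume change; > 1 means
    local expansion; < 1 means local compression; <= 0 indicates folding."""
    duy_dy, duy_dx = np.gradient(dvf[0])  # derivatives of the y-displacement
    dux_dy, dux_dx = np.gradient(dvf[1])  # derivatives of the x-displacement
    return (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy

# Identity deformation: determinant is exactly 1 at every voxel
detj = jacobian_determinant_2d(np.zeros((2, 16, 16)))

# Uniform 10% expansion per axis: determinant is 1.1 * 1.1 = 1.21 everywhere
yy, xx = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
expand = np.stack([0.1 * yy, 0.1 * xx]).astype(float)
detj2 = jacobian_determinant_2d(expand)
```

Integrating det J over an organ mask gives the Jacobian integral the abstract refers to, i.e. the deformed organ volume relative to its original volume.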
Affiliation(s)
- Sadegh Alam
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Kathryn Tringale
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Emmanuel Amoateng
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Ergys Subashi
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Abraham J. Wu
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Christopher H. Crane
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
| | - Neelam Tyagi
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
- Corresponding author at: Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, 545 East 74th Street, New York, NY 10021, USA.
| |
|
17
|
Yang B, Chang Y, Liang Y, Wang Z, Pei X, Xu X, Qiu J. A Comparison Study Between CNN-Based Deformed Planning CT and CycleGAN-Based Synthetic CT Methods for Improving iCBCT Image Quality. Front Oncol 2022; 12:896795. [PMID: 35707352 PMCID: PMC9189355 DOI: 10.3389/fonc.2022.896795] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Accepted: 04/27/2022] [Indexed: 12/24/2022] Open
Abstract
Purpose The aim of this study was to compare two methods for improving the image quality of the Varian Halcyon cone-beam CT (iCBCT) system: deformed planning CT (dpCT) based on a convolutional neural network (CNN), and synthetic CT (sCT) generation based on a cycle-consistent generative adversarial network (CycleGAN). Methods A total of 190 paired pelvic CT and iCBCT image datasets were included in the study, of which 150 were used for model training and the remaining 40 for model testing. For the registration network, we proposed a 3D multi-stage registration network (MSnet) to deform planning CT images to agree with iCBCT images, and the contours from the CT images were propagated to the corresponding iCBCT images through a deformation matrix. The overlap between the deformed contours (dpCT) and the fixed contours (iCBCT) was calculated to evaluate the registration accuracy. For sCT generation, we trained a 2D CycleGAN using the deformation-registered CT-iCBCT slices and generated sCT from the corresponding iCBCT image data. Physicians then re-delineated contours on the sCT images, which were compared with the manually delineated iCBCT contours. The organs for contour comparison included the bladder, spinal cord, left femoral head, right femoral head, and bone marrow. The Dice similarity coefficient (DSC) was used to evaluate the accuracy of both registration and sCT generation. Results The DSC values of registration and sCT generation were 0.769 and 0.884 for the bladder (p < 0.05), 0.765 and 0.850 for the spinal cord (p < 0.05), 0.918 and 0.923 for the left femoral head (p > 0.05), 0.916 and 0.921 for the right femoral head (p > 0.05), and 0.878 and 0.916 for the bone marrow (p < 0.05), respectively. When the bladder volume in the planning CT and iCBCT scans differed by more than a factor of two, the accuracy of sCT generation was significantly better than that of registration (bladder DSC: 0.859 vs. 0.596, p < 0.05). Conclusion Both registration and sCT generation can effectively improve iCBCT image quality, and sCT generation achieves higher accuracy when the difference between the planning CT and iCBCT is large.
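CycleGAN training for sCT generation rests on a cycle-consistency term, mean |G_BA(G_AB(x)) - x|, which lets the model learn from unpaired domains. A minimal sketch with toy, perfectly invertible "generators" (the +100 intensity offset stands in for a learned CBCT-to-CT mapping and is purely illustrative):

```python
import numpy as np

def cycle_consistency_l1(x: np.ndarray, g_ab, g_ba) -> float:
    """CycleGAN-style cycle-consistency term: mean |G_BA(G_AB(x)) - x|.
    g_ab maps domain A (e.g. CBCT) to B (CT); g_ba maps back."""
    return float(np.abs(g_ba(g_ab(x)) - x).mean())

# Toy generators: an exactly invertible intensity shift between "domains"
g_ab = lambda img: img + 100.0  # hypothetical CBCT -> CT offset
g_ba = lambda img: img - 100.0
img = np.random.default_rng(2).random((8, 8))
loss = cycle_consistency_l1(img, g_ab, g_ba)  # near zero for a perfect cycle
```

In the full objective this term is weighted against the adversarial losses of the two discriminators; it is what discourages the generator from hallucinating anatomy not present in the input.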
Affiliation(s)
- Bo Yang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
| | - Yankui Chang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
| | - Yongguang Liang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
| | - Zhiqun Wang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
| | - Xi Pei
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Technology Development Department, Anhui Wisdom Technology Co., Ltd., Hefei, China
| | - Xie George Xu
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Department of Radiation Oncology, First Affiliated Hospital of University of Science and Technology of China, Hefei, China
| | - Jie Qiu
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- *Correspondence: Jie Qiu,
| |
|