1. Rabe M, Kurz C, Thummerer A, Landry G. Artificial intelligence for treatment delivery: image-guided radiotherapy. Strahlenther Onkol 2025;201:283-297. [PMID: 39138806] [DOI: 10.1007/s00066-024-02277-9]
Abstract
Radiation therapy (RT) is a highly digitized field relying heavily on computational methods and, as such, has a high affinity for the automation potential afforded by modern artificial intelligence (AI). This is particularly relevant where imaging is concerned and is especially so during image-guided RT (IGRT). With the advent of online adaptive RT (ART) workflows at magnetic resonance (MR) linear accelerators (linacs) and at cone-beam computed tomography (CBCT) linacs, the need for automation is further increased. AI as applied to modern IGRT is thus one area of RT where we can expect important developments in the near future. In this review article, after outlining modern IGRT and online ART workflows, we cover the role of AI in CBCT and MRI correction for dose calculation, auto-segmentation on IGRT imaging, motion management, and response assessment based on in-room imaging.
Affiliation(s)
- Moritz Rabe, Christopher Kurz, Adrian Thummerer: Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377 Munich, Bavaria, Germany
- Guillaume Landry: Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377 Munich, Bavaria, Germany; German Cancer Consortium (DKTK), partner site Munich, a partnership between the DKFZ and the LMU University Hospital Munich, Munich, Germany; Bavarian Cancer Research Center (BZKF), Munich, Germany
2. Boldrini L, D'Aviero A, De Felice F, Desideri I, Grassi R, Greco C, Iorio GC, Nardone V, Piras A, Salvestrini V. Artificial intelligence applied to image-guided radiation therapy (IGRT): a systematic review by the Young Group of the Italian Association of Radiotherapy and Clinical Oncology (yAIRO). Radiol Med 2024;129:133-151. [PMID: 37740838] [DOI: 10.1007/s11547-023-01708-4]
Abstract
INTRODUCTION The advent of image-guided radiation therapy (IGRT) has recently changed the workflow of radiation treatments by ensuring highly collimated treatments. Artificial intelligence (AI) and radiomics are tools that have shown promising results for diagnosis, treatment optimization and outcome prediction. This review aims to assess the impact of AI and radiomics on modern IGRT modalities in RT. METHODS A PubMed/MEDLINE and Embase systematic review was conducted to investigate the impact of radiomics and AI on modern IGRT modalities. The search strategies were "Radiomics" AND "Cone Beam Computed Tomography"; "Radiomics" AND "Magnetic Resonance guided Radiotherapy"; "Radiomics" AND "on board Magnetic Resonance Radiotherapy"; "Artificial Intelligence" AND "Cone Beam Computed Tomography"; "Artificial Intelligence" AND "Magnetic Resonance guided Radiotherapy"; and "Artificial Intelligence" AND "on board Magnetic Resonance Radiotherapy"; only original articles published up to 01.11.2022 were considered. RESULTS A total of 402 studies were retrieved using this search strategy on PubMed and Embase. The analysis was performed on the 84 papers obtained after the complete selection process. Radiomics applications to IGRT were analyzed in 23 papers, while 61 papers focused on the impact of AI on IGRT techniques. DISCUSSION AI and radiomics appear to impact IGRT significantly in all phases of the RT workflow, even though the evidence in the literature is based on retrospective data. Further studies are needed to confirm these tools' potential and to establish a stronger correlation with clinical outcomes and gold-standard treatment strategies.
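For readers who want to re-run such a search programmatically, a minimal sketch using Biopython's Entrez interface; the contact email, retmax, and the single query shown are illustrative assumptions, not the authors' exact procedure:

```python
# Hedged sketch: re-running one of the review's six PubMed queries via Entrez.
# The email address is a placeholder; NCBI requires a contact address.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # hypothetical contact address
query = '"Artificial Intelligence" AND "Cone Beam Computed Tomography"'
# Date filtering mirrors the review's 01.11.2022 cutoff on publication date.
handle = Entrez.esearch(db="pubmed", term=query, maxdate="2022/11/01",
                        mindate="1900/01/01", datetype="pdat", retmax=1000)
record = Entrez.read(handle)
print(record["Count"], "records; first PMIDs:", record["IdList"][:5])
```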
Affiliation(s)
- Luca Boldrini: UOC Radioterapia Oncologica, Fondazione Policlinico Universitario IRCCS "A. Gemelli", Rome, Italy; Università Cattolica del Sacro Cuore, Rome, Italy
- Andrea D'Aviero: Radiation Oncology, Mater Olbia Hospital, Olbia, Sassari, Italy
- Francesca De Felice: Radiation Oncology, Policlinico Umberto I, Rome, Italy; Department of Radiological, Oncological and Pathological Sciences, "Sapienza" University of Rome, Rome, Italy
- Isacco Desideri: Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Roberta Grassi: Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Carlo Greco: Department of Radiation Oncology, Università Campus Bio-Medico di Roma, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Valerio Nardone: Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Antonio Piras: UO Radioterapia Oncologica, Villa Santa Teresa, Bagheria, Palermo, Italy
- Viola Salvestrini: Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy; Cyberknife Center, Istituto Fiorentino di Cura e Assistenza (IFCA), 50139 Florence, Italy
3. Chen J, Chen S, Wee L, Dekker A, Bermejo I. Deep learning based unpaired image-to-image translation applications for medical physics: a systematic review. Phys Med Biol 2023;68:05TR01. [PMID: 36753766] [DOI: 10.1088/1361-6560/acba74]
Abstract
Purpose. There is a growing number of publications on the application of unpaired image-to-image (I2I) translation in medical imaging. However, a systematic review covering the current state of this topic for medical physicists is lacking. The aim of this article is to provide a comprehensive review of current challenges and opportunities for medical physicists and engineers to apply I2I translation in practice. Methods and materials. The PubMed electronic database was searched using terms referring to unpaired (unsupervised), I2I translation, and medical imaging. This review has been reported in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. From each full-text article, we extracted information regarding technical and clinical applications of the methods, Transparent Reporting for Individual Prognosis Or Diagnosis (TRIPOD) study type, algorithm performance, and the accessibility of source code and pre-trained models. Results. Among 461 unique records, 55 full-text articles were included in the review. The major technical applications described in the selected literature are segmentation (26 studies), unpaired domain adaptation (18 studies), and denoising (8 studies). In terms of clinical applications, unpaired I2I translation has been used for automatic contouring of regions of interest in MRI, CT, x-ray and ultrasound images, fast MRI or low-dose CT imaging, and CT- or MRI-only-based radiotherapy planning. Only 5 studies validated their models using an independent test set, and none were externally validated by independent researchers. Finally, 12 articles published their source code, and only one study published its pre-trained models. Conclusion. I2I translation of medical images offers a range of valuable applications for medical physicists. However, the scarcity of external validation studies of I2I models and the shortage of publicly available pre-trained models limit the immediate applicability of the proposed methods in practice.
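Most unpaired methods in this space build on CycleGAN-style cycle consistency. As a hedged illustration (not any reviewed implementation), a minimal PyTorch sketch of the cycle-consistency loss; the generators G_ab and G_ba are placeholder modules:

```python
# Minimal sketch of the cycle-consistency loss used by CycleGAN-style unpaired
# I2I translation; G_ab and G_ba are placeholder generator networks.
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G_ab, G_ba, real_a, real_b, lambda_cyc=10.0):
    """L1 cycle loss: A -> B -> A and B -> A -> B should reconstruct the input."""
    fake_b = G_ab(real_a)   # translate domain A (e.g., CBCT) to domain B (e.g., CT)
    rec_a = G_ba(fake_b)    # translate back to A
    fake_a = G_ba(real_b)
    rec_b = G_ab(fake_a)
    return lambda_cyc * (F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b))
```

In a full training loop this term is added to the adversarial losses of both domains; the weight lambda_cyc = 10 is the value commonly used in the CycleGAN literature, shown here as a default rather than a recommendation.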
Affiliation(s)
- Junhua Chen, Shenlun Chen, Leonard Wee, Andre Dekker, Inigo Bermejo: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
4. Mackay K, Bernstein D, Glocker B, Kamnitsas K, Taylor A. A Review of the Metrics Used to Assess Auto-Contouring Systems in Radiotherapy. Clin Oncol (R Coll Radiol) 2023;35:354-369. [PMID: 36803407] [DOI: 10.1016/j.clon.2023.01.016]
Abstract
Auto-contouring could revolutionise the future planning of radiotherapy treatment. The lack of consensus on how to assess and validate auto-contouring systems currently limits clinical use. This review formally quantifies the assessment metrics used in studies published during one calendar year and assesses the need for standardised practice. A PubMed literature search was undertaken for papers evaluating radiotherapy auto-contouring published during 2021. Papers were assessed for the types of metric used and the methodology used to generate ground-truth comparators. Our PubMed search identified 212 studies, of which 117 met the criteria for clinical review. Geometric assessment metrics were used in 116 of 117 studies (99.1%), including the Dice Similarity Coefficient, used in 113 (96.6%) studies. Clinically relevant metrics, such as qualitative, dosimetric and time-saving metrics, were used less frequently, in 22 (18.8%), 27 (23.1%) and 18 (15.4%) of 117 studies, respectively. There was heterogeneity within each category of metric. Over 90 different names for geometric measures were used. Methods for qualitative assessment differed in all but two papers. Variation existed in the methods used to generate radiotherapy plans for dosimetric assessment. Consideration of editing time was given in only 11 (9.4%) papers. A single manual contour as a ground-truth comparator was used in 65 (55.6%) studies. Only 31 (26.5%) studies compared auto-contours to usual inter- and/or intra-observer variation. In conclusion, significant variation exists in how research papers currently assess the accuracy of automatically generated contours. Geometric measures are the most popular; however, their clinical utility is unknown. There is heterogeneity in the methods used to perform clinical assessment. Considering the different stages of system implementation may provide a framework for deciding the most appropriate metrics. This analysis supports the need for a consensus on the clinical implementation of auto-contouring.
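For reference, the two dominant geometric metrics can be computed in a few lines. A minimal NumPy/SciPy sketch, assuming binary masks and isotropic voxels (SciPy's distance transform accepts a sampling argument for anisotropic spacing):

```python
# Sketch of the Dice Similarity Coefficient and 95th-percentile Hausdorff
# distance for boolean segmentation masks; distances are in voxel units.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance between mask surfaces."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ binary_erosion(a)            # boundary voxels of a
    surf_b = b ^ binary_erosion(b)
    d_to_b = distance_transform_edt(~surf_b)  # distance of each voxel to b's surface
    d_to_a = distance_transform_edt(~surf_a)
    return float(np.percentile(np.concatenate([d_to_b[surf_a], d_to_a[surf_b]]), 95))
```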
Affiliation(s)
- K Mackay, D Bernstein, A Taylor: The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
- B Glocker: Department of Computing, Imperial College London, South Kensington Campus, London, UK
- K Kamnitsas: Department of Computing, Imperial College London, South Kensington Campus, London, UK; Department of Engineering Science, University of Oxford, Oxford, UK
5. Abbani N, Baudier T, Rit S, Franco FD, Okoli F, Jaouen V, Tilquin F, Barateau A, Simon A, de Crevoisier R, Bert J, Sarrut D. Deep learning-based segmentation in prostate radiation therapy using Monte Carlo simulated cone-beam computed tomography. Med Phys 2022;49:6930-6944. [PMID: 36000762] [DOI: 10.1002/mp.15946]
Abstract
PURPOSE Segmenting organs in cone-beam CT (CBCT) images would make it possible to adapt the radiotherapy to the organ deformations that may occur between treatment fractions. However, this is a difficult task because of the relative lack of contrast in CBCT images, leading to high inter-observer variability. Deformable image registration (DIR) and deep learning-based automatic segmentation approaches have shown promising results for this task in recent years. However, they are either sensitive to large organ deformations or require training a convolutional neural network (CNN) on a database of delineated CBCT images, which is difficult to assemble without improving image quality. In this work, we propose an alternative approach: to train a CNN (using the deep learning-based segmentation tool nnU-Net) on a database of artificial CBCT images simulated from planning CT, for which organ contours are easier to obtain. METHODS Pseudo-CBCT (pCBCT) images were simulated from readily available segmented planning CT images using the GATE Monte Carlo simulation toolkit. CT reference delineations were copied onto the pCBCT, resulting in a database of segmented images used to train the neural network. The studied structures were the bladder, rectum, and prostate contours. We trained multiple nnU-Net models using different training data: (1) segmented real CBCT, (2) pCBCT, (3) segmented real CT, tested on pseudo-CT (pCT) generated from CBCT with CycleGAN, and (4) a combination of (2) and (3). The evaluation was performed on different datasets of segmented CBCT or pCT by comparing predicted segmentations with reference ones using the Dice similarity coefficient (DSC) and the Hausdorff distance. A qualitative evaluation was also performed to compare DIR-based and nnU-Net-based segmentations. RESULTS Training with pCBCT was found to yield results comparable to using real CBCT images. When evaluated on CBCT obtained from the same hospital as the CT images used in the simulation of the pCBCT, the model trained with pCBCT scored mean DSCs of 0.92 ± 0.05, 0.87 ± 0.02, and 0.85 ± 0.04 and mean Hausdorff distances of 4.67 ± 3.01 mm, 3.91 ± 0.98 mm, and 5.00 ± 1.32 mm for the bladder, rectum, and prostate contours, respectively, while the model trained with real CBCT scored mean DSCs of 0.91 ± 0.06, 0.83 ± 0.07, and 0.81 ± 0.05 and mean Hausdorff distances of 5.62 ± 3.24 mm, 6.43 ± 5.11 mm, and 6.19 ± 1.14 mm, respectively. It also outperformed the models using pCT or a combination of both, except for the prostate contour when tested on a dataset from a different hospital. Moreover, the resulting segmentations demonstrated clinical acceptability: 78% of bladder segmentations, 98% of rectum segmentations, and 93% of prostate segmentations required minor or no corrections, and for 76% of the patients, all structures required minor or no corrections. CONCLUSION We proposed using simulated CBCT images to train a nnU-Net segmentation model, avoiding the need to gather complex and time-consuming reference delineations on CBCT images.
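To make the training setup concrete, a hedged sketch of arranging such a simulated pCBCT database in the nnU-Net (v1) raw-data layout; the task id, paths, file names, and label map are hypothetical, not the authors' pipeline:

```python
# Illustrative arrangement of simulated pCBCT volumes and copied CT contours
# into the nnU-Net v1 raw-data layout. All paths and the task id are invented.
import json, shutil
from pathlib import Path

task = Path("nnUNet_raw_data/Task501_pCBCTProstate")   # hypothetical task folder
(task / "imagesTr").mkdir(parents=True, exist_ok=True)
(task / "labelsTr").mkdir(parents=True, exist_ok=True)

cases = sorted(Path("simulated_pcbct").glob("case*_pcbct.nii.gz"))
for i, img in enumerate(cases):
    lbl = img.with_name(img.name.replace("_pcbct", "_labels"))  # CT contours copied onto pCBCT
    shutil.copy(img, task / "imagesTr" / f"pcbct_{i:03d}_0000.nii.gz")  # _0000 = first modality
    shutil.copy(lbl, task / "labelsTr" / f"pcbct_{i:03d}.nii.gz")

(task / "dataset.json").write_text(json.dumps({
    "name": "pCBCTProstate", "modality": {"0": "CBCT"},
    "labels": {"0": "background", "1": "bladder", "2": "rectum", "3": "prostate"},
    "numTraining": len(cases),
    "training": [{"image": f"./imagesTr/pcbct_{i:03d}.nii.gz",
                  "label": f"./labelsTr/pcbct_{i:03d}.nii.gz"} for i in range(len(cases))],
    "test": []}, indent=2))
# Training would then follow nnU-Net's standard commands, e.g.:
#   nnUNet_plan_and_preprocess -t 501
#   nnUNet_train 3d_fullres nnUNetTrainerV2 501 0
```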
Affiliation(s)
- Nelly Abbani, Thomas Baudier, Simon Rit, Francesca di Franco, David Sarrut: Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
- Franklin Okoli, Vincent Jaouen, Julien Bert: LaTIM, Université de Bretagne Occidentale, Inserm, Brest, France
- Anaïs Barateau, Antoine Simon: Univ Rennes, CLCC Eugène Marquis, Inserm, Rennes, France
6. Yang Q, Guo X, Chen Z, Woo PYM, Yuan Y. D2-Net: Dual Disentanglement Network for Brain Tumor Segmentation With Missing Modalities. IEEE Trans Med Imaging 2022;41:2953-2964. [PMID: 35576425] [DOI: 10.1109/TMI.2022.3175478]
Abstract
Multi-modal Magnetic Resonance Imaging (MRI) can provide complementary information for automatic brain tumor segmentation, which is crucial for diagnosis and prognosis. However, missing modalities are common in clinical practice and can cause most previous methods, which rely on complete modality data, to fail. Current state-of-the-art approaches cope with missing modalities by fusing multi-modal images and features to learn shared representations of tumor regions, but they often neglect to explicitly capture the correlations among modalities and tumor regions. Inspired by the fact that modality information plays distinct roles in segmenting different tumor regions, we aim to explicitly exploit the correlations among various modality-specific information and tumor-specific knowledge for segmentation. To this end, we propose a Dual Disentanglement Network (D2-Net) for brain tumor segmentation with missing modalities, which consists of a modality disentanglement stage (MD-Stage) and a tumor-region disentanglement stage (TD-Stage). In the MD-Stage, a spatial-frequency joint modality contrastive learning scheme is designed to directly decouple the modality-specific information from MRI data. To decompose tumor-specific representations and extract discriminative holistic features, we propose an affinity-guided dense tumor-region knowledge distillation mechanism in the TD-Stage, aligning the features of a disentangled binary teacher network with a holistic student network. By explicitly discovering relations among modalities and tumor regions, our model can learn sufficient information for segmentation even if some modalities are missing. Extensive experiments on the public BraTS-2018 database demonstrate the superiority of our framework over state-of-the-art methods in missing-modality situations. Code is available at https://github.com/CityU-AIM-Group/D2Net.
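D2-Net's two disentanglement stages are elaborate, but the robustness goal can be illustrated with the simplest baseline: randomly dropping input modalities during training so the model learns to cope with absent inputs. The sketch below shows that generic baseline only, not D2-Net itself:

```python
# Generic missing-modality training baseline: randomly zero out MRI modalities
# so any 4-channel segmentation network sees incomplete inputs during training.
# This is an illustrative baseline, not the D2-Net architecture.
import torch

def drop_modalities(x: torch.Tensor, p_drop: float = 0.25) -> torch.Tensor:
    """x: (B, 4, D, H, W) stacked MRI modalities (e.g., T1, T1ce, T2, FLAIR).
    Zeros each modality with probability p_drop, keeping at least one per sample."""
    b, m = x.shape[:2]
    keep = torch.rand(b, m, device=x.device) > p_drop   # True = keep this modality
    none_kept = ~keep.any(dim=1)
    if none_kept.any():  # re-enable one random modality where all were dropped
        idx = torch.randint(m, (int(none_kept.sum()),), device=x.device)
        keep[none_kept, idx] = True
    return x * keep.view(b, m, 1, 1, 1)
```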
7. Ma L, Chi W, Morgan HE, Lin MH, Chen M, Sher D, Moon D, Vo DT, Avkshtol V, Lu W, Gu X. Registration-guided deep learning image segmentation for cone beam CT-based online adaptive radiotherapy. Med Phys 2022;49:5304-5316. [PMID: 35460584] [DOI: 10.1002/mp.15677]
Abstract
PURPOSE Adaptive radiotherapy (ART), especially online ART, effectively accounts for positioning errors and anatomical changes. One key component of the online ART process is accurately and efficiently delineating organs at risk (OARs) and targets on online images, such as cone-beam computed tomography (CBCT). Direct application of deep learning (DL)-based segmentation to CBCT images suffers from issues such as low image quality and limited available contour labels for training. To overcome these obstacles to online CBCT segmentation, we propose a registration-guided DL (RgDL) segmentation framework that integrates image registration algorithms and DL segmentation models. METHODS The RgDL framework is composed of two components: image registration and registration-guided DL segmentation. The image registration algorithm transforms or deforms the planning contours, which are subsequently used as guidance by the DL model to obtain accurate final segmentations. We examined two implementations of the proposed framework, Rig-RgDL (Rig for rigid body) and Def-RgDL (Def for deformable), with rigid body (RB) registration or deformable image registration (DIR) as the registration algorithm, respectively, and U-Net as the DL model architecture. Both implementations were trained and evaluated on seven OARs in an institutional clinical head-and-neck (HN) dataset. RESULTS Compared to the baseline approaches using registration or DL alone, RgDL achieved more accurate segmentation, as measured by higher mean Dice similarity coefficients (DSCs) and other distance-based metrics. Rig-RgDL achieved a DSC of 84.5% averaged over the seven OARs, higher than RB or DL alone by 4.5% and 4.7%, respectively. The average DSC of Def-RgDL was 86.5%, higher than DIR or DL alone by 2.4% and 6.7%, respectively. The inference time required by the DL model component to generate final segmentations of the seven OARs was less than one second. By examining the contours from RgDL and DL case by case, we found that RgDL was less susceptible to image artifacts. We also studied how the performance of RgDL and DL varies with the size of the training dataset: the DSC of DL dropped by 12.1% as the number of training cases decreased from 22 to 5, while that of RgDL dropped by only 3.4%. CONCLUSION By incorporating patient-specific registration guidance into a population-based DL segmentation model, the RgDL framework overcomes the obstacles associated with online CBCT segmentation, namely low image quality and insufficient training data, and achieves better segmentation accuracy than the baseline methods. The resulting segmentation accuracy and efficiency show promise for applying the RgDL framework to online ART.
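The interface of such registration guidance can be illustrated simply: propagated planning contours enter the segmentation network as extra input channels alongside the CBCT, so the network refines the registration result rather than segmenting from scratch. A hedged PyTorch sketch with a stand-in network; the paper's actual U-Net configuration is not reproduced here:

```python
# Sketch of registration-guided input assembly: registered planning contours
# (one channel per OAR) are concatenated with the CBCT before the network.
# TinyUNetStub is a placeholder; any encoder-decoder segmentation CNN would do.
import torch
import torch.nn as nn

class TinyUNetStub(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, out_ch, kernel_size=1),
        )
    def forward(self, x):
        return self.net(x)

n_oars = 7
model = TinyUNetStub(in_ch=1 + n_oars, out_ch=n_oars)

cbct = torch.randn(1, 1, 64, 96, 96)              # online CBCT volume
guidance = torch.zeros(1, n_oars, 64, 96, 96)     # contours propagated by RB or DIR
logits = model(torch.cat([cbct, guidance], dim=1))  # DL refines the guidance
```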
Affiliation(s)
- Lin Ma, Howard E Morgan, Mu-Han Lin, Mingli Chen, David Sher, Dominic Moon, Dat T Vo, Vladimir Avkshtol, Weiguo Lu: Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX 75390, USA
- Weicheng Chi: Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Xuejun Gu: Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Department of Radiation Oncology, School of Medicine, Stanford University, 875 Blake Wilbur Drive, Stanford, CA 95304, USA
8. Jiang J, Veeraraghavan H. One shot PACS: Patient specific Anatomic Context and Shape prior aware recurrent registration-segmentation of longitudinal thoracic cone beam CTs. IEEE Trans Med Imaging 2022. [PMID: 35213307] [PMCID: PMC9642320] [DOI: 10.1109/TMI.2022.3154934]
Abstract
Image-guided adaptive lung radiotherapy requires accurate segmentation of the tumor and organs from during-treatment cone-beam CT (CBCT) images. Thoracic CBCTs are hard to segment because of low soft-tissue contrast, imaging artifacts, respiratory motion, and large treatment-induced intra-thoracic anatomic changes. Hence, we developed a novel Patient-specific Anatomic Context and Shape prior (PACS)-aware 3D recurrent registration-segmentation network for longitudinal thoracic CBCT segmentation. The segmentation and registration networks were concurrently trained in an end-to-end framework and implemented with convolutional long short-term memory models. The registration network was trained in an unsupervised manner using pairs of planning CT (pCT) and CBCT images and produced a progressively deformed sequence of images. The segmentation network was optimized in a one-shot setting by combining the progressively deformed pCT (anatomic context) and pCT delineations (shape context) with the CBCT images. Our method, one-shot PACS, was significantly more accurate (p < 0.001) than multiple alternative methods for segmenting the tumor (DSC of 0.83 ± 0.08, surface DSC [sDSC] of 0.97 ± 0.06, and Hausdorff distance at the 95th percentile [HD95] of 3.97 ± 3.02 mm) and the esophagus (DSC of 0.78 ± 0.13, sDSC of 0.90 ± 0.14, HD95 of 3.22 ± 2.02 mm). Ablation tests and comparative experiments were also performed.
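Unsupervised registration components like the one described here are typically trained with an image-similarity term on the warped planning CT plus a smoothness penalty on the predicted displacement field, in the spirit of VoxelMorph. A hedged sketch of such a loss, not the authors' recurrent network; MSE similarity and the weight lam are illustrative choices:

```python
# VoxelMorph-style unsupervised registration loss: warp the planning CT toward
# the CBCT with a predicted voxel-displacement field, then penalize image
# dissimilarity plus non-smooth deformations. Shapes and weights are illustrative.
import torch
import torch.nn.functional as F

def warp(moving: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """moving: (B,1,D,H,W); flow: (B,3,D,H,W) displacements in voxels."""
    b, _, d, h, w = moving.shape
    grid = torch.stack(torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij"), dim=0).float()
    new = grid.unsqueeze(0).to(moving) + flow          # displaced sampling locations
    for i, size in enumerate((d, h, w)):               # normalize to [-1, 1]
        new[:, i] = 2 * new[:, i] / (size - 1) - 1
    new = new.permute(0, 2, 3, 4, 1).flip(-1)          # grid_sample wants (x, y, z) order
    return F.grid_sample(moving, new, align_corners=True)

def registration_loss(pct, cbct, flow, lam=0.01):
    warped = warp(pct, flow)
    similarity = F.mse_loss(warped, cbct)              # image-similarity term
    smoothness = sum(torch.diff(flow, dim=k).abs().mean() for k in (2, 3, 4))
    return similarity + lam * smoothness               # smoothness penalty on the field
```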
9. Yu X, Jin F, Luo H, Lei Q, Wu Y. Gross Tumor Volume Segmentation for Stage III NSCLC Radiotherapy Using 3D ResSE-Unet. Technol Cancer Res Treat 2022;21:15330338221090847. [PMID: 35443832] [PMCID: PMC9047806] [DOI: 10.1177/15330338221090847]
Abstract
INTRODUCTION Radiotherapy is one of the most effective ways to treat lung cancer, and accurately delineating the gross tumor volume is a key step in the radiotherapy process. In current clinical practice, the target is still delineated manually by radiologists, which is time-consuming and laborious. These problems can be better addressed by deep learning-assisted automatic segmentation methods. METHODS In this paper, a 3D CNN model named 3D ResSE-Unet is proposed for gross tumor volume segmentation for stage III NSCLC radiotherapy. The model is based on 3D Unet and combines residual connections with channel attention mechanisms. Three-dimensional convolutions and an encoder-decoder structure are used to mine the three-dimensional spatial information of tumors from computed tomography data. Inspired by ResNet and SE-Net, residual connections and channel attention are used to improve segmentation performance. A total of 214 patients with stage III NSCLC were selectively collected; 148 cases were randomly assigned to the training set, 30 to the validation set, and 36 to the testing set, on which segmentation performance was evaluated. In addition, the segmentation results of 3D Unet models of different depths were analyzed, and the performance of 3D ResSE-Unet was compared with 3D Unet, 3D Res-Unet, and 3D SE-Unet. RESULTS Among the depths tested, a 3D Unet with four downsampling stages was the most suitable for our task. Compared with 3D Unet, 3D Res-Unet, and 3D SE-Unet, 3D ResSE-Unet obtained superior results: its Dice similarity coefficient, 95th-percentile Hausdorff distance, and average surface distance reached 0.7367, 21.39 mm, and 4.962 mm, respectively, and the average time needed to segment one patient was only about 10 s. CONCLUSION The method proposed in this study provides a new tool for GTV auto-segmentation and may be useful for lung cancer radiotherapy.
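The channel attention referenced here follows the Squeeze-and-Excitation pattern: pool each channel globally, pass the descriptor through a small bottleneck MLP, and rescale the channels. A minimal 3D sketch; the reduction ratio and the placement inside the residual blocks are assumptions, not the paper's exact configuration:

```python
# Minimal 3D Squeeze-and-Excitation block: channel descriptors from global
# average pooling gate the feature maps. The reduction ratio r=8 is illustrative.
import torch
import torch.nn as nn

class SE3D(nn.Module):
    def __init__(self, channels: int, r: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, D, H, W)
        s = x.mean(dim=(2, 3, 4))               # squeeze: global average pool -> (B, C)
        w = self.fc(s).view(*s.shape, 1, 1, 1)  # excitation: per-channel gates in (0, 1)
        return x * w                            # rescale the feature maps

# In a ResSE-style residual block, the gate would typically be applied before
# the skip addition, e.g.: out = relu(se(conv_block(x)) + shortcut(x))
```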
Affiliation(s)
- Xinhao Yu: College of Bioengineering, Chongqing University, Chongqing, China; Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- Fu Jin, HuanLi Luo, Qianqian Lei, Yongzhong Wu: Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China