1
Yue Y, Li N, Zhang G, Xing W, Zhu Z, Liu X, Song S, Ta D. A transformer-guided cross-modality adaptive feature fusion framework for esophageal gross tumor volume segmentation. Comput Methods Programs Biomed 2024; 251:108216. [PMID: 38761412 DOI: 10.1016/j.cmpb.2024.108216] [Received: 11/01/2023] [Revised: 04/17/2024] [Accepted: 05/06/2024] [Indexed: 05/20/2024]
Abstract
BACKGROUND AND OBJECTIVE Accurate segmentation of esophageal gross tumor volume (GTV) indirectly enhances the efficacy of radiotherapy for patients with esophageal cancer. In this domain, learning-based methods have been employed to fuse cross-modality positron emission tomography (PET) and computed tomography (CT) images, aiming to improve segmentation accuracy. This fusion is essential because it combines functional metabolic information from PET with anatomical information from CT, providing complementary information. While existing three-dimensional (3D) segmentation methods have achieved state-of-the-art (SOTA) performance, they typically rely on pure-convolution architectures, limiting their ability to capture long-range spatial dependencies because convolution is confined to a local receptive field. To address this limitation and further enhance esophageal GTV segmentation performance, this work proposes a transformer-guided cross-modality adaptive feature fusion network, referred to as TransAttPSNN, which is based on cross-modality PET/CT scans. METHODS Specifically, we establish an attention progressive semantically-nested network (AttPSNN) by incorporating a convolutional attention mechanism into the progressive semantically-nested network (PSNN). We then devise a plug-and-play transformer-guided cross-modality adaptive feature fusion module, which is inserted between the multi-scale feature counterparts of a two-stream AttPSNN backbone (one stream for the PET modality and one for the CT modality), yielding the proposed TransAttPSNN architecture. RESULTS In extensive four-fold cross-validation experiments on a clinical PET/CT cohort, the proposed approach achieves a Dice similarity coefficient (DSC) of 0.76 ± 0.13, a Hausdorff distance (HD) of 9.38 ± 8.76 mm, and a mean surface distance (MSD) of 1.13 ± 0.94 mm, outperforming competing SOTA methods. The qualitative results show satisfactory consistency with the lesion areas.
CONCLUSIONS The devised transformer-guided cross-modality adaptive feature fusion module integrates the strengths of PET and CT, effectively enhancing the segmentation performance of esophageal GTV. The proposed TransAttPSNN further advances research on esophageal GTV segmentation.
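Several entries in this list report the Dice similarity coefficient (DSC) alongside Hausdorff and surface distances. As a minimal illustration of the most common of these metrics, the DSC between two binary segmentation masks can be computed as below (a toy NumPy sketch; the masks are illustrative, not data from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D example: two overlapping square "lesions" of 16 voxels each,
# with 9 voxels in common.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(round(dice_coefficient(a, b), 4))  # 2*9/32 = 0.5625
```

The same formula extends unchanged to 3D volumes, which is how volumetric DSC values such as 0.76 ± 0.13 are obtained.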
Affiliation(s)
- Yaoting Yue
- Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, PR China
- Nan Li
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai 201321, PR China
- Gaobo Zhang
- Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, PR China
- Wenyu Xing
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, PR China
- Zhibin Zhu
- Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, PR China; School of Physics and Electromechanical Engineering, Hexi University, Zhangye 734000, Gansu, PR China
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, PR China
- Shaoli Song
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai 201321, PR China
- Dean Ta
- Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, PR China; Academy for Engineering and Technology, Fudan University, Shanghai 200433, PR China
2
Zhong H, Li A, Chen Y, Huang Q, Chen X, Kang J, You Y. Comparative analysis of automatic segmentation of esophageal cancer using 3D Res-UNet on conventional and 40-keV virtual mono-energetic CT images: a retrospective study. PeerJ 2023; 11:e15707. [PMID: 37483982 PMCID: PMC10358343 DOI: 10.7717/peerj.15707] [Received: 03/07/2023] [Accepted: 06/15/2023] [Indexed: 07/25/2023]
Abstract
Objectives To assess the performance of 3D Res-UNet for fully automated segmentation of esophageal cancer (EC) and to compare segmentation accuracy between conventional images (CI) and 40-keV virtual mono-energetic images (VMI40keV). Methods Patients who underwent spectral CT scanning and were diagnosed with EC by surgery or gastroscopic biopsy in our hospital from 2019 to 2020 were analyzed retrospectively. All arterial-phase spectral base images were transferred to a dedicated workstation to generate VMI40keV and CI. Segmentation models of EC were constructed with the 3D Res-UNet neural network on VMI40keV and CI, respectively. After optimization training, the Dice similarity coefficient (DSC), intersection over union (IOU), average symmetric surface distance (ASSD), and 95% Hausdorff distance (HD_95) of EC at the pixel level were calculated on the test set. The paired rank sum test was used to compare the results of VMI40keV and CI. Results A total of 160 patients were included in the analysis and randomly divided into a training dataset (104 patients), a validation dataset (26 patients), and a test dataset (30 patients). Using VMI40keV as input data in training resulted in higher model performance on the test dataset than using CI (DSC: 0.875 vs 0.859; IOU: 0.777 vs 0.755; ASSD: 0.911 vs 0.981; HD_95: 4.41 vs 6.23; all p-values <0.05). Conclusion Fully automated segmentation of EC with 3D Res-UNet has high accuracy and clinical feasibility for both CI and VMI40keV. Compared with CI, VMI40keV showed slightly higher accuracy in this test dataset.
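The paired rank sum test used above is commonly realized as the Wilcoxon signed-rank test on per-case metric pairs. A minimal sketch with SciPy, using made-up per-case Dice scores (illustrative values only, not the study's data):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-case Dice scores for the two reconstruction types
# (eight test cases; values are illustrative assumptions).
dsc_vmi = np.array([0.88, 0.90, 0.86, 0.91, 0.87, 0.89, 0.90, 0.88])
dsc_ci  = np.array([0.85, 0.87, 0.84, 0.88, 0.86, 0.85, 0.87, 0.86])

# Paired, non-parametric comparison of the two matched samples.
stat, p = wilcoxon(dsc_vmi, dsc_ci)
print(f"W={stat}, p={p:.4f}")
```

Because every case here favors the first sample, the test statistic (the smaller signed-rank sum) is 0 and the p-value falls below 0.05, mirroring the direction of the study's reported comparison.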
Affiliation(s)
- Hua Zhong
- Department of Radiology, Zhong Shan Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, Fujian, China
- Anqi Li
- Department of Radiology, Zhong Shan Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, Fujian, China
- Yingdong Chen
- Department of Radiology, Zhong Shan Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, Fujian, China
- Qianwen Huang
- Department of Radiology, Zhong Shan Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, Fujian, China
- Xingbiao Chen
- Clinical Science, Philips Healthcare, Shanghai, China
- Jianghe Kang
- Department of Radiology, Zhong Shan Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, Fujian, China
- Youkuang You
- Department of Radiology, Xiamen Xianyue Hospital, Xiamen, Fujian, China
3
Ashok M, Gupta A. Automatic Segmentation of Organs-at-Risk in Thoracic Computed Tomography Images Using Ensembled U-Net InceptionV3 Model. J Comput Biol 2023; 30:346-362. [PMID: 36629856 DOI: 10.1089/cmb.2022.0248] [Indexed: 01/12/2023]
Abstract
The objective of this article is to automatically segment organs at risk (OARs) for thoracic radiotherapy in computed tomography (CT) scan images. The OARs in the thoracic anatomical region during radiotherapy treatment are mainly neighbouring organs such as the esophagus, heart, trachea, and aorta. A dataset of 40 patients was used, split into three parts: training, validation, and test sets. The implementation was performed on the Google Colab Pro+ framework with 52 GB of RAM and 265 GB of storage space. An ensemble model was developed for the automatic segmentation of the four OARs in thoracic CT images. U-Net with InceptionV3 as the backbone was used, and different hyperparameters were explored during model training. The proposed model achieved accurate OAR segmentation, with an average Dice coefficient of 0.9413, Hausdorff value of 0.1838, sensitivity of 0.9783, and specificity of 0.9895 on the test dataset. The proposed ensembled U-Net InceptionV3 model improves segmentation results compared with state-of-the-art techniques such as U-Net, ResNet, and VGG16. The experimental results revealed that the proposed model effectively improved segmentation of the esophagus, heart, trachea, and aorta.
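The Hausdorff value reported above measures worst-case boundary disagreement between two contours. A small sketch of the symmetric Hausdorff distance between two point sets, using SciPy's `directed_hausdorff` (the coordinates are toy values, not study data):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two contour point sets:
    the larger of the two directed Hausdorff distances."""
    return max(directed_hausdorff(a_pts, b_pts)[0],
               directed_hausdorff(b_pts, a_pts)[0])

# Two toy 2D contours differing in one point: (0, 1) vs (0, 3).
a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 3.0]])
print(hausdorff(a, b))  # 2.0, driven by the displaced point
```

In practice the point sets are the surface voxels of the predicted and reference segmentations, in millimetre coordinates.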
Affiliation(s)
- Malvika Ashok
- School of Computer Science and Engineering, Shri Mata Vaishno Devi University, Katra, Jammu and Kashmir, India
- Abhishek Gupta
- School of Computer Science and Engineering, Shri Mata Vaishno Devi University, Katra, Jammu and Kashmir, India
4
Yue Y, Li N, Shahid H, Bi D, Liu X, Song S, Ta D. Gross Tumor Volume Definition and Comparative Assessment for Esophageal Squamous Cell Carcinoma From 3D 18F-FDG PET/CT by Deep Learning-Based Method. Front Oncol 2022; 12:799207. [PMID: 35372054 PMCID: PMC8967962 DOI: 10.3389/fonc.2022.799207] [Received: 10/21/2021] [Accepted: 02/17/2022] [Indexed: 11/13/2022]
Abstract
Background The accurate definition of the gross tumor volume (GTV) of esophageal squamous cell carcinoma (ESCC) can promote precise determination of the irradiation field and thereby improve the curative effect of radiotherapy. This retrospective study assesses the applicability of a deep learning-based method for automatically defining the GTV from 3D 18F-FDG PET/CT images of patients diagnosed with ESCC. Methods We perform experiments on a clinical cohort with 164 18F-FDG PET/CT scans. A state-of-the-art esophageal GTV segmentation deep neural network is first employed to delineate the lesion area on PET/CT images. Afterwards, we propose a novel equivalent truncated elliptical cone integral method (ETECIM) to estimate the GTV value. The Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD) are used to evaluate segmentation performance. The conformity index (CI), degree of inclusion (DI), and motion vector (MV) are used to assess differences between predicted and ground truth tumors. Statistical differences in GTV, DI, and position are also determined. Results We perform 4-fold cross-validation for evaluation, reporting DSC, HD, and MSD values of 0.72 ± 0.02, 11.87 ± 4.20 mm, and 2.43 ± 0.60 mm (mean ± standard deviation), respectively. Pearson correlations (R2) reach 0.8434, 0.8004, 0.9239, and 0.7119 for the four cross-validation folds, and there is no significant difference (t = 1.193, p = 0.235) between the predicted and ground truth GTVs. For DI, a significant difference is found (t = -2.263, p = 0.009). For position assessment, there is no significant difference (left-right in the x direction: t = 0.102, p = 0.919; anterior-posterior in the y direction: t = 0.221, p = 0.826; cranial-caudal in the z direction: t = 0.569, p = 0.570) between the predicted and ground truth GTVs. The median CI is 0.63, and the obtained MV is small. Conclusions The predicted tumors correspond well with the manual ground truth.
The proposed GTV estimation approach, ETECIM, is more precise than the commonly used voxel volume summation method. The ground truth GTVs can be recovered from the predicted results owing to the good linear correlation. The deep learning-based method shows promise for GTV definition and clinical radiotherapy application.
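The truncated-cone idea rests on the classical frustum volume formula, V = (h/3)(A1 + A2 + sqrt(A1·A2)), which holds for elliptical cross-sections as well and can be applied between consecutive segmented slices. The sketch below illustrates that formula only; it is not the authors' ETECIM implementation, and the slice areas are invented:

```python
import numpy as np

def frustum_volume(a1: float, a2: float, h: float) -> float:
    """Volume of a truncated (elliptical) cone between two parallel
    cross-sections of areas a1 and a2 separated by height h:
    V = h/3 * (a1 + a2 + sqrt(a1*a2))."""
    return h / 3.0 * (a1 + a2 + np.sqrt(a1 * a2))

def gtv_from_slices(areas: list[float], slice_spacing: float) -> float:
    """Sum frustum volumes over consecutive segmented slices."""
    return sum(frustum_volume(a1, a2, slice_spacing)
               for a1, a2 in zip(areas, areas[1:]))

# Sanity check: constant area degenerates to a cylinder (area * height).
print(gtv_from_slices([200.0, 200.0, 200.0], 3.0))  # 1200.0 mm^3
```

Compared with summing voxel volumes, the frustum interpolation accounts for the tapering of the tumor between slices.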
Affiliation(s)
- Yaoting Yue
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Nan Li
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
- Husnain Shahid
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Dongsheng Bi
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Correspondence: Xin Liu; Shaoli Song
- Shaoli Song
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
- Correspondence: Xin Liu; Shaoli Song
- Dean Ta
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Academy for Engineering and Technology, Fudan University, Shanghai, China
5
Cardenas CE, Blinde SE, Mohamed ASR, Ng SP, Raaijmakers C, Philippens M, Kotte A, Al-Mamgani AA, Karam I, Thomson DJ, Robbins J, Newbold K, Fuller CD, Terhaard C, On Behalf Of The, Bahig H, Blanchard P, Dehnad H, Doornaert P, Elhalawani H, Frank SJ, Garden A, Gunn GB, Hamming-Vrieze O, Kamal M, Kasperts N, Lee LW, McDonald BA, McPartlin A, Meheissen MA, Morrison WH, Navran A, Nutting CM, Pameijer F, Phan J, Poon I, Rosenthal DI, Smid EJ, Sykes AJ. Comprehensive Quantitative Evaluation of Variability in MR-guided Delineation of Oropharyngeal Gross Tumor Volumes and High-risk Clinical Target Volumes: An R-IDEAL Stage 0 Prospective Study. Int J Radiat Oncol Biol Phys 2022; 113:426-436. [PMID: 35124134 DOI: 10.1016/j.ijrobp.2022.01.050] [Received: 05/11/2021] [Revised: 01/12/2022] [Accepted: 01/26/2022] [Indexed: 02/02/2023]
Abstract
PURPOSE Manual delineation of tumor and target volumes remains a challenging task in head-and-neck cancer radiotherapy. The purpose of this study was to conduct a multi-institutional evaluation of manual delineations of the gross tumor volume (GTV), high-risk clinical target volume (CTV), parotids, and submandibular glands on treatment simulation MR scans of oropharyngeal cancer (OPC) patients. METHODS Pre-treatment T1-weighted (T1w), T1-weighted with gadolinium contrast (T1w+C), and T2-weighted (T2w) MRI scans were retrospectively collected for 4 OPC patients under an IRB-approved protocol. The scans were provided to twenty-six radiation oncologists from seven international cancer centers who participated in this delineation study. In addition, the patients' clinical history and physical examination findings, along with a medical photographic image and radiological results, were provided. The contours were compared with overlap and distance metrics using both STAPLE and pair-wise comparisons. Lastly, participants completed a brief questionnaire to assess their experience and institutional CTV delineation practices. RESULTS Large variability was measured between observers' delineations of GTVs and CTVs. The mean Dice similarity coefficient values across all physicians' delineations for GTVp, GTVn, CTVp, and CTVn were 0.77, 0.67, 0.77, and 0.69, respectively, for the STAPLE comparison and 0.67, 0.60, 0.67, and 0.58, respectively, for the pair-wise analysis. Normal tissue contours were defined more consistently when considering overlap/distance metrics. The median radiation oncology clinical experience was 7 years, and the median experience delineating on MRI was 3.5 years. The GTV-to-CTV margin used was 10 mm at six of the seven participating institutions; one institution used 8 mm, and three participants (from three different institutions) used a margin of 5 mm.
CONCLUSION The data from this study suggest that appropriate guidelines, contouring quality assurance sessions, and training are still needed for the adoption of MR-based treatment planning for head-and-neck cancers. Such efforts should play a critical role in reducing delineation variation and ensuring standardization of target design across clinical practices.
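The pair-wise analysis described above amounts to averaging an overlap metric over all pairs of observers. A toy sketch with three synthetic, slightly shifted masks standing in for observer contours (illustrative only; real comparisons use the clinical delineations):

```python
import itertools
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Three hypothetical observers: the same square contour shifted by
# 0, 1, and 2 rows to mimic inter-observer disagreement.
base = np.zeros((16, 16), dtype=bool); base[4:12, 4:12] = True
observers = [np.roll(base, shift, axis=0) for shift in (0, 1, 2)]

# Mean DSC over all observer pairs (the "pair-wise" summary).
pairwise = [dice(a, b) for a, b in itertools.combinations(observers, 2)]
print(round(float(np.mean(pairwise)), 3))  # (0.875 + 0.75 + 0.875) / 3 = 0.833
```

STAPLE instead estimates a consensus segmentation first and compares each observer against it, which is why the two analyses yield different mean DSC values in the study.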
Affiliation(s)
- Carlos E Cardenas
- Department of Radiation Oncology, The University of Alabama at Birmingham, Birmingham, AL, USA
- Sanne E Blinde
- Department of Radiation Oncology, Klinikum Kassel, Kassel, Germany
- Abdallah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Sweet Ping Ng
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA; Department of Radiation Oncology, Olivia Newton-John Cancer Centre, Austin Health, Melbourne, Australia
- Cornelis Raaijmakers
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Marielle Philippens
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Alexis Kotte
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Abrahim A Al-Mamgani
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- Irene Karam
- Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Science Centre, University of Toronto, Toronto, ON, Canada
- David J Thomson
- Department of Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK
- Jared Robbins
- Department of Radiation Oncology, University of Arizona, Tucson, Arizona, USA
- Kate Newbold
- Royal Marsden NHS Foundation Trust and Institute of Cancer Research, London, UK
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Chris Terhaard
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- On Behalf Of The
- Department of Radiation Oncology, The University of Alabama at Birmingham, Birmingham, AL, USA
- Houda Bahig
- Department of Radiation Oncology, Centre Hospitalier de l'Université de Montréal, Montreal, Quebec, Canada
- Pierre Blanchard
- Department of Radiation Oncology, Institut Gustave Roussy, Villejuif, France
- Homan Dehnad
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Patricia Doornaert
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Hesham Elhalawani
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Steven J Frank
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Adam Garden
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- G Brandon Gunn
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Olga Hamming-Vrieze
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- Mona Kamal
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Nicolien Kasperts
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Lip Wai Lee
- Department of Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK
- Brigid A McDonald
- Department of Radiation Oncology, The University of Alabama at Birmingham, Birmingham, AL, USA
- Andrew McPartlin
- Department of Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK
- Mohamed Am Meheissen
- Alexandria Clinical Oncology Department, Alexandria University, Alexandria, Egypt
- William H Morrison
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Arash Navran
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- Frank Pameijer
- Department of Radiology, Division of Imaging & Oncology, University Medical Center, Utrecht, The Netherlands
- Jack Phan
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Ian Poon
- Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Science Centre, University of Toronto, Toronto, ON, Canada
- David I Rosenthal
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Ernst J Smid
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Andrew J Sykes
- Department of Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK
6
Yu X, Tang S, Cheang CF, Yu HH, Choi IC. Multi-Task Model for Esophageal Lesion Analysis Using Endoscopic Images: Classification with Image Retrieval and Segmentation with Attention. Sensors 2021; 22:283. [PMID: 35009825 PMCID: PMC8749873 DOI: 10.3390/s22010283] [Received: 11/29/2021] [Revised: 12/24/2021] [Accepted: 12/27/2021] [Indexed: 12/12/2022]
Abstract
The automatic analysis of endoscopic images to assist endoscopists in accurately identifying the types and locations of esophageal lesions remains a challenge. In this paper, we propose a novel multi-task deep learning model for automatic diagnosis that does not simply replace the endoscopist in decision making: given additional supporting information, endoscopists are expected to correct false predictions made by the diagnosis system. To help endoscopists improve diagnostic accuracy in identifying lesion types, an image retrieval module is added to the classification task to provide an additional confidence level for the predicted types of esophageal lesions. In addition, a mutual attention module is added to the segmentation task to improve its performance in locating esophageal lesions. The proposed model is evaluated and compared with other deep learning models on a dataset of 1003 endoscopic images, comprising 290 esophageal cancer, 473 esophagitis, and 240 normal images. The experimental results show the promising performance of our model, with a high accuracy of 96.76% for classification and a Dice coefficient of 82.47% for segmentation. Consequently, the proposed multi-task deep learning model can be an effective tool to help endoscopists judge esophageal lesions.
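An image retrieval module of the kind described typically ranks gallery features by their similarity to the query feature and presents the retrieved labels as a confidence cue for the classifier's prediction. A minimal cosine-similarity sketch; the 2-D features, gallery size, and labels are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def cosine_retrieve(query: np.ndarray, gallery: np.ndarray,
                    labels: list[str], k: int = 3) -> list[str]:
    """Return labels of the k gallery feature vectors most similar to
    the query under cosine similarity."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    idx = np.argsort(g @ q)[::-1][:k]   # highest similarity first
    return [labels[i] for i in idx]

# Toy gallery of 2-D "features" for three lesion types.
gallery = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.5, 0.5]])
labels = ["cancer", "cancer", "normal", "esophagitis"]
print(cosine_retrieve(np.array([0.95, 0.05]), gallery, labels, k=2))
```

If the retrieved neighbours agree with the classifier's predicted type, the prediction can be trusted more; disagreement flags a case for the endoscopist to review.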
Affiliation(s)
- Xiaoyuan Yu
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Suigu Tang
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Chak Fong Cheang
- Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau
- Correspondence: C.F.C.; H.H.Y.
- Hon Ho Yu
- Kiang Wu Hospital, Santo António, Macau
- Correspondence: C.F.C.; H.H.Y.
7
Zhao Y, Rhee DJ, Cardenas C, Court LE, Yang J. Training deep-learning segmentation models from severely limited data. Med Phys 2021; 48:1697-1706. [PMID: 33474727 PMCID: PMC8058262 DOI: 10.1002/mp.14728] [Received: 08/31/2020] [Revised: 01/07/2021] [Accepted: 01/13/2021] [Indexed: 11/09/2022]
Abstract
PURPOSE To enable generation of high-quality deep learning segmentation models from severely limited contoured cases (e.g., ~10 cases). METHODS Thirty head and neck computed tomography (CT) scans with well-defined contours were deformably registered to 200 CT scans of the same anatomic site without contours. The acquired deformation vector fields were used to train a principal component analysis (PCA) model for each of the 30 contoured CT scans, capturing the mean deformation and most prominent variations. Each PCA model can produce an unlimited number of synthetic CT scans and corresponding contours by applying random deformations. We used 300, 600, 1000, and 2000 synthetic CT scans and contours generated from one PCA model to train V-Net, a 3D convolutional neural network architecture, to segment parotid and submandibular glands. We repeated the training using the same numbers of training cases generated from 7, 10, 20, and 30 PCA models, with the data distributed evenly among the PCA models. Performance of the segmentation models was evaluated with Dice similarity coefficients between auto-generated and physician-drawn contours on 162 test CT scans for parotid glands and another 21 test CT scans for submandibular glands. RESULTS Dice values varied with the number of synthetic CT scans and the number of PCA models used to train the network. Using 2000 synthetic CT scans generated from 10 PCA models, we achieved Dice values of 82.8% ± 6.8% for the right parotid, 82.0% ± 6.9% for the left parotid, and 74.2% ± 6.8% for the submandibular glands. These results are comparable with those obtained from state-of-the-art auto-contouring approaches, including a deep learning network trained on more than 1000 contoured patients and a multi-atlas algorithm built from 12 well-contoured atlases. Improvement was marginal when >10 PCA models or >2000 synthetic CT scans were used.
CONCLUSIONS We demonstrated an effective data augmentation approach to train high-quality deep learning segmentation models from a limited number of well-contoured patient cases.
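The PCA-based augmentation described above can be sketched as follows: stack the deformation vector fields (DVFs), extract principal modes of variation, then sample random mode weights to synthesize new deformations that would warp the contoured CT. Everything here (array sizes, number of modes, random data) is an illustrative assumption, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: 200 DVFs, each flattened to a 6-component vector.
dvfs = rng.normal(size=(200, 6))
mean = dvfs.mean(axis=0)

# Principal modes of variation via SVD of the centered matrix.
_, s, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
k = 3                                  # keep the most prominent modes
std = s[:k] / np.sqrt(len(dvfs) - 1)   # per-mode standard deviation

def sample_dvf() -> np.ndarray:
    """Draw a random deformation: mean + random weights on top-k modes."""
    w = rng.normal(size=k) * std
    return mean + w @ vt[:k]

synthetic = sample_dvf()  # in practice, this DVF would warp the CT and its contours
print(synthetic.shape)
```

Each contoured scan gets its own PCA model, so sampled deformations stay plausible for that patient's anatomy while still producing unlimited training variety.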
Affiliation(s)
- Yao Zhao
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
- The University of Texas MD Anderson Graduate School of Biomedical Science, Houston, TX
- Dong Joo Rhee
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
- The University of Texas MD Anderson Graduate School of Biomedical Science, Houston, TX
- Carlos Cardenas
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
- Laurence E. Court
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
- Jinzhong Yang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
8
Huang K, Rhee DJ, Ger R, Layman R, Yang J, Cardenas CE, Court LE. Impact of slice thickness, pixel size, and CT dose on the performance of automatic contouring algorithms. J Appl Clin Med Phys 2021; 22:168-174. [PMID: 33779037 PMCID: PMC8130223 DOI: 10.1002/acm2.13207] [Received: 08/13/2020] [Revised: 12/12/2020] [Accepted: 01/30/2021] [Indexed: 12/25/2022]
Abstract
Purpose To investigate the impact of computed tomography (CT) image acquisition and reconstruction parameters, including slice thickness, pixel size, and dose, on automatic contouring algorithms. Methods Eleven scans from patients with head-and-neck cancer were reconstructed with varying slice thicknesses and pixel sizes. CT dose was varied by adding noise using low-dose simulation software. The impact of these imaging parameters on two in-house auto-contouring algorithms, one convolutional neural network (CNN)-based and one multi-atlas-based contouring system (MACS), was investigated for 183 reconstructed scans. For each algorithm, auto-contours for organs at risk were compared against auto-contours from scans with 3 mm slice thickness, 0.977 mm pixel size, and 100% CT dose using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD). Results Increasing the slice thickness from the baseline value of 3 mm gave a progressive reduction in DSC and an increase in HD and MSD, on average, for all structures. Reducing the CT dose had only a minimal effect on DSC and HD; the rate of change with respect to dose was approximately zero for both auto-contouring methods. Changes in pixel size had a small effect on DSC and HD for CNN-based auto-contouring, with differences in DSC within 0.07. Small structures deviated more from the baseline DSC values than large structures, whereas the relative differences in HD and MSD between large and small structures were small. Conclusions Auto-contours can deviate substantially with changes in CT acquisition and reconstruction parameters, especially slice thickness and pixel size. The CNN was less sensitive to changes in pixel size and dose level than the MACS. The results suggest that parameter values more restrictive than those of a typical head-and-neck imaging protocol are not required.
Affiliation(s)
- Kai Huang
- The University of Texas MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Dong Joo Rhee
- The University of Texas MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Rachel Ger
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Rick Layman
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jinzhong Yang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Carlos E Cardenas
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Laurence E Court
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
9
Rhee DJ, Jhingran A, Rigaud B, Netherton T, Cardenas CE, Zhang L, Vedam S, Kry S, Brock KK, Shaw W, O’Reilly F, Parkes J, Burger H, Fakie N, Trauernicht C, Simonds H, Court LE. Automatic contouring system for cervical cancer using convolutional neural networks. Med Phys 2020; 47:5648-5658. [PMID: 32964477 PMCID: PMC7756586 DOI: 10.1002/mp.14467] [Received: 07/29/2020] [Revised: 09/01/2020] [Accepted: 09/07/2020] [Indexed: 02/06/2023]
Abstract
PURPOSE To develop a tool for the automatic contouring of clinical treatment volumes (CTVs) and normal tissues for radiotherapy treatment planning in cervical cancer patients. METHODS An auto-contouring tool based on convolutional neural networks (CNN) was developed to delineate three cervical CTVs and 11 normal structures (seven OARs, four bony structures) in cervical cancer treatment for use with the Radiation Planning Assistant, a web-based automatic plan generation system. A total of 2254 retrospective clinical computed tomography (CT) scans from a single cancer center and 210 CT scans from a segmentation challenge were used to train and validate the CNN-based auto-contouring tool. The accuracy of the tool was evaluated by calculating the Sørensen–Dice similarity coefficient (DSC) and the mean surface and Hausdorff distances between the automatically generated contours and physician-drawn contours on 140 internal CT scans. A radiation oncologist scored the automatically generated contours on 30 external CT scans from three South African hospitals. RESULTS The average DSC, mean surface distance, and Hausdorff distance of our CNN-based tool were 0.86/0.19 cm/2.02 cm for the primary CTV, 0.81/0.21 cm/2.09 cm for the nodal CTV, 0.76/0.27 cm/2.00 cm for the PAN CTV, 0.89/0.11 cm/1.07 cm for the bladder, 0.81/0.18 cm/1.66 cm for the rectum, 0.90/0.06 cm/0.65 cm for the spinal cord, 0.94/0.06 cm/0.60 cm for the left femur, 0.93/0.07 cm/0.66 cm for the right femur, 0.94/0.08 cm/0.76 cm for the left kidney, 0.95/0.07 cm/0.84 cm for the right kidney, 0.93/0.05 cm/1.06 cm for the pelvic bone, 0.91/0.07 cm/1.25 cm for the sacrum, 0.91/0.07 cm/0.53 cm for the L4 vertebral body, and 0.90/0.08 cm/0.68 cm for the L5 vertebral body. On average, 80% of the CTV, 97% of the organ-at-risk, and 98% of the bony structure contours in the external test dataset were clinically acceptable based on physician review.
CONCLUSIONS Our CNN-based auto-contouring tool performed well on both internal and external datasets and had a high rate of clinical acceptability.
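The overlap metric used throughout these evaluations, the Sørensen–Dice similarity coefficient, is straightforward to compute from binary masks. The sketch below is an illustrative NumPy implementation on toy 2D masks; it is not the authors' evaluation code, and the mask names are hypothetical stand-ins for an auto-contour and a physician-drawn contour.

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Sørensen–Dice similarity: 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 10x10 masks standing in for an auto-contour and a manual contour.
auto = np.zeros((10, 10), dtype=bool)
auto[2:7, 2:7] = True         # 5x5 square
manual = np.zeros((10, 10), dtype=bool)
manual[3:8, 3:8] = True       # shifted 5x5 square

print(dice_coefficient(auto, manual))  # 16 overlapping voxels -> 2*16/50 = 0.64
```

In clinical evaluations the same formula is applied slice-by-slice or to full 3D voxel masks; the distance metrics (mean surface and Hausdorff distances) additionally require extracting the contour surfaces.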
Affiliation(s)
- Dong Joo Rhee
- MD Anderson UTHealth Graduate School, Houston, TX, USA
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Anuja Jhingran
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Bastien Rigaud
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Tucker Netherton
- MD Anderson UTHealth Graduate School, Houston, TX, USA
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Carlos E. Cardenas
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Lifei Zhang
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sastry Vedam
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Stephen Kry
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Kristy K. Brock
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- William Shaw
- Department of Medical Physics (G68), University of the Free State, Bloemfontein, South Africa
- Frederika O’Reilly
- Department of Medical Physics (G68), University of the Free State, Bloemfontein, South Africa
- Jeannette Parkes
- Division of Radiation Oncology and Medical Physics, University of Cape Town and Groote Schuur Hospital, Cape Town, South Africa
- Hester Burger
- Division of Radiation Oncology and Medical Physics, University of Cape Town and Groote Schuur Hospital, Cape Town, South Africa
- Nazia Fakie
- Division of Radiation Oncology and Medical Physics, University of Cape Town and Groote Schuur Hospital, Cape Town, South Africa
- Chris Trauernicht
- Division of Medical Physics, Stellenbosch University, Tygerberg Academic Hospital, Cape Town, South Africa
- Hannah Simonds
- Division of Radiation Oncology, Stellenbosch University, Tygerberg Academic Hospital, Cape Town, South Africa
- Laurence E. Court
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
10
Rhee DJ, Jhingran A, Kisling K, Cardenas C, Simonds H, Court L. Automated Radiation Treatment Planning for Cervical Cancer. Semin Radiat Oncol 2020; 30:340-347. [PMID: 32828389] [DOI: 10.1016/j.semradonc.2020.05.006]
Abstract
The radiation treatment-planning process includes contouring, planning, and reviewing the final plan, and each component requires substantial time and effort from multiple experts. Automation of treatment planning can save time, reduce the cost of radiation treatment, and potentially provide more consistent, better-quality plans. With the recent breakthroughs in computer hardware and artificial intelligence technology, automation methods for radiation treatment planning have generally achieved a clinically acceptable level of performance. At the same time, the automation process should be developed and evaluated independently for each disease site and treatment technique, since each is distinct. In this article, we discuss the current status of automated radiation treatment planning for cervical cancer for simple and complex plans, along with the corresponding automated quality assurance methods. Furthermore, we introduce the Radiation Planning Assistant, a web-based system designed to fully automate treatment planning for cervical cancer and other treatment sites.
Affiliation(s)
- Dong Joo Rhee
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
- Anuja Jhingran
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Kelly Kisling
- Department of Radiation Medicine and Applied Sciences, The University of California, San Diego, San Diego, CA
- Carlos Cardenas
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
- Hannah Simonds
- Department of Radiation Oncology, Tygerberg Hospital/University of Stellenbosch, Stellenbosch, South Africa
- Laurence Court
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
11
Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020; 47:e929-e950. [PMID: 32510603] [DOI: 10.1002/mp.14320]
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), which requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has experienced increased interest in the past decade, with new emerging trends that shifted H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provide critical discussions and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance image modalities are being exploited, but the potential of the latter should be explored more in the future; OAR - the spinal cord, brainstem, and major salivary glands are the most studied OARs, but additional experiments should be conducted for several less studied soft tissue structures; image database - several image databases with the corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed and participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient, as the standard volumetric overlap metric, should be accompanied by at least one distance metric and combined with clinical acceptability scores and risk assessments; segmentation performance - the best-performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.
Affiliation(s)
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Domen Močnik
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Primož Strojan
- Institute of Oncology Ljubljana, Zaloška cesta 2, Ljubljana, SI-1000, Slovenia
- Franjo Pernuš
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Department of Computer Science, University of Copenhagen, Universitetsparken 1, Copenhagen, D-2100, Denmark
12
Yang J, Veeraraghavan H, van Elmpt W, Dekker A, Gooding M, Sharp G. CT images with expert manual contours of thoracic cancer for benchmarking auto-segmentation accuracy. Med Phys 2020; 47:3250-3255. [PMID: 32128809] [DOI: 10.1002/mp.14107]
Abstract
PURPOSE Automatic segmentation offers many benefits for radiotherapy treatment planning; however, the lack of publicly available benchmark datasets limits the clinical use of automatic segmentation. In this work, we present a well-curated computed tomography (CT) dataset of high-quality manually drawn contours from patients with thoracic cancer that can be used to evaluate the accuracy of thoracic normal tissue auto-segmentation systems. ACQUISITION AND VALIDATION METHODS Computed tomography scans of 60 patients undergoing treatment simulation for thoracic radiotherapy were acquired from three institutions: MD Anderson Cancer Center, Memorial Sloan Kettering Cancer Center, and the MAASTRO clinic. Each institution provided CT scans from 20 patients, including mean intensity projection four-dimensional CT (4D CT), exhale phase (4D CT), or free-breathing CT scans depending on their clinical practice. All CT scans covered the entire thoracic region with a 50-cm field of view and slice spacing of 1, 2.5, or 3 mm. Manual contours of left/right lungs, esophagus, heart, and spinal cord were retrieved from the clinical treatment plans. These contours were checked for quality and edited if necessary to ensure adherence to RTOG 1106 contouring guidelines. DATA FORMAT AND USAGE NOTES The CT images and RTSTRUCT files are available in DICOM format. The regions of interest were named according to the nomenclature recommended by American Association of Physicists in Medicine Task Group 263 as Lung_L, Lung_R, Esophagus, Heart, and SpinalCord. This dataset is available on The Cancer Imaging Archive (funded by the National Cancer Institute) under Lung CT Segmentation Challenge 2017 (http://doi.org/10.7937/K9/TCIA.2017.3r3fvz08). POTENTIAL APPLICATIONS This dataset provides CT scans with well-delineated manually drawn contours from patients with thoracic cancer that can be used to evaluate auto-segmentation systems. 
Additional anatomies could be supplied in the future to enhance the existing library of contours.
Affiliation(s)
- Jinzhong Yang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Centre, New York, NY, USA
- Wouter van Elmpt
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht, The Netherlands
- Andre Dekker
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht, The Netherlands
- Greg Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, USA
13
Rhee DJ, Cardenas CE, Elhalawani H, McCarroll R, Zhang L, Yang J, Garden AS, Peterson CB, Beadle BM, Court LE. Automatic detection of contouring errors using convolutional neural networks. Med Phys 2019; 46:5086-5097. [PMID: 31505046] [PMCID: PMC6842055] [DOI: 10.1002/mp.13814]
Abstract
PURPOSE To develop a head and neck normal structures autocontouring tool that could be used to automatically detect errors in autocontours from a clinically validated autocontouring tool. METHODS An autocontouring tool based on convolutional neural networks (CNN) was developed for 16 normal structures of the head and neck and tested to identify contour errors from a clinically validated multiatlas-based autocontouring system (MACS). The computed tomography (CT) scans and clinical contours from 3495 patients were semiautomatically curated and used to train and validate the CNN-based autocontouring tool. The final accuracy of the tool was evaluated by calculating the Sørensen-Dice similarity coefficients (DSC) and Hausdorff distances between the automatically generated contours and physician-drawn contours on 174 internal and 24 external CT scans. Lastly, the CNN-based tool was evaluated on 60 patients' CT scans to investigate the possibility of detecting contouring failures. The contouring failures on these patients were classified as either minor or major errors. The criteria to detect contouring errors were determined by analyzing the DSC between the CNN- and MACS-based contours under two independent scenarios: (a) contours with minor errors are clinically acceptable and (b) contours with minor errors are clinically unacceptable. RESULTS The average DSC and Hausdorff distance of our CNN-based tool were 98.4%/1.23 cm for brain, 89.1%/0.42 cm for eyes, 86.8%/1.28 cm for mandible, 86.4%/0.88 cm for brainstem, 83.4%/0.71 cm for spinal cord, 82.7%/1.37 cm for parotids, 80.7%/1.08 cm for esophagus, 71.7%/0.39 cm for lenses, 68.6%/0.72 cm for optic nerves, 66.4%/0.46 cm for cochleas, and 40.7%/0.96 cm for optic chiasm. With the error detection tool, the proportions of the clinically unacceptable MACS contours that were correctly detected were 0.99/0.80 on average (excluding the optic chiasm) when contours with minor errors are considered clinically acceptable/unacceptable, respectively. The proportions of the clinically acceptable MACS contours that were correctly detected were 0.81/0.60 on average (excluding the optic chiasm) under the same two scenarios. CONCLUSION Our CNN-based autocontouring tool performed well on both the publicly available and the internal datasets. Furthermore, our results show that CNN-based algorithms are able to identify ill-defined contours from a clinically validated and used multiatlas-based autocontouring tool. Therefore, our CNN-based tool can effectively perform automatic verification of MACS contours.
Affiliation(s)
- Dong Joo Rhee
- The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, TX 77030, USA
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Carlos E. Cardenas
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Hesham Elhalawani
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Rachel McCarroll
- Department of Radiation Oncology, The University of Maryland Medical System, Baltimore, MD 21201, USA
- Lifei Zhang
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jinzhong Yang
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Adam S. Garden
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Christine B. Peterson
- Department of Biostatistics, Division of Basic Sciences, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Beth M. Beadle
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Laurence E. Court
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
14
Schipaanboord B, Boukerroui D, Peressutti D, van Soest J, Lustberg T, Dekker A, van Elmpt W, Gooding MJ. An Evaluation of Atlas Selection Methods for Atlas-Based Automatic Segmentation in Radiotherapy Treatment Planning. IEEE Trans Med Imaging 2019; 38:2654-2664. [PMID: 30969918] [DOI: 10.1109/tmi.2019.2907072]
Abstract
Atlas-based automatic segmentation is used in radiotherapy planning to accelerate the delineation of organs at risk (OARs). Atlas selection has been proposed as a way to improve the accuracy and execution time of segmentation, assuming that the more similar the atlas is to the patient, the better the results will be. This paper presents an analysis of atlas selection methods in the context of radiotherapy treatment planning. For a range of commonly contoured OARs, a thorough comparison of a large class of typical atlas selection methods has been performed. For this evaluation, clinically contoured CT images of the head and neck (N = 316) and thorax (N = 280) were used. The state-of-the-art intensity and deformation similarity-based atlas selection methods were found to compare poorly to perfect atlas selection. Counter-intuitively, atlas selection methods based on a fixed set of representative atlases outperformed atlas selection methods based on the patient image. This study suggests that atlas-based segmentation with currently available selection methods compares poorly to the potential best performance, hampering the clinical utility of atlas-based segmentation. Effective atlas selection remains an open challenge in atlas-based segmentation for radiotherapy planning.
15
Abstract
Manual image segmentation is a time-consuming task routinely performed in radiotherapy to identify each patient's targets and anatomical structures. The efficacy and safety of the radiotherapy plan requires accurate segmentations as these regions of interest are generally used to optimize and assess the quality of the plan. However, reports have shown that this process can be subject to significant inter- and intraobserver variability. Furthermore, the quality of the radiotherapy treatment, and subsequent analyses (ie, radiomics, dosimetric), can be subject to the accuracy of these manual segmentations. Automatic segmentation (or auto-segmentation) of targets and normal tissues is, therefore, preferable as it would address these challenges. Previously, auto-segmentation techniques have been clustered into 3 generations of algorithms, with multiatlas based and hybrid techniques (third generation) being considered the state-of-the-art. More recently, however, the field of medical image segmentation has seen accelerated growth driven by advances in computer vision, particularly through the application of deep learning algorithms, suggesting we have entered the fourth generation of auto-segmentation algorithm development. In this paper, the authors review traditional (nondeep learning) algorithms particularly relevant for applications in radiotherapy. Concepts from deep learning are introduced focusing on convolutional neural networks and fully-convolutional networks which are generally used for segmentation tasks. Furthermore, the authors provide a summary of deep learning auto-segmentation radiotherapy applications reported in the literature. Lastly, considerations for clinical deployment (commissioning and QA) of auto-segmentation software are provided.
16
Luo Y, Xu Y, Liao Z, Gomez D, Wang J, Jiang W, Zhou R, Williamson R, Court LE, Yang J. Automatic segmentation of cardiac substructures from noncontrast CT images: accurate enough for dosimetric analysis? Acta Oncol 2019; 58:81-87. [PMID: 30306817] [DOI: 10.1080/0284186x.2018.1521985]
Abstract
PURPOSE We evaluated the feasibility of using an automatic segmentation tool to delineate cardiac substructures from noncontrast computed tomography (CT) images for cardiac dosimetry and toxicity analyses for patients with nonsmall cell lung cancer (NSCLC) after radiotherapy. MATERIAL AND METHODS We used an in-house developed multi-atlas segmentation tool to delineate 11 cardiac substructures, including the whole heart, four heart chambers, and six great vessels, automatically from the averaged 4D-CT planning images of 49 patients with NSCLC. Two experienced radiation oncologists edited the auto-segmented contours. Times for automatic segmentation and modification were recorded. The modified contours were compared with the auto-segmented contours in terms of Dice similarity coefficient (DSC) and mean surface distance (MSD) to evaluate the extent of modification. Differences in dose-volume histogram (DVH) characteristics were also evaluated for the modified versus auto-segmented contours. RESULTS The mean automatic segmentation time for all 11 structures was 7-9 min. For the 49 patients, the mean DSC values (±SD) ranged from .73 ± .08 to .95 ± .04, and the mean MSD values ranged from 1.3 ± .6 mm to 2.9 ± 5.1 mm. Overall, the modifications were small; the largest modifications were in the pulmonary vein and the inferior vena cava. The heart V30 (volume receiving dose ≥30 Gy) and the mean dose to the whole heart and the four heart chambers did not differ significantly between the modified and auto-segmented contours (p < .05). Similarly, the maximum dose to the great vessels did not differ, except for the pulmonary vein. CONCLUSIONS Automatic segmentation of cardiac substructures did not require substantial modifications. 
Dosimetric evaluation showed no significant difference between the auto-segmented and modified contours for most structures, which suggests that the auto-segmented contours can be used to study cardiac dose-responses in clinical practice.
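The dose-volume quantities compared above (e.g. heart V30) reduce to simple counting over a voxel dose grid. The sketch below is a hedged illustration with a hypothetical function name and toy numbers, not the study's code; it assumes uniform voxel volumes.

```python
import numpy as np

def vx_percent(dose_gy: np.ndarray, mask: np.ndarray, threshold_gy: float) -> float:
    """Percent of a structure's volume receiving at least threshold_gy
    (e.g. heart V30), assuming all voxels have equal volume."""
    structure_dose = dose_gy[mask.astype(bool)]
    return 100.0 * float((structure_dose >= threshold_gy).mean())

# Toy 1D dose grid (Gy); the structure mask covers the first four voxels.
dose = np.array([10.0, 25.0, 31.0, 40.0, 50.0])
heart = np.array([True, True, True, True, False])

print(vx_percent(dose, heart, 30.0))  # 2 of 4 structure voxels ≥ 30 Gy -> 50.0
```

The mean dose to a structure would similarly be `dose_gy[mask].mean()`; real DVH software additionally accounts for partial-volume effects at structure boundaries.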
Affiliation(s)
- Yangkun Luo
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Department of Radiation Oncology, Sichuan Cancer Hospital, Chengdu, China
- Yujin Xu
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Department of Radiation Oncology, Zhejiang Cancer Hospital, Hangzhou, China
- Zhongxing Liao
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Daniel Gomez
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jingqian Wang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Wei Jiang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Rongrong Zhou
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha, China
- Ryan Williamson
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Laurence E. Court
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jinzhong Yang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
17
Yang J, Veeraraghavan H, Armato SG, Farahani K, Kirby JS, Kalpathy-Kramer J, van Elmpt W, Dekker A, Han X, Feng X, Aljabar P, Oliveira B, van der Heyden B, Zamdborg L, Lam D, Gooding M, Sharp GC. Autosegmentation for thoracic radiation treatment planning: A grand challenge at AAPM 2017. Med Phys 2018; 45:4568-4581. [PMID: 30144101] [PMCID: PMC6714977] [DOI: 10.1002/mp.13141]
Abstract
PURPOSE This report presents the methods and results of the Thoracic Auto-Segmentation Challenge organized at the 2017 Annual Meeting of the American Association of Physicists in Medicine. The purpose of the challenge was to provide a benchmark dataset and platform for evaluating the performance of autosegmentation methods for organs at risk (OARs) in thoracic CT images. METHODS Sixty thoracic CT scans provided by three different institutions were separated into 36 training, 12 offline testing, and 12 online testing scans. Eleven participants completed the offline challenge, and seven completed the online challenge. The OARs were left and right lungs, heart, esophagus, and spinal cord. Clinical contours used for treatment planning were quality checked and edited to adhere to the RTOG 1106 contouring guidelines. Algorithms were evaluated using the Dice coefficient, Hausdorff distance, and mean surface distance. A consolidated score was computed by normalizing the metrics against interrater variability and averaging over all patients and structures. RESULTS The interrater study revealed the highest variability in Dice for the esophagus and spinal cord, and in surface distances for lungs and heart. Five out of seven algorithms that participated in the online challenge employed deep-learning methods. Although the top three participants using deep learning produced the best segmentation for all structures, there was no significant difference in the performance among them. The fourth-place participant used a multi-atlas-based approach. The highest Dice scores were produced for lungs, with averages ranging from 0.95 to 0.98, while the lowest Dice scores were produced for esophagus, with a range of 0.55-0.72. CONCLUSION The results of the challenge showed that the lungs and heart can be segmented fairly accurately by various algorithms, while deep-learning methods performed better on the esophagus. 
Our dataset together with the manual contours for all training cases continues to be available publicly as an ongoing benchmarking resource.
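The abstract describes consolidating metrics by "normalizing against interrater variability and averaging over all patients and structures" without giving the formula. As a hedged illustration only, one plausible form is a z-score against an interrater baseline; the function name, the exact normalization, and the toy numbers below are all assumptions, not the challenge's actual scoring rule.

```python
import numpy as np

def consolidated_score(values, interrater_mean, interrater_sd):
    """Hypothetical consolidation: express each per-patient, per-structure
    metric as standard deviations from the interrater mean, then average.
    (The challenge's exact normalization is not specified here.)"""
    z = (np.asarray(values, dtype=float) - interrater_mean) / interrater_sd
    return float(z.mean())

# Toy Dice values for one structure across three patients, scored against
# an assumed interrater baseline of 0.94 ± 0.02.
print(round(consolidated_score([0.95, 0.97, 0.96], 0.94, 0.02), 6))  # 1.0
```

Normalizing this way lets metrics on different scales (Dice, Hausdorff, mean surface distance) be averaged into a single ranking score relative to human interobserver spread.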
Affiliation(s)
- Jinzhong Yang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Keyvan Farahani
- Cancer Imaging Program, National Cancer Institute, Bethesda, MD, USA
- Justin S. Kirby
- Cancer Imaging Program, Frederick National Laboratory for Cancer Research sponsored by the National Cancer Institute, Frederick, MD, USA
- Wouter van Elmpt
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht, The Netherlands
- Andre Dekker
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht, The Netherlands
- Xiao Han
- Elekta Inc., Maryland Heights, MO, USA
- Xue Feng
- Department of Biomedical Engineering, University of Virginia, Charlottesville, VA, USA
- Bruno Oliveira
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal
- ICVS/3Bs - PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Brent van der Heyden
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht, The Netherlands
- Leonid Zamdborg
- Department of Radiation Oncology, Beaumont Health, Royal Oak, MI, USA
- Dao Lam
- Department of Radiation Oncology, Washington University School of Medicine in St. Louis, St. Louis, MO, USA
18
Kazemifar S, Balagopal A, Nguyen D, McGuire S, Hannan R, Jiang S, Owrangi A. Segmentation of the prostate and organs at risk in male pelvic CT images using deep learning. Biomed Phys Eng Express 2018. [DOI: 10.1088/2057-1976/aad100]