1
Xu Z, Dai Y, Liu F, Li S, Liu S, Shi L, Fu J. Parotid Gland Segmentation Using Purely Transformer-Based U-Shaped Network and Multimodal MRI. Ann Biomed Eng 2024; 52:2101-2117. [PMID: 38691234 DOI: 10.1007/s10439-024-03510-3] [Received: 09/29/2023] [Accepted: 04/03/2024] [Indexed: 05/03/2024]
Abstract
Parotid gland tumors account for approximately 2% to 10% of head and neck tumors. Segmentation of the parotid glands and their tumors on magnetic resonance images is essential for accurate diagnosis and selection of an appropriate surgical plan. However, segmentation of the parotid glands is particularly challenging due to their variable shape and low contrast with surrounding structures. Deep learning has developed rapidly in recent years, and Transformer-based networks have performed well on many computer vision tasks; however, they have not yet been widely applied to parotid gland segmentation. We collected a multi-center, multimodal parotid gland MRI dataset and implemented parotid gland segmentation using a purely Transformer-based U-shaped segmentation network. We used both absolute and relative positional encoding to improve segmentation and achieved multimodal information fusion without increasing the network's computation. In addition, our novel training approach reduces the clinician's labeling workload by nearly half. Our method achieved good segmentation of both the parotid glands and tumors. On the test set, our model achieved a Dice-Similarity Coefficient of 86.99%, Pixel Accuracy of 99.19%, Mean Intersection over Union of 81.79%, and Hausdorff Distance of 3.87. The purely Transformer-based U-shaped segmentation network we used outperforms other convolutional neural networks. In addition, our method effectively fuses information from the multi-center, multimodal MRI dataset, thus improving parotid gland segmentation.
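The overlap metrics reported in this abstract (Dice, IoU, pixel accuracy) are standard and can be computed directly from binary masks. A minimal numpy sketch, for illustration only (not the authors' code; the function name is ours):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Overlap metrics between a predicted and a ground-truth binary mask."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())  # Dice-Similarity Coefficient
    iou = inter / union                           # Intersection over Union
    pixel_acc = (pred == gt).mean()               # Pixel Accuracy
    return {"dice": dice, "iou": iou, "pixel_accuracy": pixel_acc}
```

Note that the paper reports *Mean* IoU, i.e., IoU averaged over classes; the single-mask version above is the per-class building block.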
Affiliation(s)
- Zi'an Xu
  - Northeastern University, Shenyang, China
- Yin Dai
  - Northeastern University, Shenyang, China
- Fayu Liu
  - China Medical University, Shenyang, China
- Siqi Li
  - China Medical University, Shenyang, China
- Sheng Liu
  - China Medical University, Shenyang, China
- Lifu Shi
  - Liaoning Jiayin Medical Technology Co., Shenyang, China
- Jun Fu
  - Northeastern University, Shenyang, China
2
Nenoff L, Amstutz F, Murr M, Archibald-Heeren B, Fusella M, Hussein M, Lechner W, Zhang Y, Sharp G, Vasquez Osorio E. Review and recommendations on deformable image registration uncertainties for radiotherapy applications. Phys Med Biol 2023; 68:24TR01. [PMID: 37972540 PMCID: PMC10725576 DOI: 10.1088/1361-6560/ad0d8a] [Received: 04/11/2023] [Revised: 10/30/2023] [Accepted: 11/15/2023] [Indexed: 11/19/2023]
Abstract
Deformable image registration (DIR) is a versatile tool used in many applications in radiotherapy (RT). DIR algorithms have been implemented in many commercial treatment planning systems providing accessible and easy-to-use solutions. However, the geometric uncertainty of DIR can be large and difficult to quantify, resulting in barriers to clinical practice. Currently, there is no agreement in the RT community on how to quantify these uncertainties and determine thresholds that distinguish a good DIR result from a poor one. This review summarises the current literature on sources of DIR uncertainties and their impact on RT applications. Recommendations are provided on how to handle these uncertainties for patient-specific use, commissioning, and research. Recommendations are also provided for developers and vendors to help users to understand DIR uncertainties and make the application of DIR in RT safer and more reliable.
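One widely reported indicator of DIR geometric plausibility is the pointwise Jacobian determinant of the deformation field, where values at or below zero flag folding. As an illustrative sketch (not from this review; the function name and array convention are ours), for a 2D displacement field:

```python
import numpy as np

def jacobian_determinant_2d(dvf):
    """Pointwise Jacobian determinant of a 2D displacement field dvf[H, W, 2]
    (channel 0 = displacement along axis 0, channel 1 = along axis 1).
    Determinants <= 0 flag physically implausible folding."""
    du0_d0, du0_d1 = np.gradient(dvf[..., 0])  # derivatives of component 0
    du1_d0, du1_d1 = np.gradient(dvf[..., 1])  # derivatives of component 1
    return (1.0 + du0_d0) * (1.0 + du1_d1) - du0_d1 * du1_d0
```

For the identity transform (zero displacement everywhere) the determinant is 1 at every voxel, reflecting no local volume change.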
Affiliation(s)
- Lena Nenoff
  - Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
  - Harvard Medical School, Boston, MA, United States of America
  - OncoRay—National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden—Rossendorf, Dresden, Germany
  - Helmholtz-Zentrum Dresden—Rossendorf, Institute of Radiooncology—OncoRay, Dresden, Germany
- Florian Amstutz
  - Department of Physics, ETH Zurich, Switzerland
  - Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
  - Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Martina Murr
  - Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany
- Marco Fusella
  - Department of Radiation Oncology, Abano Terme Hospital, Italy
- Mohammad Hussein
  - Metrology for Medical Physics, National Physical Laboratory, Teddington, United Kingdom
- Wolfgang Lechner
  - Department of Radiation Oncology, Medical University of Vienna, Austria
- Ye Zhang
  - Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Greg Sharp
  - Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
  - Harvard Medical School, Boston, MA, United States of America
- Eliana Vasquez Osorio
  - Division of Cancer Sciences, The University of Manchester, Manchester, United Kingdom
3
Abbasi S, Mehdizadeh A, Boveiri HR, Mosleh Shirazi MA, Javidan R, Khayami R, Tavakoli M. Unsupervised deep learning registration model for multimodal brain images. J Appl Clin Med Phys 2023; 24:e14177. [PMID: 37823748 PMCID: PMC10647957 DOI: 10.1002/acm2.14177] [Received: 03/02/2023] [Revised: 05/29/2023] [Accepted: 09/14/2023] [Indexed: 10/13/2023]
Abstract
Multimodal image registration is key to many clinical image-guided interventions. However, it is a challenging task because of the complicated and unknown relationships between different modalities. Currently, deep supervised learning is the state-of-the-art approach, in which registration is conducted end-to-end in a single shot; consequently, a large amount of ground-truth data is required to improve the results of deep neural networks for registration. Moreover, supervised methods may yield models biased towards the annotated structures. An alternative approach that addresses these challenges is unsupervised learning. In this study, we designed a novel deep unsupervised Convolutional Neural Network (CNN)-based model for affine co-registration of computed tomography/magnetic resonance (CT/MR) brain images. For this purpose, we created a dataset of 1100 pairs of CT/MR brain slices from 110 neuropsychiatric patients with and without tumors. Next, 12 landmarks were selected and annotated on each slice by an experienced radiologist, enabling computation of a series of evaluation metrics: target registration error (TRE), Dice similarity, Hausdorff distance, and Jaccard coefficient. The proposed method registered the multimodal images with a TRE of 9.89, Dice similarity of 0.79, Hausdorff distance of 7.15, and Jaccard coefficient of 0.75, values suitable for clinical applications. Moreover, the approach registered the images in an acceptable 203 ms, making it attractive for clinical use thanks to the short registration time and high accuracy. The results show that our proposed method achieves competitive performance against related approaches in terms of both computation time and the evaluation metrics.
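The target registration error used above is the distance between corresponding landmarks after applying the estimated transform. A minimal numpy sketch for the 2D affine case described in the abstract (illustrative only; function name and the 2x3 matrix convention are our assumptions, not the authors' implementation):

```python
import numpy as np

def target_registration_error(moving_pts, fixed_pts, affine):
    """Mean Euclidean distance between affine-transformed moving landmarks
    and their corresponding fixed landmarks (2D slices, 2x3 affine matrix)."""
    moving = np.asarray(moving_pts, dtype=float)
    fixed = np.asarray(fixed_pts, dtype=float)
    homog = np.hstack([moving, np.ones((len(moving), 1))])  # N x 3 homogeneous coords
    warped = homog @ np.asarray(affine, dtype=float).T      # N x 2 warped landmarks
    return float(np.linalg.norm(warped - fixed, axis=1).mean())
```

With a perfect registration the warped moving landmarks coincide with the fixed ones and the TRE is zero; any residual misalignment raises the mean distance.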
Affiliation(s)
- Samaneh Abbasi
  - Department of Medical Physics and Engineering, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Alireza Mehdizadeh
  - Research Center for Neuromodulation and Pain, Shiraz University of Medical Sciences, Shiraz, Iran
- Hamid Reza Boveiri
  - Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Mohammad Amin Mosleh Shirazi
  - Ionizing and Non-Ionizing Radiation Protection Research Center, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Reza Javidan
  - Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Raouf Khayami
  - Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Meysam Tavakoli
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
4
Lucido JJ, DeWees TA, Leavitt TR, Anand A, Beltran CJ, Brooke MD, Buroker JR, Foote RL, Foss OR, Gleason AM, Hodge TL, Hughes CO, Hunzeker AE, Laack NN, Lenz TK, Livne M, Morigami M, Moseley DJ, Undahl LM, Patel Y, Tryggestad EJ, Walker MZ, Zverovitch A, Patel SH. Validation of clinical acceptability of deep-learning-based automated segmentation of organs-at-risk for head-and-neck radiotherapy treatment planning. Front Oncol 2023; 13:1137803. [PMID: 37091160 PMCID: PMC10115982 DOI: 10.3389/fonc.2023.1137803] [Received: 01/04/2023] [Accepted: 03/24/2023] [Indexed: 04/09/2023]
Abstract
Introduction Organ-at-risk segmentation for head and neck cancer radiation therapy is a complex and time-consuming process (requiring up to 42 individual structures) and may delay the start of treatment or even limit access to function-preserving care. The feasibility of using a deep learning (DL) based autosegmentation model to reduce contouring time without compromising contour accuracy was assessed through a blinded randomized trial of radiation oncologists (ROs) using retrospective, de-identified patient data. Methods Two head and neck expert ROs used dedicated time to create gold standard (GS) contours on computed tomography (CT) images. 445 CTs were used to train a custom 3D U-Net DL model covering 42 organs-at-risk, with an additional 20 CTs held out for the randomized trial. For each held-out patient dataset, one of the eight participating ROs was randomly allocated to review and revise the contours produced by the DL model, while another reviewed contours produced by a medical dosimetry assistant (MDA), both blinded to their origin. The time required for MDAs and ROs to contour was recorded, and the unrevised DL contours, as well as the RO-revised DL and MDA contours, were compared to the GS for that patient. Results Mean time for initial MDA contouring was 2.3 hours (range 1.6-3.8 hours) and RO revision took 1.1 hours (range 0.4-4.4 hours), compared to 0.7 hours (range 0.1-2.0 hours) for RO revision of the DL contours. Total time was reduced by 76% (95% confidence interval: 65%-88%) and RO-revision time by 35% (95% CI: -39%-91%). For all geometric and dosimetric metrics computed, agreement with the GS was equivalent or significantly greater (p<0.05) for RO-revised DL contours compared to RO-revised MDA contours, including volumetric Dice similarity coefficient (VDSC), surface DSC, added path length, and the 95% Hausdorff distance. 32 OARs (76%) had a mean VDSC greater than 0.8 for the RO-revised DL contours, compared to 20 (48%) for RO-revised MDA contours and 34 (81%) for the unrevised DL OARs. Conclusion DL autosegmentation demonstrated significant time savings for organ-at-risk contouring while improving agreement with the institutional GS, indicating comparable accuracy of the DL model. Integration into clinical practice with a prospective evaluation is currently underway.
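Among the geometric metrics listed above, the 95% Hausdorff distance is the 95th percentile of the surface-to-surface distances in both directions, which suppresses single-pixel outliers. A minimal numpy sketch over boundary point sets (illustrative; not the study's implementation):

```python
import numpy as np

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (N x D and M x D arrays of boundary coordinates)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    directed = np.hstack([d.min(axis=1), d.min(axis=0)])        # both directions
    return float(np.percentile(directed, 95))
```

Taking a percentile rather than the maximum is what makes the metric robust to a few stray contour points; the plain Hausdorff distance is the same computation with `max` in place of the 95th percentile.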
Affiliation(s)
- J. John Lucido
  - Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Todd A. DeWees
  - Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
- Todd R. Leavitt
  - Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
- Aman Anand
  - Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
- Chris J. Beltran
  - Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL, United States
- Justine R. Buroker
  - Research Services, Comprehensive Cancer Center, Mayo Clinic, Rochester, MN, United States
- Robert L. Foote
  - Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Olivia R. Foss
  - Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Angela M. Gleason
  - Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Teresa L. Hodge
  - Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Ashley E. Hunzeker
  - Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Nadia N. Laack
  - Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Tamra K. Lenz
  - Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Douglas J. Moseley
  - Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Lisa M. Undahl
  - Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Yojan Patel
  - Google Health, Mountain View, CA, United States
- Erik J. Tryggestad
  - Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Samir H. Patel
  - Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
5
Zhao Q, Wang G, Lei W, Fu H, Qu Y, Lu J, Zhang S, Zhang S. Segmentation of multiple Organs-at-Risk associated with brain tumors based on coarse-to-fine stratified networks. Med Phys 2023. [PMID: 36762594 DOI: 10.1002/mp.16247] [Received: 12/24/2021] [Revised: 12/10/2022] [Accepted: 12/27/2022] [Indexed: 02/11/2023]
Abstract
BACKGROUND Delineation of Organs-at-Risk (OARs) is an important step in radiotherapy treatment planning. As manual delineation is time-consuming, labor-intensive, and affected by inter- and intra-observer variability, a robust and efficient automatic segmentation algorithm is highly desirable for improving the efficiency and repeatability of OAR delineation. PURPOSE Automatic segmentation of OARs in medical images is challenged by low contrast and by the varied shapes and imbalanced sizes of different organs. We aim to overcome these challenges and develop a high-performance method for automatic segmentation of the 10 OARs required in radiotherapy planning for brain tumors. METHODS A novel two-stage segmentation framework is proposed, in which a coarse, simultaneous localization of all target organs is obtained in the first stage and a fine segmentation is then obtained for each organ in the second stage. To deal with organs of various sizes and shapes, a stratified segmentation strategy is proposed: a High- and Low-Resolution Residual Network (HLRNet), consisting of a multiresolution branch and a high-resolution branch, segments the medium-sized organs, and a High-Resolution Residual Network (HRRNet) segments the small organs. In addition, a label fusion strategy is proposed to better handle symmetric pairs of organs such as the left and right cochleas and lacrimal glands. RESULTS Our method was validated on the dataset of the MICCAI ABCs 2020 challenge for OAR segmentation. It obtained an average Dice of 75.8% for 10 OARs, significantly outperforming several state-of-the-art models including nnU-Net (71.6%) and FocusNet (72.4%). The proposed HLRNet and HRRNet improved segmentation accuracy for medium-sized and small organs, respectively, and the label fusion strategy led to higher accuracy for symmetric pairs of organs.
CONCLUSIONS Our proposed method is effective for the segmentation of OARs of brain tumors, with a better performance than existing methods, especially on medium-sized and small organs. It has a potential for improving the efficiency of radiotherapy planning with high segmentation accuracy.
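The paper's exact label fusion scheme for symmetric organ pairs is not detailed in the abstract. One generic way to exploit left/right symmetry, shown here purely as an assumed illustration (function name and approach are ours, not necessarily the authors' scheme), is to average a model's prediction with its prediction on the mirrored image:

```python
import numpy as np

def symmetry_averaged_prediction(image, predict_fn):
    """Average a model's probability map over the original image and the
    left-right mirrored image (mirrored back) - one generic way to exploit
    the symmetry of paired organs. predict_fn maps an H x W image to an
    H x W probability map."""
    p = predict_fn(image)
    p_mirror = predict_fn(image[:, ::-1])[:, ::-1]  # mirror, predict, mirror back
    return 0.5 * (p + p_mirror)
```

For a left/right organ pair, this ties the two sides' predictions together, so a model that is stronger on one side shares that strength with the other.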
Affiliation(s)
- Qianfei Zhao
  - School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Guotai Wang
  - School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
  - Shanghai AI Laboratory, Shanghai, China
- Wenhui Lei
  - School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Hao Fu
  - School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yijie Qu
  - School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jiangshan Lu
  - School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Shichuan Zhang
  - Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu, China
- Shaoting Zhang
  - School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
  - Shanghai AI Laboratory, Shanghai, China
6
Önder M, Evli C, Türk E, Kazan O, Bayrakdar İŞ, Çelik Ö, Costa ALF, Gomes JPP, Ogawa CM, Jagtap R, Orhan K. Deep-Learning-Based Automatic Segmentation of Parotid Gland on Computed Tomography Images. Diagnostics (Basel) 2023; 13:581. [PMID: 36832069 PMCID: PMC9955422 DOI: 10.3390/diagnostics13040581] [Received: 12/10/2022] [Revised: 01/23/2023] [Accepted: 02/02/2023] [Indexed: 02/08/2023]
Abstract
This study aims to develop an algorithm for the automatic segmentation of the parotid gland on CT images of the head and neck using the U-Net architecture and to evaluate the model's performance. In this retrospective study, a total of 30 anonymized CT volumes of the head and neck were sliced into 931 axial images of the parotid glands. Ground truth labeling was performed with the CranioCatch Annotation Tool (CranioCatch, Eskisehir, Turkey) by two oral and maxillofacial radiologists. The images were resized to 512 × 512 and split into training (80%), validation (10%), and testing (10%) subgroups. A deep convolutional neural network model was developed using the U-Net architecture. Automatic segmentation performance was evaluated in terms of the F1-score, precision, sensitivity, and Area Under Curve (AUC) statistics. A segmentation was considered successful when more than 50% of its pixels intersected the ground truth. The F1-score, precision, and sensitivity of the AI model in segmenting the parotid glands on the axial CT slices were all found to be 1, and the AUC value was 0.96. This study has shown that it is possible to use deep-learning-based AI models to automatically segment the parotid gland on axial CT images.
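The study's per-slice success criterion (more than 50% of pixels intersecting the ground truth) can be expressed as a simple overlap test; a hedged numpy sketch (our naming and reading of the criterion, not the authors' code):

```python
import numpy as np

def is_successful_segmentation(pred, gt, thresh=0.5):
    """Deem a predicted mask successful when the intersection covers more
    than `thresh` of the ground-truth pixels."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    n_gt = gt.sum()
    if n_gt == 0:
        return False  # no ground-truth object to recover
    return bool(np.logical_and(pred, gt).sum() / n_gt > thresh)
```

Per-image metrics such as F1, precision, and sensitivity can then be aggregated over the slices that pass (or fail) this test.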
Affiliation(s)
- Merve Önder
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 06000, Turkey
- Cengiz Evli
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 06000, Turkey
- Ezgi Türk
  - Dentomaxillofacial Radiology, Oral and Dental Health Center, Hatay 31040, Turkey
- Orhan Kazan
  - Health Services Vocational School, Gazi University, Ankara 06560, Turkey
- İbrahim Şevki Bayrakdar
  - Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir 26040, Turkey
  - Eskisehir Osmangazi University Center of Research and Application for Computer-Aided Diagnosis and Treatment in Health, Eskişehir 26040, Turkey
  - Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS 39216, USA
- Özer Çelik
  - Eskisehir Osmangazi University Center of Research and Application for Computer-Aided Diagnosis and Treatment in Health, Eskişehir 26040, Turkey
  - Department of Mathematics-Computer, Faculty of Science, Eskisehir Osmangazi University, Eskişehir 26040, Turkey
- Andre Luiz Ferreira Costa
  - Postgraduate Program in Dentistry, Cruzeiro do Sul University (UNICSUL), São Paulo 01506-000, SP, Brazil
- João Pedro Perez Gomes
  - Department of Stomatology, Division of General Pathology, School of Dentistry, University of São Paulo (USP), São Paulo 13560-970, SP, Brazil
- Celso Massahiro Ogawa
  - Postgraduate Program in Dentistry, Cruzeiro do Sul University (UNICSUL), São Paulo 01506-000, SP, Brazil
- Rohan Jagtap
  - Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS 39216, USA
- Kaan Orhan
  - Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 06000, Turkey
  - Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, 20-093 Lublin, Poland
  - Ankara University Medical Design Application and Research Center (MEDITAM), Ankara 06000, Turkey
7
Volpe S, Pepa M, Zaffaroni M, Bellerba F, Santamaria R, Marvaso G, Isaksson LJ, Gandini S, Starzyńska A, Leonardi MC, Orecchia R, Alterio D, Jereczek-Fossa BA. Machine Learning for Head and Neck Cancer: A Safe Bet?-A Clinically Oriented Systematic Review for the Radiation Oncologist. Front Oncol 2021; 11:772663. [PMID: 34869010 PMCID: PMC8637856 DOI: 10.3389/fonc.2021.772663] [Received: 09/08/2021] [Accepted: 10/25/2021] [Indexed: 12/17/2022]
Abstract
BACKGROUND AND PURPOSE Machine learning (ML) is emerging as a feasible approach to optimizing patients' care paths in Radiation Oncology. Applications include autosegmentation, treatment planning optimization, and prediction of oncological and toxicity outcomes. The purpose of this clinically oriented systematic review is to illustrate the potential and limitations of the most commonly used ML models in solving everyday clinical issues in head and neck cancer (HNC) radiotherapy (RT). MATERIALS AND METHODS Electronic databases were screened up to May 2021. Studies dealing with ML and radiomics were considered eligible. The quality of the included studies was rated with an adapted version of the qualitative checklist originally developed by Luo et al. All statistical analyses were performed using R version 3.6.1. RESULTS Forty-eight studies (21 on autosegmentation, four on treatment planning, 12 on oncological outcome prediction, 10 on toxicity prediction, and one on determinants of postoperative RT) were included in the analysis. The most common imaging modality was computed tomography (CT) (40%), followed by magnetic resonance (MR) (10%). Quantitative image features were considered in nine studies (19%). No significant differences were identified in global and methodological scores when studies were stratified by task (e.g., autosegmentation). DISCUSSION AND CONCLUSION The range of possible applications of ML in the field of HNC Radiation Oncology is wide, although this area of research is relatively young. Overall, if not safe yet, ML is most probably a bet worth making.
Affiliation(s)
- Stefania Volpe
  - Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
  - Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Matteo Pepa
  - Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Mattia Zaffaroni
  - Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Federica Bellerba
  - Molecular and Pharmaco-Epidemiology Unit, Department of Experimental Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Riccardo Santamaria
  - Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
  - Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Giulia Marvaso
  - Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
  - Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Lars Johannes Isaksson
  - Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Sara Gandini
  - Molecular and Pharmaco-Epidemiology Unit, Department of Experimental Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Anna Starzyńska
  - Department of Oral Surgery, Medical University of Gdańsk, Gdańsk, Poland
- Maria Cristina Leonardi
  - Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Roberto Orecchia
  - Scientific Directorate, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Daniela Alterio
  - Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Barbara Alicja Jereczek-Fossa
  - Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
  - Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
8
Korte JC, Hardcastle N, Ng SP, Clark B, Kron T, Jackson P. Cascaded deep learning-based auto-segmentation for head and neck cancer patients: Organs at risk on T2-weighted magnetic resonance imaging. Med Phys 2021; 48:7757-7772. [PMID: 34676555 DOI: 10.1002/mp.15290] [Received: 03/16/2021] [Revised: 08/30/2021] [Accepted: 09/24/2021] [Indexed: 12/09/2022]
Abstract
PURPOSE To investigate multiple deep learning methods for automated segmentation (auto-segmentation) of the parotid glands, submandibular glands, and level II and level III lymph nodes on magnetic resonance imaging (MRI). Outlining radiosensitive organs on images used to assist radiation therapy (radiotherapy) of patients with head and neck cancer (HNC) is a time-consuming task, in which variability between observers may directly impact patient treatment outcomes. Auto-segmentation on computed tomography imaging has been shown to result in significant time reductions and more consistent outlines of the organs at risk. METHODS Three convolutional neural network (CNN)-based auto-segmentation architectures were developed using manual segmentations and T2-weighted MRI images provided in the American Association of Physicists in Medicine (AAPM) radiotherapy MRI auto-contouring (RT-MAC) challenge dataset (n = 31). Auto-segmentation performance was evaluated with segmentation similarity and surface distance metrics on the RT-MAC dataset with institutional manual segmentations (n = 10). The generalizability of the auto-segmentation methods was assessed on an institutional MRI dataset (n = 10). RESULTS Auto-segmentation performance on the RT-MAC images with institutional segmentations was higher than previously reported MRI methods for the parotid glands (Dice: 0.860 ± 0.067, mean surface distance [MSD]: 1.33 ± 0.40 mm), and we provide the first report of MRI performance for the submandibular glands (Dice: 0.830 ± 0.032, MSD: 1.16 ± 0.47 mm). We demonstrate that high-resolution auto-segmentations with improved geometric accuracy can be generated for the parotid and submandibular glands by cascading a localizer CNN and a cropped high-resolution CNN. Improved MSDs were observed between automatic and manual segmentations of the submandibular glands when a low-resolution auto-segmentation was used as prior knowledge in the second-stage CNN. Reduced auto-segmentation performance was observed on our institutional MRI dataset when trained on external RT-MAC images; only the parotid gland auto-segmentations were considered clinically feasible for manual correction (Dice: 0.775 ± 0.105, MSD: 1.20 ± 0.60 mm). CONCLUSIONS This work demonstrates that CNNs are a suitable method to auto-segment the parotid and submandibular glands on MRI images of patients with HNC, and that cascaded CNNs can generate high-resolution segmentations with improved geometric accuracy. Deep learning methods may be suitable for auto-segmentation of the parotid glands on T2-weighted MRI images from different scanners, but further work is required to improve the performance and generalizability of these methods for auto-segmentation of the submandibular glands and lymph nodes.
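The cascade described above hinges on a cropping step: the localizer CNN's coarse mask determines a region of interest that is passed to the high-resolution CNN. A minimal numpy sketch of that intermediate step (illustrative; names and the centroid-based cropping policy are our assumptions):

```python
import numpy as np

def crop_around_mask(image, coarse_mask, size):
    """Crop a fixed-size patch centred on a coarse segmentation - the
    cropping step between the localizer CNN and the high-resolution CNN
    in a two-stage cascade. Returns the patch and its top-left offset."""
    ys, xs = np.nonzero(coarse_mask)
    cy, cx = int(ys.mean()), int(xs.mean())   # centroid of the coarse mask
    h, w = size
    y0 = int(np.clip(cy - h // 2, 0, image.shape[0] - h))  # keep patch in bounds
    x0 = int(np.clip(cx - w // 2, 0, image.shape[1] - w))
    return image[y0:y0 + h, x0:x0 + w], (y0, x0)
```

The returned offset lets the high-resolution prediction be pasted back into full-image coordinates after the second stage.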
Affiliation(s)
- James C Korte
  - Department of Physical Science, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
  - Department of Biomedical Engineering, University of Melbourne, Melbourne, Victoria, Australia
- Nicholas Hardcastle
  - Department of Physical Science, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
  - Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
  - Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
- Sweet Ping Ng
  - Department of Radiation Oncology, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
  - Department of Radiation Oncology, Olivia Newton-John Cancer and Wellness Centre, Austin Health, Melbourne, Victoria, Australia
- Brett Clark
  - Department of Physical Science, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
  - Department of Biomedical Engineering, University of Melbourne, Melbourne, Victoria, Australia
- Tomas Kron
  - Department of Physical Science, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
  - Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
- Price Jackson
  - Department of Physical Science, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
  - Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
9
Computerized Tomography Image Feature under Convolutional Neural Network Algorithm Evaluated for Therapeutic Effect of Clarithromycin Combined with Salmeterol/Fluticasone on Chronic Obstructive Pulmonary Disease. J Healthc Eng 2021; 2021:8563181. [PMID: 34381586 PMCID: PMC8352704 DOI: 10.1155/2021/8563181] [Received: 06/13/2021] [Revised: 07/13/2021] [Accepted: 07/22/2021] [Indexed: 11/30/2022]
Abstract
This study aimed to explore the use of a convolutional neural network (CNN) for the classification and recognition of computerized tomography (CT) images of chronic obstructive pulmonary disease (COPD) and the therapeutic effect of clarithromycin combined with salmeterol/fluticasone. First, the clinical data of COPD patients treated in hospital from September 2018 to December 2020 were collected, along with CT and X-ray images. CT-CNN and X-ray-CNN single-modal models were constructed based on the LeNet-5 model. A randomized fusion algorithm was introduced to construct a fused CNN model for the diagnosis of COPD patients, and the recognition performance of the model was verified. Subsequently, three-dimensional reconstruction of the patient's bronchus was performed using the classified CT images, and the changes in quantitative CT parameters in COPD patients were compared and analyzed. Finally, COPD patients were treated with salmeterol/fluticasone (COPD-C) or with the addition of clarithromycin (COPD-T). The differences in patients' lung function indexes, blood gas indexes, St. George Respiratory Questionnaire (SGRQ) scores, and the number of acute exacerbations (AECOPD) before and after treatment were evaluated. The results showed that the randomized fusion model, across different iteration counts and batch sizes, consistently had the highest recognition rate, sensitivity, and specificity compared with the two single-modal CNN models, but also a longer training time. Quantitative evaluation of bronchial changes on CT showed that the area of the upper and lower lung lobes on the affected side and the ratio of tube-wall area to bronchial area changed significantly in COPD patients. The lung function, blood gas indexes, and SGRQ scores of COPD-T patients were significantly improved compared with the COPD-C group (P < 0.05), but there was no considerable difference in AECOPD (P > 0.05).
In summary, the randomized fusion-based CNN model can improve the recognition rate of COPD, and salmeterol/fluticasone combined with clarithromycin therapy can significantly improve the clinical treatment effect in COPD patients.
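The randomized fusion model in this study combines predictions from the CT and X-ray CNN branches. The paper's exact fusion rule is not reproduced here, so the sketch below shows only a generic weighted late fusion of two single-modal class-probability vectors; the weight `w` and the toy probabilities are illustrative assumptions.

```python
import numpy as np

def late_fusion(p_ct, p_xray, w=0.5):
    """Weighted late fusion of class probabilities from two single-modal
    CNNs; `w` weights the CT branch (illustrative, not the paper's rule)."""
    return w * np.asarray(p_ct) + (1.0 - w) * np.asarray(p_xray)

# Toy per-class probabilities (COPD vs. non-COPD) from each branch
p_ct = np.array([0.7, 0.3])    # CT branch favors class 0
p_xray = np.array([0.4, 0.6])  # X-ray branch favors class 1
fused = late_fusion(p_ct, p_xray, w=0.5)   # -> [0.55, 0.45]
```

In a randomized variant, `w` could be drawn at training time rather than fixed; the fused vector is then used for the final class decision.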
10
Nikolov S, Blackwell S, Zverovitch A, Mendes R, Livne M, De Fauw J, Patel Y, Meyer C, Askham H, Romera-Paredes B, Kelly C, Karthikesalingam A, Chu C, Carnell D, Boon C, D'Souza D, Moinuddin SA, Garie B, McQuinlan Y, Ireland S, Hampton K, Fuller K, Montgomery H, Rees G, Suleyman M, Back T, Hughes CO, Ledsam JR, Ronneberger O. Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study. J Med Internet Res 2021; 23:e26151. [PMID: 34255661 PMCID: PMC8314151 DOI: 10.2196/26151] [Citation(s) in RCA: 138] [Impact Index Per Article: 34.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 02/10/2021] [Accepted: 04/30/2021] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires time-consuming manual delineation of radiosensitive organs at risk. This planning process can delay treatment while also introducing interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain. OBJECTIVE Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. METHODS The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice, with both segmentations taken from clinical practice and segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ at risk definitions. RESULTS We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced the surface Dice similarity coefficient, a new metric for the comparison of organ delineation, to quantify the deviation between organ at risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets, reflecting centers and countries different from those of model training. CONCLUSIONS Deep learning is an effective and clinically applicable technique for the segmentation of the head and neck anatomy for radiotherapy.
With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
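The surface Dice similarity coefficient introduced above scores agreement between organ surface contours rather than volumes. A minimal point-set sketch of the idea (a simplification: the metric is properly defined on mesh surfaces at a tolerance τ, and the contours here are toy data) might look like:

```python
import numpy as np

def surface_dice(surf_a, surf_b, tol):
    """Fraction of boundary points of each contour lying within `tol`
    of the other contour (simplified point-set version of the metric)."""
    d = np.linalg.norm(surf_a[:, None, :] - surf_b[None, :, :], axis=-1)
    a_ok = (d.min(axis=1) <= tol).sum()   # points of A close to B
    b_ok = (d.min(axis=0) <= tol).sum()   # points of B close to A
    return (a_ok + b_ok) / (len(surf_a) + len(surf_b))

# Two toy 2D contours: identical except for one displaced point
surf_a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
surf_b = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 3.0]])
sd = surface_dice(surf_a, surf_b, tol=1.0)   # 5 of 6 points agree
```

Unlike volumetric Dice, this rewards agreement along the contour, which maps more directly onto the effort of correcting an auto-segmentation.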
Affiliation(s)
- Ruheena Mendes
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Dawn Carnell
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Cheng Boon
- Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool, United Kingdom
- Derek D'Souza
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Syed Ali Moinuddin
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Geraint Rees
- University College London, London, United Kingdom
11
Liu L, Wolterink JM, Brune C, Veldhuis RNJ. Anatomy-aided deep learning for medical image segmentation: a review. Phys Med Biol 2021; 66. [PMID: 33906186 DOI: 10.1088/1361-6560/abfbf4] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Accepted: 04/27/2021] [Indexed: 01/17/2023]
Abstract
Deep learning (DL) has become widely used for medical image segmentation in recent years. However, despite these advances, there are still problems for which DL-based segmentation fails. Recently, some DL approaches have achieved breakthroughs by using anatomical information, a crucial cue in manual segmentation. In this paper, we provide a review of anatomy-aided DL for medical image segmentation, which covers systematically summarized anatomical information categories and corresponding representation methods. We address known and potentially solvable challenges in anatomy-aided DL and present a categorized methodology overview on using anatomical information with DL from over 70 papers. Finally, we discuss the strengths and limitations of current anatomy-aided DL approaches and suggest potential future work.
Affiliation(s)
- Lu Liu
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands; Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Jelmer M Wolterink
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Christoph Brune
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Raymond N J Veldhuis
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
12
Hague C, McPartlin A, Lee LW, Hughes C, Mullan D, Beasley W, Green A, Price G, Whitehurst P, Slevin N, van Herk M, West C, Chuter R. An evaluation of MR based deep learning auto-contouring for planning head and neck radiotherapy. Radiother Oncol 2021; 158:112-117. [PMID: 33636229 DOI: 10.1016/j.radonc.2021.02.018] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Revised: 02/02/2021] [Accepted: 02/15/2021] [Indexed: 10/22/2022]
Abstract
INTRODUCTION Auto-contouring models help consistently define volumes and reduce clinical workload. This study aimed to evaluate the cross acquisition of a magnetic resonance (MR) deep learning auto-contouring model for organ at risk (OAR) delineation in head and neck radiotherapy. METHODS Two auto-contouring models were evaluated using deep learning contouring expert (DLCExpert) for OAR delineation: a CT model (modelCT) and an MR model (modelMRI). Models were trained to generate auto-contours for the bilateral parotid glands and submandibular glands. ModelMRI was trained on diagnostic images and tested on 10 diagnostic scans, 10 MR radiotherapy planning (RTP) scans, and 8 MR-Linac (MRL) scans; modelCT was tested on 10 CT planning scans. Goodness-of-fit scores, Dice similarity coefficient (DSC), and distance to agreement (DTA) were calculated for comparison. RESULTS ModelMRI contours improved the mean DSC and DTA compared with manual contours for the bilateral parotid glands and submandibular glands on the diagnostic and RTP MRs compared with the MRL sequence. Statistically significant differences were seen for modelMRI compared with modelCT for the left parotid (mean DTA 2.3 v 2.8 mm), right parotid (mean DTA 1.9 v 2.7 mm), left submandibular gland (mean DTA 2.2 v 2.4 mm), and right submandibular gland (mean DTA 1.6 v 3.2 mm). CONCLUSION A deep learning MR auto-contouring model shows promise for OAR auto-contouring, with statistically improved performance vs a CT-based model. Performance is affected by the method of MR acquisition, and further work is needed to improve its use with MRL images.
Affiliation(s)
- C Hague
- Department of Head and Neck Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK.
- A McPartlin
- Department of Head and Neck Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK.
- L W Lee
- Department of Head and Neck Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK.
- C Hughes
- Department of Head and Neck Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK.
- D Mullan
- Department of Radiology, The Christie NHS Foundation Trust, Manchester, UK.
- W Beasley
- Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester, UK.
- A Green
- Division of Cancer Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Manchester, UK.
- G Price
- Division of Cancer Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Manchester, UK.
- P Whitehurst
- Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester, UK.
- N Slevin
- Department of Head and Neck Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK.
- M van Herk
- Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester, UK; Division of Cancer Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Manchester, UK.
- C West
- Division of Cancer Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Manchester, UK.
- R Chuter
- Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester, UK; Division of Cancer Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Manchester, UK.
13
Nijhuis H, van Rooij W, Gregoire V, Overgaard J, Slotman BJ, Verbakel WF, Dahele M. Investigating the potential of deep learning for patient-specific quality assurance of salivary gland contours using EORTC-1219-DAHANCA-29 clinical trial data. Acta Oncol 2021; 60:575-581. [PMID: 33427555 DOI: 10.1080/0284186x.2020.1863463] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
INTRODUCTION Manual quality assurance (QA) of radiotherapy contours for clinical trials is time- and labor-intensive and subject to inter-observer variability. Therefore, we investigated whether deep learning (DL) can provide an automated solution to salivary gland contour QA. MATERIAL AND METHODS DL models were trained to generate contours for the parotid (PG) and submandibular glands (SMG). The Sørensen-Dice coefficient (SDC) and Hausdorff distance (HD) were used to assess agreement between DL and clinical contours, and thresholds were defined to highlight cases as potentially sub-optimal. Three types of deliberate errors (expansion, contraction, and displacement) were gradually applied to a test set to confirm that SDC and HD were suitable QA metrics. DL-based QA was performed on 62 patients from the EORTC-1219-DAHANCA-29 trial. All highlighted contours were visually inspected. RESULTS Increasing the magnitude of all 3 types of errors resulted in progressively severe deterioration/increase in average SDC/HD. 19/124 clinical PG contours were highlighted as potentially sub-optimal, of which 5 (26%) were actually deemed clinically sub-optimal; 2/19 non-highlighted contours were false negatives (11%). 15/69 clinical SMG contours were highlighted, with 7 (47%) deemed clinically sub-optimal, and 2/15 non-highlighted contours were false negatives (13%). For most incorrectly highlighted contours, causes for low agreement could be identified. CONCLUSION Automated DL-based contour QA is feasible, but some visual inspection remains essential. The substantial number of false positives was caused by sub-optimal performance of the DL model. Improvements to the model will increase the extent of automation and reliability, facilitating the adoption of DL-based contour QA in clinical trials and routine practice.
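The QA step described above highlights a clinical contour for review when its agreement with the DL-generated contour falls below a threshold. A minimal sketch of this thresholding, using a naive symmetric Hausdorff distance on point sets and illustrative thresholds (not the trial's actual values):

```python
import numpy as np

def hausdorff(pts_a, pts_b):
    """Naive symmetric Hausdorff distance between two 2D point sets."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

def flag_for_review(dsc, hd, dsc_min=0.80, hd_max=5.0):
    """Highlight a contour as potentially sub-optimal when agreement with
    the DL contour is poor (thresholds here are illustrative only)."""
    return bool(dsc < dsc_min or hd > hd_max)

pts_a = np.array([[0.0, 0.0], [1.0, 0.0]])
pts_b = np.array([[0.0, 0.0], [4.0, 0.0]])
hd = hausdorff(pts_a, pts_b)        # -> 3.0
ok = flag_for_review(0.85, hd)      # good agreement: not flagged
bad = flag_for_review(0.72, 7.5)    # poor agreement: flagged
```

Deliberately perturbing contours (expansion, contraction, displacement), as the authors did, is a natural way to calibrate `dsc_min` and `hd_max` before deployment.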
Affiliation(s)
- Hanne Nijhuis
- Department of Radiation Oncology, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Ward van Rooij
- Department of Radiation Oncology, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Vincent Gregoire
- Department of Radiation Oncology, Centre Leon Berard, Lyon, France
- Jens Overgaard
- Department of Clinical Medicine – Department of Experimental Clinical Oncology, Aarhus University, Aarhus N, Denmark
- Berend J. Slotman
- Department of Radiation Oncology, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Wilko F. Verbakel
- Department of Radiation Oncology, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Max Dahele
- Department of Radiation Oncology, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
14
Kieselmann JP, Fuller CD, Gurney-Champion OJ, Oelfke U. Cross-modality deep learning: Contouring of MRI data from annotated CT data only. Med Phys 2021; 48:1673-1684. [PMID: 33251619 PMCID: PMC8058228 DOI: 10.1002/mp.14619] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Revised: 08/03/2020] [Accepted: 11/02/2020] [Indexed: 12/13/2022] Open
Abstract
PURPOSE Online adaptive radiotherapy would greatly benefit from the development of reliable auto-segmentation algorithms for organs-at-risk and radiation targets. Current practice of manual segmentation is subjective and time-consuming. While deep learning-based algorithms offer ample opportunities to solve this problem, they typically require large datasets. However, medical imaging data are generally sparse, in particular annotated MR images for radiotherapy. In this study, we developed a method to exploit the wealth of publicly available, annotated CT images to generate synthetic MR images, which could then be used to train a convolutional neural network (CNN) to segment the parotid glands on MR images of head and neck cancer patients. METHODS Imaging data comprised 202 annotated CT and 27 annotated MR images. The unpaired CT and MR images were fed into a 2D CycleGAN network to generate synthetic MR images from the CT images. Annotations of axial slices of the synthetic images were generated by propagating the CT contours. These were then used to train a 2D CNN. We assessed the segmentation accuracy using the real MR images as test dataset. The accuracy was quantified with the 3D Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD) between manual and auto-generated contours. We benchmarked the approach by a comparison to the interobserver variation determined for the real MR images, as well as to the accuracy when training the 2D CNN to segment the CT images. RESULTS The determined accuracy (DSC: 0.77 ± 0.07, HD: 18.04 ± 12.59 mm, MSD: 2.51 ± 1.47 mm) was close to the interobserver variation (DSC: 0.84 ± 0.06, HD: 10.85 ± 5.74 mm, MSD: 1.50 ± 0.77 mm), as well as to the accuracy when training the 2D CNN to segment the CT images (DSC: 0.81 ± 0.07, HD: 13.00 ± 7.61 mm, MSD: 1.87 ± 0.84 mm). CONCLUSIONS The introduced cross-modality learning technique can be of great value for segmentation problems with sparse training data.
We anticipate using this method with any nonannotated MRI dataset to generate annotated synthetic MR images of the same type via image style transfer from annotated CT images. Furthermore, as this technique allows for fast adaptation of annotated datasets from one imaging modality to another, it could prove useful for translating between large varieties of MRI contrasts due to differences in imaging protocols within and between institutions.
Affiliation(s)
- Jennifer P. Kieselmann
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG, UK
- Clifton D. Fuller
- Department of Radiation Oncology, The University of Texas M. D. Anderson Cancer Center, Houston, Texas 77030, USA
- Oliver J. Gurney-Champion
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG, UK
- Uwe Oelfke
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG, UK
15
Kim N, Chun J, Chang JS, Lee CG, Keum KC, Kim JS. Feasibility of Continual Deep Learning-Based Segmentation for Personalized Adaptive Radiation Therapy in Head and Neck Area. Cancers (Basel) 2021; 13:cancers13040702. [PMID: 33572310 PMCID: PMC7915955 DOI: 10.3390/cancers13040702] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2020] [Revised: 02/02/2021] [Accepted: 02/06/2021] [Indexed: 11/16/2022] Open
Abstract
Simple Summary: We analyzed the contouring data of 23 organs-at-risk from 100 patients with head and neck cancer who underwent definitive radiation therapy (RT). Deep learning-based segmentation (DLS) with continual training was compared to DLS with conventional training and deformable image registration (DIR) by both quantitative and qualitative (Turing test) methods. Results indicate the effectiveness of DLS over DIR, and of DLS with continual training over DLS with conventional training, in contouring for the head and neck region, especially for glandular structures. DLS with continual training might be beneficial for optimizing personalized adaptive RT in the head and neck region.
Abstract: This study investigated the feasibility of deep learning-based segmentation (DLS) and continual training for adaptive radiotherapy (RT) of head and neck (H&N) cancer. One hundred patients treated with definitive RT were included. Based on 23 organs-at-risk (OARs) manually segmented in initial planning computed tomography (CT), a modified FC-DenseNet was trained for DLS: (i) using data obtained from 60 patients, with 20 matched patients in the test set (DLSm); (ii) using data obtained from 60 identical patients, with 20 unmatched patients in the test set (DLSu). Manually contoured OARs in adaptive planning CT for an independent 20 patients were provided as test sets. Deformable image registration (DIR) was also performed. All 23 OARs were compared using quantitative measurements, and nine OARs were also evaluated via subjective assessment from 26 observers using the Turing test. DLSm achieved better performance than both DLSu and DIR (mean Dice similarity coefficient: 0.83 vs. 0.80 vs. 0.70), mainly for glandular structures, whose volume significantly reduced during RT. Based on subjective measurements, DLS is often perceived as a human (49.2%).
Furthermore, DLSm is preferred over DLSu (67.2%) and DIR (96.7%), with a similar rate of required revision to that of manual segmentation (28.0% vs. 29.7%). In conclusion, DLS was effective and preferred over DIR. Additionally, continual DLS training is required for an effective optimization and robustness in personalized adaptive RT.
16
Vukicevic AM, Radovic M, Zabotti A, Milic V, Hocevar A, Callegher SZ, De Lucia O, De Vita S, Filipovic N. Deep learning segmentation of Primary Sjögren's syndrome affected salivary glands from ultrasonography images. Comput Biol Med 2020; 129:104154. [PMID: 33260099 DOI: 10.1016/j.compbiomed.2020.104154] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 11/23/2020] [Accepted: 11/23/2020] [Indexed: 11/17/2022]
Abstract
Salivary gland ultrasonography (SGUS) has proven to be a promising tool for diagnosing various diseases manifesting with abnormalities in salivary glands (SGs), including primary Sjögren's syndrome (pSS). At present, the major obstacle to establishing SGUS as a standardized tool for pSS diagnosis is its low inter-/intra-observer reliability. The aim of this study was to address this problem by proposing a robust deep learning-based solution for the automated segmentation of SGUS images. For these purposes, four architectures were considered: a fully convolutional neural network, fully convolutional "DenseNets" (FCN-DenseNet) network, U-Net, and LinkNet. During the course of the study, the growing HarmonicSS cohort included 1184 annotated SGUS images. Accordingly, the algorithms were trained using a transfer learning approach. With regard to the intersection-over-union (IoU), the top-performing FCN-DenseNet (IoU = 0.85) network showed a considerable margin above the inter-observer agreement (IoU = 0.76) and slightly above the intra-observer agreement (IoU = 0.84) between clinical experts. Considering its accuracy and speed (24.5 frames per second), it was concluded that the FCN-DenseNet could have wider applications in clinical practice. Further work on the topic will consider the integration of methods for pSS scoring, with the end goal of establishing SGUS as an effective noninvasive pSS diagnostic tool. To aid this progress, we created inference (frozen model) files for the developed models and made them publicly available.
Affiliation(s)
- Arso M Vukicevic
- Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, Kragujevac, Serbia; BioIRC R&D Center, Prvoslava Stojanovica 6, Kragujevac, Serbia.
- Milos Radovic
- BioIRC R&D Center, Prvoslava Stojanovica 6, Kragujevac, Serbia; Everseen, Milutina Milankovica 1z, Belgrade, Serbia.
- Alen Zabotti
- Azienda Ospedaliero Universitaria, Santa Maria Della Misericordia di Udine, Udine, Italy
- Vera Milic
- Institute of Rheumatology, School of Medicine, University of Belgrade, Serbia
- Alojzija Hocevar
- Department of Rheumatology, Ljubljana University Medical Centre, Ljubljana, Slovenia
- Orazio De Lucia
- Department of Rheumatology, ASST Centro Traumatologico Ortopedico G. Pini-CTO, Milano, Italy
- Salvatore De Vita
- Azienda Ospedaliero Universitaria, Santa Maria Della Misericordia di Udine, Udine, Italy
- Nenad Filipovic
- Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, Kragujevac, Serbia; BioIRC R&D Center, Prvoslava Stojanovica 6, Kragujevac, Serbia
17
Sultana S, Robinson A, Song DY, Lee J. Automatic multi-organ segmentation in computed tomography images using hierarchical convolutional neural network. J Med Imaging (Bellingham) 2020; 7:055001. [PMID: 33102622 DOI: 10.1117/1.jmi.7.5.055001] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Accepted: 09/28/2020] [Indexed: 01/17/2023]
Abstract
Purpose: Accurate segmentation of treatment planning computed tomography (CT) images is important for radiation therapy (RT) planning. However, low soft tissue contrast in CT makes the segmentation task challenging. We propose a two-step hierarchical convolutional neural network (CNN) segmentation strategy to automatically segment multiple organs from CT. Approach: The first step generates a coarse segmentation from which organ-specific regions of interest (ROIs) are produced. The second step produces detailed segmentation of each organ. The ROIs are generated using UNet, which automatically identifies the area of each organ and improves computational efficiency by eliminating irrelevant background information. For the fine segmentation step, we combined UNet with a generative adversarial network. The generator is designed as a UNet that is trained to segment organ structures, and the discriminator is a fully convolutional network that distinguishes whether the segmentation is real or generator-predicted, thus improving the segmentation accuracy. We validated the proposed method on male pelvic and head and neck (H&N) CTs used for RT planning of prostate and H&N cancer, respectively. For the pelvic structure segmentation, the network was trained to segment the prostate, bladder, and rectum. For H&N, the network was trained to segment the parotid glands (PG) and submandibular glands (SMG). Results: The trained segmentation networks were tested on 15 pelvic and 20 H&N independent datasets. The H&N segmentation network was also tested on a public domain dataset (N = 38) and showed similar performance. The average Dice similarity coefficients (mean ± SD) of pelvic structures are 0.91 ± 0.05 (prostate), 0.95 ± 0.06 (bladder), 0.90 ± 0.09 (rectum), and of H&N structures are 0.87 ± 0.04 (PG) and 0.86 ± 0.05 (SMG). The segmentation for each CT takes <10 s on average.
Conclusions: Experimental results demonstrate that the proposed method can produce fast, accurate, and reproducible segmentation of multiple organs of different sizes and shapes and show its potential to be applicable to different disease sites.
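The first, coarse stage above exists mainly to produce organ-specific ROIs that crop away irrelevant background before fine segmentation. A minimal sketch of deriving such an ROI from a coarse 2D mask (the margin value is an illustrative assumption, and the real pipeline works on 3D volumes):

```python
import numpy as np

def roi_from_coarse_mask(mask, margin=8):
    """Bounding-box ROI (with a safety margin, clipped to the image)
    around a coarse organ mask, used to crop the fine-stage input."""
    ys, xs = np.nonzero(mask)
    y0 = max(int(ys.min()) - margin, 0)
    y1 = min(int(ys.max()) + margin + 1, mask.shape[0])
    x0 = max(int(xs.min()) - margin, 0)
    x1 = min(int(xs.max()) + margin + 1, mask.shape[1])
    return slice(y0, y1), slice(x0, x1)

# Toy coarse mask: a 10x15 blob inside a 64x64 slice
coarse = np.zeros((64, 64), dtype=np.uint8)
coarse[20:30, 25:40] = 1
sl = roi_from_coarse_mask(coarse, margin=4)
fine_input = coarse[sl]   # cropped region passed to the fine network
```

Cropping to the ROI shrinks the fine network's input, which is how the hierarchical design buys its computational efficiency.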
Affiliation(s)
- Sharmin Sultana
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Adam Robinson
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Daniel Y Song
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Junghoon Lee
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
18
Aro K, Korpi J, Tarkkanen J, Mäkitie A, Atula T. Preoperative evaluation and treatment consideration of parotid gland tumors. Laryngoscope Investig Otolaryngol 2020; 5:694-702. [PMID: 32864441 PMCID: PMC7444776 DOI: 10.1002/lio2.433] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2019] [Revised: 05/28/2020] [Accepted: 07/02/2020] [Indexed: 12/19/2022] Open
Abstract
BACKGROUND The nature of parotid tumors often remains unknown preoperatively and final histopathology may reveal unexpected malignancy. Still, the use of fine-needle aspiration cytology (FNAC) and imaging varies in the management of these tumors. METHODS We evaluated the preoperative examinations and management of all 195 parotid gland tumors diagnosed within our catchment area of 1.6 million people during 2015. RESULTS Altogether 171 (88%) tumors were classified as true salivary gland neoplasms. FNAC showed no false malignant findings, but it was false benign in 5 (2.6%) cases. Preoperative MRI was utilized in 48 patients (25%). Twenty (10%) malignancies included 16 salivary gland carcinomas. Pleomorphic adenomas accounted for 52% of all adenomas. For 24 (40%) Warthin tumors, surgery was omitted. CONCLUSION The proportion of malignancies was lower than generally presented. Our proposed guidelines include ultrasound-guided FNAC with certain limitations. MRI is warranted in selected cases, but seems unnecessary routinely.
Affiliation(s)
- Katri Aro
- Department of Otorhinolaryngology—Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Jarkko Korpi
- Department of Otorhinolaryngology—Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Jussi Tarkkanen
- Department of Pathology, HUSLAB, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Antti Mäkitie
- Department of Otorhinolaryngology—Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Division of Ear, Nose and Throat Diseases, Department of Clinical Sciences, Intervention and Technology, Karolinska Institutet and Karolinska Hospital, Stockholm, Sweden
- Timo Atula
- Department of Otorhinolaryngology—Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
19
Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020; 47:e929-e950. [PMID: 32510603 DOI: 10.1002/mp.14320] [Citation(s) in RCA: 87] [Impact Index Per Article: 17.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2019] [Revised: 05/27/2020] [Accepted: 05/29/2020] [Indexed: 02/06/2023] Open
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), which requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has attracted increasing interest in the past decade, with new emerging trends that shifted the field of H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provided critical discussions and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance image modalities are being exploited, but the potential of the latter should be explored more in the future; OAR - the spinal cord, brainstem, and major salivary glands are the most studied OARs, but additional experiments should be conducted for several less studied soft tissue structures; image database - several image databases with the corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed and participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient as the standard volumetric overlap metric should be accompanied by at least one distance metric, and combined with clinical acceptability scores and risk assessments; segmentation performance - the best performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.
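The volumetric overlap and distance metrics recommended above can be illustrated with a minimal sketch (plain NumPy on toy 8×8 binary masks; this is a generic illustration, not code from any reviewed method):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Volumetric overlap: 2*|A intersect B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def hausdorff_distance(pred, gt):
    """Symmetric Hausdorff distance between foreground pixel sets (brute force)."""
    a = np.argwhere(pred)  # (Na, 2) coordinates of foreground pixels
    b = np.argwhere(gt)    # (Nb, 2)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (Na, Nb) pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two offset 4x4 squares on an 8x8 grid
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True
print(round(dice_coefficient(pred, gt), 4))   # 2*9/(16+16) -> 0.5625
print(round(hausdorff_distance(pred, gt), 4)) # sqrt(2) -> 1.4142
```

The example also shows why the review recommends pairing the two: the masks overlap only moderately (Dice 0.5625) yet their boundaries never stray far apart (Hausdorff √2 pixels), so each metric captures a different failure mode.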
Affiliation(s)
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Domen Močnik
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Primož Strojan
- Institute of Oncology Ljubljana, Zaloška cesta 2, Ljubljana, SI-1000, Slovenia
- Franjo Pernuš
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia; Department of Computer Science, University of Copenhagen, Universitetsparken 1, Copenhagen, D-2100, Denmark
20
Convolutional Neural Networks for Immediate Surgical Needle Automatic Detection in Craniofacial X-Ray Images. J Craniofac Surg 2020; 31:1647-1650. [PMID: 32516217 DOI: 10.1097/scs.0000000000006594] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
PURPOSE Immediate X-ray examination is necessary when a surgical needle is lost during an operation. In this study, a convolutional neural network (CNN) model was introduced for automatic surgical needle detection in craniofacial X-ray images. MATERIALS AND METHODS A craniofacial surgical needle (5-0, ETHICON, USA) was placed in 8 different anatomic regions of 2 pig heads for separate bilateral X-ray examinations. Thirty-two images were obtained, cropped into fragmented images, and divided into a training dataset and a test dataset. An immediate needle-detection CNN model was then developed and trained. Its performance was quantitatively evaluated using the precision rate, the recall rate, and the f2-score, and an 8-fold cross-validation experiment was performed. The detection rate and the time taken were calculated to quantify the difference between automatic detection and manual detection by 3 experienced clinicians. RESULTS The precision rate, recall rate, and f2-score of the CNN model on fragmented images were 98.99%, 92.67%, and 93.85%, respectively. In the 8-fold cross-validation experiments, the needle position was marked correctly in 26 of the 32 X-ray images (detection rate of 81.25%), with an average of 5.8 seconds to detect one image. The 3 clinicians identified the needle correctly in 65 of the 32 × 3 images (detection rate of 67.7%), with an average time of 33 seconds. CONCLUSION After training with a large dataset, the CNN model showed potential for immediate automatic surgical needle detection in craniofacial X-ray images, with better detection accuracy and efficiency than the conventional manual method.
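The f2-score used above combines precision and recall with recall weighted more heavily. A minimal sketch of the generic F-beta formula (not the authors' code) reproduces a value close to the one reported; the small gap presumably reflects per-image averaging in the original evaluation:

```python
def f_beta(precision, recall, beta=2.0):
    """F-beta score; beta > 1 weights recall above precision (beta = 2 gives f2)."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# Precision and recall reported in the abstract above
print(round(f_beta(0.9899, 0.9267), 4))  # 0.9387, close to the reported 93.85%
```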
21
Tang H, Chen X, Liu Y, Lu Z, You J, Yang M, Yao S, Zhao G, Xu Y, Chen T, Liu Y, Xie X. Clinically applicable deep learning framework for organs at risk delineation in CT images. NAT MACH INTELL 2019. [DOI: 10.1038/s42256-019-0099-z] [Citation(s) in RCA: 63] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
22
Speight R, Schmidt MA, Liney GP, Johnstone RI, Eccles CL, Dubec M, George B, Henry A, McCallum H. IPEM Topical Report: A 2018 IPEM survey of MRI use for external beam radiotherapy treatment planning in the UK. Phys Med Biol 2019; 64:175021. [PMID: 31239419 DOI: 10.1088/1361-6560/ab2c7c] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
The benefits of integrating MRI into the radiotherapy pathway are well published; however, there is little consensus or guidance on how to commission or implement its use. With a view to developing consensus guidelines for the use of MRI in external beam radiotherapy (EBRT) treatment planning in the UK, a survey was undertaken by an Institute of Physics and Engineering in Medicine (IPEM) working party to assess the current landscape of MRI use in EBRT in the UK. A multi-disciplinary working party developed a survey to understand current practice using MRI for EBRT treatment planning, investigate how MRI is currently used and managed, and identify knowledge gaps. The survey was distributed electronically to radiotherapy service managers and physics leads in 71 UK radiotherapy (RT) departments (all NHS and private groups). The survey response rate was 87% overall, with 89% of NHS and 75% of private centres responding. All responding centres make some use of MRI in their EBRT pathways: 94% use Picture Archiving and Communication System (PACS) images potentially acquired without any input from RT departments, and 69% have some form of MRI access for planning EBRT. Most centres reporting direct access use a radiology scanner within the same hospital in dedicated (26%) or non-dedicated (52%) RT scanning sessions. Only two centres reported having dedicated RT MRI scanners in the UK, lower than reported in other countries. Six percent of radiotherapy patients in England (data not publicly available outside of England) have MRI as part of their treatment, which again is lower than reported elsewhere. Although a substantial number of centres acquire MRI scans for treatment planning purposes, most centres acquire fewer than five patient scans per month for each treatment site. Commissioning and quality assurance of both image registration and MRI scanners were found to be variable across the UK. In addition, staffing models and the training given to different staff groups varied considerably across the UK, reflecting the current lack of national guidelines. The primary barriers reported to MRI implementation in EBRT planning included costs (e.g. the lack of a national tariff for planning MRI) and lack of MRI access and/or capacity within hospitals. Despite these challenges, significant interest remains in increasing MRI-assisted EBRT planning over the next five years.
Affiliation(s)
- Richard Speight
- Leeds Cancer Centre, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom. Author to whom correspondence should be addressed
23
Kosmin M, Ledsam J, Romera-Paredes B, Mendes R, Moinuddin S, de Souza D, Gunn L, Kelly C, Hughes C, Karthikesalingam A, Nutting C, Sharma R. Rapid advances in auto-segmentation of organs at risk and target volumes in head and neck cancer. Radiother Oncol 2019; 135:130-140. [DOI: 10.1016/j.radonc.2019.03.004] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2018] [Revised: 02/10/2019] [Accepted: 03/04/2019] [Indexed: 11/25/2022]
24
van Rooij W, Dahele M, Ribeiro Brandao H, Delaney AR, Slotman BJ, Verbakel WF. Deep Learning-Based Delineation of Head and Neck Organs at Risk: Geometric and Dosimetric Evaluation. Int J Radiat Oncol Biol Phys 2019; 104:677-684. [PMID: 30836167 DOI: 10.1016/j.ijrobp.2019.02.040] [Citation(s) in RCA: 74] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2018] [Revised: 01/23/2019] [Accepted: 02/20/2019] [Indexed: 12/11/2022]
Abstract
PURPOSE Organ-at-risk (OAR) delineation is a key step in treatment planning but can be time consuming, resource intensive, subject to variability, and dependent on anatomical knowledge. We studied deep learning (DL) for automated delineation of multiple OARs; in addition to geometric evaluation, the dosimetric impact of using DL contours for treatment planning was investigated. METHODS AND MATERIALS The following OARs were delineated with DL developed in-house: both submandibular and parotid glands, larynx, cricopharynx, pharyngeal constrictor muscle (PCM), upper esophageal sphincter, brainstem, oral cavity, and esophagus. DL contours were benchmarked against the manual delineation (MD) clinical contours using the Sørensen-Dice similarity coefficient. Automated knowledge-based treatment plans were used. The mean dose to the manually delineated OAR structures was reported for the MD and DL plans. RESULTS DL delineation of all OARs took <10 seconds per patient. For 7 of 11 OARs, the average Sørensen-Dice similarity coefficient was good (0.78-0.83). However, performance was lower for the esophagus (0.60), brainstem (0.64), PCM (0.68), and cricopharynx (0.73), often because of variations in MD. Although the average dose was statistically significantly higher in the DL plans for the inferior PCM (1.4 Gy) and esophagus (2.2 Gy), these average differences were not clinically significant. Dose to 28 of 209 (13.4%) and 7 of 209 (3.3%) OARs was >2 Gy higher and >2 Gy lower, respectively, in the DL plans. CONCLUSIONS DL-based segmentation for head and neck OARs is fast; for most organs and most patients, it performs sufficiently well for treatment-planning purposes. It has the potential to increase efficiency and facilitate online adaptive radiation therapy.
Affiliation(s)
- Ward van Rooij
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Radiation Oncology, Cancer Center Amsterdam, Amsterdam, the Netherlands
- Max Dahele
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Radiation Oncology, Cancer Center Amsterdam, Amsterdam, the Netherlands
- Hugo Ribeiro Brandao
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Radiation Oncology, Cancer Center Amsterdam, Amsterdam, the Netherlands
- Alexander R Delaney
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Radiation Oncology, Cancer Center Amsterdam, Amsterdam, the Netherlands
- Berend J Slotman
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Radiation Oncology, Cancer Center Amsterdam, Amsterdam, the Netherlands
- Wilko F Verbakel
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Radiation Oncology, Cancer Center Amsterdam, Amsterdam, the Netherlands
25
Li X, Li C, Liu H, Yang X. A modified level set algorithm based on point distance shape constraint for lesion and organ segmentation. Phys Med 2019; 57:123-136. [PMID: 30738516 DOI: 10.1016/j.ejmp.2018.12.032] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/08/2018] [Revised: 10/02/2018] [Accepted: 12/23/2018] [Indexed: 11/27/2022] Open