1. Xu Z, Dai Y, Liu F, Li S, Liu S, Shi L, Fu J. Parotid Gland Segmentation Using Purely Transformer-Based U-Shaped Network and Multimodal MRI. Ann Biomed Eng 2024. [PMID: 38691234] [DOI: 10.1007/s10439-024-03510-3]
Abstract
Parotid gland tumors account for approximately 2% to 10% of head and neck tumors. Segmentation of parotid glands and tumors on magnetic resonance images is essential for accurate diagnosis and selection of appropriate surgical plans. However, segmentation of parotid glands is particularly challenging due to their variable shape and low contrast with surrounding structures. Deep learning has developed rapidly in recent years, and Transformer-based networks have performed well on many computer vision tasks; however, they have not yet been widely applied to parotid gland segmentation. We collected a multi-center multimodal parotid gland MRI dataset and implemented parotid gland segmentation using a purely Transformer-based U-shaped segmentation network. We used both absolute and relative positional encoding to improve parotid gland segmentation and achieved multimodal information fusion without increasing the network computation. In addition, our novel training approach reduces the clinician's labeling workload by nearly half. Our method achieved good segmentation of both parotid glands and tumors. On the test set, our model achieved a Dice-Similarity Coefficient of 86.99%, Pixel Accuracy of 99.19%, Mean Intersection over Union of 81.79%, and Hausdorff Distance of 3.87. The purely Transformer-based U-shaped segmentation network we used outperforms other convolutional neural networks. In addition, our method can effectively fuse information from the multi-center multimodal MRI dataset, thus improving parotid gland segmentation.
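The abstract above reports Dice-Similarity Coefficient, Pixel Accuracy, and Mean Intersection over Union. As a hedged illustration (not the paper's code), the binary-mask versions of these overlap metrics can be sketched in a few lines of NumPy; the toy 4x4 masks below are invented for demonstration:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice-Similarity Coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of pixels whose label matches the ground truth."""
    return float((pred == gt).mean())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union (Jaccard index) for one class."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy masks (hypothetical, for illustration only)
gt = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 0, 0],
               [0, 0, 0, 0]], dtype=bool)
pred = np.array([[0, 1, 1, 1],
                 [0, 0, 1, 1],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)

print(dice(pred, gt), pixel_accuracy(pred, gt), iou(pred, gt))
```

Mean IoU as reported in such papers is the per-class IoU averaged over classes; the single-class form above is the building block.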
Affiliation(s)
- Zi'an Xu
- Northeastern University, Shenyang, China
- Yin Dai
- Northeastern University, Shenyang, China
- Fayu Liu
- China Medical University, Shenyang, China
- Siqi Li
- China Medical University, Shenyang, China
- Sheng Liu
- China Medical University, Shenyang, China
- Lifu Shi
- Liaoning Jiayin Medical Technology Co., Shenyang, China
- Jun Fu
- Northeastern University, Shenyang, China
2. Wen X, Zhao C, Zhao B, Yuan M, Chang J, Liu W, Meng J, Shi L, Yang S, Zeng J, Yang Y. Application of deep learning in radiation therapy for cancer. Cancer Radiother 2024; 28:208-217. [PMID: 38519291] [DOI: 10.1016/j.canrad.2023.07.015]
Abstract
In recent years, with the development of artificial intelligence, deep learning has gradually been applied to clinical treatment and research, including radiotherapy, a crucial method of cancer treatment. This study summarizes the commonly used and latest deep learning algorithms (including transformers and diffusion models), introduces the workflows of different radiotherapy modalities, and illustrates the application of different algorithms in the various radiotherapy modules, as well as the shortcomings and challenges of deep learning in the field of radiotherapy, so as to support the development of automatic radiotherapy for cancer.
Affiliation(s)
- X Wen
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- C Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Minhang District, Shanghai, China
- B Zhao
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- M Yuan
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- J Chang
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- W Liu
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- J Meng
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- L Shi
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- S Yang
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- J Zeng
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- Y Yang
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
3. Boldrini L, D'Aviero A, De Felice F, Desideri I, Grassi R, Greco C, Iorio GC, Nardone V, Piras A, Salvestrini V. Artificial intelligence applied to image-guided radiation therapy (IGRT): a systematic review by the Young Group of the Italian Association of Radiotherapy and Clinical Oncology (yAIRO). La Radiologia Medica 2024; 129:133-151. [PMID: 37740838] [DOI: 10.1007/s11547-023-01708-4]
Abstract
INTRODUCTION The advent of image-guided radiation therapy (IGRT) has recently changed the workflow of radiation treatments by ensuring highly collimated treatments. Artificial intelligence (AI) and radiomics are tools that have shown promising results for diagnosis, treatment optimization and outcome prediction. This review aims to assess the impact of AI and radiomics on modern IGRT modalities in RT. METHODS A PubMed/MEDLINE and Embase systematic review was conducted to investigate the impact of radiomics and AI on modern IGRT modalities. The search strategy was "Radiomics" AND "Cone Beam Computed Tomography"; "Radiomics" AND "Magnetic Resonance guided Radiotherapy"; "Radiomics" AND "on board Magnetic Resonance Radiotherapy"; "Artificial Intelligence" AND "Cone Beam Computed Tomography"; "Artificial Intelligence" AND "Magnetic Resonance guided Radiotherapy"; "Artificial Intelligence" AND "on board Magnetic Resonance Radiotherapy", and only original articles up to 01.11.2022 were considered. RESULTS A total of 402 studies were obtained using the previously mentioned search strategy on PubMed and Embase. The analysis was performed on a total of 84 papers obtained following the complete selection process. Radiomics application to IGRT was analyzed in 23 papers, while a total of 61 papers focused on the impact of AI on IGRT techniques. DISCUSSION AI and radiomics seem to significantly impact IGRT in all phases of the RT workflow, even if the evidence in the literature is based on retrospective data. Further studies are needed to confirm these tools' potential and provide a stronger correlation with clinical outcomes and gold-standard treatment strategies.
Affiliation(s)
- Luca Boldrini
- UOC Radioterapia Oncologica, Fondazione Policlinico Universitario IRCCS "A. Gemelli", Rome, Italy
- Università Cattolica del Sacro Cuore, Rome, Italy
- Andrea D'Aviero
- Radiation Oncology, Mater Olbia Hospital, Olbia, Sassari, Italy
- Francesca De Felice
- Radiation Oncology, Department of Radiological, Oncological and Pathological Sciences, Policlinico Umberto I, "Sapienza" University of Rome, Rome, Italy
- Isacco Desideri
- Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Roberta Grassi
- Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Carlo Greco
- Department of Radiation Oncology, Università Campus Bio-Medico di Roma, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Valerio Nardone
- Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Antonio Piras
- UO Radioterapia Oncologica, Villa Santa Teresa, Bagheria, Palermo, Italy
- Viola Salvestrini
- Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Cyberknife Center, Istituto Fiorentino di Cura e Assistenza (IFCA), 50139, Florence, Italy
4. McDonald BA, Dal Bello R, Fuller CD, Balermpas P. The Use of MR-Guided Radiation Therapy for Head and Neck Cancer and Recommended Reporting Guidance. Semin Radiat Oncol 2024; 34:69-83. [PMID: 38105096] [DOI: 10.1016/j.semradonc.2023.10.003]
Abstract
Although magnetic resonance imaging (MRI) has become standard diagnostic workup for head and neck malignancies and is currently recommended by most radiological societies for pharyngeal and oral carcinomas, its utilization in radiotherapy has been heterogeneous during the last decades. However, few would argue that implementing MRI for annotation of target volumes and organs at risk provides several advantages, so that implementation of the modality for this purpose is widely accepted. Today, the term MR-guidance has received a much broader meaning, including MRI for adaptive treatments, MR-gating and tracking during radiotherapy application, MR-features as biomarkers and, finally, MR-only workflows. First studies on treatment of head and neck cancer on commercially available dedicated hybrid platforms (MR-linacs), with distinct common features but also differences among them, have recently been reported, as well as "biological adaptation" based on evaluation of early treatment response via functional MRI sequences such as diffusion-weighted imaging. Yet all of these approaches to head and neck treatment remain in their infancy, especially when compared with other radiotherapy indications. Moreover, the lack of standardization for reporting MR-guided radiotherapy is a major obstacle both to further progress in the field and to conducting and comparing clinical trials. The goals of this article are to present and explain the different aspects of MR-guidance for radiotherapy of head and neck cancer, summarize the evidence as well as possible advantages and challenges of the method, and finally provide comprehensive reporting guidance for use in clinical routine and trials.
Affiliation(s)
- Brigid A McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Riccardo Dal Bello
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Panagiotis Balermpas
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
5. Hoque SMH, Pirrone G, Matrone F, Donofrio A, Fanetti G, Caroli A, Rista RS, Bortolus R, Avanzo M, Drigo A, Chiovati P. Clinical Use of a Commercial Artificial Intelligence-Based Software for Autocontouring in Radiation Therapy: Geometric Performance and Dosimetric Impact. Cancers (Basel) 2023; 15:5735. [PMID: 38136281] [PMCID: PMC10741804] [DOI: 10.3390/cancers15245735]
Abstract
PURPOSE When autocontouring based on artificial intelligence (AI) is used in the radiotherapy (RT) workflow, the contours are reviewed and, if necessary, adjusted by a radiation oncologist before an RT treatment plan is generated, with the purpose of improving dosimetry and reducing both interobserver variability and contouring time. The purpose of this study was to evaluate the results of applying a commercial AI-based autocontouring system for RT, assessing both geometric accuracy and the influence on optimized dose of automatically generated contours after review by a human operator. MATERIALS AND METHODS A commercial autocontouring system was applied to a retrospective database of 40 patients, of whom 20 were treated with radiotherapy for prostate cancer (PCa) and 20 for head and neck cancer (HNC). Contours resulting from AI were compared against AI contours reviewed by a human operator and against human-only contours using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and relative volume difference (RVD). Dosimetric indices such as Dmean, D0.03cc, and normalized plan quality metrics were used to compare dose distributions of RT plans generated from structure sets contoured by humans assisted by AI against plans from manual contours. The reduction in contouring time obtained by using automated tools was also assessed. A Wilcoxon rank sum test was computed to assess the significance of differences. Interobserver variability of manual vs. AI-assisted contours was also assessed between two radiation oncologists for PCa. RESULTS For PCa, AI-assisted segmentation showed good agreement with expert radiation oncologist structures, with an average DSC among patients ≥ 0.7 for all structures and minimal adjustment of structures by the radiation oncologist (DSC of adjusted versus AI structures ≥ 0.91). For HNC, results of the comparison between manual and AI contouring varied considerably (e.g., 0.77 for the oral cavity and 0.11-0.13 for the brachial plexus), but again, adjustment was generally minimal (DSC of adjusted against AI contours 0.97 for the oral cavity, 0.92-0.93 for the brachial plexus). The differences in dose for the target and organs at risk were not statistically significant between human and AI-assisted contours, with the only exceptions being D0.03cc to the anal canal and Dmean to the brachial plexus. The observed average differences in plan quality for PCa and HNC cases were 8% and 6.7%, respectively. The dose parameter changes due to interobserver variability in PCa were small, with the exception of the anal canal, where large dose variations were observed. The reduction in time required for contouring was 72% for PCa and 84% for HNC. CONCLUSIONS When an autocontouring system is used in combination with human review, the contouring time in the RT workflow is significantly reduced without affecting dose distribution or plan quality.
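The geometric comparison in this study rests on DSC, Hausdorff distance, and relative volume difference. As a rough sketch of the latter two (a point-set formulation assumed for illustration, not the study's implementation), using SciPy's pairwise-distance routine:

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two surface point sets."""
    d = cdist(pts_a, pts_b)        # all pairwise distances
    a_to_b = d.min(axis=1)         # each point of A to its nearest point of B
    b_to_a = d.min(axis=0)
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))

def rvd(vol_pred: float, vol_ref: float) -> float:
    """Relative volume difference of a predicted volume vs. a reference volume."""
    return (vol_pred - vol_ref) / vol_ref

# Hypothetical contour samples: B is A shifted by 1 mm along y
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([0.0, 1.0])
print(hd95(a, b), rvd(110.0, 100.0))
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier surface points, which is why HD95 is commonly reported alongside DSC.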
Affiliation(s)
- S M Hasibul Hoque
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Giovanni Pirrone
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Fabio Matrone
- Radiation Oncology Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Alessandra Donofrio
- Radiation Oncology Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Giuseppe Fanetti
- Radiation Oncology Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Angela Caroli
- Radiation Oncology Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Rahnuma Shahrin Rista
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Roberto Bortolus
- Radiation Oncology Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Michele Avanzo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Annalisa Drigo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Paola Chiovati
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
6. Liu P, Sun Y, Zhao X, Yan Y. Deep learning algorithm performance in contouring head and neck organs at risk: a systematic review and single-arm meta-analysis. Biomed Eng Online 2023; 22:104. [PMID: 37915046] [PMCID: PMC10621161] [DOI: 10.1186/s12938-023-01159-y]
Abstract
PURPOSE The contouring of organs at risk (OARs) in head and neck cancer radiation treatment planning is a crucial, yet repetitive and time-consuming process. Recent studies have applied deep learning (DL) algorithms to automatically contour head and neck OARs. This study aims to conduct a systematic review and meta-analysis to summarize and analyze the performance of DL algorithms in contouring head and neck OARs, and to assess their advantages and limitations in contour planning. METHODS This study searched the PubMed, Embase and Cochrane Library databases for studies on DL contouring of head and neck OARs; the Dice similarity coefficient (DSC) of four categories of OARs from the results of each study was selected as the effect size for meta-analysis. Furthermore, a subgroup analysis of OARs was conducted by image modality and image type. RESULTS 149 articles were retrieved, and 22 studies were included in the meta-analysis after removal of duplicates, primary screening, and re-screening. The combined effect sizes of DSC for the brainstem, spinal cord, mandible, left eye, right eye, left optic nerve, right optic nerve, optic chiasm, left parotid, right parotid, left submandibular, and right submandibular glands are 0.87, 0.83, 0.92, 0.90, 0.90, 0.71, 0.74, 0.62, 0.85, 0.85, 0.82, and 0.82, respectively. In the subgroup analysis, the combined effect sizes for segmentation of the brainstem, mandible, left optic nerve, and left parotid gland using CT versus MRI images are 0.86/0.92, 0.92/0.90, 0.71/0.73, and 0.84/0.87, respectively. Pooled effect sizes using 2D versus 3D images for contouring the brainstem, mandible, left optic nerve, and left parotid gland are 0.88/0.87, 0.92/0.92, 0.75/0.71 and 0.87/0.85.
CONCLUSIONS Automated contouring based on DL algorithms is an essential tool for contouring head and neck OARs: it achieves high accuracy, reduces the workload of clinical radiation oncologists, and supports individualized, standardized, and refined treatment plans for implementing "precision radiotherapy". Improving DL performance will require the construction of high-quality datasets and further algorithm optimization and innovation.
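The combined effect sizes above come from pooling per-study DSC estimates. A minimal sketch of fixed-effect inverse-variance pooling illustrates the mechanics (the study may well use a random-effects model instead; the per-study numbers below are hypothetical):

```python
import numpy as np

def pool_fixed_effect(means, std_errs):
    """Inverse-variance (fixed-effect) pooled mean and its standard error."""
    means = np.asarray(means, dtype=float)
    w = 1.0 / np.asarray(std_errs, dtype=float) ** 2  # weight = 1 / variance
    pooled = np.sum(w * means) / np.sum(w)            # precision-weighted mean
    pooled_se = np.sqrt(1.0 / np.sum(w))              # SE of the pooled estimate
    return pooled, pooled_se

# Hypothetical per-study DSC estimates for one OAR
dsc_means = [0.85, 0.88, 0.90]
dsc_ses = [0.02, 0.03, 0.02]
pooled, se = pool_fixed_effect(dsc_means, dsc_ses)
print(round(pooled, 4), round(se, 4))
```

More precise studies (smaller standard errors) pull the pooled value toward themselves, and the pooled standard error is always smaller than that of any single study.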
Affiliation(s)
- Peiru Liu
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
- Beifang Hospital of China Medical University, Shenyang, China
- Ying Sun
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
- Xinzhuo Zhao
- Shenyang University of Technology, School of Electrical Engineering, Shenyang, China
- Ying Yan
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
7. Turcas A, Leucuta D, Balan C, Clementel E, Gheara C, Kacso A, Kelly SM, Tanasa D, Cernea D, Achimas-Cadariu P. Deep-learning magnetic resonance imaging-based automatic segmentation for organs-at-risk in the brain: Accuracy and impact on dose distribution. Phys Imaging Radiat Oncol 2023; 27:100454. [PMID: 37333894] [PMCID: PMC10276287] [DOI: 10.1016/j.phro.2023.100454]
Abstract
Background and purpose Normal tissue sparing in radiotherapy relies on proper delineation. While manual contouring is time-consuming and subject to inter-observer variability, auto-contouring could optimize workflows and harmonize practice. We assessed the accuracy of a commercial, deep-learning, MRI-based tool for brain organs-at-risk delineation. Materials and methods Thirty adult brain tumor patients were retrospectively manually recontoured. Two additional structure sets were obtained: AI (artificial intelligence) and AIedit (manually corrected auto-contours). For 15 selected cases, identical plans were optimized for each structure set. We used the Dice Similarity Coefficient (DSC) and mean surface distance (MSD) for geometric comparison, and gamma analysis and dose-volume-histogram comparison for dose metrics evaluation. The Wilcoxon signed-ranks test was used for paired data, the Spearman coefficient (ρ) for correlations, and Bland-Altman plots to assess level of agreement. Results Auto-contouring was significantly faster than manual contouring (1.1/20 min, p < 0.01). Median DSC and MSD were 0.7/0.9 mm for AI and 0.8/0.5 mm for AIedit. DSC was significantly correlated with structure size (ρ = 0.76, p < 0.01), with higher DSC for large structures. Median gamma pass rate was 74% (71-81%) for Plan_AI and 82% (75-86%) for Plan_AIedit, with no correlation with DSC or MSD. Differences between Dmean_AI and Dmean_Ref were ≤ 0.2 Gy (p < 0.05). The dose difference was moderately correlated with DSC. Bland-Altman plots showed minimal discrepancy (0.1/0) between AI and reference Dmean/Dmax. Conclusions The AI model showed good accuracy for large structures, but developments are required for smaller ones. Auto-segmentation was significantly faster, with minor differences in dose distribution caused by geometric variations.
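Among the statistics in the abstract above, the Bland-Altman analysis quantifies agreement between two paired measurements (for example, Dmean from AI vs. reference contours). A minimal sketch, with invented paired dose values:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement for paired data."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()                # systematic offset between the methods
    sd = diff.std(ddof=1)             # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical Dmean values (Gy) from AI vs. reference structure sets
dmean_ai = [10.2, 5.1, 20.4, 7.9, 12.0]
dmean_ref = [10.0, 5.0, 20.5, 8.0, 11.8]
bias, lo, hi = bland_altman(dmean_ai, dmean_ref)
print(bias, lo, hi)
```

A bias near zero with narrow limits of agreement, as reported in the study, indicates the two contour sets yield nearly interchangeable dose metrics.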
Affiliation(s)
- Andrada Turcas
- The European Organisation for Research and Treatment of Cancer (EORTC) Headquarters, RTQA, Brussels, Belgium
- SIOP Europe, The European Society for Paediatric Oncology (SIOPE), QUARTET Project, Brussels, Belgium
- University of Medicine and Pharmacy “Iuliu Hatieganu”, Oncology Department, Cluj-Napoca, Romania
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- Daniel Leucuta
- University of Medicine and Pharmacy “Iuliu Hatieganu”, Department of Medical Informatics and Biostatistics, Cluj-Napoca, Romania
- Cristina Balan
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- “Babes-Bolyai” University, Faculty of Physics, Cluj-Napoca, Romania
- Enrico Clementel
- The European Organisation for Research and Treatment of Cancer (EORTC) Headquarters, RTQA, Brussels, Belgium
- Cristina Gheara
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- “Babes-Bolyai” University, Faculty of Physics, Cluj-Napoca, Romania
- Alex Kacso
- University of Medicine and Pharmacy “Iuliu Hatieganu”, Oncology Department, Cluj-Napoca, Romania
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- Sarah M. Kelly
- The European Organisation for Research and Treatment of Cancer (EORTC) Headquarters, RTQA, Brussels, Belgium
- SIOP Europe, The European Society for Paediatric Oncology (SIOPE), QUARTET Project, Brussels, Belgium
- Delia Tanasa
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- Dana Cernea
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- Patriciu Achimas-Cadariu
- University of Medicine and Pharmacy “Iuliu Hatieganu”, Oncology Department, Cluj-Napoca, Romania
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Surgery Department, Cluj-Napoca, Romania
8. Jin R, Cai Y, Zhang S, Yang T, Feng H, Jiang H, Zhang X, Hu Y, Liu J. Computational approaches for the reconstruction of optic nerve fibers along the visual pathway from medical images: a comprehensive review. Front Neurosci 2023; 17:1191999. [PMID: 37304011] [PMCID: PMC10250625] [DOI: 10.3389/fnins.2023.1191999]
Abstract
Optic nerve fibers in the visual pathway play significant roles in vision formation. Damage to optic nerve fibers is a biomarker for the diagnosis of various ophthalmological and neurological diseases; there is also a need to prevent optic nerve fibers from being damaged during neurosurgery and radiation therapy. Reconstruction of optic nerve fibers from medical images can facilitate all of these clinical applications. Although many computational methods have been developed for the reconstruction of optic nerve fibers, a comprehensive review of these methods is still lacking. This paper describes the two strategies for optic nerve fiber reconstruction applied in existing studies, i.e., image segmentation and fiber tracking. In comparison to image segmentation, fiber tracking can delineate more detailed structures of optic nerve fibers. For each strategy, both conventional and AI-based approaches are introduced, with the latter usually demonstrating better performance than the former. From the review, we conclude that AI-based methods are the trend for optic nerve fiber reconstruction and that new techniques such as generative AI can help address the current challenges in the field.
Affiliation(s)
- Richu Jin
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yongning Cai
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Shiyang Zhang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Ting Yang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Haibo Feng
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Hongyang Jiang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Xiaoqing Zhang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yan Hu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Jiang Liu
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
9. Lucido JJ, DeWees TA, Leavitt TR, Anand A, Beltran CJ, Brooke MD, Buroker JR, Foote RL, Foss OR, Gleason AM, Hodge TL, Hughes CO, Hunzeker AE, Laack NN, Lenz TK, Livne M, Morigami M, Moseley DJ, Undahl LM, Patel Y, Tryggestad EJ, Walker MZ, Zverovitch A, Patel SH. Validation of clinical acceptability of deep-learning-based automated segmentation of organs-at-risk for head-and-neck radiotherapy treatment planning. Front Oncol 2023; 13:1137803. [PMID: 37091160] [PMCID: PMC10115982] [DOI: 10.3389/fonc.2023.1137803]
Abstract
INTRODUCTION Organ-at-risk segmentation for head and neck cancer radiation therapy is a complex and time-consuming process (requiring up to 42 individual structures) and may delay the start of treatment or even limit access to function-preserving care. The feasibility of using a deep learning (DL) based autosegmentation model to reduce contouring time without compromising contour accuracy was assessed through a blinded randomized trial of radiation oncologists (ROs) using retrospective, de-identified patient data. METHODS Two head and neck expert ROs used dedicated time to create gold standard (GS) contours on computed tomography (CT) images. 445 CTs were used to train a custom 3D U-Net DL model covering 42 organs-at-risk, with an additional 20 CTs held out for the randomized trial. For each held-out patient dataset, one of the eight participating ROs was randomly allocated to review and revise the contours produced by the DL model, while another reviewed contours produced by a medical dosimetry assistant (MDA), both blinded to their origin. The time required for MDAs and ROs to contour was recorded, and the unrevised DL contours, as well as the RO-revised MDA and DL contours, were compared to the GS for that patient. RESULTS Mean time for initial MDA contouring was 2.3 hours (range 1.6-3.8 hours) and RO revision took 1.1 hours (range 0.4-4.4 hours), compared to 0.7 hours (range 0.1-2.0 hours) for the RO revisions to DL contours. Total time was reduced by 76% (95% CI: 65%-88%) and RO revision time by 35% (95% CI: -39%-91%). For all geometric and dosimetric metrics computed, agreement with the GS was equivalent or significantly greater (p<0.05) for RO-revised DL contours compared to RO-revised MDA contours, including the volumetric Dice similarity coefficient (VDSC), surface DSC, added path length, and the 95% Hausdorff distance. 32 OARs (76%) had mean VDSC greater than 0.8 for the RO-revised DL contours, compared to 20 (48%) for RO-revised MDA contours and 34 (81%) for the unrevised DL OARs. CONCLUSION DL autosegmentation demonstrated significant time savings for organ-at-risk contouring while improving agreement with the institutional GS, indicating comparable accuracy of the DL model. Integration into clinical practice with a prospective evaluation is currently underway.
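Besides volumetric DSC, the trial above reports surface DSC: the fraction of the two contour surfaces lying within a tolerance τ of each other. A hedged point-cloud sketch (an approximation for illustration, not the authors' implementation, which may operate on mesh surfaces):

```python
import numpy as np
from scipy.spatial.distance import cdist

def surface_dice(surf_a: np.ndarray, surf_b: np.ndarray, tol: float) -> float:
    """Surface DSC: fraction of surface points within `tol` of the other surface."""
    d = cdist(surf_a, surf_b)
    a_ok = (d.min(axis=1) <= tol).sum()  # points of A close enough to B
    b_ok = (d.min(axis=0) <= tol).sum()  # points of B close enough to A
    return (a_ok + b_ok) / (len(surf_a) + len(surf_b))

# Hypothetical 1-mm-spaced surface samples; B drifts away from A halfway along
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
b = np.array([[0.0, 0.1], [1.0, 0.1], [2.0, 2.0], [3.0, 2.0]])
print(surface_dice(a, b, tol=1.0))
```

Unlike volumetric DSC, this metric directly reflects how much of the boundary a reviewer would need to edit, which is why it pairs naturally with the added-path-length metric used in the trial.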
Affiliation(s)
- J. John Lucido
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- *Correspondence: J. John Lucido
- Todd A. DeWees
- Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
- Todd R. Leavitt
- Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
- Aman Anand
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
- Chris J. Beltran
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL, United States
- Justine R. Buroker
- Research Services, Comprehensive Cancer Center, Mayo Clinic, Rochester, MN, United States
- Robert L. Foote
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Olivia R. Foss
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Angela M. Gleason
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Teresa L. Hodge
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Ashley E. Hunzeker
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Nadia N. Laack
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Tamra K. Lenz
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Douglas J. Moseley
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Lisa M. Undahl
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Yojan Patel
- Google Health, Mountain View, CA, United States
- Erik J. Tryggestad
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Samir H. Patel
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
10
Podobnik G, Strojan P, Peterlin P, Ibragimov B, Vrtovec T. HaN-Seg: The head and neck organ-at-risk CT and MR segmentation dataset. Med Phys 2023; 50:1917-1927. [PMID: 36594372 DOI: 10.1002/mp.16197] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Revised: 11/17/2022] [Accepted: 12/07/2022] [Indexed: 01/04/2023] Open
Abstract
PURPOSE For cancer in the head and neck (HaN), radiotherapy (RT) represents an important treatment modality. Segmentation of organs-at-risk (OARs) is the starting point of RT planning; however, existing approaches focus on either computed tomography (CT) or magnetic resonance (MR) images, while multimodal segmentation has not been thoroughly explored yet. We present a dataset of CT and MR images of the same patients with curated reference HaN OAR segmentations for an objective evaluation of segmentation methods. ACQUISITION AND VALIDATION METHODS The cohort consists of HaN images of 56 patients that underwent both CT and T1-weighted MR imaging for image-guided RT. For each patient, reference segmentations of up to 30 OARs were obtained by experts performing manual pixel-wise image annotation. By maintaining the distribution of patient age, gender, and annotation type, the patients were randomly split into training Set 1 (42 cases or 75%) and test Set 2 (14 cases or 25%). Baseline auto-segmentation results are also provided by training the publicly available deep nnU-Net architecture on Set 1 and evaluating its performance on Set 2. DATA FORMAT AND USAGE NOTES The data are publicly available through an open-access repository under the name HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Dataset. Images and reference segmentations are stored in the NRRD file format, where the OAR filenames correspond to the nomenclature recommended by the American Association of Physicists in Medicine, and OAR and demographics information is stored in separate comma-separated value files. POTENTIAL APPLICATIONS The HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge is launched in parallel with the dataset release to promote the development of automated techniques for OAR segmentation in the HaN.
Other potential applications include out-of-challenge algorithm development and benchmarking, as well as external validation of the developed algorithms.
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
11
Fully automated CT-based adiposity assessment: comparison of the L1 and L3 vertebral levels for opportunistic prediction. Abdom Radiol (NY) 2023; 48:787-795. [PMID: 36369528 DOI: 10.1007/s00261-022-03728-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Revised: 10/23/2022] [Accepted: 10/25/2022] [Indexed: 11/13/2022]
Abstract
PURPOSE The purpose of this study is to compare fully automated CT-based measures of adipose tissue at the L1 level versus the standard L3 level for predicting mortality, which would allow for use at both chest (L1) and abdominal (L3) CT. METHODS This retrospective study included 9066 asymptomatic adults (mean age, 57.1 ± 7.8 [SD] years; 4020 men, 5046 women) undergoing unenhanced low-dose abdominal CT for colorectal cancer screening. A previously validated artificial intelligence (AI) tool was used to assess cross-sectional visceral and subcutaneous adipose tissue areas (VAT and SAT), as well as their ratio (VSR), at the L1 and L3 levels. Post-CT survival prediction was compared using area under the ROC curve (ROC AUC) and hazard ratios (HRs). RESULTS Median clinical follow-up interval after CT was 8.8 years (interquartile range, 5.2-11.6 years), during which 5.9% died (532/9066). No significant difference (p > 0.05) in 10-year ROC AUC for mortality was observed between the L1 and L3 measures of VAT and SAT. However, L3 measures of VSR were significantly better at 10-year AUC (p < 0.001). HRs comparing worst-to-best quartiles for mortality at L1 vs. L3 were 2.12 (95% CI, 1.65-2.72) and 2.22 (1.74-2.83) for VAT; 1.20 (0.95-1.52) and 1.16 (0.92-1.46) for SAT; and 2.26 (1.7-2.93) and 3.05 (2.32-4.01) for VSR. In women, the corresponding HRs for VSR were 2.58 (1.80-3.69) (L1) and 4.49 (2.98-6.78) (L3). CONCLUSION Automated CT-based measures of visceral fat (VAT and VSR) at L1 are predictive of survival, although overall measures of adiposity at the L1 level are somewhat inferior to the standard L3-level measures. Utilizing predictive L1-level fat measures could expand opportunistic screening to chest CT imaging.
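The discrimination comparison above rests on the ROC AUC, which for a risk score against a binary outcome can be computed with the rank-based (Mann-Whitney) formulation. The sketch below is illustrative only; all cohort values are invented, not data from the study.

```python
# ROC AUC via the Mann-Whitney formulation: the probability that a randomly
# chosen positive case outscores a randomly chosen negative case.
import numpy as np

def roc_auc(scores, labels):
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # Pairwise score differences; ties contribute 0.5.
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

vat_l1 = [180, 95, 240, 60, 150, 210]   # toy VAT areas at L1 (invented)
died   = [0,   0,  1,   0,  1,   1]     # toy 10-year mortality labels (invented)
print(roc_auc(vat_l1, died))
```

This pairwise formulation is O(n²) but exactly matches the trapezoidal area under the empirical ROC curve, which is why it is a convenient cross-check for library implementations.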
12
Ma W, Li X, Zou L, Fan C, Wu M. Symmetrical awareness network for cross-site ultrasound thyroid nodule segmentation. Front Public Health 2023; 11:1055815. [PMID: 36969643 PMCID: PMC10031019 DOI: 10.3389/fpubh.2023.1055815] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Accepted: 02/17/2023] [Indexed: 03/29/2023] Open
Abstract
Recent years have seen remarkable progress in learning-based methods for ultrasound thyroid nodule segmentation. However, with very limited annotations, multi-site training data from different domains keep the task challenging. Due to domain shift, existing methods cannot be well generalized to out-of-set data, which limits the practical application of deep learning in the field of medical imaging. In this work, we propose an effective domain adaptation framework consisting of a bidirectional image translation module and two symmetrical image segmentation modules. The framework improves the generalization ability of deep neural networks in medical image segmentation. The image translation module conducts mutual conversion between the source domain and the target domain, while the symmetrical image segmentation modules perform segmentation tasks in both domains. In addition, we utilize an adversarial constraint to further bridge the domain gap in feature space, and a consistency loss to make the training process more stable and efficient. Experiments on a multi-site ultrasound thyroid nodule dataset achieve, on average, 96.22% pixel accuracy (PA) and 87.06% Dice similarity coefficient (DSC), demonstrating that our method performs competitively with state-of-the-art segmentation methods in cross-domain generalization ability.
Affiliation(s)
- Wenxuan Ma
- Electronic Information School, Wuhan University, Wuhan, China
- Xiaopeng Li
- Electronic Information School, Wuhan University, Wuhan, China
- Lian Zou
- Electronic Information School, Wuhan University, Wuhan, China
- Cien Fan
- Electronic Information School, Wuhan University, Wuhan, China
- *Correspondence: Cien Fan
- Meng Wu
- Department of Ultrasound, Zhongnan Hospital of Wuhan University, Wuhan, China
13
A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. FUTURE INTERNET 2022. [DOI: 10.3390/fi14120351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving as a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field besides augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized dummy images in brain disease diagnosis. Web of Science and Scopus databases were extensively searched to find relevant studies from the last 6 years to write this systematic literature review (SLR). Predefined inclusion and exclusion criteria helped in filtering the search results. Data extraction is based on related research questions (RQ). This SLR identifies various loss functions used in the above applications and software to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps choose the proper metric for an application. GAN-synthesized images will have a crucial role in the clinical sector in the coming years, and this paper gives a baseline for other researchers in the field.
14
Xia X, Wang J, Liang S, Ye F, Tian MM, Hu W, Xu L. An attention base U-net for parotid tumor autosegmentation. Front Oncol 2022; 12:1028382. [PMID: 36505865 PMCID: PMC9730401 DOI: 10.3389/fonc.2022.1028382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 10/26/2022] [Indexed: 11/25/2022] Open
Abstract
A parotid neoplasm is an uncommon condition that accounts for less than 3% of all head and neck cancers and less than 0.3% of all new cancers diagnosed annually. Due to their nonspecific imaging features and heterogeneous nature, accurate preoperative diagnosis remains a challenge. Automatic parotid tumor segmentation may help physicians evaluate these tumors. Two hundred eighty-five patients diagnosed with benign or malignant parotid tumors were enrolled in this study. Parotid and tumor tissues were segmented by 3 radiologists on T1-weighted (T1w), T2-weighted (T2w) and T1-weighted contrast-enhanced (T1wC) MR images. These images were randomly divided into a training dataset (90%) and a validation dataset (10%). A 10-fold cross-validation was performed to assess performance. An attention-based U-Net for parotid tumor autosegmentation was created on the T1w, T2w and T1wC MR images. The results were evaluated on a separate dataset, and the mean Dice similarity coefficient (DSC) for both parotids was 0.88. The mean DSC for left and right tumors was 0.85 and 0.86, respectively. These results indicate that the performance of this model corresponds with the radiologists' manual segmentation. In conclusion, an attention-based U-Net for parotid tumor autosegmentation may assist physicians in evaluating parotid gland tumors.
Affiliation(s)
- Xianwu Xia
- The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China; Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China; Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Sheng Liang
- Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Fangfang Ye
- Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Min-Ming Tian
- Department of Oncology Intervention, Jiangxi University of Traditional Chinese Medicine, Nanchang, Jiangxi, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- *Correspondence: Weigang Hu; Leiming Xu
- Leiming Xu
- The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
15
Kihara S, Koike Y, Takegawa H, Anetai Y, Nakamura S, Tanigawa N, Koizumi M. Clinical target volume segmentation based on gross tumor volume using deep learning for head and neck cancer treatment. Med Dosim 2022; 48:20-24. [PMID: 36273950 DOI: 10.1016/j.meddos.2022.09.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 02/07/2022] [Accepted: 09/17/2022] [Indexed: 02/04/2023]
Abstract
Accurate clinical target volume (CTV) delineation is important for head and neck intensity-modulated radiation therapy. However, delineation is time-consuming and susceptible to interobserver variability (IOV). Based on a manual contouring process commonly used in clinical practice, we developed a deep learning (DL)-based method to delineate a low-risk CTV with computed tomography (CT) and gross tumor volume (GTV) input and compared it with a CT-only input. A total of 310 patients with oropharynx cancer were randomly divided into a training set (250) and a test set (60). The low-risk CTV and primary GTV contours were used to generate label data for the input and ground truth. A 3D U-Net with a two-channel input of CT and GTV (U-NetGTV) was proposed and its performance was compared with a U-Net with only CT input (U-NetCT). The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were evaluated. The time required to predict the CTV was 0.86 s per patient. U-NetGTV showed a significantly higher mean DSC value than U-NetCT (0.80 ± 0.03 vs 0.76 ± 0.05) and a significantly lower mean AHD value (3.0 ± 0.5 mm vs 3.5 ± 0.7 mm). Compared to the existing DL method with only CT input, the proposed GTV-based segmentation using DL showed a more precise low-risk CTV segmentation for head and neck cancer. Our findings suggest that the proposed method could reduce the contouring time of a low-risk CTV, allowing the standardization of target delineations for head and neck cancer.
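The two-channel input described above amounts to stacking the CT volume and the binary GTV mask along a channel axis before feeding the 3D U-Net. A minimal sketch with illustrative shapes and names (assumptions for illustration, not taken from the paper's code):

```python
# Stack a normalized CT volume and a binary GTV mask as a two-channel
# network input; shapes are illustrative placeholders.
import numpy as np

ct  = np.random.rand(64, 64, 32).astype(np.float32)            # toy CT volume
gtv = (np.random.rand(64, 64, 32) > 0.98).astype(np.float32)   # toy binary GTV mask

# Channel-first stacking with a leading batch axis:
# (batch, channels, depth, height, width), the layout most 3D CNN
# frameworks expect for volumetric inputs.
x = np.stack([ct, gtv])[None]
print(x.shape)   # (1, 2, 64, 64, 32)
```

Keeping the mask as a separate channel (rather than, say, masking the CT values) lets the first convolution learn how to weight anatomy against the GTV prior.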
Affiliation(s)
- Sayaka Kihara
- Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Yuhei Koike
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Hideki Takegawa
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Yusuke Anetai
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Satoaki Nakamura
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Noboru Tanigawa
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Masahiko Koizumi
- Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
16
Xu J, Zeng B, Egger J, Wang C, Smedby Ö, Jiang X, Chen X. A review on AI-based medical image computing in head and neck surgery. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac840f] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 07/25/2022] [Indexed: 11/11/2022]
Abstract
Head and neck surgery is a delicate surgical procedure involving a complex anatomical space, difficult operations and high risk. Medical image computing (MIC) that enables accurate and reliable preoperative planning is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC in head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and some papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, MICCAI, etc. Among them, 65 references are on automatic segmentation, 15 references on automatic landmark detection, and eight references on automatic registration. In the elaboration of the review, first, an overview of deep learning in MIC is presented. Then, the application of deep learning methods is systematically summarized according to clinical needs and generalized into segmentation, landmark detection and registration of head and neck medical images. In segmentation, the focus is mainly on the automatic segmentation of high-risk organs, head and neck tumors, skull structure and teeth, including analysis of their advantages, differences and shortcomings. In landmark detection, the focus is mainly on landmark detection in cephalometric and craniomaxillofacial images, and analysis of their advantages and disadvantages. In registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, their shortcomings and future development directions are systematically discussed.
The study aims to serve as a reference and guidance for researchers, engineers or doctors engaged in medical image analysis of head and neck surgery.
17
Iqbal A, Sharif M, Yasmin M, Raza M, Aftab S. Generative adversarial networks and its applications in the biomedical image segmentation: a comprehensive survey. INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL 2022; 11:333-368. [PMID: 35821891 PMCID: PMC9264294 DOI: 10.1007/s13735-022-00240-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Revised: 03/16/2022] [Accepted: 05/24/2022] [Indexed: 05/13/2023]
Abstract
Recent advancements in deep generative models have shown significant potential in the tasks of image synthesis, detection, segmentation, and classification. Segmenting medical images is considered a primary challenge in the biomedical imaging field. Various GAN-based models have been proposed in the literature to resolve medical segmentation challenges. Our search identified 151 papers; after twofold screening, 138 papers were selected for the final survey. A comprehensive survey is conducted on the application of GANs to medical image segmentation, focused primarily on various GAN-based models, performance metrics, loss functions, datasets, augmentation methods, paper implementations, and source codes. Secondly, this paper provides a detailed overview of GAN applications to the segmentation of different human diseases. We conclude our research with a critical discussion, limitations of GANs, and suggestions for future directions. We hope this survey is beneficial and increases awareness of GAN implementations for biomedical image segmentation tasks.
Affiliation(s)
- Ahmed Iqbal
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Shabib Aftab
- Department of Computer Science, Virtual University of Pakistan, Lahore, Pakistan
18
Ali H, Biswas R, Ali F, Shah U, Alamgir A, Mousa O, Shah Z. The role of generative adversarial networks in brain MRI: a scoping review. Insights Imaging 2022; 13:98. [PMID: 35662369 PMCID: PMC9167371 DOI: 10.1186/s13244-022-01237-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2022] [Accepted: 05/11/2022] [Indexed: 11/23/2022] Open
Abstract
The performance of artificial intelligence (AI) for brain MRI can improve if enough data are made available. Generative adversarial networks (GANs) have shown considerable potential for generating synthetic MRI data that capture the distribution of real MRI. GANs are also popular for segmentation, noise removal, and super-resolution of brain MRI images. This scoping review aims to explore how GAN methods are being used on brain MRI data, as reported in the literature. The review describes the different applications of GANs for brain MRI, presents the most commonly used GAN architectures, and summarizes the publicly available brain MRI datasets for advancing the research and development of GAN-based approaches. This review followed the guidelines of PRISMA-ScR to perform the study search and selection. The search was conducted on five popular scientific databases. The screening and selection of studies were performed by two independent reviewers, followed by validation by a third reviewer. Finally, the data were synthesized using a narrative approach. This review included 139 studies out of 789 search results. The most common use case of GANs was the synthesis of brain MRI images for data augmentation. GANs were also used to segment brain tumors and translate healthy images to diseased images, or CT to MRI and vice versa. The included studies showed that GANs could enhance the performance of AI methods used on brain MRI imaging data. However, more efforts are needed to translate GAN-based methods into clinical applications.
Affiliation(s)
- Hazrat Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Rafiul Biswas
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Farida Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Uzair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Asma Alamgir
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Osama Mousa
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Zubair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
19
Zhang F, Wang Q, Yang A, Lu N, Jiang H, Chen D, Yu Y, Wang Y. Geometric and Dosimetric Evaluation of the Automatic Delineation of Organs at Risk (OARs) in Non-Small-Cell Lung Cancer Radiotherapy Based on a Modified DenseNet Deep Learning Network. Front Oncol 2022; 12:861857. [PMID: 35371991 PMCID: PMC8964972 DOI: 10.3389/fonc.2022.861857] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2022] [Accepted: 02/21/2022] [Indexed: 11/23/2022] Open
Abstract
Purpose To introduce an end-to-end automatic segmentation model for organs at risk (OARs) in thoracic CT images based on a modified DenseNet, and reduce the workload of radiation oncologists. Materials and Methods The computed tomography (CT) images of 36 lung cancer patients were included in this study, of which 27 patients' images were randomly selected as the training set and 9 as the testing set. The validation set was generated by cross-validation: during each epoch, 6 patients' images were randomly selected from the training set. The autosegmentation task of the left and right lungs, spinal cord, heart, trachea and esophagus was implemented, and the whole training time was approximately 5 hours. Geometric evaluation metrics, including the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95) and average surface distance (ASD), were used to assess the autosegmentation performance of OARs based on the proposed model and were compared with those based on U-Net as benchmarks. Then, two sets of treatment plans were optimized based on the manually contoured targets and OARs (Plan1), as well as the manually contoured targets and the automatically contoured OARs (Plan2). Dosimetric parameters, including Dmax, Dmean and Vx, of OARs were obtained and compared. Results The DSC, HD95 and ASD of the proposed model were better than those of U-Net. The differences in the DSC of the spinal cord and esophagus, the differences in the HD95 of the spinal cord, heart, trachea and esophagus, as well as the differences in the ASD of the spinal cord were statistically significant between the two models (P<0.05). The differences in the dose-volume parameters of the two sets of plans were not statistically significant (P>0.05). Moreover, compared with manual segmentation, autosegmentation significantly reduced the contouring time by nearly 40.7% (P<0.05).
Conclusions The bilateral lungs, spinal cord, heart and trachea could be accurately delineated using the proposed model in this study; however, the automatic segmentation effect of the esophagus must still be further improved. The concept of feature map reuse provides a new idea for automatic medical image segmentation.
Affiliation(s)
- Fuli Zhang
- Radiation Oncology Department, The Seventh Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Qiusheng Wang
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Anning Yang
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Na Lu
- Radiation Oncology Department, The Seventh Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Huayong Jiang
- Radiation Oncology Department, The Seventh Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Diandian Chen
- Radiation Oncology Department, The Seventh Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Yanjun Yu
- Radiation Oncology Department, The Seventh Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Yadi Wang
- Radiation Oncology Department, The Seventh Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
20
Recent Applications of Artificial Intelligence in Radiotherapy: Where We Are and Beyond. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12073223] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
In recent decades, artificial intelligence (AI) tools have been applied in many medical fields, opening the possibility of finding novel solutions for managing very complex and multifactorial problems, such as those commonly encountered in radiotherapy (RT). We conducted a PubMed and Scopus search to identify the AI application field in RT limited to the last four years. In total, 1824 original papers were identified, and 921 were analyzed by considering the phase of the RT workflow according to the applied AI approaches. AI permits the processing of large quantities of information, data, and images stored in RT oncology information systems, a process that is not manageable for individuals or groups. AI allows the iterative application of complex tasks in large datasets (e.g., delineating normal tissues or finding optimal planning solutions) and might support the entire community working in the various sectors of RT, as summarized in this overview. AI-based tools are now on the roadmap for RT and have been applied to the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns were raised, including the need for harmonization while overcoming ethical, legal, and skill barriers.
21
Li M, Wan C. The use of deep learning technology for the detection of optic neuropathy. Quant Imaging Med Surg 2022; 12:2129-2143. [PMID: 35284277 PMCID: PMC8899937 DOI: 10.21037/qims-21-728] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Accepted: 10/26/2021] [Indexed: 03/14/2024]
Abstract
The emergence of computer graphics processing units (GPUs), improvements in mathematical models, and the availability of big data have allowed artificial intelligence (AI) to use machine learning and deep learning (DL) technology to achieve robust performance in various fields of medicine. The DL system provides improved capabilities, especially in image recognition and image processing. Recent progress in the sorting of AI datasets has stimulated great interest in the development of DL algorithms. Compared with subjective evaluation and other traditional methods, DL algorithms can identify diseases faster and more accurately in diagnostic tests. Medical imaging is of great significance in the clinical diagnosis and individualized treatment of ophthalmic diseases. Based on morphological datasets of millions of data points, various image-related diagnostic techniques can now impart high-resolution information on anatomical and functional changes, thereby providing unprecedented insights in ophthalmic clinical practice. As ophthalmology relies heavily on imaging examinations, it is one of the first medical fields to apply DL algorithms in clinical practice. Such algorithms can assist in the analysis of large amounts of data acquired from the examination of auxiliary images. In recent years, rapid advancements in imaging technology have facilitated the application of DL in the automatic identification and classification of pathologies that are characteristic of ophthalmic diseases, thereby providing high-quality diagnostic information. This paper reviews the origins, development, and application of DL technology. The technical and clinical problems associated with building DL systems to meet clinical needs and the potential challenges of clinical application are discussed, especially in relation to the field of optic nerve diseases.
Affiliation(s)
- Mei Li
- Department of Ophthalmology, Yanan People’s Hospital, Yanan, China
- Chao Wan
- Department of Ophthalmology, the First Hospital of China Medical University, Shenyang, China

22
Li M, Lian F, Li Y, Guo S. Attention-guided duplex adversarial U-net for pancreatic segmentation from computed tomography images. J Appl Clin Med Phys 2022;23:e13537. [PMID: 35199477] [PMCID: PMC8992955] [DOI: 10.1002/acm2.13537]
Abstract
Purpose: Segmenting organs from computed tomography (CT) images is crucial to early diagnosis and treatment. Pancreas segmentation is especially challenging because the pancreas has a small volume and a large variation in shape. Methods: To mitigate this issue, an attention-guided duplex adversarial U-Net (ADAU-Net) for pancreas segmentation is proposed in this work. First, two adversarial networks are integrated into the baseline U-Net to ensure the obtained prediction maps resemble the ground truths. Then, attention blocks are applied to preserve contextual information for segmentation. The implementation of the proposed ADAU-Net consists of two steps: 1) a backbone segmentor selection scheme is introduced to select an optimal backbone segmentor from three two-dimensional segmentation model variants based on a conventional U-Net; and 2) attention blocks are integrated into the backbone segmentor at several locations to enhance the interdependency among pixels for better segmentation performance, and the optimal structure is selected as the final version. Results: The experimental results on the National Institutes of Health Pancreas-CT dataset show that the proposed ADAU-Net outperforms the baseline segmentation network by 6.39% in Dice similarity coefficient and obtains a competitive performance compared with state-of-the-art methods for pancreas segmentation. Conclusion: The ADAU-Net achieves satisfactory segmentation results on the public pancreas dataset, indicating that the proposed model can accurately segment pancreas outlines from CT images.
Affiliation(s)
- Meiyu Li
- College of Electronic Science and Engineering, Jilin University, Changchun, China
- Fenghui Lian
- School of Aviation Operations and Services, Air Force Aviation University, Changchun, China
- Yang Li
- School of Aviation Operations and Services, Air Force Aviation University, Changchun, China
- Shuxu Guo
- College of Electronic Science and Engineering, Jilin University, Changchun, China

23
Omari EA, Zhang Y, Ahunbay E, Paulson E, Amjad A, Chen X, Liang Y, Li XA. Multi parametric magnetic resonance imaging for radiation treatment planning. Med Phys 2022;49:2836-2845. [PMID: 35170769] [DOI: 10.1002/mp.15534]
Abstract
In recent years, multi-parametric magnetic resonance imaging (MpMRI) has played a major role in radiation therapy treatment planning. Its superior soft-tissue contrast, functional and physiological imaging capabilities, and the flexibility of site-specific image sequence development have placed MpMRI at the forefront. In this article, the present status of MpMRI for external beam radiation therapy planning is reviewed. Common MpMRI sequences, preprocessing, and QA strategies are briefly discussed, and various image registration techniques and strategies are addressed. Image segmentation methods, including automatic segmentation and deep learning techniques for organs at risk and target delineation, are reviewed. Given the advancement of MRI-guided online adaptive radiotherapy, treatment planning considerations for MRI-only planning are also discussed.
Affiliation(s)
- Eenas A Omari
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
- Ying Zhang
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
- Ergun Ahunbay
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
- Eric Paulson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
- Asma Amjad
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
- Xinfeng Chen
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
- Ying Liang
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
- X Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA

24
Dai X, Lei Y, Wang T, Zhou J, Rudra S, McDonald M, Curran WJ, Liu T, Yang X. Multi-organ auto-delineation in head-and-neck MRI for radiation therapy using regional convolutional neural network. Phys Med Biol 2022;67. [PMID: 34794138] [PMCID: PMC8811683] [DOI: 10.1088/1361-6560/ac3b34]
Abstract
Magnetic resonance imaging (MRI) allows accurate and reliable organ delineation for many disease sites in radiation therapy because MRI offers superb soft-tissue contrast. Manual organ-at-risk delineation, however, is labor-intensive and time-consuming. This study aims to develop a deep-learning-based automated multi-organ segmentation method to relieve this labor and accelerate the treatment planning process for head-and-neck (HN) cancer radiotherapy. A novel regional convolutional neural network (R-CNN) architecture, namely mask scoring R-CNN, was developed in this study. In the proposed model, a deep attention feature pyramid network is used as a backbone to extract coarse features from MRI, followed by feature refinement using R-CNN. The final segmentation is obtained through mask and mask scoring networks that take the refined feature maps as input. With the mask scoring mechanism incorporated into conventional mask supervision, the classification error present in the conventional mask R-CNN architecture can be greatly reduced. A cohort of 60 HN cancer patients receiving external beam radiation therapy was used for experimental validation, with five-fold cross-validation for assessment. The Dice similarity coefficients of brain stem, left/right cochlea, left/right eye, larynx, left/right lens, mandible, optic chiasm, left/right optic nerve, oral cavity, left/right parotid, pharynx, and spinal cord were 0.89 ± 0.06, 0.68 ± 0.14/0.68 ± 0.18, 0.89 ± 0.07/0.89 ± 0.05, 0.90 ± 0.07, 0.67 ± 0.18/0.67 ± 0.10, 0.82 ± 0.10, 0.61 ± 0.14, 0.67 ± 0.11/0.68 ± 0.11, 0.92 ± 0.07, 0.85 ± 0.06/0.86 ± 0.05, 0.80 ± 0.13, and 0.77 ± 0.15, respectively. After model training, all organs at risk can be segmented within 1 min.
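Several entries in this list report the Dice similarity coefficient. As a point of reference, a minimal sketch of how that overlap score is computed from two binary masks (illustrative only, not the authors' implementation; the function name is hypothetical):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

For example, two equal-size masks that overlap on half of their voxels score 0.5; identical masks score 1.0.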
25
Xun S, Li D, Zhu H, Chen M, Wang J, Li J, Chen M, Wu B, Zhang H, Chai X, Jiang Z, Zhang Y, Huang P. Generative adversarial networks in medical image segmentation: A review. Comput Biol Med 2022;140:105063. [PMID: 34864584] [DOI: 10.1016/j.compbiomed.2021.105063]
Abstract
PURPOSE Since the Generative Adversarial Network (GAN) was introduced into the field of deep learning in 2014, it has received extensive attention from academia and industry, and many high-quality papers have been published. GANs effectively improve the accuracy of medical image segmentation because of their strong generative ability and capacity to capture the data distribution. This paper introduces the origin, working principle, and extended variants of GAN, and reviews the latest developments in GAN-based medical image segmentation methods. METHODS To find the papers, we searched Google Scholar and PubMed with keywords such as "segmentation", "medical image", and "GAN (or generative adversarial network)". Additional searches were performed on Semantic Scholar, Springer, arXiv, and the top conferences in computer science with the above GAN-related keywords. RESULTS We reviewed more than 120 GAN-based architectures for medical image segmentation published before September 2021. We categorized and summarized these papers according to segmentation region, imaging modality, and classification method. We also discussed the advantages, challenges, and future research directions of GAN in medical image segmentation. CONCLUSIONS We discussed in detail the recent papers on medical image segmentation using GAN. The application of GAN and its extended variants has effectively improved the accuracy of medical image segmentation. Gaining the acceptance of clinicians and patients, and overcoming the instability, low repeatability, and limited interpretability of GAN, will be important research directions in the future.
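For reference, every GAN variant surveyed above builds on the standard adversarial minimax objective between a generator G and a discriminator D:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```

In segmentation settings, G typically plays the role of the segmentor and D is trained to distinguish ground-truth masks from predicted ones, so the adversarial term acts as a learned shape regularizer on top of the usual pixel-wise loss.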
Affiliation(s)
- Siyi Xun
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Dengwang Li
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Hui Zhu
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China
- Min Chen
- Department of Medicine, The Second Hospital of Shandong University, Jinan, China
- Jianbo Wang
- Department of Radiation Oncology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, 250012, China
- Jie Li
- Department of Infectious Disease, Shandong Provincial Hospital Affiliated to Shandong University, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Meirong Chen
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
- Bing Wu
- Laibo Biotechnology Co., Ltd., Jinan, Shandong, China
- Hua Zhang
- LinkingMed Technology Co., Ltd., Beijing, China
- Xiangfei Chai
- Huiying Medical Technology (Beijing) Co., Ltd., Beijing, China
- Zekun Jiang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Yan Zhang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China
- Pu Huang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, 250358, China

26
Li J, Udupa JK, Odhner D, Tong Y, Torigian DA. SOMA: Subject-, object-, and modality-adapted precision atlas approach for automatic anatomy recognition and delineation in medical images. Med Phys 2021;48:7806-7825. [PMID: 34668207] [PMCID: PMC8678400] [DOI: 10.1002/mp.15308]
Abstract
PURPOSE In the multi-atlas segmentation (MAS) method, an atlas set large enough to cover the complete spectrum of population patterns of the target object benefits segmentation quality. However, the difficulty of obtaining and generating such a large set of atlases and the computational burden required in the segmentation procedure make this approach impractical. In this paper, we propose a method called SOMA to select subject-, object-, and modality-adapted precision atlases for automatic anatomy recognition in medical images with pathology, following the idea that different regions of the target object in a novel image can be recognized by different atlases with regionally best similarity, so that effective atlases need be neither globally similar to the target subject nor overall similar to the target object. METHODS The SOMA method consists of three main components: atlas building, object recognition, and object delineation. Considering the computational complexity, we utilize an all-to-template strategy to align all images to the same image space, belonging to the root image determined by a minimum spanning tree (MST) strategy among a subset of radiologically near-normal images. The object recognition process is composed of two stages: rough recognition and refined recognition. In rough recognition, subimage matching is conducted between the test image and each image of the whole atlas set, and only the atlas corresponding to the best-matched subimage contributes to the recognition map regionally. The frequency of best match for each atlas is recorded by a counter, and the atlases with the highest frequencies are selected as the precision atlases. In refined recognition, only the precision atlases are examined, and subimage matching is conducted with a nonlocal search to further increase the accuracy of boundary matching. Delineation is based on a U-net-based deep learning network, where the original gray-scale image together with the fuzzy map from refined recognition composes a two-channel input to the network, and the output is a segmentation map of the target object. RESULTS Experiments were conducted on computed tomography (CT) images of varying quality in two body regions - head and neck (H&N) and thorax - from 298 subjects with nine objects and 241 subjects with six objects, respectively. Most objects achieve a localization error within two voxels after refined recognition, with marked improvement in localization accuracy from rough to refined recognition of 0.6-3 mm in H&N and 0.8-4.9 mm in thorax, and in delineation accuracy (Dice coefficient) from refined recognition to delineation of 0.01-0.11 in H&N and 0.01-0.18 in thorax. CONCLUSIONS The SOMA method shows high accuracy and robustness in anatomy recognition and delineation. The improvements from rough to refined recognition and further to delineation, as well as the immunity of recognition accuracy to varying image and object qualities, demonstrate the core principles of SOMA, in which segmentation accuracy increases with precision atlases and gradually refined object matching.
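The all-to-template alignment above selects a root image via a minimum spanning tree over pairwise image distances. A minimal sketch of that generic step, Prim's algorithm on a dense symmetric distance matrix, is shown below; the paper's actual image-distance measure and how the root is chosen from the tree are not reproduced here, so everything in this snippet is illustrative:

```python
import numpy as np

def prim_mst(dist: np.ndarray):
    """Prim's algorithm on a dense symmetric distance matrix.
    Returns the list of (parent, child) edges of the minimum
    spanning tree, grown from node 0."""
    n = dist.shape[0]
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()            # cheapest known edge cost into the tree
    parent = np.zeros(n, dtype=int)  # tree node providing that cheapest edge
    edges = []
    for _ in range(n - 1):
        # pick the out-of-tree node with the cheapest connecting edge
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        edges.append((int(parent[j]), j))
        in_tree[j] = True
        # relax edge costs through the newly added node j
        closer = dist[j] < best
        parent[closer] = j
        best = np.minimum(best, dist[j])
    return edges
```

On a 3-image distance matrix with d(0,1)=1, d(1,2)=2, d(0,2)=4, the tree keeps the two cheapest edges (0,1) and (1,2).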
Affiliation(s)
- Jieyu Li
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Jayaram K. Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Drew A. Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA

27
Chen M, Wu S, Zhao W, Zhou Y, Zhou Y, Wang G. Application of deep learning to auto-delineation of target volumes and organs at risk in radiotherapy. Cancer Radiother 2021;26:494-501. [PMID: 34711488] [DOI: 10.1016/j.canrad.2021.08.020]
Abstract
Technological advancements have heralded the arrival of precision radiotherapy (RT), increasing the therapeutic ratio and decreasing treatment side effects. Contouring of target volumes (TVs) and organs at risk (OARs) in RT is a complicated process. In recent years, automatic contouring of TVs and OARs has developed rapidly thanks to advances in deep learning (DL). This technology has the potential to save time and to reduce intra- and inter-observer variability. In this paper, the authors provide an overview of RT, introduce the concept of DL, summarize the data characteristics of the included literature, outline possible future challenges for DL, and discuss possible research directions.
Affiliation(s)
- M Chen
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- S Wu
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- W Zhao
- Bengbu Medical College, Bengbu, Anhui 233030, China
- Y Zhou
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- Y Zhou
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- G Wang
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China

28
Samarasinghe G, Jameson M, Vinod S, Field M, Dowling J, Sowmya A, Holloway L. Deep learning for segmentation in radiation therapy planning: a review. J Med Imaging Radiat Oncol 2021;65:578-595. [PMID: 34313006] [DOI: 10.1111/1754-9485.13286]
Abstract
Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect the outcomes of radiation therapy. The recent surge of interest in deep neural networks has added many powerful auto-segmentation methods, mostly variations of convolutional neural networks (CNNs). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical subsites considered was U-net, with the majority of deep learning segmentation articles focused on head and neck normal tissue structures. The most common data sets were CT images from in-house sources, along with some public data sets. N-fold cross-validation was commonly employed; however, not all work separated training, test, and validation data sets. This area of research is expanding rapidly. To facilitate comparisons of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
Affiliation(s)
- Gihan Samarasinghe
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Michael Jameson
- Genesiscare, Sydney, New South Wales, Australia
- St Vincent's Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Shalini Vinod
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Matthew Field
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Jason Dowling
- Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Lois Holloway
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia

29
Nikolov S, Blackwell S, Zverovitch A, Mendes R, Livne M, De Fauw J, Patel Y, Meyer C, Askham H, Romera-Paredes B, Kelly C, Karthikesalingam A, Chu C, Carnell D, Boon C, D'Souza D, Moinuddin SA, Garie B, McQuinlan Y, Ireland S, Hampton K, Fuller K, Montgomery H, Rees G, Suleyman M, Back T, Hughes CO, Ledsam JR, Ronneberger O. Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study. J Med Internet Res 2021;23:e26151. [PMID: 34255661] [PMCID: PMC8314151] [DOI: 10.2196/26151]
Abstract
BACKGROUND Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires time-consuming manual delineation of radiosensitive organs at risk. This planning process can delay treatment and introduces interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain. OBJECTIVE Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. METHODS The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice, with both segmentations taken from clinical practice and segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ-at-risk definitions. RESULTS We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced the surface Dice similarity coefficient, a new metric for the comparison of organ delineation, to quantify the deviation between organ-at-risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets, representing centers and countries different from those used in model training. CONCLUSIONS Deep learning is an effective and clinically applicable technique for segmentation of the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
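The surface Dice metric introduced here scores agreement between contour surfaces within a distance tolerance, rather than volume overlap. A deliberately simplified 2D, pure-NumPy sketch of that idea follows; the authors' published implementation handles 3D masks, anisotropic voxel spacing, and border cases, so the names and shapes here are illustrative assumptions:

```python
import numpy as np

def boundary(mask: np.ndarray) -> np.ndarray:
    """Boundary pixels: foreground pixels with at least one 4-neighbor outside."""
    mask = mask.astype(bool)
    p = np.pad(mask, 1, constant_values=False)
    interior = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                & p[1:-1, :-2] & p[1:-1, 2:])
    return mask & ~interior

def surface_dice(a: np.ndarray, b: np.ndarray, tolerance: float) -> float:
    """Fraction of boundary points of each mask lying within `tolerance`
    (in pixels) of the other mask's boundary."""
    pa, pb = np.argwhere(boundary(a)), np.argwhere(boundary(b))
    # brute-force pairwise Euclidean distances between boundary points
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    close_a = (d.min(axis=1) <= tolerance).sum()  # A's boundary near B's
    close_b = (d.min(axis=0) <= tolerance).sum()  # B's boundary near A's
    return (close_a + close_b) / (len(pa) + len(pb))
```

Two identical masks score 1.0 at any tolerance; a mask shifted by one pixel still scores 1.0 at a one-pixel tolerance, reflecting the metric's intent that clinically negligible surface deviations should not be penalized.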
Affiliation(s)
- Ruheena Mendes
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Dawn Carnell
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Cheng Boon
- Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool, United Kingdom
- Derek D'Souza
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Syed Ali Moinuddin
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Geraint Rees
- University College London, London, United Kingdom

30
McKenzie EM, Tong N, Ruan D, Cao M, Chin RK, Sheng K. Using neural networks to extend cropped medical images for deformable registration among images with differing scan extents. Med Phys 2021;48:4459-4471. [PMID: 34101198] [DOI: 10.1002/mp.15039]
Abstract
PURPOSE Missing or discrepant imaging volume is a common challenge in deformable image registration (DIR). To minimize the adverse impact, we train a neural network to synthesize cropped portions of head and neck CTs and then test its use in DIR. METHODS Using a training dataset of 409 head and neck CTs, we trained a generative adversarial network to take in a cropped 3D image and output an image with synthesized anatomy in the cropped region. The network used a 3D U-Net generator along with Visual Geometry Group (VGG) deep feature losses. To test our technique, for each of the 53 test volumes, we used Elastix to deformably register combinations of a randomly cropped, full, and synthetically completed volume to a single cropped, full, and synthetically completed target volume. We additionally tested our method's robustness to crop extent by progressively increasing the amount of cropping, synthesizing the missing anatomy using our network, and then performing the same registration combinations. Registration performance was measured using the 95% Hausdorff distance across 16 contours. RESULTS We successfully trained a network to synthesize missing anatomy in superiorly and inferiorly cropped images. The network can estimate large regions of an incomplete image, far from the cropping boundary. Registration using our estimated full images was not significantly different from registration using the original full images. The average contour matching error for full-image registration was 9.9 mm, whereas our method yielded 11.6, 12.1, and 13.6 mm for synthesized-to-full, full-to-synthesized, and synthesized-to-synthesized registrations, respectively. In comparison, registration using the cropped images had errors of 31.7 mm and higher. Plotting the registered image contour error as a function of initial preregistered error shows that our method is robust to registration difficulty. Synthesized-to-full registration was statistically independent of cropping extent up to 18.7 cm of superior cropping. Synthesized-to-synthesized registration was nearly independent, with a change of only -0.04 mm in average contour error for every additional millimeter of cropping. CONCLUSIONS Differing or inadequate scan extent is a major cause of DIR inaccuracies. We address this challenge by training a neural network to complete cropped 3D images. We show that with image completion, this source of DIR inaccuracy is eliminated, and the method is robust to varying crop extent.
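The 95% Hausdorff distance used above replaces the maximum surface-to-surface distance with its 95th percentile, making the metric robust to a few outlier points. A minimal sketch over two point sets (illustrative only; production code would operate on mesh or voxel contour surfaces, and the function name is hypothetical):

```python
import numpy as np

def percentile_hausdorff(pts_a, pts_b, q: float = 95.0) -> float:
    """Symmetric q-th percentile Hausdorff distance between two point sets
    given as (N, d) coordinate arrays."""
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    # pairwise Euclidean distances between the two sets
    dists = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = dists.min(axis=1)  # nearest-neighbor distance from each A point
    b_to_a = dists.min(axis=0)  # nearest-neighbor distance from each B point
    return max(np.percentile(a_to_b, q), np.percentile(b_to_a, q))
```

Setting q=100 recovers the classical Hausdorff distance; identical point sets score 0.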
Affiliation(s)
- Elizabeth M McKenzie
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Nuo Tong
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Dan Ruan
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Minsong Cao
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Robert K Chin
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Ke Sheng
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA

31
Qiu B, van der Wel H, Kraeima J, Glas HH, Guo J, Borra RJH, Witjes MJH, van Ooijen PMA. Automatic Segmentation of Mandible from Conventional Methods to Deep Learning-A Review. J Pers Med 2021;11:629. [PMID: 34357096] [PMCID: PMC8307673] [DOI: 10.3390/jpm11070629]
Abstract
Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and treatment planning in OMFS. Segmented mandible structures are used to effectively visualize mandible volumes and to quantitatively evaluate particular mandible properties. However, mandible segmentation is challenging for both clinicians and researchers, due to complex structures and high-attenuation materials, such as tooth fillings or metal implants, that easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary considerably between individuals. Mandible segmentation is therefore a tedious and time-consuming task that requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms to automatically segment the mandible over the last two decades. The objective of this review was to present the available fully and semi-automatic mandible segmentation methods published in scientific articles. This review provides clinicians and researchers in this field with a clear description of these scientific advancements, to help develop novel automatic methods for clinical applications.
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Hylke van der Wel
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands

32
Zhong Y, Yang Y, Fang Y, Wang J, Hu W. A Preliminary Experience of Implementing Deep-Learning Based Auto-Segmentation in Head and Neck Cancer: A Study on Real-World Clinical Cases. Front Oncol 2021; 11:638197. [PMID: 34026615 PMCID: PMC8132944 DOI: 10.3389/fonc.2021.638197] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Accepted: 04/15/2021] [Indexed: 12/29/2022] Open
Abstract
Purpose While artificial intelligence has shown great promise in auto-segmentation of organs at risk (OARs) for head and neck cancer (HNC) radiotherapy, reaching the level of clinical acceptance of this technology in real-world routine practice remains a challenge. The purpose of this study was to validate a U-Net-based fully convolutional neural network (CNN) for the automatic delineation of OARs in HNC, focusing on clinical implementation and evaluation. Methods In the first phase, the CNN was trained on CT images of 364 clinical HNC patients with contours annotated by different oncologists in routine clinical cases. The automated delineation accuracy was quantified using the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD). To assess efficiency, the time required to edit the auto-contours to a clinically acceptable standard was evaluated by a questionnaire. For subjective evaluation, expert oncologists (more than 10 years' experience) were randomly presented with automated delineations or manual contours of 15 OARs for 30 patient cases. In the second phase, the network was retrained with an additional 300 patients, whose contours were generated by the pre-trained CNN and edited by oncologists until they met clinical acceptance. Results Based on DSC, the CNN performed best for the spinal cord, brainstem, temporal lobe, eyes, optic nerve, parotid glands, and larynx (DSC > 0.7). Retraining our architecture achieved higher conformity of the OAR delineations, with the largest DSC improvement for the oral cavity (0.53 to 0.93). Compared with manual delineation, auto-contouring significantly shortened the contouring time from hours to minutes. In the subjective evaluation, two observers showed an apparent preference for the automatic OAR contours, even at relatively low DSC values. Most of the automated OAR segmentations reached the level of clinical acceptance compared to manual delineations.
Conclusions After retraining, the CNN developed for automated OAR delineation in HNC proved to be more robust, efficient, and consistent in clinical practice. Deep learning-based auto-segmentation shows great potential to alleviate the labor-intensive contouring of OARs for radiotherapy treatment planning.
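Several of the overlap metrics used in this and the surrounding studies (DSC, voxel-wise sensitivity and precision) reduce to simple set arithmetic on binary masks. As a rough, self-contained sketch (not the authors' code; the toy masks are invented purely for illustration):

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Voxel-wise Dice, sensitivity, and precision for two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()        # true-positive voxels
    dice = 2 * tp / (pred.sum() + gt.sum())    # 2|A∩B| / (|A|+|B|)
    sensitivity = tp / gt.sum()                # fraction of ground truth recovered
    precision = tp / pred.sum()                # fraction of prediction that is correct
    return dice, sensitivity, precision

gt = np.zeros((4, 4), bool); gt[1:3, 1:3] = True      # 4 foreground voxels
pred = np.zeros((4, 4), bool); pred[1:3, 1:4] = True  # 6 voxels, 4 overlapping
dice, sens, prec = overlap_metrics(pred, gt)
print(round(dice, 3), round(sens, 3), round(prec, 3))  # prints: 0.8 1.0 0.667
```

The same three numbers drive the quantitative comparisons reported throughout these papers; only the mask sources differ.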
Affiliation(s)
- Yang Zhong
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Yanju Yang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Yingtao Fang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China

33
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 59] [Impact Index Per Article: 19.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022] Open
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications, so it is necessary to summarize the current state of development of deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories: 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions, and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings, and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA

34
Lee SL, Hall WA, Morris ZS, Christensen L, Bassetti M. MRI-Guided Radiation Therapy. ADVANCES IN ONCOLOGY 2021; 1:29-39. [PMID: 37064601 PMCID: PMC10104451 DOI: 10.1016/j.yao.2021.02.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/18/2023]
Affiliation(s)
- Sangjune Laurence Lee
- Department of Human Oncology, University of Wisconsin Hospital and Clinics, Madison, WI, USA
- Department of Oncology, Division of Radiation Oncology, University of Calgary, Calgary, AB, Canada
- William A. Hall
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, USA
- Zachary S. Morris
- Department of Human Oncology, University of Wisconsin Hospital and Clinics, Madison, WI, USA
- Leslie Christensen
- University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Michael Bassetti
- Department of Human Oncology, University of Wisconsin Hospital and Clinics, Madison, WI, USA
- Corresponding author. Department of Human Oncology, University of Wisconsin, University Hospital L7/B36, 600 Highland Avenue, Madison, WI 53792.

35
Li J, Udupa JK, Tong Y, Wang L, Torigian DA. Segmentation evaluation with sparse ground truth data: Simulating true segmentations as perfect/imperfect as those generated by humans. Med Image Anal 2021; 69:101980. [PMID: 33588116 PMCID: PMC7933105 DOI: 10.1016/j.media.2021.101980] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2020] [Revised: 01/19/2021] [Accepted: 01/20/2021] [Indexed: 10/22/2022]
Abstract
Fully annotated data sets play important roles in medical image segmentation and evaluation. Expense and imprecision are the two main issues in generating ground truth (GT) segmentations. In this paper, in an attempt to overcome these two issues jointly, we propose a method, named SparseGT, which exploits variability among human segmenters to maximally reduce the manual workload in GT generation for evaluating actual segmentations by algorithms. Pseudo ground truth (p-GT) segmentations are created with only a small fraction of the workload and with human-level perfection/imperfection, and they can be used in practice as a substitute for fully manual GT in evaluating segmentation algorithms at the same precision. p-GT segmentations are generated by first selecting slices sparsely, conducting manual contouring only on these sparse slices, and subsequently filling in segmentations on the other slices automatically. By creating p-GT with different levels of sparseness, we determine the largest workload reduction achievable for each considered object such that the variability of the generated p-GT is statistically indistinguishable from inter-segmenter differences in fully manual GT segmentations for that object. Furthermore, we investigate the segmentation evaluation errors introduced by variability in manual GT by applying p-GT in the evaluation of actual segmentations by an algorithm. Experiments are conducted on ∼500 computed tomography (CT) studies involving six objects in two body regions, Head & Neck and Thorax, where the optimal sparseness and corresponding evaluation errors are determined for each object and each strategy. Our results indicate that creating p-GT by the concatenated strategy of uniformly selecting sparse slices and filling in segmentations via a deep-learning (DL) network shows the highest manual workload reduction, ∼80-96%, without sacrificing evaluation accuracy compared to fully manual GT.
Nevertheless, other strategies also contribute in particular situations. A non-uniform slice-selection strategy shows an advantage for objects whose shape changes irregularly from slice to slice. An interpolation strategy for filling in segmentations can achieve ∼60-90% workload reduction in simulating human-level GT without the need for an actual training stage, and shows potential for enlarging data sets for training p-GT generation networks. We conclude not only that over 90% reduction in workload is feasible without sacrificing evaluation accuracy, but also that the suitable strategy and the optimal sparseness level achievable for creating p-GT are object- and application-specific.
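The slice-sparse annotation idea above can be illustrated with a deliberately crude stand-in for the filling step: copy each unannotated slice from its nearest annotated neighbor. This is only a sketch of the concept; the paper's interpolation and deep-learning fills are far more sophisticated, and the function and toy volume here are invented:

```python
import numpy as np

def fill_sparse_masks(volume_masks, annotated):
    """Fill unannotated slices by copying the nearest annotated slice.

    volume_masks: (Z, H, W) binary array; only slices listed in `annotated`
    carry trusted (manually contoured) labels.
    """
    annotated = np.asarray(sorted(annotated))
    out = volume_masks.copy()
    for z in range(volume_masks.shape[0]):
        if z not in annotated:
            nearest = annotated[np.argmin(np.abs(annotated - z))]
            out[z] = volume_masks[nearest]   # crude nearest-slice propagation
    return out

vol = np.zeros((5, 2, 2), dtype=np.uint8)
vol[0] = 1                      # slice 0 annotated as all-foreground
vol[4] = 0                      # slice 4 annotated as all-background
filled = fill_sparse_masks(vol, [0, 4])
print(filled[1].sum(), filled[3].sum())   # prints: 4 0
```

In the paper's terms, the interesting question is how sparse `annotated` can be made before the filled volume's disagreement with full manual GT exceeds ordinary inter-segmenter variability.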
Affiliation(s)
- Jieyu Li
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai, 200240, China
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
- Lisheng Wang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai, 200240, China
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States

36
Ivanovska T, Daboul A, Kalentev O, Hosten N, Biffar R, Völzke H, Wörgötter F. A deep cascaded segmentation of obstructive sleep apnea-relevant organs from sagittal spine MRI. Int J Comput Assist Radiol Surg 2021; 16:579-588. [PMID: 33770362 PMCID: PMC8052251 DOI: 10.1007/s11548-021-02333-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Accepted: 02/24/2021] [Indexed: 11/25/2022]
Abstract
Purpose The main purpose of this work was to develop an efficient approach for segmentation of structures that are relevant for diagnosis and treatment of obstructive sleep apnea syndrome (OSAS), namely the pharynx, tongue, and soft palate, from mid-sagittal magnetic resonance (MR) data. This framework will be applied to big data acquired within an ongoing epidemiological study of a general population. Methods A deep cascaded framework for subsequent segmentation of the pharynx, tongue, and soft palate is presented. The pharyngeal structure was segmented first, since the airway is clearly visible in the T1-weighted sequence. Thereafter, it was used as an anatomical landmark for tongue location. Finally, the soft palate region was extracted using the segmented tongue and pharynx structures and used as input for a deep network. In each segmentation step, a UNet-like architecture was applied. Results The results were assessed qualitatively by comparing the region boundaries obtained from the expert to the framework results, and quantitatively using the standard Dice coefficient metric. Additionally, cross-validation was applied to ensure that the framework performance did not depend on the specific selection of the validation set. The average Dice coefficients on the test set were 0.89 ± 0.03, 0.87 ± 0.02, and 0.79 ± 0.08 for tongue, pharynx, and soft palate tissues, respectively. The results were similar to other approaches and consistent with expert readings. Conclusion Due to its high speed and efficiency, the framework will be applied to big epidemiological data with thousands of participants acquired within the Study of Health in Pomerania, as well as other epidemiological studies, to provide information on the anatomical structures and aspects that constitute important risk factors for OSAS development.
Affiliation(s)
- Tatyana Ivanovska
- Department of Computational Neuroscience, Georg-August-University, Friedrich-Hund-Platz 1, 37077 Göttingen, Germany
- Amro Daboul
- Department of Prosthodontics, Gerodontology and Biomaterials, University Medicine Greifswald, Fleischmannstr. 42-44, 17475 Greifswald, Germany
- Oleksandr Kalentev
- Institute for Physics, Alumni of University of Greifswald, Felix-Hausdorff-Str. 18, 17489 Greifswald, Germany
- Norbert Hosten
- Department of Radiology and Neuroradiology, University Medicine Greifswald, Fleischmannstr. 42-44, 17475 Greifswald, Germany
- Reiner Biffar
- Department of Prosthodontics, Gerodontology and Biomaterials, University Medicine Greifswald, Fleischmannstr. 42-44, 17475 Greifswald, Germany
- Henry Völzke
- Institute for Community Medicine, University Medicine Greifswald, Walther-Rathenau-Str. 48, 17489 Greifswald, Germany
- Florentin Wörgötter
- Department of Computational Neuroscience, Georg-August-University, Friedrich-Hund-Platz 1, 37077 Göttingen, Germany

37
Cao Y, Vassantachart A, Ye JC, Yu C, Ruan D, Sheng K, Lao Y, Shen ZL, Balik S, Bian S, Zada G, Shiu A, Chang EL, Yang W. Automatic detection and segmentation of multiple brain metastases on magnetic resonance image using asymmetric UNet architecture. Phys Med Biol 2021; 66:015003. [PMID: 33186927 DOI: 10.1088/1361-6560/abca53] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Detection of brain metastases is a paramount task in cancer management, due both to the number of high-risk patients and to the difficulty of achieving consistent detection. In this study, we aim to improve the accuracy of automated brain metastasis (BM) detection using a novel asymmetric UNet (asym-UNet) architecture. An end-to-end asymmetric 3D-UNet architecture, with two down-sampling arms and one up-sampling arm, was constructed to capture the imaging features. The two down-sampling arms were trained using two different kernels (3 × 3 × 3 and 1 × 1 × 3, respectively), with the 1 × 1 × 3 kernel dominating the learning. As a comparison, vanilla single 3D UNets were trained with different kernels and evaluated on the same datasets. Voxel-based Dice similarity coefficient (DSCv), sensitivity (Sv), precision (Pv), BM-based sensitivity (SBM), and false detection rate (FBM) were used to evaluate model performance. Contrast-enhanced T1 MR images from 195 patients with a total of 1034 BMs were collected from our institutional stereotactic radiosurgery database. The patient cohort was split into training (160 patients, 809 lesions), validation (20 patients, 136 lesions), and testing (15 patients, 89 lesions) datasets. The lesions in the testing dataset were further divided into two subgroups based on diameter (small S = 1-10 mm, large L = 11-26 mm), with 72 and 17 BMs in the S and L subgroups, respectively. Among all trained networks, asym-UNet achieved the highest DSCv of 0.84 and the lowest FBM of 0.24. Although the vanilla 3D-UNet with a single 1 × 1 × 3 kernel achieved the highest sensitivities for the S group, it resulted in the lowest precision and highest false detection rate. Asym-UNet was shown to balance sensitivity and false detection rate while keeping segmentation accuracy high.
The novel asym-UNet segmentation network showed overall competitive segmentation performance and a more pronounced improvement on hard-to-detect small BMs compared with the vanilla single 3D UNet.
Affiliation(s)
- Yufeng Cao
- Department of Radiation Oncology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States of America

38
Gou S, Tong N, Qi S, Yang S, Chin R, Sheng K. Self-channel-and-spatial-attention neural network for automated multi-organ segmentation on head and neck CT images. Phys Med Biol 2020; 65:245034. [PMID: 32097892 DOI: 10.1088/1361-6560/ab79c3] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Accurate segmentation of organs at risk (OARs) is necessary for adaptive head and neck (H&N) cancer treatment planning, but manual delineation is tedious, slow, and inconsistent. A self-channel-and-spatial-attention neural network (SCSA-Net) is developed for H&N OAR segmentation on CT images. To simultaneously ease training and improve segmentation performance, the proposed SCSA-Net utilizes the self-attention ability of the network. Spatial and channel-wise attention learning mechanisms are both employed to adaptively force the network to emphasize meaningful features and weaken irrelevant features. The proposed network was first evaluated on a public dataset of 48 patients, then on a separate serial CT dataset of ten patients who received weekly diagnostic fan-beam CT scans. On the second dataset, the accuracy of using SCSA-Net to track parotid and submandibular gland volume changes during radiotherapy treatment was quantified. The Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average surface distance (ASD), and 95% maximum surface distance (95SD) were calculated on the brainstem, optic chiasm, optic nerves, mandible, parotid glands, and submandibular glands to evaluate the proposed SCSA-Net. SCSA-Net consistently outperforms the state-of-the-art methods on the public dataset. Specifically, compared with Res-Net and with SE-Net, which is constructed from residual blocks equipped with squeeze-and-excitation blocks, SCSA-Net improves the DSC of the optic nerves by 0.06 and 0.03, and of the submandibular glands by 0.05 and 0.04, respectively. Moreover, the proposed method achieves statistically significant DSC improvements on all nine OARs over Res-Net and on eight of nine OARs over SE-Net.
The trained network achieved good segmentation results on the serial dataset, and the results improved further after fine-tuning the model on the simulation CT images. For the parotid and submandibular glands, the volume changes of individual patients are highly consistent between automated and manual segmentation (Pearson's correlation 0.97-0.99). The proposed SCSA-Net is computationally efficient (~2 s per CT).
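The channel-then-spatial gating idea behind such attention blocks can be sketched minimally as follows. This is not the published SCSA-Net block; the weight arrays stand in, hypothetically, for learned parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(feat, wc, ws):
    """Apply a channel-wise gate, then a spatial gate, to a (C, H, W) feature map.

    wc: (C, C) channel-gate weights; ws: (C,) weights collapsing channels into
    a spatial gate. Both are hypothetical placeholders for learned parameters.
    """
    # Channel gate: squeeze by global average pooling, excite, reweight channels.
    squeeze = feat.mean(axis=(1, 2))                 # (C,)
    channel_gate = sigmoid(wc @ squeeze)             # (C,) values in (0, 1)
    gated = feat * channel_gate[:, None, None]
    # Spatial gate: 1x1 projection across channels, then sigmoid per location.
    spatial_gate = sigmoid(np.tensordot(ws, gated, axes=1))  # (H, W)
    return gated * spatial_gate[None]

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
out = channel_spatial_attention(feat, np.eye(4), np.ones(4))
print(out.shape)   # prints: (4, 8, 8)
```

Because both gates lie in (0, 1), the block can only attenuate features, never amplify them; the network learns where attenuation should be mild (meaningful features) and where it should be strong (irrelevant ones).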
Affiliation(s)
- Shuiping Gou
- Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi 710071, People's Republic of China

39
Cao M, Stiehl B, Yu VY, Sheng K, Kishan AU, Chin RK, Yang Y, Ruan D. Analysis of Geometric Performance and Dosimetric Impact of Using Automatic Contour Segmentation for Radiotherapy Planning. Front Oncol 2020; 10:1762. [PMID: 33102206 PMCID: PMC7546883 DOI: 10.3389/fonc.2020.01762] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Accepted: 08/06/2020] [Indexed: 11/13/2022] Open
Abstract
Purpose: To analyze the geometric discrepancy and dosimetric impact of using contours generated by auto-segmentation (AS) against manually segmented (MS) clinical contours. Methods: A 48-subject prostate atlas was created, and another 15 patients were used for testing. Contours were generated using a commercial atlas-based segmentation tool and compared to their clinical MS counterparts. Geometric correlation was evaluated using the Dice similarity coefficient (DSC) and Hausdorff distance (HD). Dosimetric relevance was evaluated for a subset of patients by assessing the DVH differences derived by optimizing plan dose using the AS and MS contours, respectively, and evaluating with respect to each. A paired t-test was employed for statistical comparison. The discrepancy in plan quality with respect to clinical dosimetric endpoints was evaluated. The analysis was repeated for head and neck (HN) with a 31-subject atlas and 15 test cases. Results: Dice agreement between AS and MS differed significantly across structures: from the femoral heads (L: 0.92/R: 0.91) to the seminal vesicles (0.38) in the prostate cohort, and from the brain (0.98) to the chiasm (0.36) in the HN group. Despite the geometric disagreement, the paired t-tests showed no statistical evidence of systematic differences in dosimetric plan quality between the AS and MS approaches in the prostate cohort. In HN cases, statistically significant differences in dosimetric endpoints were observed in structures with small volumes or elongated shapes, such as the cord (p = 0.01) and esophagus (p = 0.04). The largest absolute dose difference, 11 Gy, was seen in the mean pharynx dose. Conclusion: Varying AS performance among structures suggests a differential approach: using AS on a subset of structures and focusing MS on the rest.
The discrepancy between geometric and dosimetric-endpoint-driven evaluation also indicates the clinical utility of AS contours in optimizing and evaluating plan quality despite suboptimal geometric accuracy.
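The Hausdorff distance used in this and neighboring studies, including the percentile variant (e.g., the 95% HD), can be sketched directly on contour point sets. This toy version assumes small point clouds and is not the evaluation code used in the study:

```python
import numpy as np

def hausdorff(a_pts, b_pts, pct=100):
    """Symmetric (percentile) Hausdorff distance between two point sets.

    a_pts, b_pts: (N, D) and (M, D) arrays of contour points.
    pct=100 gives the classic Hausdorff distance; pct=95 gives the robust
    95% variant commonly reported in segmentation papers.
    """
    # Pairwise Euclidean distances between every a-point and every b-point.
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    d_ab = np.percentile(d.min(axis=1), pct)  # each a-point to its nearest b-point
    d_ba = np.percentile(d.min(axis=0), pct)  # each b-point to its nearest a-point
    return max(d_ab, d_ba)

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [1.0, 3.0]])
print(hausdorff(a, b))   # prints: 3.0
```

The percentile cutoff is what makes the 95% HD insensitive to a handful of outlier contour points, which is why it is preferred over the maximum-distance form in most of the papers listed here.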
Affiliation(s)
- Minsong Cao
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Bradley Stiehl
- Physics & Biology in Medicine Graduate Program, University of California, Los Angeles, Los Angeles, CA, United States
- Victoria Y Yu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Ke Sheng
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Amar U Kishan
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Robert K Chin
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Yingli Yang
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Dan Ruan
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States

40
Cardenas CE, Beadle BM, Garden AS, Skinner HD, Yang J, Rhee DJ, McCarroll RE, Netherton TJ, Gay SS, Zhang L, Court LE. Generating High-Quality Lymph Node Clinical Target Volumes for Head and Neck Cancer Radiation Therapy Using a Fully Automated Deep Learning-Based Approach. Int J Radiat Oncol Biol Phys 2020; 109:801-812. [PMID: 33068690 PMCID: PMC9472456 DOI: 10.1016/j.ijrobp.2020.10.005] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 08/12/2020] [Accepted: 10/06/2020] [Indexed: 12/17/2022]
Abstract
PURPOSE To develop a deep learning model that generates consistent, high-quality lymph node clinical target volume (CTV) contours for head and neck cancer (HNC) patients, as an integral part of a fully automated radiation treatment planning workflow. METHODS AND MATERIALS Computed tomography (CT) scans from 71 HNC patients were retrospectively collected and split into training (n = 51), cross-validation (n = 10), and test (n = 10) data sets. All had target volume delineations covering lymph node levels Ia through V (Ia-V), Ib through V (Ib-V), II through IV (II-IV), and retropharyngeal (RP) nodes, previously approved by a radiation oncologist specializing in HNC. Volumes of interest (VOIs) about the nodal levels were automatically identified using computer vision techniques. The VOI (cropped CT image) and approved contours were used to train a U-Net autosegmentation model. Each lymph node level was trained independently, with model parameters optimized by assessing performance on the cross-validation data set. Once optimal model parameters were identified, overlap and distance metrics were calculated between ground truth and autosegmentations on the test set. Lastly, this final model was applied to 32 additional patient scans (not included in the original 71 cases), and the autosegmentations were visually rated by 3 radiation oncologists as "clinically acceptable without requiring edits," "requiring minor edits," or "requiring major edits." RESULTS Comparing ground truth to autosegmentations on the test data set, median Dice similarity coefficients were 0.90, 0.90, 0.89, and 0.81, and median mean surface distance values were 1.0 mm, 1.0 mm, 1.1 mm, and 1.3 mm for node levels Ia-V, Ib-V, II-IV, and RP nodes, respectively. Qualitative scoring varied among physicians. Overall, 99% of autosegmented target volumes were scored as either clinically acceptable or requiring minor edits (ie, stylistic recommendations, <2 minutes).
CONCLUSIONS We developed a fully automated artificial intelligence approach to autodelineate nodal CTVs for patients with intact HNC. Most autosegmentations were found to be clinically acceptable after qualitative review when considering recommended stylistic edits. This promising work automatically delineates nodal CTVs in a robust and consistent manner; this approach can be implemented in ongoing efforts for fully automated radiation treatment planning.
Collapse
Affiliation(s)
- Carlos E Cardenas, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
- Beth M Beadle, Department of Radiation Oncology, Stanford University, Palo Alto, California
- Adam S Garden, Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas
- Heath D Skinner, Department of Radiation Oncology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Jinzhong Yang, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
- Dong Joo Rhee, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
- Rachel E McCarroll, Department of Radiation Oncology, University of Maryland Medical System, Baltimore, Maryland
- Tucker J Netherton, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
- Skylar S Gay, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
- Lifei Zhang, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
- Laurence E Court, Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
41
Liu Y, Gu X. Evaluation and comparison of global-feature-based and local-feature-based segmentation algorithms in intracranial visual pathway delineation. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1766-1769. [PMID: 33018340 DOI: 10.1109/embc44109.2020.9175937] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The intracranial visual pathway is responsible for the effective transmission of visual signals to the brain. It is not only a target organ of disease but also an organ at risk in radiotherapy, so its delineation plays an important role in both diagnosis and treatment planning. Traditional manual segmentation is time- and labor-consuming and suffers from intra- and inter-observer variability. To overcome these problems, state-of-the-art segmentation models have been designed and various features extracted and utilized, but their effectiveness on intracranial visual pathway delineation is hard to compare, because these methods were evaluated on different datasets with different training tricks. This study investigated the contributions of global and local features in delineating the intracranial visual pathway from MRI scans. Two typical segmentation models, 3D UNet and DeepMedic, were chosen because they focus on global features and local features, respectively. We constructed a hybrid model by serially connecting the two models to evaluate the performance of combined global and local features. Validation results showed that the hybrid model outperformed the individual ones, demonstrating that multi-scale feature fusion is important for improving segmentation performance.
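The serial connection described above can be sketched abstractly. The stand-in models below are hypothetical placeholders (the actual networks are 3D UNet and DeepMedic); the sketch only illustrates the wiring, where the coarse output of the global-feature model becomes an extra input channel for the local-feature model:

```python
# Minimal sketch (hypothetical stand-in models, not the paper's code) of
# serially connecting a global-feature model and a local-feature model.

def global_model(image):
    # Stand-in for the global-feature network: a coarse foreground map.
    return [[1.0 if v > 0.5 else 0.0 for v in row] for row in image]

def local_model(channels):
    # Stand-in for the local-feature network: here, it simply averages the
    # image channel with the coarse-map channel.
    image, coarse = channels
    return [
        [(iv + cv) / 2.0 for iv, cv in zip(irow, crow)]
        for irow, crow in zip(image, coarse)
    ]

def serial_hybrid(image):
    coarse = global_model(image)         # stage 1: global context
    return local_model([image, coarse])  # stage 2: local refinement

img = [[0.2, 0.9], [0.7, 0.1]]
out = serial_hybrid(img)
```

The design point is that stage 2 sees both the raw intensities and stage 1's spatial prior, which is one simple way multi-scale feature fusion can be realized.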
42
Sultana S, Robinson A, Song DY, Lee J. Automatic multi-organ segmentation in computed tomography images using hierarchical convolutional neural network. J Med Imaging (Bellingham) 2020; 7:055001. [PMID: 33102622 DOI: 10.1117/1.jmi.7.5.055001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Accepted: 09/28/2020] [Indexed: 01/17/2023]
Abstract
Purpose: Accurate segmentation of treatment planning computed tomography (CT) images is important for radiation therapy (RT) planning. However, low soft tissue contrast in CT makes the segmentation task challenging. We propose a two-step hierarchical convolutional neural network (CNN) segmentation strategy to automatically segment multiple organs from CT. Approach: The first step generates a coarse segmentation from which organ-specific regions of interest (ROIs) are produced. The second step produces a detailed segmentation of each organ. The ROIs are generated using UNet, which automatically identifies the area of each organ and improves computational efficiency by eliminating irrelevant background information. For the fine segmentation step, we combined UNet with a generative adversarial network. The generator is designed as a UNet that is trained to segment organ structures, and the discriminator is a fully convolutional network that distinguishes whether the segmentation is real or generator-predicted, thus improving the segmentation accuracy. We validated the proposed method on male pelvic and head and neck (H&N) CTs used for RT planning of prostate and H&N cancer, respectively. For the pelvic structure segmentation, the network was trained to segment the prostate, bladder, and rectum. For H&N, the network was trained to segment the parotid glands (PG) and submandibular glands (SMG). Results: The trained segmentation networks were tested on 15 pelvic and 20 H&N independent datasets. The H&N segmentation network was also tested on a public domain dataset (N = 38) and showed similar performance. The average Dice similarity coefficients (mean ± SD) of the pelvic structures are 0.91 ± 0.05 (prostate), 0.95 ± 0.06 (bladder), and 0.90 ± 0.09 (rectum), and those of the H&N structures are 0.87 ± 0.04 (PG) and 0.86 ± 0.05 (SMG). The segmentation of each CT takes <10 s on average.
Conclusions: Experimental results demonstrate that the proposed method can produce fast, accurate, and reproducible segmentation of multiple organs of different sizes and shapes and show its potential to be applicable to different disease sites.
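The coarse-to-fine idea in the first step can be illustrated with a hypothetical helper (a sketch, not the paper's code): derive an organ-specific ROI bounding box from a coarse binary mask, pad it with a safety margin, and crop the image so that the fine network only sees relevant voxels.

```python
# Minimal 2D sketch of ROI extraction from a coarse segmentation mask.
# roi_bounding_box and crop are hypothetical helpers for illustration.

def roi_bounding_box(mask, margin=1):
    """Return (row0, row1, col0, col1) enclosing all foreground pixels,
    expanded by `margin` and clamped to the image bounds."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    r0 = max(min(rows) - margin, 0)
    r1 = min(max(rows) + margin, len(mask) - 1)
    c0 = max(min(cols) - margin, 0)
    c1 = min(max(cols) + margin, len(mask[0]) - 1)
    return r0, r1, c0, c1

def crop(image, box):
    r0, r1, c0, c1 = box
    return [row[c0:c1 + 1] for row in image[r0:r1 + 1]]

coarse_mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(roi_bounding_box(coarse_mask, margin=0))  # (1, 2, 1, 2)
```

Cropping to the ROI is what gives the reported efficiency gain: the fine UNet+GAN stage runs on a much smaller volume and is not distracted by background anatomy.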
Affiliation(s)
- Sharmin Sultana, Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Adam Robinson, Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Daniel Y Song, Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Junghoon Lee, Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
43
Tseng M, Ho F, Leong YH, Wong LC, Tham IW, Cheo T, Lee AW. Emerging radiotherapy technologies and trends in nasopharyngeal cancer. Cancer Commun (Lond) 2020; 40:395-405. [PMID: 32745354 PMCID: PMC7494066 DOI: 10.1002/cac2.12082] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2020] [Accepted: 07/14/2020] [Indexed: 12/19/2022] Open
Abstract
Technology has always driven advances in radiotherapy treatment. In this review, we describe the main technological advances in radiotherapy over the past decades for the treatment of nasopharyngeal cancer (NPC) and highlight some of the pressing issues and challenges that remain. We aim to identify emerging trends in radiation medicine. These include advances in personalized medicine and advanced imaging modalities, standardization of planning and delineation, assessment of treatment response and adaptive re-planning, impact of particle therapy, and role of artificial intelligence or automation in clinical care. In conclusion, we expect significant improvement in the therapeutic ratio of radiotherapy treatment for NPC over the next decade.
Affiliation(s)
- Michelle Tseng, Radiation Oncology Centre, Mt Elizabeth Novena Hospital, Singapore, 329563, Singapore
- Francis Ho, Radiation Oncology Centre, Mt Elizabeth Novena Hospital, Singapore, 329563, Singapore
- Yiat Horng Leong, Radiation Oncology Centre, Mt Elizabeth Novena Hospital, Singapore, 329563, Singapore
- Lea Choung Wong, Radiation Oncology Centre, Mt Elizabeth Novena Hospital, Singapore, 329563, Singapore
- Ivan Wk Tham, Radiation Oncology Centre, Mt Elizabeth Novena Hospital, Singapore, 329563, Singapore
- Timothy Cheo, Radiation Oncology Centre, Mt Elizabeth Novena Hospital, Singapore, 329563, Singapore
- Anne Wm Lee, Department of Clinical Oncology, the University of Hong Kong-Shenzhen Hospital, the University of Hong Kong, Hong Kong, 999077, P. R. China
44
Sheng K. Artificial intelligence in radiotherapy: a technological review. Front Med 2020; 14:431-449. [PMID: 32728877 DOI: 10.1007/s11684-020-0761-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2019] [Accepted: 02/14/2020] [Indexed: 12/19/2022]
Abstract
Radiation therapy (RT) is widely used to treat cancer. Technological advances in RT have occurred in the past 30 years. These advances, such as three-dimensional image guidance, intensity modulation, and robotics, created challenges and opportunities for the next breakthrough, in which artificial intelligence (AI) will possibly play important roles. AI will replace certain repetitive and labor-intensive tasks and improve the accuracy and consistency of others, particularly those with increased complexity because of technological advances. The improvement in efficiency and consistency is important to manage the increasing cancer patient burden on society. Furthermore, AI may provide new functionalities that facilitate satisfactory RT. These functionalities include superior images for real-time intervention and adaptive and personalized RT. AI may effectively synthesize and analyze big data for such purposes. This review describes the RT workflow and identifies areas, including imaging, treatment planning, quality assurance, and outcome prediction, that benefit from AI. This review primarily focuses on deep-learning techniques, although conventional machine-learning techniques are also mentioned.
Affiliation(s)
- Ke Sheng, Department of Radiation Oncology, University of California, Los Angeles, CA, 90095, USA
45
Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020; 47:e929-e950. [PMID: 32510603 DOI: 10.1002/mp.14320] [Citation(s) in RCA: 71] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2019] [Revised: 05/27/2020] [Accepted: 05/29/2020] [Indexed: 02/06/2023] Open
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), which requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra-/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has experienced increased interest in the past decade, with new emerging trends that shifted the field of H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provide critical discussions and recommendations from various perspectives:
- Image modality: both computed tomography and magnetic resonance imaging are being exploited, but the potential of the latter should be explored more in the future.
- OARs: the spinal cord, brainstem, and major salivary glands are the most studied OARs; additional experiments should be conducted for several less studied soft tissue structures.
- Image databases: several image databases with corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions.
- Methodology: current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated.
- Ground truth: delineation guidelines should be followed, and participation of multiple experts from multiple institutions is recommended.
- Performance metrics: the Dice coefficient, as the standard volumetric overlap metric, should be accompanied by at least one distance metric and combined with clinical acceptability scores and risk assessments.
- Segmentation performance: the best performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.
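The recommendation to pair the Dice coefficient with a distance metric can be illustrated with a simple symmetric mean surface distance over contour point sets. This is a hypothetical pure-Python helper, much simpler than clinical implementations, which typically operate on voxelized surfaces or meshes:

```python
# Minimal sketch of a symmetric mean surface distance between two contours,
# each given as a list of (x, y) boundary points.
import math

def mean_surface_distance(a, b):
    """Average nearest-point distance from a to b and from b to a."""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return (one_way(a, b) + one_way(b, a)) / 2.0

square = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(0.5, 0), (0.5, 1), (1.5, 0), (1.5, 1)]
print(mean_surface_distance(square, shifted))  # 0.5
```

Unlike a volumetric overlap score, this metric directly penalizes boundary displacement, which is why the review recommends reporting at least one such distance metric alongside the Dice coefficient.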
Affiliation(s)
- Tomaž Vrtovec, Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Domen Močnik, Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Primož Strojan, Institute of Oncology Ljubljana, Zaloška cesta 2, Ljubljana, SI-1000, Slovenia
- Franjo Pernuš, Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Bulat Ibragimov, Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia; Department of Computer Science, University of Copenhagen, Universitetsparken 1, Copenhagen, D-2100, Denmark