1
Asif RN, Naseem MT, Ahmad M, Mazhar T, Khan MA, Khan MA, Al-Rasheed A, Hamam H. Brain tumor detection empowered with ensemble deep learning approaches from MRI scan images. Sci Rep 2025;15:15002. PMID: 40301625; PMCID: PMC12041211; DOI: 10.1038/s41598-025-99576-7.
Abstract
Brain tumor detection is essential for early diagnosis and successful treatment, both of which can significantly enhance patient outcomes. This study investigates an artificial intelligence (AI) technique for evaluating brain MRI scans and categorizing them into four classes: pituitary tumor, meningioma, glioma, and normal. Although AI has previously been used to detect brain tumors, existing techniques still have limitations in accuracy and reliability. To address this, our study presents an AI approach that combines two distinct deep learning models; together they achieve higher accuracy and more reliable results than either model alone. The system is trained on MRI scan images and assessed with key performance metrics, including accuracy, precision, and reliability. Our results show that this combined approach outperforms the individual models, particularly in distinguishing different types of brain tumors. Specifically, the InceptionV3 + Xception combination reached 98.50% training accuracy and 98.30% validation accuracy. These results support the use of advanced AI techniques in medical imaging and indicate that combining multiple AI models can enhance brain tumor detection.
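The abstract pairs InceptionV3 and Xception as fused feature extractors. Below is a minimal Keras sketch of that general idea, not the authors' code: the input size, frozen backbones, global-average pooling, dropout rate, and four-class head are all assumptions for illustration.

```python
# Minimal sketch (not the authors' implementation): fuse InceptionV3 and
# Xception features for four-class brain MRI classification.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_ensemble(input_shape=(299, 299, 3), num_classes=4):
    inputs = layers.Input(shape=input_shape)  # images assumed scaled to [-1, 1]
    # Two ImageNet-pretrained backbones used as frozen feature extractors,
    # each with global average pooling so they output a feature vector.
    inception = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", input_shape=input_shape, pooling="avg")
    xception = tf.keras.applications.Xception(
        include_top=False, weights="imagenet", input_shape=input_shape, pooling="avg")
    inception.trainable = False
    xception.trainable = False
    # Concatenate the two feature vectors and classify.
    fused = layers.Concatenate()([inception(inputs), xception(inputs)])
    fused = layers.Dropout(0.3)(fused)
    outputs = layers.Dense(num_classes, activation="softmax")(fused)
    model = Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_ensemble()
model.summary()
```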
Affiliation(s)
- Rizwana Naz Asif
- School of Computer Science, National College of Business Administration and Economics, Lahore, 54000, Pakistan
- Muhammad Tahir Naseem
- Department of Electronic Engineering, Yeungnam University, Gyeongsan-si, 38541, Republic of Korea
- Munir Ahmad
- University College, Korea University, Seoul, 02841, Republic of Korea
- Tehseen Mazhar
- School of Computer Science, National College of Business Administration and Economics, Lahore, 54000, Pakistan
- Department of Computer Science, School Education Department, Government of Punjab, Layyah, 31200, Pakistan
- Muhammad Adnan Khan
- Department of Software, Faculty of Artificial Intelligence and Software, Gachon University, Seongnam-si, 13557, Republic of Korea
- Muhammad Amir Khan
- School of Computing Sciences, College of Computing, Informatics and Mathematics, Universiti Teknologi MARA, Shah Alam, 40450, Selangor, Malaysia
- Amal Al-Rasheed
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
- Habib Hamam
- Faculty of Engineering, Université de Moncton, Moncton, NB E1A 3E9, Canada
- School of Electrical Engineering, University of Johannesburg, Johannesburg, 2006, South Africa
- International Institute of Technology and Management (IITG), Av. Grandes Ecoles, Libreville BP 1989, Gabon
- Bridges for Academic Excellence, Spectrum, Tunis, Center-ville, Tunisia
2
Cai Z, Zhong Z, Lin H, Huang B, Xu Z, Huang B, Deng W, Wu Q, Lei K, Lyu J, Ye Y, Chen H, Zhang J. Self-supervised learning on dual-sequence magnetic resonance imaging for automatic segmentation of nasopharyngeal carcinoma. Comput Med Imaging Graph 2024;118:102471. PMID: 39608271; DOI: 10.1016/j.compmedimag.2024.102471.
Abstract
Automating the segmentation of nasopharyngeal carcinoma (NPC) is crucial for therapeutic procedures but is challenging because extensively annotated datasets are difficult to amass. Although previous studies have applied self-supervised learning to exploit unlabeled data and improve segmentation performance, these methods often overlooked the benefits of dual-sequence magnetic resonance imaging (MRI). In the present study, we combined self-supervised learning with a saliency transformation module using unlabeled dual-sequence MRI for accurate NPC segmentation. Data from 44 labeled and 72 unlabeled patients were collected to develop and evaluate our network. Our network achieved a mean Dice similarity coefficient (DSC) of 0.77, which is consistent with a previous study that relied on a training set of 4,100 annotated cases. The results further showed that our approach required only minimal adjustment, primarily a change of less than 20% in DSC, to meet clinical standards. By enhancing the automatic segmentation of NPC, our method alleviates the annotation burden on oncologists, curbs subjectivity, and ensures reliable NPC delineation.
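The Dice similarity coefficient (DSC) reported here, and throughout this listing, is the standard overlap metric for comparing a predicted mask with a manual reference. A minimal NumPy sketch of how it is typically computed on binary masks (illustrative, not the authors' evaluation code):

```python
# Minimal sketch: Dice similarity coefficient (DSC) between two binary masks,
# as typically used to score predicted vs. manual tumor segmentations.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A intersect B| / (|A| + |B|) for binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example on a 2D slice; real use would pass full 3D tumor masks.
pred = np.zeros((64, 64), dtype=np.uint8); pred[20:40, 20:40] = 1
truth = np.zeros((64, 64), dtype=np.uint8); truth[22:42, 22:42] = 1
print(f"DSC = {dice_coefficient(pred, truth):.3f}")
```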
Affiliation(s)
- Zongyou Cai
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Zhangnan Zhong
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Haiwei Lin
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Bingsheng Huang
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Ziyue Xu
- NVIDIA Corporation, Bethesda, MD, USA
- Bin Huang
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Wei Deng
- Department of Radiology, Panyu Central Hospital, Guangzhou, China; Medical Imaging Institute of Panyu, Guangzhou, China
- Qiting Wu
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Kaixin Lei
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Jiegeng Lyu
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Yufeng Ye
- Department of Radiology, Panyu Central Hospital, Guangzhou, China; Medical Imaging Institute of Panyu, Guangzhou, China
- Hanwei Chen
- Panyu Health Management Center (Panyu Rehabilitation Hospital), Guangzhou, China
- Jian Zhang
- Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, China; Shenzhen University Medical School, Shenzhen University, Shenzhen, 518055, Guangdong, China
3
Wang CK, Wang TW, Yang YX, Wu YT. Deep Learning for Nasopharyngeal Carcinoma Segmentation in Magnetic Resonance Imaging: A Systematic Review and Meta-Analysis. Bioengineering (Basel) 2024;11:504. PMID: 38790370; PMCID: PMC11118180; DOI: 10.3390/bioengineering11050504.
Abstract
Nasopharyngeal carcinoma (NPC) is a significant health challenge that is particularly prevalent in Southeast Asia and North Africa. MRI is the preferred diagnostic tool for NPC due to its superior soft tissue contrast. The accurate segmentation of NPC in MRI is crucial for effective treatment planning and prognosis. We conducted a search across PubMed, Embase, and Web of Science from inception up to 20 March 2024, adhering to the PRISMA 2020 guidelines. Eligibility criteria focused on studies utilizing deep learning (DL) for NPC segmentation in adults via MRI. Data extraction and meta-analysis were conducted to evaluate the performance of DL models, primarily measured by Dice scores. We assessed methodological quality using the CLAIM and QUADAS-2 tools, and statistical analysis was performed using random-effects models. The analysis incorporated 17 studies and yielded a pooled Dice score of 78% for DL models (95% confidence interval: 74% to 83%), indicating moderate to high segmentation accuracy. Significant heterogeneity and publication bias were observed among the included studies. Our findings show that DL models, particularly convolutional neural networks, offer moderately accurate NPC segmentation in MRI. This advancement holds potential for enhancing NPC management, and further research is needed toward integration into clinical practice.
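The pooled Dice score and confidence interval come from a random-effects model. As a worked illustration of that kind of pooling, the sketch below implements the classic DerSimonian-Laird estimator; the study-level Dice values and variances are placeholders, not the data from this meta-analysis.

```python
# Illustrative sketch of DerSimonian-Laird random-effects pooling, the kind of
# model used to pool per-study Dice scores. All numbers below are placeholders.
import numpy as np

def random_effects_pool(effects, variances):
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q
    k = len(effects)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)           # between-study variance
    w_star = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

dice = [0.74, 0.81, 0.79, 0.72, 0.83]            # hypothetical study-level Dice scores
var = [0.0004, 0.0009, 0.0006, 0.0012, 0.0008]   # hypothetical within-study variances
pooled, ci, tau2 = random_effects_pool(dice, var)
print(f"pooled Dice = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), tau^2 = {tau2:.5f}")
```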
Affiliation(s)
- Chih-Keng Wang
- School of Medicine, College of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Otolaryngology-Head and Neck Surgery, Taichung Veterans General Hospital, Taichung 407219, Taiwan
- Ting-Wei Wang
- School of Medicine, College of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan
- Ya-Xuan Yang
- Department of Otolaryngology-Head and Neck Surgery, Taichung Veterans General Hospital, Taichung 407219, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan
4
Krishnapriya S, Karuna Y. A deep learning model for the localization and extraction of brain tumors from MR images using YOLOv7 and grab cut algorithm. Front Oncol 2024;14:1347363. PMID: 38680854; PMCID: PMC11045991; DOI: 10.3389/fonc.2024.1347363.
Abstract
Introduction: Brain tumors are a common disease that affects millions of people worldwide. Considering the severity of brain tumors (BT), it is important to diagnose the disease in its early stages. With advancements in the diagnostic process, Magnetic Resonance Imaging (MRI) has been extensively used in disease detection. However, the accurate identification of BT is a complex task, and conventional techniques are not sufficiently robust to localize and extract tumors in MRI images. Therefore, in this study, we used a deep learning model combined with a segmentation algorithm to localize and extract tumors from MR images. Method: This paper presents a Deep Learning (DL)-based You Only Look Once (YOLOv7) model in combination with the GrabCut algorithm to extract the foreground of the tumor image and enhance the detection process. YOLOv7 is used to localize the tumor region, and the GrabCut algorithm is used to extract the tumor from the localized region. Results: The performance of the YOLOv7 model with and without the GrabCut algorithm is evaluated. The results show that the proposed approach outperforms other techniques, such as hybrid CNN-SVM, YOLOv5, and YOLOv6, in terms of accuracy, precision, recall, specificity, and F1 score. Discussion: Our results show that the proposed technique achieves a high Dice score between tumor-extracted images and ground-truth images. The findings show that the performance of the YOLOv7 model is improved by the inclusion of the GrabCut algorithm compared to the performance of the model without it.
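The detector-plus-GrabCut pipeline described here can be illustrated with OpenCV's built-in GrabCut: a bounding box (such as one from YOLOv7) seeds the foreground extraction. The sketch below uses a synthetic image and a hand-picked box as placeholders; it is an illustration of the technique, not the authors' pipeline.

```python
# Minimal sketch: refine a detector's bounding box with OpenCV GrabCut to
# extract the foreground (tumor) region. Image and box are synthetic placeholders.
import cv2
import numpy as np

# Synthetic stand-in for an MRI slice: dark background with a brighter blob.
image = np.full((256, 256, 3), 40, np.uint8)
cv2.circle(image, (140, 120), 35, (180, 180, 180), -1)

rect = (95, 75, 90, 90)                      # (x, y, w, h) box around the blob, e.g. from YOLOv7
mask = np.zeros(image.shape[:2], np.uint8)   # GrabCut working mask
bgd_model = np.zeros((1, 65), np.float64)    # background GMM state
fgd_model = np.zeros((1, 65), np.float64)    # foreground GMM state

cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked definite/probable foreground form the extracted region.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
tumor_only = image * fg[:, :, None]
print("foreground pixels:", int(fg.sum()))
```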
Affiliation(s)
- Yepuganti Karuna
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
5
Zeng Y, Zeng P, Shen S, Liang W, Li J, Zhao Z, Zhang K, Shen C. DCTR U-Net: automatic segmentation algorithm for medical images of nasopharyngeal cancer in the context of deep learning. Front Oncol 2023;13:1190075. PMID: 37546396; PMCID: PMC10402756; DOI: 10.3389/fonc.2023.1190075.
Abstract
Nasopharyngeal carcinoma (NPC) is a malignant tumor that occurs in the wall of the nasopharyngeal cavity and is prevalent in Southern China, Southeast Asia, North Africa, and the Middle East. According to studies, NPC is one of the most common malignant tumors in Hainan, China, and it has the highest incidence rate among otorhinolaryngological malignancies. We propose a new deep learning network model to improve the segmentation accuracy of the target region of nasopharyngeal cancer. Our model is based on the U-Net architecture, to which we add a Dilated Convolution Module, a Transformer Module, and a Residual Module. The new model effectively addresses the restricted receptive field of standard convolutions and achieves global and local multi-scale feature fusion. In our experiments, the proposed network was trained and validated using 10-fold cross-validation based on the records of 300 clinical patients. The results were evaluated using the Dice similarity coefficient (DSC) and the average symmetric surface distance (ASSD), which reached 0.852 and 0.544 mm, respectively. With the effective combination of the Dilated Convolution Module, Transformer Module, and Residual Module, we significantly improved the segmentation performance of the NPC target region.
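Dilated convolutions enlarge the receptive field without downsampling, and residual connections ease optimization; the sketch below shows a generic dilated residual block in PyTorch. Channel counts and dilation rates are assumptions for illustration, not the DCTR U-Net's exact configuration.

```python
# Illustrative PyTorch sketch of a residual block built from dilated
# convolutions, the kind of module combined with a U-Net encoder-decoder here.
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        # Dilated 3x3 convolutions enlarge the receptive field without pooling;
        # padding=dilation keeps the spatial size unchanged.
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + residual)   # residual (skip) connection

# Quick shape check on a dummy feature map.
block = DilatedResidualBlock(channels=64, dilation=2)
print(block(torch.randn(1, 64, 128, 128)).shape)   # torch.Size([1, 64, 128, 128])
```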
Affiliation(s)
- Yan Zeng
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Personnel Department, Hainan Medical University, Haikou, China
- PengHui Zeng
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- ShaoDong Shen
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Wei Liang
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Jun Li
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Zhe Zhao
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Kun Zhang
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- School of Information Science and Technology, Hainan Normal University, Haikou, China
- Chong Shen
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
6
Samee NA, Mahmoud NF, Atteia G, Abdallah HA, Alabdulhafith M, Al-Gaashani MSAM, Ahmad S, Muthanna MSA. Classification Framework for Medical Diagnosis of Brain Tumor with an Effective Hybrid Transfer Learning Model. Diagnostics (Basel) 2022;12:2541. PMID: 36292230; PMCID: PMC9600529; DOI: 10.3390/diagnostics12102541.
Abstract
Brain tumors (BTs) are deadly diseases that can strike people of every age, all over the world. Every year, thousands of people die of brain tumors. Brain-related diagnoses require caution, and even the smallest error in diagnosis can have negative repercussions. Medical errors in brain tumor diagnosis are common and frequently result in higher patient mortality rates. Magnetic resonance imaging (MRI) is widely used for tumor evaluation and detection. However, MRI generates large amounts of data, making manual segmentation difficult and laborious, which limits the use of accurate measurements in clinical practice. As a result, automated and dependable segmentation methods are required. Automatic segmentation and early detection of brain tumors are difficult tasks in computer vision due to their high spatial and structural variability, so early diagnosis and treatment are critical. Various traditional machine learning (ML) techniques have been used to detect different types of brain tumors; their main limitation is that features must be extracted manually. To address these issues, this paper presents a hybrid deep transfer learning (GN-AlexNet) model for BT tri-classification (pituitary, meningioma, and glioma). The proposed model combines the GoogleNet architecture with the AlexNet model by removing five layers of GoogleNet and adding ten layers of the AlexNet model, which extracts features and classifies them automatically. On the same CE-MRI dataset, the proposed model was compared with transfer learning techniques (VGG-16, AlexNet, SqueezeNet, ResNet, and MobileNet-V2) and ML/DL baselines. The proposed model outperformed the current methods in terms of accuracy and sensitivity (accuracy of 99.51% and sensitivity of 98.90%).
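The general pattern behind GN-AlexNet, keeping a pretrained backbone and replacing its classifier with a new head, can be sketched generically in PyTorch. This does not reproduce the paper's exact GoogleNet/AlexNet layer surgery; the head sizes and three-class output are assumptions for illustration.

```python
# Generic sketch of the transfer-learning pattern described here: keep a
# pretrained backbone as a feature extractor and attach a new classification
# head for three tumor classes. Not the authors' exact architecture.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()          # drop the original 1000-class classifier

head = nn.Sequential(                # new head for 3 tumor classes (assumed sizes)
    nn.Linear(1024, 256),
    nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(256, 3),
)

model = nn.Sequential(backbone, head)
logits = model(torch.randn(2, 3, 224, 224))   # dummy batch standing in for MRI slices
print(logits.shape)                           # torch.Size([2, 3])
```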
Affiliation(s)
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Noha F. Mahmoud
- Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ghada Atteia
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Hanaa A. Abdallah
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Maali Alabdulhafith
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Mehdhar S. A. M. Al-Gaashani
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Shahab Ahmad
- School of Economics & Management, Chongqing University of Post and Telecommunication, Chongqing 400065, China
- Mohammed Saleh Ali Muthanna
- Institute of Computer Technologies and Information Security, Southern Federal University, 347922 Taganrog, Russia
7
Abstract
Brain tumors (BTs) are spreading very rapidly across the world, and every year thousands of people die from deadly brain tumors. Therefore, accurate detection and classification are essential in the treatment of brain tumors. Numerous research techniques have been introduced for BT detection and classification based on traditional machine learning (ML) and deep learning (DL). Traditional ML classifiers require hand-crafted features, which is very time-consuming; DL, by contrast, is very robust in feature extraction and has recently been widely used for classification and detection. Therefore, in this work, we propose a hybrid deep learning model called DeepTumorNet for classifying three types of brain tumors (glioma, meningioma, and pituitary tumor) by adopting a basic convolutional neural network (CNN) architecture. The GoogLeNet architecture of the CNN model was used as a base. While developing the hybrid DeepTumorNet approach, the last 5 layers of GoogLeNet were removed and 15 new layers were added in their place. Furthermore, we utilized a leaky ReLU activation function in the feature map to increase the expressiveness of the model. The proposed model was tested on a publicly available research dataset and obtained 99.67% accuracy, 99.6% precision, 100% recall, and a 99.66% F1-score. The proposed methodology obtained the highest accuracy compared with state-of-the-art classification results obtained with AlexNet, ResNet50, DarkNet53, ShuffleNet, GoogLeNet, SqueezeNet, ResNet101, Exception Net, and MobileNetV2, showing its superiority over the existing models for BT classification from MRI images.
8
Tao G, Li H, Huang J, Han C, Chen J, Ruan G, Huang W, Hu Y, Dan T, Zhang B, He S, Liu L, Cai H. SeqSeg: A Sequential Method to Achieve Nasopharyngeal Carcinoma Segmentation Free from Background Dominance. Med Image Anal 2022;78:102381. DOI: 10.1016/j.media.2022.102381.
9
Schouten JPE, Noteboom S, Martens RM, Mes SW, Leemans CR, de Graaf P, Steenwijk MD. Automatic segmentation of head and neck primary tumors on MRI using a multi-view CNN. Cancer Imaging 2022;22:8. PMID: 35033188; PMCID: PMC8761340; DOI: 10.1186/s40644-022-00445-7.
Abstract
Background: Accurate segmentation of head and neck squamous cell cancer (HNSCC) is important for radiotherapy treatment planning. Manual segmentation of these tumors is time-consuming and vulnerable to inconsistencies between experts, especially in the complex head and neck region. The aim of this study is to introduce and evaluate an automatic segmentation pipeline for HNSCC using a multi-view CNN (MV-CNN). Methods: The dataset included 220 patients with primary HNSCC and availability of T1-weighted, STIR and optionally contrast-enhanced T1-weighted MR images together with a manual reference segmentation of the primary tumor by an expert. A T1-weighted standard space of the head and neck region was created, to which all MRI sequences were registered. An MV-CNN was trained with these three MRI sequences and evaluated in terms of volumetric and spatial performance in a cross-validation by measuring intra-class correlation (ICC) and Dice similarity coefficient (DSC), respectively. Results: The average manually segmented primary tumor volume was 11.8±6.70 cm3 with a median [IQR] of 13.9 [3.22-15.9] cm3. The tumor volume measured by the MV-CNN was 22.8±21.1 cm3 with a median [IQR] of 16.0 [8.24-31.1] cm3. Compared to the manual segmentations, the MV-CNN scored an average ICC of 0.64±0.06 and a DSC of 0.49±0.19. Improved segmentation performance was observed with increasing primary tumor volume: the smallest tumor volume group (<3 cm3) scored a DSC of 0.26±0.16 and the largest group (>15 cm3) a DSC of 0.63±0.11 (p<0.001). The automated segmentation tended to overestimate compared with the manual reference, both around the actual primary tumor and in false-positively classified healthy structures and pathologically enlarged lymph nodes. Conclusion: An automatic segmentation pipeline was evaluated for primary HNSCC on MRI. The MV-CNN produced reasonable segmentation results, especially on large tumors, but overestimation decreased overall performance. Further research should focus on decreasing false positives to make the method valuable for treatment planning.
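One common reading of "multi-view" in segmentation CNNs is to sample orthogonal 2D patches (axial, coronal, sagittal) around each voxel of a 3D volume. The sketch below illustrates that generic idea under that assumption; it is not necessarily the authors' exact design, and the patch size and dummy volume are placeholders.

```python
# Minimal sketch of the multi-view idea: for one voxel, extract axial, coronal
# and sagittal 2D patches from a 3D MRI volume so a CNN can see three
# orthogonal views of the same location.
import numpy as np

def orthogonal_patches(volume: np.ndarray, center: tuple, size: int = 32):
    """Return (axial, coronal, sagittal) patches of shape (size, size) around center."""
    z, y, x = center
    h = size // 2
    axial    = volume[z, y - h:y + h, x - h:x + h]   # fixed z: in-plane slice
    coronal  = volume[z - h:z + h, y, x - h:x + h]   # fixed y
    sagittal = volume[z - h:z + h, y - h:y + h, x]   # fixed x
    return axial, coronal, sagittal

volume = np.random.rand(80, 256, 256)                # dummy registered MR volume
views = orthogonal_patches(volume, center=(40, 128, 128), size=32)
print([v.shape for v in views])                      # [(32, 32), (32, 32), (32, 32)]
```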
Affiliation(s)
- Jens P E Schouten
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- Samantha Noteboom
- Department of Anatomy and Neurosciences, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- Roland M Martens
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- Steven W Mes
- Department of Otolaryngology - Head and Neck Surgery, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- C René Leemans
- Department of Otolaryngology - Head and Neck Surgery, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- Pim de Graaf
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- Martijn D Steenwijk
- Department of Anatomy and Neurosciences, Amsterdam UMC, Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- De Boelelaan 1108, 1081 HZ, Amsterdam, The Netherlands
10
Liu J, Shao H, Jiang Y, Deng X. CNN-Based Hidden-Layer Topological Structure Design and Optimization Methods for Image Classification. Neural Process Lett 2022. DOI: 10.1007/s11063-022-10742-8.
11
Wahid KA, Ahmed S, He R, van Dijk LV, Teuwen J, McDonald BA, Salama V, Mohamed AS, Salzillo T, Dede C, Taku N, Lai SY, Fuller CD, Naser MA. Evaluation of deep learning-based multiparametric MRI oropharyngeal primary tumor auto-segmentation and investigation of input channel effects: Results from a prospective imaging registry. Clin Transl Radiat Oncol 2022;32:6-14. PMID: 34765748; PMCID: PMC8570930; DOI: 10.1016/j.ctro.2021.10.003.
Abstract
BACKGROUND/PURPOSE Oropharyngeal cancer (OPC) primary gross tumor volume (GTVp) segmentation is crucial for radiotherapy. Multiparametric MRI (mpMRI) is increasingly used for OPC adaptive radiotherapy but relies on manual segmentation. Therefore, we constructed mpMRI deep learning (DL) OPC GTVp auto-segmentation models and determined the impact of input channels on segmentation performance. MATERIALS/METHODS GTVp ground truth segmentations were manually generated for 30 OPC patients from a clinical trial. We evaluated five mpMRI input channels (T2, T1, ADC, Ktrans, Ve). 3D Residual U-net models were developed and assessed using leave-one-out cross-validation. A baseline T2 model was compared to mpMRI models (T2 + T1, T2 + ADC, T2 + Ktrans, T2 + Ve, all five channels [ALL]) primarily using the Dice similarity coefficient (DSC). False-negative DSC (FND), false-positive DSC, sensitivity, positive predictive value, surface DSC, Hausdorff distance (HD), 95% HD, and mean surface distance were also assessed. For the best model, ground truth and DL-generated segmentations were compared through a blinded Turing test using three physician observers. RESULTS Models yielded mean DSCs from 0.71 ± 0.12 (ALL) to 0.73 ± 0.12 (T2 + T1). Compared to the T2 model, performance was significantly improved for FND, sensitivity, surface DSC, HD, and 95% HD for the T2 + T1 model (p < 0.05) and for FND for the T2 + Ve and ALL models (p < 0.05). No model demonstrated significant correlations between tumor size and DSC (p > 0.05). Most models demonstrated significant correlations between tumor size and HD or Surface DSC (p < 0.05), except those that included ADC or Ve as input channels (p > 0.05). On average, there were no significant differences between ground truth and DL-generated segmentations for all observers (p > 0.05). CONCLUSION DL using mpMRI provides reasonably accurate segmentations of OPC GTVp that may be comparable to ground truth segmentations generated by clinical experts. Incorporating additional mpMRI channels may increase the performance of FND, sensitivity, surface DSC, HD, and 95% HD, and improve model robustness to tumor size.
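The channel-combination experiments (T2 alone, T2 + T1, all five sequences) boil down to stacking co-registered volumes along a channel axis before feeding a 3D network. A minimal sketch of that input construction is shown below; shapes, normalization, and the dummy data are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch of the input-channel idea: stack co-registered mpMRI sequences
# along a channel axis so a 3D segmentation network sees them jointly.
import numpy as np

def build_input(sequences: dict, channels: tuple) -> np.ndarray:
    """Stack the requested sequences into a (C, D, H, W) array."""
    stacked = []
    for name in channels:
        vol = sequences[name].astype(np.float32)
        vol = (vol - vol.mean()) / (vol.std() + 1e-8)   # per-volume z-score (assumed)
        stacked.append(vol)
    return np.stack(stacked, axis=0)

# Dummy co-registered volumes standing in for T2, T1, ADC, Ktrans, Ve.
seqs = {name: np.random.rand(48, 128, 128) for name in ("T2", "T1", "ADC", "Ktrans", "Ve")}
x_t2    = build_input(seqs, ("T2",))                        # baseline single-channel model
x_t2_t1 = build_input(seqs, ("T2", "T1"))                   # best-performing pair in this study
x_all   = build_input(seqs, ("T2", "T1", "ADC", "Ktrans", "Ve"))
print(x_t2.shape, x_t2_t1.shape, x_all.shape)               # (1, ...), (2, ...), (5, ...)
```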
Affiliation(s)
- Kareem A. Wahid
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sara Ahmed
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Renjie He
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Lisanne V. van Dijk
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Centre, Nijmegen, The Netherlands
- Brigid A. McDonald
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Vivian Salama
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Abdallah S.R. Mohamed
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Travis Salzillo
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Cem Dede
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Nicolette Taku
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Stephen Y. Lai
- Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Clifton D. Fuller
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Mohamed A. Naser
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
12
Yu X, Jin F, Luo H, Lei Q, Wu Y. Gross Tumor Volume Segmentation for Stage III NSCLC Radiotherapy Using 3D ResSE-Unet. Technol Cancer Res Treat 2022;21:15330338221090847. PMID: 35443832; PMCID: PMC9047806; DOI: 10.1177/15330338221090847.
Abstract
INTRODUCTION Radiotherapy is one of the most effective ways to treat lung cancer. Accurately delineating the gross target volume is a key step in the radiotherapy process. In current clinical practice, the target area is still delineated manually by radiologists, which is time-consuming and laborious. However, these problems can be better solved by deep learning-assisted automatic segmentation methods. METHODS In this paper, a 3D CNN model named 3D ResSE-Unet is proposed for gross tumor volume segmentation for stage III NSCLC radiotherapy. This model is based on 3D Unet and combines residual connection and channel attention mechanisms. Three-dimensional convolution operation and encoding-decoding structure are used to mine three-dimensional spatial information of tumors from computed tomography data. Inspired by ResNet and SE-Net, residual connection and channel attention mechanisms are used to improve segmentation performance. A total of 214 patients with stage III NSCLC were collected selectively and 148 cases were randomly selected as the training set, 30 cases as the validation set, and 36 cases as the testing set. The segmentation performance of models was evaluated by the testing set. In addition, the segmentation results of different depths of 3D Unet were analyzed. And the performance of 3D ResSE-Unet was compared with 3D Unet, 3D Res-Unet, and 3D SE-Unet. RESULTS Compared with other depths, 3D Unet with four downsampling depths is more suitable for our work. Compared with 3D Unet, 3D Res-Unet, and 3D SE-Unet, 3D ResSE-Unet can obtain superior results. Its dice similarity coefficient, 95th-percentile of Hausdorff distance, and average surface distance can reach 0.7367, 21.39mm, 4.962mm, respectively. And the average time cost of 3D ResSE-Unet to segment a patient is only about 10s. CONCLUSION The method proposed in this study provides a new tool for GTV auto-segmentation and may be useful for lung cancer radiotherapy.
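The channel attention used in 3D ResSE-Unet follows the squeeze-and-excitation (SE) pattern: globally pool each channel, learn per-channel gates, and rescale the feature maps. Below is a minimal 3D SE block in PyTorch; the reduction ratio and shapes are assumptions for illustration, not the paper's settings.

```python
# Illustrative PyTorch sketch of a 3D squeeze-and-excitation (SE) block, the
# channel-attention mechanism combined with residual connections in this model.
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)            # squeeze: global context per channel
        self.fc = nn.Sequential(                       # excitation: channel-wise gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.pool(x).view(b, c)                    # (B, C)
        w = self.fc(w).view(b, c, 1, 1, 1)             # per-channel weights in [0, 1]
        return x * w                                   # recalibrate the feature maps

se = SEBlock3D(channels=32)
print(se(torch.randn(2, 32, 16, 64, 64)).shape)        # torch.Size([2, 32, 16, 64, 64])
```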
Affiliation(s)
- Xinhao Yu
- College of Bioengineering, Chongqing University, Chongqing, China; Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- Fu Jin
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- HuanLi Luo
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- Qianqian Lei
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- Yongzhong Wu
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
13
Liu Y, Yuan X, Jiang X, Wang P, Kou J, Wang H, Liu M. Dilated Adversarial U-Net Network for automatic gross tumor volume segmentation of nasopharyngeal carcinoma. Appl Soft Comput 2021. DOI: 10.1016/j.asoc.2021.107722.
14
Kazemimoghadam M, Chi W, Rahimi A, Kim N, Alluri P, Nwachukwu C, Lu W, Gu X. Saliency-guided deep learning network for automatic tumor bed volume delineation in post-operative breast irradiation. Phys Med Biol 2021;66. PMID: 34298539; PMCID: PMC8639319; DOI: 10.1088/1361-6560/ac176d.
Abstract
Efficient, reliable and reproducible target volume delineation is a key step in the effective planning of breast radiotherapy. However, post-operative breast target delineation is challenging as the contrast between the tumor bed volume (TBV) and normal breast tissue is relatively low in CT images. In this study, we propose to mimic the marker-guidance procedure in manual target delineation. We developed a saliency-based deep learning segmentation (SDL-Seg) algorithm for accurate TBV segmentation in post-operative breast irradiation. The SDL-Seg algorithm incorporates saliency information in the form of markers' location cues into a U-Net model. The design forces the model to encode the location-related features, which underscores regions with high saliency levels and suppresses low saliency regions. The saliency maps were generated by identifying markers on CT images. Markers' location were then converted to probability maps using a distance transformation coupled with a Gaussian filter. Subsequently, the CT images and the corresponding saliency maps formed a multi-channel input for the SDL-Seg network. Our in-house dataset was comprised of 145 prone CT images from 29 post-operative breast cancer patients, who received 5-fraction partial breast irradiation (PBI) regimen on GammaPod. The 29 patients were randomly split into training (19), validation (5) and test (5) sets. The performance of the proposed method was compared against basic U-Net. Our model achieved mean (standard deviation) of 76.4(±2.7) %, 6.76(±1.83) mm, and 1.9(±0.66) mm for Dice similarity coefficient, 95 percentile Hausdorff distance, and average symmetric surface distance respectively on the test set with computation time of below 11 seconds per one CT volume. SDL-Seg showed superior performance relative to basic U-Net for all the evaluation metrics while preserving low computation cost. The findings demonstrate that SDL-Seg is a promising approach for improving the efficiency and accuracy of the on-line treatment planning procedure of PBI, such as GammaPod based PBI.
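The abstract describes converting marker locations into probability maps with a distance transform and a Gaussian filter, which then form an extra input channel. The sketch below illustrates that step; the image size, marker positions, and decay scale are placeholder assumptions, not the authors' parameters.

```python
# Minimal sketch: build a saliency/probability map from marker locations using
# a Euclidean distance transform followed by Gaussian smoothing.
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

shape = (256, 256)
markers = [(100, 120), (140, 150), (110, 160)]     # hypothetical marker pixel coordinates

marker_mask = np.zeros(shape, dtype=bool)
for r, c in markers:
    marker_mask[r, c] = True

# Distance (in pixels) from every pixel to its nearest marker.
dist = distance_transform_edt(~marker_mask)

# Convert distance to a saliency value that decays away from the markers,
# then smooth it; this map would be stacked with the CT image as an input channel.
saliency = np.exp(-dist / 20.0)
saliency = gaussian_filter(saliency, sigma=3.0)
saliency /= saliency.max()

print(saliency.shape, float(saliency.min()), float(saliency.max()))
```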
Affiliation(s)
- Mahdieh Kazemimoghadam
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Weicheng Chi
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, People's Republic of China
- Asal Rahimi
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Nathan Kim
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Prasanna Alluri
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Chika Nwachukwu
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Xuejun Gu
- Stanford University, Palo Alto, CA, United States of America
15
Li S, Deng YQ, Zhu ZL, Hua HL, Tao ZZ. A Comprehensive Review on Radiomics and Deep Learning for Nasopharyngeal Carcinoma Imaging. Diagnostics (Basel) 2021;11:1523. PMID: 34573865; PMCID: PMC8465998; DOI: 10.3390/diagnostics11091523.
Abstract
Nasopharyngeal carcinoma (NPC) is one of the most common malignant tumours of the head and neck, and improving the efficiency of its diagnosis and treatment strategies is an important goal. With the development of the combination of artificial intelligence (AI) technology and medical imaging in recent years, an increasing number of studies have been conducted on image analysis of NPC using AI tools, especially radiomics and artificial neural network methods. In this review, we present a comprehensive overview of NPC imaging research based on radiomics and deep learning. These studies depict a promising prospect for the diagnosis and treatment of NPC. The deficiencies of the current studies and the potential of radiomics and deep learning for NPC imaging are discussed. We conclude that future research should establish a large-scale labelled dataset of NPC images and that studies focused on screening for NPC using AI are necessary.
Affiliation(s)
- Song Li
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China
- Yu-Qin Deng
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China
- Zhi-Ling Zhu
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital Affiliated to Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Hong-Li Hua
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China
- Ze-Zhang Tao
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China
16
Samarasinghe G, Jameson M, Vinod S, Field M, Dowling J, Sowmya A, Holloway L. Deep learning for segmentation in radiation therapy planning: a review. J Med Imaging Radiat Oncol 2021;65:578-595. PMID: 34313006; DOI: 10.1111/1754-9485.13286.
Abstract
Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect the outcomes of radiation therapy. The recent hype over deep neural networks has added many powerful auto-segmentation methods as variations of convolutional neural networks (CNN). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical sub sites considered was U-net, with the majority of deep learning segmentation articles focussed on head and neck normal tissue structures. The most common data sets were CT images from an inhouse source, along with some public data sets. N-fold cross-validation was commonly employed; however, not all work separated training, test and validation data sets. This area of research is expanding rapidly. To facilitate comparisons of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
Affiliation(s)
- Gihan Samarasinghe
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Michael Jameson
- GenesisCare, Sydney, New South Wales, Australia; St Vincent's Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Shalini Vinod
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Matthew Field
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Jason Dowling
- Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Lois Holloway
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
17
Badrigilan S, Nabavi S, Abin AA, Rostampour N, Abedi I, Shirvani A, Ebrahimi Moghaddam M. Deep learning approaches for automated classification and segmentation of head and neck cancers and brain tumors in magnetic resonance images: a meta-analysis study. Int J Comput Assist Radiol Surg 2021;16:529-542. PMID: 33666859; DOI: 10.1007/s11548-021-02326-z.
Abstract
PURPOSE Deep learning (DL) has led to widespread changes in automated segmentation and classification for medical purposes. This study is an attempt to use statistical methods to analyze studies related to segmentation and classification of head and neck cancers (HNCs) and brain tumors in MRI images. METHODS PubMed, Web of Science, Embase, and Scopus were searched to retrieve related studies published from January 2016 to January 2020. Studies that evaluated the performance of DL-based models in the segmentation, and/or classification and/or grading of HNCs and/or brain tumors were included. Selected studies for each analysis were statistically evaluated based on the diagnostic performance metrics. RESULTS The search results retrieved 1,664 related studies, of which 30 studies were eligible for meta-analysis. The overall performance of DL models for the complete tumor in terms of the pooled Dice score, sensitivity, and specificity was 0.8965 (95% confidence interval (95% CI): 0.76-0.9994), 0.9132 (95% CI: 0.71-0.994) and 0.9164 (95% CI: 0.78-1.00), respectively. The DL methods achieved the highest performance for classifying three types of glioma, meningioma, and pituitary tumors with overall accuracies of 96.01%, 99.73%, and 96.58%, respectively. Stratification of glioma tumors by high and low grading revealed overall accuracies of 94.32% and 94.23% for the DL methods, respectively. CONCLUSION Based on the obtained results, we can acknowledge the significant ability of DL methods in the mentioned applications. Poor reporting in these studies challenges the analysis process, so it is recommended that future studies report comprehensive results based on different metrics.
Affiliation(s)
- Samireh Badrigilan
- Department of Medical Physics, School of Medicine, Kermanshah University of Medical Sciences, Kermanshah, Iran
- Shahabedin Nabavi
- Faculty of Computer Science and Engineering, Shahid Beheshti University, Tehran, Iran
- Ahmad Ali Abin
- Faculty of Computer Science and Engineering, Shahid Beheshti University, Tehran, Iran
- Nima Rostampour
- Department of Medical Physics, School of Medicine, Kermanshah University of Medical Sciences, Kermanshah, Iran
- Iraj Abedi
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Atefeh Shirvani
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
18
Fei Y, Zhang F, Zu C, Hong M, Peng X, Xiao J, Wu X, Zhou J, Wang Y. MRF-RFS: A Modified Random Forest Recursive Feature Selection Algorithm for Nasopharyngeal Carcinoma Segmentation. Methods Inf Med 2021;59:151-161. PMID: 33618420; DOI: 10.1055/s-0040-1721791.
Abstract
BACKGROUND An accurate and reproducible method to delineate tumor margins is of great importance in clinical diagnosis and treatment. In nasopharyngeal carcinoma (NPC), due to limitations such as high variability, low contrast, and discontinuous boundaries in presenting soft tissues, tumor margin can be extremely difficult to identify in magnetic resonance imaging (MRI), increasing the challenge of NPC segmentation task. OBJECTIVES The purpose of this work is to develop a semiautomatic algorithm for NPC image segmentation with minimal human intervention, while it is also capable of delineating tumor margins with high accuracy and reproducibility. METHODS In this paper, we propose a novel feature selection algorithm for the identification of the margin of NPC image, named as modified random forest recursive feature selection (MRF-RFS). Specifically, to obtain a more discriminative feature subset for segmentation, a modified recursive feature selection method is applied to the original handcrafted feature set. Moreover, we combine the proposed feature selection method with the classical random forest (RF) in the training stage to take full advantage of its intrinsic property (i.e., feature importance measure). RESULTS To evaluate the segmentation performance, we verify our method on the T1-weighted MRI images of 18 NPC patients. The experimental results demonstrate that the proposed MRF-RFS method outperforms the baseline methods and deep learning methods on the task of segmenting NPC images. CONCLUSION The proposed method could be effective in NPC diagnosis and useful for guiding radiation therapy.
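MRF-RFS modifies the classical idea of recursive feature selection driven by random-forest feature importance. As a point of reference, the sketch below shows the standard (unmodified) version of that idea with scikit-learn; it is not the authors' algorithm, and the synthetic data stand in for handcrafted per-voxel features.

```python
# Minimal sketch of standard recursive feature elimination driven by random
# forest importances - the classical baseline that MRF-RFS modifies.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Synthetic stand-in for handcrafted features (intensity, texture, ...).
X, y = make_classification(n_samples=2000, n_features=60, n_informative=15, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
selector = RFE(estimator=rf, n_features_to_select=20, step=5)  # drop 5 features per round
selector.fit(X, y)

selected = [i for i, keep in enumerate(selector.support_) if keep]
print(f"kept {len(selected)} of {X.shape[1]} features:", selected)
```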
Affiliation(s)
- Yuchen Fei
- School of Computer Science, Sichuan University, Chengdu, Sichuan, People's Republic of China
- Fengyu Zhang
- School of Computer Science, Sichuan University, Chengdu, Sichuan, People's Republic of China
- Chen Zu
- Department of Risk Controlling Research, JD.com, Sichuan, People's Republic of China
- Mei Hong
- School of Computer Science, Sichuan University, Chengdu, Sichuan, People's Republic of China
- Xingchen Peng
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, People's Republic of China
- Jianghong Xiao
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, People's Republic of China
- Xi Wu
- School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, People's Republic of China
- Jiliu Zhou
- School of Computer Science, Sichuan University, Chengdu, Sichuan, People's Republic of China; School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, People's Republic of China
- Yan Wang
- School of Computer Science, Sichuan University, Chengdu, Sichuan, People's Republic of China
19
Conditional Generative Adversarial Networks with Multi-scale Discriminators for Prostate MRI Segmentation. Neural Process Lett 2020. DOI: 10.1007/s11063-020-10303-x.
20
Wang X, Yang G, Zhang Y, Zhu L, Xue X, Zhang B, Cai C, Jin H, Zheng J, Wu J, Yang W, Dai Z. Automated delineation of nasopharynx gross tumor volume for nasopharyngeal carcinoma by plain CT combining contrast-enhanced CT using deep learning. J Radiat Res Appl Sci 2020. DOI: 10.1080/16878507.2020.1795565.
Affiliation(s)
- Xuetao Wang
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Geng Yang
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Lin Zhu
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xiaoguang Xue
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Bailin Zhang
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Chunya Cai
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Huaizhi Jin
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Jianxiao Zheng
- Department of Radiotherapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Jian Wu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
21
Chen H, Qi Y, Yin Y, Li T, Liu X, Li X, Gong G, Wang L. MMFNet: A multi-modality MRI fusion network for segmentation of nasopharyngeal carcinoma. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.02.002.
22
Ye Y, Cai Z, Huang B, He Y, Zeng P, Zou G, Deng W, Chen H, Huang B. Fully-Automated Segmentation of Nasopharyngeal Carcinoma on Dual-Sequence MRI Using Convolutional Neural Networks. Front Oncol 2020;10:166. PMID: 32154168; PMCID: PMC7045897; DOI: 10.3389/fonc.2020.00166.
Abstract
In this study, we proposed an automated method based on convolutional neural network (CNN) for nasopharyngeal carcinoma (NPC) segmentation on dual-sequence magnetic resonance imaging (MRI). T1-weighted (T1W) and T2-weighted (T2W) MRI images were collected from 44 NPC patients. We developed a dense connectivity embedding U-net (DEU) and trained the network based on the two-dimensional dual-sequence MRI images in the training dataset and applied post-processing to remove the false positive results. In order to justify the effectiveness of dual-sequence MRI images, we performed an experiment with different inputs in eight randomly selected patients. We evaluated DEU's performance by using a 10-fold cross-validation strategy and compared the results with the previous studies. The Dice similarity coefficient (DSC) of the method using only T1W, only T2W and dual-sequence of 10-fold cross-validation as different inputs were 0.620 ± 0.0642, 0.642 ± 0.118 and 0.721 ± 0.036, respectively. The median DSC in 10-fold cross-validation experiment with DEU was 0.735. The average DSC of seven external subjects was 0.87. To summarize, we successfully proposed and verified a fully automatic NPC segmentation method based on DEU and dual-sequence MRI images with accurate and stable performance. If further verified, our proposed method would be of use in clinical practice of NPC.
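The 10-fold cross-validation used here is performed over patients, so that no patient contributes images to both training and validation in the same fold. A minimal sketch of that splitting scheme is shown below; the patient IDs and the stand-in fold scores are placeholders, not the study's data.

```python
# Minimal sketch of patient-level 10-fold cross-validation: folds are split
# over patients (not slices) so a patient never appears in both train and val.
import numpy as np
from sklearn.model_selection import KFold

patient_ids = np.array([f"NPC_{i:03d}" for i in range(44)])   # 44 patients, as in this study
kfold = KFold(n_splits=10, shuffle=True, random_state=42)

fold_scores = []
for fold, (train_idx, val_idx) in enumerate(kfold.split(patient_ids)):
    train_patients = patient_ids[train_idx]
    val_patients = patient_ids[val_idx]
    # ... train the segmentation model on slices from train_patients and
    # evaluate DSC on val_patients; a random number stands in for that score.
    fold_scores.append(np.random.uniform(0.65, 0.80))
    print(f"fold {fold}: {len(train_patients)} train / {len(val_patients)} val patients")

print(f"mean DSC across folds (placeholder values): {np.mean(fold_scores):.3f}")
```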
Affiliation(s)
- Yufeng Ye
- Department of Radiology, Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
- Zongyou Cai
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Shenzhen University General Hospital Clinical Research Center for Neurological Diseases, Shenzhen, China
- Bin Huang
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Shenzhen University General Hospital Clinical Research Center for Neurological Diseases, Shenzhen, China
- Yan He
- Department of Oncology, Panyu Central Hospital, Guangzhou, China
- Cancer Institute of Panyu, Guangzhou, China
- Ping Zeng
- Department of Radiology, Shenzhen University General Hospital, Shenzhen, China
- Guorong Zou
- Department of Oncology, Panyu Central Hospital, Guangzhou, China
- Cancer Institute of Panyu, Guangzhou, China
- Wei Deng
- Department of Radiology, Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
- Hanwei Chen
- Department of Radiology, Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
- Bingsheng Huang
- Medical Imaging Institute of Panyu, Guangzhou, China
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
23
Li Q, Xu Y, Chen Z, Liu D, Feng ST, Law M, Ye Y, Huang B. Tumor Segmentation in Contrast-Enhanced Magnetic Resonance Imaging for Nasopharyngeal Carcinoma: Deep Learning with Convolutional Neural Network. Biomed Res Int 2018;2018:9128527. PMID: 30417017; PMCID: PMC6207874; DOI: 10.1155/2018/9128527.
Abstract
OBJECTIVES To evaluate the application of a deep learning architecture, based on the convolutional neural network (CNN) technique, to perform automatic tumor segmentation of magnetic resonance imaging (MRI) for nasopharyngeal carcinoma (NPC). MATERIALS AND METHODS In this prospective study, 87 MRI containing tumor regions were acquired from newly diagnosed NPC patients. These 87 MRI were augmented to >60,000 images. The proposed CNN network is composed of two phases: feature representation and scores map reconstruction. We designed a stepwise scheme to train our CNN network. To evaluate the performance of our method, we used case-by-case leave-one-out cross-validation (LOOCV). The ground truth of tumor contouring was acquired by the consensus of two experienced radiologists. RESULTS The mean values of dice similarity coefficient, percent match, and their corresponding ratio with our method were 0.89±0.05, 0.90±0.04, and 0.84±0.06, respectively, all of which were better than reported values in the similar studies. CONCLUSIONS We successfully established a segmentation method for NPC based on deep learning in contrast-enhanced magnetic resonance imaging. Further clinical trials with dedicated algorithms are warranted.
Affiliation(s)
- Qiaoliang Li
- School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Yuzhen Xu
- School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Zhewei Chen
- School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Dexiang Liu
- Department of Radiology, Guangzhou Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
- Shi-Ting Feng
- Department of Radiology, First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Martin Law
- Department of Radiology, Queen Mary Hospital, Hong Kong
- Yufeng Ye
- Department of Radiology, Guangzhou Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
- Bingsheng Huang
- School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
24
Tseng HH, Luo Y, Ten Haken RK, El Naqa I. The Role of Machine Learning in Knowledge-Based Response-Adapted Radiotherapy. Front Oncol 2018;8:266. PMID: 30101124; PMCID: PMC6072876; DOI: 10.3389/fonc.2018.00266.
Abstract
With the continuous increase in radiotherapy patient-specific data from multimodality imaging and biotechnology molecular sources, knowledge-based response-adapted radiotherapy (KBR-ART) is emerging as a vital area for radiation oncology personalized treatment. In KBR-ART, planned dose distributions can be modified based on observed cues in patients' clinical, geometric, and physiological parameters. In this paper, we present current developments in the field of adaptive radiotherapy (ART), the progression toward KBR-ART, and examine several applications of static and dynamic machine learning approaches for realizing the KBR-ART framework potentials in maximizing tumor control and minimizing side effects with respect to individual radiotherapy patients. Specifically, three questions required for the realization of KBR-ART are addressed: (1) what knowledge is needed; (2) how to estimate RT outcomes accurately; and (3) how to adapt optimally. Different machine learning algorithms for KBR-ART application shall be discussed and contrasted. Representative examples of different KBR-ART stages are also visited.
Affiliation(s)
- Huan-Hsin Tseng
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, United States