1
Wang C, Zhou Y, Li Y, Pang W, Wang L, Du W, Yang H, Jin Y. ICPPNet: A semantic segmentation network model based on inter-class positional prior for scoliosis reconstruction in ultrasound images. J Biomed Inform 2025; 166:104827. [PMID: 40258407 DOI: 10.1016/j.jbi.2025.104827]
Abstract
OBJECTIVE Given the radiation hazard of X-rays, safer, more convenient, and cost-effective ultrasound methods are gradually becoming a new diagnostic approach for scoliosis. In ultrasound images, however, accurately identifying spine regions is challenging because the target areas are relatively small and the images contain substantial interfering information. We therefore developed a novel neural network that incorporates prior knowledge to precisely segment spine regions in ultrasound images. MATERIALS AND METHODS We constructed a dataset of spine-region ultrasound images for semantic segmentation, containing 3136 images from 30 patients with scoliosis. We propose a network model (ICPPNet) that fully exploits inter-class positional prior knowledge through an inter-class positional probability heatmap to achieve accurate segmentation of the target areas. RESULTS ICPPNet achieved an average Dice similarity coefficient of 70.83% and an average 95% Hausdorff distance of 11.28 mm on the dataset, demonstrating excellent performance. The average error between the Cobb angle measured by our method and that measured from X-ray images is 1.41 degrees, with a coefficient of determination of 0.9879, indicating a strong correlation. DISCUSSION AND CONCLUSION ICPPNet provides a new solution for medical image segmentation tasks with positional prior knowledge between target classes and strongly supports the subsequent reconstruction of spine models from ultrasound images.
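For readers unfamiliar with the two metrics quoted above, the sketch below shows how the Dice similarity coefficient and the 95% Hausdorff distance are commonly computed from binary masks. This is a minimal NumPy/SciPy illustration with an assumed voxel spacing, not the authors' evaluation code.

```python
# Minimal sketch of the two reported metrics (Dice, HD95) for binary masks.
# Illustrative only; spacing defaults to 1 mm per voxel unless specified.
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def _surface_points(mask, spacing):
    """Coordinates (in mm) of the boundary voxels of a boolean mask."""
    surface = mask & ~ndimage.binary_erosion(mask)
    return np.argwhere(surface) * np.asarray(spacing)

def hd95(pred, gt, spacing=(1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between mask boundaries."""
    p = _surface_points(pred.astype(bool), spacing)
    g = _surface_points(gt.astype(bool), spacing)
    d_pg, _ = cKDTree(g).query(p)   # pred surface -> nearest gt surface
    d_gp, _ = cKDTree(p).query(g)   # gt surface -> nearest pred surface
    return np.percentile(np.hstack([d_pg, d_gp]), 95)

# toy example with two overlapping squares
pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
gt = np.zeros((64, 64), bool); gt[22:42, 22:42] = True
print(dice_coefficient(pred, gt), hd95(pred, gt))
```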
Affiliation(s)
- Changlong Wang
- College of Software, Jilin University, Changchun, 130012, Jilin, China
- You Zhou
- College of Computer Science and Technology, Jilin University, Changchun, 130012, Jilin, China
- Yuanshu Li
- College of Computer Science and Technology, Jilin University, Changchun, 130012, Jilin, China
- Wei Pang
- School of Mathematical and Computer Sciences, Heriot-Watt University, EH14 4AS, Edinburgh, United Kingdom
- Liupu Wang
- College of Computer Science and Technology, Jilin University, Changchun, 130012, Jilin, China
- Wei Du
- College of Computer Science and Technology, Jilin University, Changchun, 130012, Jilin, China
- Hui Yang
- Public Computer Education and Research Center, Jilin University, Changchun, 130012, Jilin, China
- Ying Jin
- Department of Ultrasound, China-Japan Union Hospital of Jilin University, Changchun, 130031, Jilin, China
2
Luo X, Fu J, Zhong Y, Liu S, Han B, Astaraki M, Bendazzoli S, Toma-Dasu I, Ye Y, Chen Z, Xia Y, Su Y, Ye J, He J, Xing Z, Wang H, Zhu L, Yang K, Fang X, Wang Z, Lee CW, Park SJ, Chun J, Ulrich C, Maier-Hein KH, Ndipenoch N, Miron A, Li Y, Zhang Y, Chen Y, Bai L, Huang J, An C, Wang L, Huang K, Gu Y, Zhou T, Zhou M, Zhang S, Liao W, Wang G, Zhang S. SegRap2023: A benchmark of organs-at-risk and gross tumor volume Segmentation for Radiotherapy Planning of Nasopharyngeal Carcinoma. Med Image Anal 2025; 101:103447. [PMID: 39756265 DOI: 10.1016/j.media.2024.103447]
Abstract
Radiation therapy is a primary and effective treatment strategy for NasoPharyngeal Carcinoma (NPC). The precise delineation of Gross Tumor Volumes (GTVs) and Organs-At-Risk (OARs) is crucial in radiation treatment and directly impacts patient prognosis. Although deep learning has achieved remarkable performance on various medical image segmentation tasks, its performance on the OARs and GTVs of NPC is still limited, and high-quality benchmark datasets for this task are highly desirable for model development and evaluation. To alleviate this problem, the SegRap2023 challenge was organized in conjunction with MICCAI 2023 and presented a large-scale benchmark for OAR and GTV segmentation with 400 Computed Tomography (CT) scans from 200 NPC patients, each with a pair of pre-aligned non-contrast and contrast-enhanced CT scans. The challenge aimed to segment 45 OARs and 2 GTVs from the paired CT scans of each patient, and received 10 and 11 complete submissions for the two tasks, respectively. In this paper, we detail the challenge and analyze the solutions of all participants. The average Dice similarity coefficient scores across submissions ranged from 76.68% to 86.70% for OARs and from 70.42% to 73.44% for GTVs. We conclude that the segmentation of relatively large OARs is well addressed, and that more effort is needed for GTVs and for small or thin OARs. The benchmark remains available at: https://segrap2023.grand-challenge.org.
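Benchmark submissions of this kind are typically scored structure by structure and then averaged; the snippet below is a generic sketch of class-wise Dice scoring on integer label maps (the label indices and array shapes are hypothetical), not the SegRap2023 evaluation pipeline.

```python
# Generic per-class Dice for integer label maps (0 = background).
# Hypothetical example; the real challenge evaluation also reports boundary metrics.
import numpy as np

def per_class_dice(pred, gt, num_classes):
    scores = {}
    for c in range(1, num_classes + 1):        # skip background label 0
        p, g = (pred == c), (gt == c)
        denom = p.sum() + g.sum()
        if denom == 0:                         # structure absent in both -> skip
            continue
        scores[c] = 2.0 * np.logical_and(p, g).sum() / denom
    return scores

pred = np.random.randint(0, 4, size=(32, 32, 32))
gt = np.random.randint(0, 4, size=(32, 32, 32))
scores = per_class_dice(pred, gt, num_classes=3)
print(scores, "mean:", np.mean(list(scores.values())))
```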
Affiliation(s)
- Xiangde Luo
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Jia Fu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yunxin Zhong
- Canon Medical Systems (China) Co. Ltd., Beijing, China
- Shuolin Liu
- Canon Medical Systems (China) Co. Ltd., Beijing, China
- Bing Han
- Canon Medical Systems (China) Co. Ltd., Beijing, China
- Mehdi Astaraki
- Department of Medical Radiation Physics, Stockholm University, Solna, Sweden
- Simone Bendazzoli
- Department of Biomedical Engineering and Health Systems, KTH, Huddinge, Sweden
- Iuliana Toma-Dasu
- Department of Medical Radiation Physics, Stockholm University, Solna, Sweden
- Yiwen Ye
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China
- Ziyang Chen
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China
- Yong Xia
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China
- Yanzhou Su
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Jin Ye
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Junjun He
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Zhaohu Xing
- Department of Systems Hub, Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
- Hongqiu Wang
- Department of Systems Hub, Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
- Lei Zhu
- Department of Systems Hub, Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
- Kaixiang Yang
- Wuhan National Laboratory for Optoelectronics and with MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, China
- Xin Fang
- Wuhan National Laboratory for Optoelectronics and with MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, China
- Zhiwei Wang
- Wuhan National Laboratory for Optoelectronics and with MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, China
- Chan Woong Lee
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Sang Joon Park
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
- Constantin Ulrich
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Germany
- Klaus H Maier-Hein
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Germany
- Nchongmaje Ndipenoch
- Department of Computer Science, Brunel University London, Uxbridge, United Kingdom
- Alina Miron
- Department of Computer Science, Brunel University London, Uxbridge, United Kingdom
- Yongmin Li
- Department of Computer Science, Brunel University London, Uxbridge, United Kingdom
- Yu Chen
- MedMind Technology Co. Ltd., Beijing, China
- Lu Bai
- MedMind Technology Co. Ltd., Beijing, China
- Jinlong Huang
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China
- Chengyang An
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China
- Lisheng Wang
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China
- Kaiwen Huang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Yunqi Gu
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Tao Zhou
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Mu Zhou
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Shichuan Zhang
- Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Chengdu, China
- Wenjun Liao
- Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Chengdu, China
- Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Shaoting Zhang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
3
Qi Y, Wei L, Yang J, Xu J, Wang H, Yu Q, Shen G, Cao Y. CQENet: A segmentation model for nasopharyngeal carcinoma based on confidence quantitative evaluation. Comput Med Imaging Graph 2025; 123:102525. [PMID: 40107148 DOI: 10.1016/j.compmedimag.2025.102525]
Abstract
Accurate segmentation of the tumor regions of nasopharyngeal carcinoma (NPC) is of great importance for radiotherapy of NPC. However, the precision of existing automatic segmentation methods for NPC remains inadequate, primarily manifested in the difficulty of tumor localization and the challenge of delineating blurred boundaries. In addition, the black-box nature of deep learning models leads to insufficient quantification of confidence in the results, preventing users from directly understanding how confident the model is in its predictions, which severely impacts the clinical application of deep learning models. This paper proposes an automatic segmentation model for NPC based on confidence quantitative evaluation (CQENet). To address the insufficient confidence quantification of NPC segmentation results, we introduce a confidence assessment module (CAM) that enables the model to output not only the segmentation results but also the confidence in those results, helping users understand the uncertainty risks associated with model outputs. To address the difficulty of localizing the position and extent of tumors, we propose a tumor feature adjustment module (FAM) for precise tumor localization and extent determination. To address the challenge of delineating blurred tumor boundaries, we introduce a variance attention mechanism (VAM) to assist edge delineation during fine segmentation. We conducted experiments on a multicenter NPC dataset, demonstrating that the proposed method is effective, outperforms existing state-of-the-art models, and has considerable clinical application value.
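The confidence assessment module (CAM) itself is specific to CQENet and is not described here in enough detail to reproduce; as a loose, generic illustration of attaching a confidence estimate to a segmentation, the sketch below averages repeated stochastic softmax predictions (e.g., from Monte Carlo dropout) and converts per-voxel predictive entropy into a confidence map.

```python
# Generic confidence estimate for a segmentation: average T stochastic softmax
# predictions and use per-voxel predictive entropy as an uncertainty map.
# Illustration only -- not the CAM module described in the paper.
import numpy as np

def confidence_from_samples(prob_samples, eps=1e-8):
    """prob_samples: (T, C, H, W) softmax outputs from T stochastic passes."""
    mean_prob = prob_samples.mean(axis=0)                          # (C, H, W)
    entropy = -(mean_prob * np.log(mean_prob + eps)).sum(axis=0)   # (H, W)
    max_entropy = np.log(prob_samples.shape[1])                    # uniform case
    confidence = 1.0 - entropy / max_entropy                       # 1 = certain
    return mean_prob.argmax(axis=0), confidence

T, C, H, W = 8, 2, 64, 64
logits = np.random.randn(T, C, H, W)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
seg, conf = confidence_from_samples(probs)
print(seg.shape, float(conf.mean()))
```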
Affiliation(s)
- Yiqiu Qi
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, China
- Lijun Wei
- Liaoning Cancer Hospital & Institute, Shenyang, China
- Jinzhu Yang
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, China
- Jiachen Xu
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, China
- Hongfei Wang
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, China
- Qi Yu
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, China
- Guoguang Shen
- Peoples Hospital of Naiman Banner, Inner Mongolia, China
- Yubo Cao
- Department of Medical Oncology, The Fourth Affiliated Hospital of China Medical University, Shenyang, China
4
Jin J, Zhang J, Yu X, Xiang Z, Zhu X, Guo M, Zhao Z, Li W, Li H, Xu J, Jin X. Radiomics-guided generative adversarial network for automatic primary target volume segmentation for nasopharyngeal carcinoma using computed tomography images. Med Phys 2025; 52:1119-1132. [PMID: 39535436 DOI: 10.1002/mp.17493]
Abstract
BACKGROUND Automatic primary gross tumor volume (GTVp) segmentation for nasopharyngeal carcinoma (NPC) is a challenging task because tumors and their surroundings share similar visual characteristics, especially on computed tomography (CT) images with severely low contrast resolution. As a result, most recently proposed methods based on radiomics or deep learning (DL) struggle to achieve good results on CT datasets. PURPOSE A peritumoral radiomics-guided generative adversarial network (PRG-GAN) was proposed to address this challenge. METHODS A total of 157 NPC patients with CT images were collected and divided into training, validation, and testing cohorts of 108, 9, and 30 patients, respectively. The proposed model was based on a standard GAN consisting of a generator network and a discriminator network. Morphological dilation was first applied to the initial segmentation results from the GAN to delineate an annular peritumoral region, from which radiomics features were extracted as prior guiding knowledge. These radiomics features were then fused with semantic features by the discriminator's fully connected layer to achieve voxel-level classification and segmentation. The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average symmetric surface distance (ASSD) were used to evaluate segmentation performance, using a paired-samples t-test with Bonferroni correction and Cohen's d effect sizes. A two-sided p-value of less than 0.05 was considered statistically significant. RESULTS The model-generated predictions had a high overlap ratio with the ground truth. The average DSC, HD95, and ASSD improved significantly from 0.80 ± 0.12, 4.65 ± 4.71 mm, and 1.35 ± 1.15 mm for the GAN to 0.85 ± 0.18 (p = 0.001, d = 0.71), 4.15 ± 7.56 mm (p = 0.002, d = 0.67), and 1.11 ± 1.65 mm (p < 0.001, d = 0.46) for PRG-GAN, respectively. CONCLUSION Integrating radiomics features into a GAN is promising for overcoming unclear border limitations and increasing the delineation accuracy of the GTVp for patients with NPC.
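The annular peritumoral region described in the methods can be pictured with standard morphology: dilate the initial tumor mask and subtract the original. The snippet below sketches only that step (the ring width is a hypothetical parameter), not the full PRG-GAN pipeline.

```python
# Sketch of the annular peritumoral region: dilate the initial segmentation
# and subtract it, leaving a ring from which radiomics features can be extracted.
import numpy as np
from scipy import ndimage

def peritumoral_ring(tumor_mask, ring_voxels=5):
    """Boolean ring obtained by `ring_voxels` dilation steps around a boolean tumor mask."""
    dilated = ndimage.binary_dilation(tumor_mask, iterations=ring_voxels)
    return dilated & ~tumor_mask.astype(bool)

mask = np.zeros((64, 64), bool); mask[25:40, 25:40] = True
ring = peritumoral_ring(mask, ring_voxels=3)
print(mask.sum(), ring.sum())
```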
Affiliation(s)
- Juebin Jin
- Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Jicheng Zhang
- Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Xianwen Yu
- Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Ziqing Xiang
- Department of Medical Engineering, 2nd Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Xuanxuan Zhu
- Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Mingrou Guo
- Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Zeshuo Zhao
- Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- WenLong Li
- Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Heng Li
- Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Jiayi Xu
- Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Xiance Jin
- Department of Radiotherapy Center, 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- School of Basic Medical Science, Wenzhou Medical University, Wenzhou, China
5
Wang H, Chen J, Zhang S, He Y, Xu J, Wu M, He J, Liao W, Luo X. Dual-Reference Source-Free Active Domain Adaptation for Nasopharyngeal Carcinoma Tumor Segmentation Across Multiple Hospitals. IEEE Trans Med Imaging 2024; 43:4078-4090. [PMID: 38861437 DOI: 10.1109/tmi.2024.3412923]
Abstract
Nasopharyngeal carcinoma (NPC) is a prevalent and clinically significant malignancy that predominantly affects the head and neck region. Precise delineation of the Gross Tumor Volume (GTV) plays a pivotal role in ensuring effective radiotherapy for NPC. Although recent methods have achieved promising results on GTV segmentation, they are still limited by the lack of carefully annotated data and by the difficulty of accessing data from multiple hospitals in clinical practice. Although unsupervised domain adaptation (UDA) methods have been proposed to alleviate this problem, unconditionally mapping the distribution distorts the underlying structural information and leads to inferior performance. To address this challenge, we devise a novel Source-Free Active Domain Adaptation framework to facilitate domain adaptation for the GTV segmentation task. Specifically, we design a dual-reference strategy to select domain-invariant and domain-specific representative samples from a specific target domain for annotation and model fine-tuning, without relying on source-domain data. Our approach not only ensures data privacy but also reduces the workload for oncologists, since it requires annotating only a few representative samples from the target domain and does not need access to the source data. We collected a large-scale clinical dataset comprising 1057 NPC patients from five hospitals to validate our approach. Experimental results show that our method outperforms previous active learning (e.g., AADA and MHPL) and UDA (e.g., Tent and CPR) methods and achieves results comparable to the fully supervised upper bound even with few annotations, highlighting its significant medical utility. In addition, as there is no public dataset for multi-center NPC segmentation, we will release our code and dataset for future research at https://github.com/whq-xxh/Active-GTV-Seg.
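The dual-reference selection strategy is specific to this paper; as a simplified, generic illustration of active sample selection without source data, the sketch below ranks unlabeled target cases by mean prediction entropy and picks the top-k for annotation (the annotation budget and array shapes are hypothetical).

```python
# Generic active-learning selection: rank unlabeled target cases by the mean
# per-voxel predictive entropy of the current model and annotate the top-k.
# This is NOT the paper's dual-reference strategy, only a simplified stand-in.
import numpy as np

def mean_entropy(prob_map, eps=1e-8):
    """prob_map: (C, H, W) softmax output for one case."""
    return float(-(prob_map * np.log(prob_map + eps)).sum(axis=0).mean())

def select_for_annotation(prob_maps, budget=5):
    """Return indices of the `budget` most uncertain target cases."""
    scores = [mean_entropy(p) for p in prob_maps]
    return list(np.argsort(scores)[::-1][:budget])

# toy target cohort of 20 cases with 2-class softmax maps
rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 2, 32, 32))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(select_for_annotation(probs, budget=3))
```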
6
Cai Z, Zhong Z, Lin H, Huang B, Xu Z, Huang B, Deng W, Wu Q, Lei K, Lyu J, Ye Y, Chen H, Zhang J. Self-supervised learning on dual-sequence magnetic resonance imaging for automatic segmentation of nasopharyngeal carcinoma. Comput Med Imaging Graph 2024; 118:102471. [PMID: 39608271 DOI: 10.1016/j.compmedimag.2024.102471]
Abstract
Automating the segmentation of nasopharyngeal carcinoma (NPC) is crucial for therapeutic procedures but is challenging given the difficulty of amassing extensively annotated datasets. Although previous studies have applied self-supervised learning to capitalize on unlabeled data and improve segmentation performance, these methods often overlooked the benefits of dual-sequence magnetic resonance imaging (MRI). In the present study, we combined self-supervised learning with a saliency transformation module using unlabeled dual-sequence MRI for accurate NPC segmentation. Data from 44 labeled and 72 unlabeled patients were collected to develop and evaluate our network. Our network achieved a mean Dice similarity coefficient (DSC) of 0.77, which is consistent with a previous study that relied on a training set of 4,100 annotated cases. The results further revealed that our approach required only minimal adjustments (primarily less than a 20% change in DSC) to meet clinical standards. By enhancing the automatic segmentation of NPC, our method alleviates the annotation burden on oncologists, curbs subjectivity, and ensures reliable NPC delineation.
Affiliation(s)
- Zongyou Cai
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Zhangnan Zhong
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Haiwei Lin
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Bingsheng Huang
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Ziyue Xu
- NVIDIA Corporation, Bethesda, MD, USA
- Bin Huang
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Wei Deng
- Department of Radiology, Panyu Central Hospital, Guangzhou, China; Medical Imaging Institute of Panyu, Guangzhou, China
- Qiting Wu
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Kaixin Lei
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Jiegeng Lyu
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Yufeng Ye
- Department of Radiology, Panyu Central Hospital, Guangzhou, China; Medical Imaging Institute of Panyu, Guangzhou, China
- Hanwei Chen
- Panyu Health Management Center (Panyu Rehabilitation Hospital), Guangzhou, China
- Jian Zhang
- Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, China; Shenzhen University Medical School, Shenzhen University, Shenzhen, 518055, Guangdong, China
7
Dong X, Yang K, Liu J, Tang F, Liao W, Zhang Y, Liang S. Cross-Domain Mutual-Assistance Learning Framework for Fully Automated Diagnosis of Primary Tumor in Nasopharyngeal Carcinoma. IEEE Trans Med Imaging 2024; 43:3676-3689. [PMID: 38739507 DOI: 10.1109/tmi.2024.3400406]
Abstract
Accurate T-staging of nasopharyngeal carcinoma (NPC) is of paramount importance in guiding treatment decisions and prognosticating outcomes for distinct risk groups. However, deep learning-based techniques for T-staging in NPC remain sparse, and existing methodologies often exhibit suboptimal performance because they neglect crucial domain-specific knowledge pertinent to primary tumor diagnosis. To address these issues, we propose a new cross-domain mutual-assistance learning framework for fully automated diagnosis of the primary tumor using H&N MR images. Specifically, we tackle the primary tumor diagnosis task with a convolutional neural network consisting of a 3D cross-domain knowledge perception network (CKP net), which excavates cross-domain-invariant features emphasizing tumor intensity variations and internal tumor heterogeneity, and a multi-domain mutual-information sharing fusion network (M2SF net), comprising a dual-pathway domain-specific representation module and a mutual information fusion module, which intelligently gauges and amalgamates multi-domain, multi-scale T-stage diagnosis-oriented features. The proposed 3D cross-domain mutual-assistance learning framework not only embraces task-specific multi-domain diagnostic knowledge but also automates the entire process of primary tumor diagnosis. We evaluate our model on an internal and an external MR image dataset in a three-fold cross-validation paradigm. Exhaustive experimental results demonstrate that our method outperforms the other algorithms and obtains promising performance for tumor segmentation and T-staging. These findings underscore its potential for clinical application, offering valuable assistance to clinicians in treatment decision-making and prognostication for various risk groups.
8
Yu L, Min W, Wang S. Boundary-Aware Gradient Operator Network for Medical Image Segmentation. IEEE J Biomed Health Inform 2024; 28:4711-4723. [PMID: 38776204 DOI: 10.1109/jbhi.2024.3404273]
Abstract
Medical image segmentation is a crucial task in computer-aided diagnosis. Although convolutional neural networks (CNNs) have made significant progress in medical image segmentation, their convolution kernels are optimized from random initialization without explicitly encoding gradient information, leading to a lack of specificity for certain features, such as blurred boundaries. Furthermore, the frequently applied down-sampling operations also lose fine structural features from shallow layers. We therefore propose a boundary-aware gradient operator network (BG-Net) for medical image segmentation, in which a gradient convolution (GConv) module and a boundary-aware mechanism (BAM) module are developed to model image boundary features and the long-range dependencies between channels. The GConv module transforms the gradient operator into a convolutional operation that extracts gradient features; it attempts to extract additional features such as image boundaries and textures, thereby fully utilizing the limited input to capture more boundary-representing features. In addition, the BAM increases the amount of global contextual information while suppressing invalid information by focusing on feature dependencies and the weight ratios between channels, thus improving the boundary-perception ability of BG-Net. Finally, we use a multi-modal fusion mechanism to effectively fuse lightweight gradient convolution and U-shaped branch features into multilevel features, enabling global dependencies and low-level spatial details to be captured effectively at shallow depths. We conduct extensive experiments on eight datasets that broadly cover medical images to evaluate the effectiveness of the proposed BG-Net. The experimental results demonstrate that BG-Net outperforms state-of-the-art methods, particularly those focused on boundary segmentation.
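The GConv idea of casting a gradient operator as a convolution can be illustrated with fixed Sobel kernels; the sketch below applies them with SciPy to produce a boundary-emphasizing gradient-magnitude map. It is a hand-written stand-in, not the trainable module in BG-Net.

```python
# Sketch of a gradient operator expressed as convolution: fixed Sobel kernels
# produce a boundary-emphasizing gradient-magnitude map. In BG-Net the kernels
# belong to a trainable module; here they are hand-written for illustration.
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def gradient_magnitude(image):
    gx = convolve(image.astype(float), SOBEL_X, mode="nearest")
    gy = convolve(image.astype(float), SOBEL_Y, mode="nearest")
    return np.hypot(gx, gy)

img = np.zeros((64, 64)); img[:, 32:] = 1.0      # a vertical edge
print(gradient_magnitude(img).max())              # strongest response at the edge
```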
9
Zhou H, Zhao Q, Huang W, Liang Z, Cui C, Ma H, Luo C, Li S, Ruan G, Chen H, Zhu Y, Zhang G, Liu S, Liu L, Li H, Yang H, Xie H. A novel fully automatic segmentation and counting system for metastatic lymph nodes on multimodal magnetic resonance imaging: Evaluation and prognostic implications in nasopharyngeal carcinoma. Radiother Oncol 2024; 197:110367. [PMID: 38834152 DOI: 10.1016/j.radonc.2024.110367]
Abstract
BACKGROUND The number of metastatic lymph nodes (MLNs) is crucial to survival in nasopharyngeal carcinoma (NPC), but manual counting is laborious. This study explores the feasibility and prognostic value of automatic MLN segmentation and counting. METHODS We retrospectively enrolled 980 newly diagnosed patients in the primary cohort and 224 patients from two external cohorts. We used the nnU-Net model for automatic MLN segmentation on multimodal magnetic resonance imaging. MLN counting methods, including manual delineation-assisted counting (MDAC) and a fully automatic lymph node counting system (AMLNC), were compared with manual evaluation (the gold standard). RESULTS In the internal validation group, the MLN segmentation results showed acceptable agreement with manual delineation, with a mean Dice coefficient of 0.771. The consistency among the three counting methods was as follows: 0.778 (gold standard vs. AMLNC), 0.638 (gold standard vs. MDAC), and 0.739 (AMLNC vs. MDAC). MLN counts were categorized into a three-category variable (1-4, 5-9, >9) and a two-category variable (<4, ≥4) based on the gold standard and AMLNC. These categorical variables demonstrated acceptable ability to discriminate 5-year overall survival (OS), progression-free survival, and distant metastasis-free survival. Compared with the base prediction model, the model incorporating the two-category AMLNC count showed an improved C-index for 5-year OS prediction (0.658 vs. 0.675, P = 0.045). All results were successfully validated in the external cohort. CONCLUSIONS The AMLNC system offers a time- and labor-saving approach for fully automatic MLN segmentation and counting in NPC. MLN counting using AMLNC demonstrated non-inferior performance in survival discrimination compared with manual detection.
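Given a binary lymph-node segmentation, the counting step can be illustrated with plain connected-component labeling; the snippet below (with a hypothetical minimum-volume filter) shows the general idea rather than the AMLNC system itself.

```python
# Counting segmented lymph nodes via 3D connected-component labeling,
# discarding tiny components as likely noise. Illustrative sketch only;
# the minimum-volume threshold is a hypothetical parameter.
import numpy as np
from scipy import ndimage

def count_lymph_nodes(mask, min_voxels=10):
    labeled, n = ndimage.label(mask.astype(bool))
    sizes = ndimage.sum(mask.astype(bool), labeled, index=list(range(1, n + 1)))
    return int(np.sum(np.asarray(sizes) >= min_voxels))

mask = np.zeros((40, 40, 40), bool)
mask[5:10, 5:10, 5:10] = True      # node 1
mask[20:28, 20:28, 20:28] = True   # node 2
mask[35, 35, 35] = True            # single-voxel speck, filtered out
print(count_lymph_nodes(mask))     # -> 2
```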
Affiliation(s)
- Haoyang Zhou
- School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, PR China
- Qin Zhao
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Wenjie Huang
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Zhiying Liang
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Chunyan Cui
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Huali Ma
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Chao Luo
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Shuqi Li
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Guangying Ruan
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Hongbo Chen
- School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, PR China
- Yuliang Zhu
- Department of Nasopharyngeal Head and Neck Tumor Radiotherapy, Zhongshan City People's Hospital, ZhongShan, PR China
- Guoyi Zhang
- Department of Radiation Oncology, Foshan Academy of Medical Sciences, Sun Yat-Sen University Foshan Hospital and The First People's Hospital of Foshan, Foshan, PR China
- Shanshan Liu
- School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, PR China
- Lizhi Liu
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Haojiang Li
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Hui Yang
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Hui Xie
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
10
Han X, Chen Z, Lin G, Lv W, Zheng C, Lu W, Sun Y, Lu L. Semi-supervised model based on implicit neural representation and mutual learning (SIMN) for multi-center nasopharyngeal carcinoma segmentation on MRI. Comput Biol Med 2024; 175:108368. [PMID: 38663351 DOI: 10.1016/j.compbiomed.2024.108368]
Abstract
BACKGROUND The problem of using deep learning to obtain accurate gross tumor volume (GTV) and metastatic lymph node (MLN) segmentation for nasopharyngeal carcinoma (NPC) on heterogeneous magnetic resonance imaging (MRI) images with limited labeling remains unsolved. METHOD We collected MRI images of 918 patients from three hospitals to develop and validate models, and we propose a semi-supervised framework, named SIMN, for the fine delineation of multi-center NPC boundaries that integrates uncertainty-based implicit neural representations. The framework uses a deep mutual-learning approach with a CNN and a Transformer, incorporating dynamic thresholds. Domain-adaptive algorithms are additionally employed to enhance performance. RESULTS SIMN predictions have a high overlap ratio with the ground truth. With 20% of cases labeled, the average DSC in GTV and MLN are 0.7981 and 0.7804 for the internal test cohorts; 0.7217 and 0.7581 for the external test cohort from Wu Zhou Red Cross Hospital; and 0.7004 and 0.7692 for the external test cohort from the First People's Hospital of Foshan. No significant differences were found in DSC, HD95, ASD, or Recall for patients with different clinical categories. Moreover, SIMN outperformed existing classical semi-supervised methods. CONCLUSIONS SIMN achieved highly accurate GTV and MLN segmentation for NPC on multi-center MRI images under semi-supervised learning (SSL) and can easily transfer to other centers without fine-tuning, suggesting that it has the potential to act as a generalized delineation solution for heterogeneous MRI images with limited labels in clinical deployment.
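Semi-supervised pipelines of this kind typically keep only confident pseudo-labels from the unlabeled pool; the sketch below shows plain confidence thresholding with a fixed, hypothetical threshold, which only loosely approximates the dynamic-threshold CNN/Transformer mutual learning used in SIMN.

```python
# Sketch of pseudo-label filtering for semi-supervised segmentation: keep a
# voxel's pseudo-label only where the teacher's max softmax probability exceeds
# a threshold. SIMN uses dynamic thresholds and mutual learning; this fixed
# threshold is a simplified, hypothetical stand-in.
import numpy as np

def pseudo_labels(prob_map, threshold=0.9, ignore_index=-1):
    """prob_map: (C, H, W) softmax output for one unlabeled case."""
    conf = prob_map.max(axis=0)
    labels = prob_map.argmax(axis=0)
    labels[conf < threshold] = ignore_index   # ignored by the student loss
    return labels

rng = np.random.default_rng(1)
logits = rng.normal(size=(2, 64, 64))
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
pl = pseudo_labels(probs, threshold=0.8)
print((pl != -1).mean())   # fraction of voxels that received a pseudo-label
```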
Affiliation(s)
- Xu Han
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Pazhou Lab, Guangzhou, 510515, China
- Zihang Chen
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060, China
- Guoyu Lin
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060, China
- Wenbing Lv
- School of Information Science and Engineering, Yunnan University, Kunming, 650504, China
- Chundan Zheng
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Pazhou Lab, Guangzhou, 510515, China
- Wantong Lu
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Pazhou Lab, Guangzhou, 510515, China
- Ying Sun
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060, China
- Lijun Lu
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Pazhou Lab, Guangzhou, 510515, China
11
Xie H, Huang W, Li S, Huang M, Luo C, Li S, Cui C, Ma H, Li H, Liu L, Wang X, Fu G. Radiomics-based lymph nodes prognostic models from three MRI regions in nasopharyngeal carcinoma. Heliyon 2024; 10:e31557. [PMID: 38803981 PMCID: PMC11128517 DOI: 10.1016/j.heliyon.2024.e31557]
Abstract
Accurate prediction of the prognosis of nasopharyngeal carcinoma (NPC) is important for treatment. Lymph node metastasis is an important predictor of distant failure and regional recurrence in patients with NPC. Traditionally, subjective radiological evaluation raises concerns regarding the accuracy and consistency of predictions, whereas radiomics offers an objective and quantitative evaluation of medical images. This retrospective analysis of 729 patients newly diagnosed with NPC without distant metastases evaluated the prognostic performance of radiomics models built from pretreatment magnetic resonance imaging (MRI)-determined metastatic lymph nodes delineated with three different methods. Radiomics features were extracted from all lymph nodes (ALN), the largest lymph node (LLN), and the largest slice of the largest lymph node (LSLN) to generate three radiomics signatures. The radiomics signatures, a clinical model, and radiomics-clinical merged models were developed in the training cohort for predicting overall survival (OS). The results showed that the LSLN signature combined with clinical factors predicted OS with high accuracy and robustness using pretreatment MRI-determined metastatic lymph nodes (C-index [95% confidence interval]: 0.762 [0.760-0.763]), providing a new tool for treatment planning in NPC.
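The C-index reported above is Harrell's concordance index for censored survival data; a compact reference implementation (not the authors' code) is sketched below, assuming that higher risk scores imply shorter expected survival.

```python
# Minimal Harrell's C-index for right-censored data: among comparable pairs
# (the earlier time is an observed event), count pairs where the higher risk
# score belongs to the patient who failed earlier. Illustrative sketch only.
import numpy as np

def c_index(times, events, risk_scores):
    times, events, risk = map(np.asarray, (times, events, risk_scores))
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:   # i failed before j
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else np.nan

print(c_index(times=[5, 8, 12, 20], events=[1, 1, 0, 1], risk_scores=[2.1, 1.5, 0.7, 0.2]))
```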
Affiliation(s)
- Hui Xie
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Wenjie Huang
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Shaolong Li
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Manqian Huang
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Chao Luo
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Shuqi Li
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Chunyan Cui
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Huali Ma
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Haojiang Li
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Lizhi Liu
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
- Xiaoyi Wang
- Department of Radiology, Hainan General Hospital (Hainan Affiliated Hospital of Hainan Medical University), Haikou, China
- Gui Fu
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, China
12
Wang CK, Wang TW, Yang YX, Wu YT. Deep Learning for Nasopharyngeal Carcinoma Segmentation in Magnetic Resonance Imaging: A Systematic Review and Meta-Analysis. Bioengineering (Basel) 2024; 11:504. [PMID: 38790370 PMCID: PMC11118180 DOI: 10.3390/bioengineering11050504]
Abstract
Nasopharyngeal carcinoma (NPC) is a significant health challenge that is particularly prevalent in Southeast Asia and North Africa. MRI is the preferred diagnostic tool for NPC because of its superior soft-tissue contrast, and accurate segmentation of NPC in MRI is crucial for effective treatment planning and prognosis. We searched PubMed, Embase, and Web of Science from inception up to 20 March 2024, adhering to the PRISMA 2020 guidelines. Eligibility criteria focused on studies utilizing deep learning (DL) for NPC segmentation in adults via MRI. Data extraction and meta-analysis were conducted to evaluate the performance of DL models, primarily measured by Dice scores. We assessed methodological quality using the CLAIM and QUADAS-2 tools, and statistical analysis was performed using random-effects models. The analysis incorporated 17 studies and yielded a pooled Dice score of 78% for DL models (95% confidence interval: 74% to 83%), indicating moderate to high segmentation accuracy. Significant heterogeneity and publication bias were observed among the included studies. Our findings reveal that DL models, particularly convolutional neural networks, offer moderately accurate NPC segmentation in MRI. This advancement holds potential for enhancing NPC management and necessitates further research toward integration into clinical practice.
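The pooled Dice score comes from a random-effects meta-analysis; the sketch below implements the standard DerSimonian-Laird estimator for generic per-study effects and variances (the numbers are made up for illustration, not taken from the 17 included studies).

```python
# DerSimonian-Laird random-effects pooling: estimate between-study variance
# tau^2 from Cochran's Q, then combine studies with weights 1/(v_i + tau^2).
# The effect sizes and variances below are made up for illustration.
import numpy as np

def dersimonian_laird(effects, variances):
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                  # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)             # between-study variance
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

dice = [0.76, 0.81, 0.74, 0.83, 0.79]            # per-study mean Dice (illustrative)
var = [0.0004, 0.0009, 0.0006, 0.0010, 0.0005]   # per-study variances (illustrative)
print(dersimonian_laird(dice, var))
```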
Affiliation(s)
- Chih-Keng Wang
- School of Medicine, College of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Otolaryngology-Head and Neck Surgery, Taichung Veterans General Hospital, Taichung 407219, Taiwan
- Ting-Wei Wang
- School of Medicine, College of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, 155, Sec. 2, Li-Nong St. Beitou Dist., Taipei 112304, Taiwan
- Ya-Xuan Yang
- Department of Otolaryngology-Head and Neck Surgery, Taichung Veterans General Hospital, Taichung 407219, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, 155, Sec. 2, Li-Nong St. Beitou Dist., Taipei 112304, Taiwan
13
Zeng Y, Li J, Zhao Z, Liang W, Zeng P, Shen S, Zhang K, Shen C. WET-UNet: Wavelet integrated efficient transformer networks for nasopharyngeal carcinoma tumor segmentation. Sci Prog 2024; 107:368504241232537. [PMID: 38567422 PMCID: PMC11320696 DOI: 10.1177/00368504241232537]
Abstract
Nasopharyngeal carcinoma is a malignant tumor that arises in the epithelium and mucosal glands of the nasopharynx, and its pathological type is mostly poorly differentiated squamous cell carcinoma. Because the nasopharynx is located deep in the head and neck, early diagnosis and timely treatment are critical to patient survival. However, nasopharyngeal carcinoma tumors are small and vary widely in shape, and delineating tumor contours is a challenge even for experienced doctors. In addition, owing to the special location of nasopharyngeal carcinoma, complex treatments such as radiotherapy or surgical resection are often required, so accurate pathological diagnosis is also very important for selecting treatment options. Current deep learning segmentation models, however, suffer from inaccurate and unstable segmentation, mainly limited by dataset accuracy, fuzzy boundaries, and complex contours. To address these two challenges, this article proposes WET-UNet, a hybrid model based on the UNet network, as a powerful alternative for nasopharyngeal carcinoma image segmentation. On the one hand, the wavelet transform is integrated into UNet to enhance lesion boundary information by using low-frequency components to adjust the encoder at low frequencies and to optimize the subsequent computation of the Transformer, improving the accuracy and robustness of image segmentation. On the other hand, the attention mechanism retains the most valuable pixels in the image, captures long-range dependencies, and enables the network to learn more representative features, improving the recognition ability of the model. Comparative experiments show that our network outperforms other models for nasopharyngeal carcinoma image segmentation, and we demonstrate the effectiveness of the two added modules in aiding tumor segmentation. The dataset comprises 5000 images in total, split 8:2 between training and validation. In the experiments, an accuracy of 85.2% and a precision of 84.9% show that the proposed model performs well in nasopharyngeal carcinoma image segmentation.
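The wavelet step in WET-UNet separates low- and high-frequency content; the snippet below uses PyWavelets to show a one-level 2D decomposition of an arbitrary image (the 'haar' wavelet is an arbitrary choice, and this is not the network's internal implementation).

```python
# One-level 2D discrete wavelet transform: the approximation (low-frequency)
# band carries coarse lesion context, the detail bands carry edge information.
# The 'haar' wavelet is chosen arbitrarily for illustration; WET-UNet's
# internal wavelet integration is more involved.
import numpy as np
import pywt  # PyWavelets

image = np.random.rand(128, 128).astype(np.float32)
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

print("low-frequency band:", cA.shape)           # (64, 64)
print("detail bands:", cH.shape, cV.shape, cD.shape)

reconstructed = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print("max reconstruction error:", float(np.abs(reconstructed - image).max()))
```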
Affiliation(s)
- Yan Zeng
- State Key Laboratory of Marine Resource Utilization in South China Sea, Hainan University, Haikou, China
- School of Information and Communication Engineering, Hainan University, Haikou, China
- Jun Li
- State Key Laboratory of Marine Resource Utilization in South China Sea, Hainan University, Haikou, China
- School of Information and Communication Engineering, Hainan University, Haikou, China
- Zhe Zhao
- State Key Laboratory of Marine Resource Utilization in South China Sea, Hainan University, Haikou, China
- School of Information and Communication Engineering, Hainan University, Haikou, China
- Wei Liang
- State Key Laboratory of Marine Resource Utilization in South China Sea, Hainan University, Haikou, China
- School of Information and Communication Engineering, Hainan University, Haikou, China
- Penghui Zeng
- State Key Laboratory of Marine Resource Utilization in South China Sea, Hainan University, Haikou, China
- School of Information and Communication Engineering, Hainan University, Haikou, China
- Shaodong Shen
- State Key Laboratory of Marine Resource Utilization in South China Sea, Hainan University, Haikou, China
- School of Information and Communication Engineering, Hainan University, Haikou, China
- Kun Zhang
- State Key Laboratory of Marine Resource Utilization in South China Sea, Hainan University, Haikou, China
- School of Information Science and Technology, Hainan Normal University, Haikou, China
- Chong Shen
- State Key Laboratory of Marine Resource Utilization in South China Sea, Hainan University, Haikou, China
- School of Information and Communication Engineering, Hainan University, Haikou, China
14
Ren CX, Xu GX, Dai DQ, Lin L, Sun Y, Liu QS. Cross-site prognosis prediction for nasopharyngeal carcinoma from incomplete multi-modal data. Med Image Anal 2024; 93:103103. [PMID: 38368752 DOI: 10.1016/j.media.2024.103103]
Abstract
Accurate prognosis prediction for nasopharyngeal carcinoma based on magnetic resonance (MR) images assists in guiding treatment intensity, thus reducing the risk of recurrence and death. To reduce repeated labor and sufficiently exploit domain knowledge, aggregating labeled/annotated data from external sites enables us to train an intelligent model for a clinical site with unlabeled data. However, this task suffers from the challenges of fusing incomplete multi-modal examination data and of image-data heterogeneity among sites. This paper proposes a cross-site survival analysis method for prognosis prediction of nasopharyngeal carcinoma from a domain adaptation viewpoint. Using a Cox model as the basic framework, our method equips it with a cross-attention based multi-modal fusion regularization. This regularization effectively fuses the multi-modal information from multi-parametric MR images and clinical features onto a domain-adaptive space, despite the absence of some modalities. To enhance feature discrimination, we also extend the contrastive learning technique to censored-data cases. Compared with conventional approaches that directly deploy a trained survival model at a new site, our method achieves superior prognosis prediction performance in cross-site validation experiments. These results highlight the key role of the cross-site adaptability of our method and support its value in clinical practice.
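The framework above builds on the Cox proportional-hazards model; the sketch below writes out its negative log partial likelihood for linear risk scores in plain NumPy (ties are ignored for simplicity), as a reference for the objective such survival models optimize rather than the paper's cross-attention fusion network.

```python
# Negative log partial likelihood of a Cox model with precomputed risk scores
# (tie handling omitted for simplicity). Plain NumPy reference for the loss
# that survival models of this kind optimize; not the paper's fusion network.
import numpy as np

def cox_neg_log_partial_likelihood(risk_scores, times, events):
    risk, t, e = map(np.asarray, (risk_scores, times, events))
    order = np.argsort(-t)                        # sort by decreasing time
    risk, t, e = risk[order], t[order], e[order]
    log_cumsum = np.logaddexp.accumulate(risk)    # running log-sum-exp over risk sets
    return -np.sum((risk - log_cumsum)[e == 1])

scores = np.array([0.8, 0.1, -0.5, 0.3])
times = np.array([2.0, 5.0, 8.0, 3.0])
events = np.array([1, 0, 1, 1])
print(cox_neg_log_partial_likelihood(scores, times, events))
```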
Affiliation(s)
- Chuan-Xian Ren
- School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
- Geng-Xin Xu
- School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
- Dao-Qing Dai
- School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
- Li Lin
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Ying Sun
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Qing-Shan Liu
- School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
15
|
Liu Y, Yang B, Chen X, Zhu J, Ji G, Liu Y, Chen B, Lu N, Yi J, Wang S, Li Y, Dai J, Men K. Efficient segmentation using domain adaptation for MRI-guided and CBCT-guided online adaptive radiotherapy. Radiother Oncol 2023; 188:109871. [PMID: 37634767 DOI: 10.1016/j.radonc.2023.109871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2023] [Revised: 07/31/2023] [Accepted: 08/20/2023] [Indexed: 08/29/2023]
Abstract
BACKGROUND Delineation of regions of interest (ROIs) is important for adaptive radiotherapy (ART), but it is also time-consuming and labor-intensive. AIM This study aims to develop efficient segmentation methods for magnetic resonance imaging-guided ART (MRIgART) and cone-beam computed tomography-guided ART (CBCTgART). MATERIALS AND METHODS The MRIgART and CBCTgART studies enrolled 242 prostate cancer patients and 530 nasopharyngeal carcinoma patients, respectively. A public CBCT dataset of 35 pancreatic cancer patients was adopted to test the framework. We designed two domain adaptation methods to learn and adapt features from planning computed tomography (pCT) to the MRI or CBCT modality. The pCT was transformed to synthetic MRI (sMRI) for MRIgART, while CBCT was transformed to synthetic CT (sCT) for CBCTgART. Generalized segmentation models were trained on large population data, with sMRI as input for MRIgART and pCT as input for CBCTgART. Finally, a personalized model for each patient was established by fine-tuning the generalized model with the contours on that patient's pCT. The proposed method was compared with deformable image registration (DIR), a regular deep learning (DL) model trained on the same modality (DL-regular), and the generalized model in our framework (DL-generalized). RESULTS The proposed method achieved better or comparable performance. For MRIgART of the prostate cancer patients, the mean Dice similarity coefficient (DSC) of four ROIs was 87.2%, 83.75%, 85.36%, and 92.20% for DIR, DL-regular, DL-generalized, and the proposed method, respectively. For CBCTgART of the nasopharyngeal carcinoma patients, the mean DSC values of two target volumes were 90.81% and 91.18%, 75.17% and 58.30%, for DIR, DL-regular, DL-generalized, and the proposed method, respectively. For CBCTgART of the pancreatic cancer patients, the mean DSC values of two ROIs were 61.94% and 61.44%, 63.94% and 81.56%, for DIR, DL-regular, DL-generalized, and the proposed method, respectively. CONCLUSION The proposed method, with its personalized modeling, improved the segmentation accuracy of ART.
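As a rough illustration of the personalization step described above (fine-tuning the generalized model with the contours on each patient's pCT), the hedged PyTorch sketch below copies a trained segmentation model and continues training it on a single patient's slices; the optimizer, loss, and epoch count are assumptions, not details reported in the paper.

    import copy
    import torch

    def personalize(generalized_model, pct_slices, pct_contours, epochs=20, lr=1e-4):
        # Fine-tune a copy of the generalized model on one patient's planning-CT
        # (or synthetic-image) slices and the contours delineated on them.
        model = copy.deepcopy(generalized_model)          # keep the generalized weights intact
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        criterion = torch.nn.CrossEntropyLoss()           # expects logits (1, C, H, W) and an (H, W) label map
        model.train()
        for _ in range(epochs):
            for image, mask in zip(pct_slices, pct_contours):
                optimizer.zero_grad()
                loss = criterion(model(image.unsqueeze(0)), mask.unsqueeze(0))
                loss.backward()
                optimizer.step()
        return model                                      # the personalized model for this patient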
Collapse
Affiliation(s)
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Guangqian Ji
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Yueping Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Bo Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Ningning Lu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Junlin Yi
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Shulian Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Yexiong Li
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China.
| | - Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China.
| |
Collapse
|
16
|
Zeng Y, Zeng P, Shen S, Liang W, Li J, Zhao Z, Zhang K, Shen C. DCTR U-Net: automatic segmentation algorithm for medical images of nasopharyngeal cancer in the context of deep learning. Front Oncol 2023; 13:1190075. [PMID: 37546396 PMCID: PMC10402756 DOI: 10.3389/fonc.2023.1190075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 05/30/2023] [Indexed: 08/08/2023] Open
Abstract
Nasopharyngeal carcinoma (NPC) is a malignant tumor that occurs in the wall of the nasopharyngeal cavity and is prevalent in Southern China, Southeast Asia, North Africa, and the Middle East. According to studies, NPC is one of the most common malignant tumors in Hainan, China, and it has the highest incidence rate among otorhinolaryngological malignancies. We propose a new deep learning network model to improve the segmentation accuracy of the target region of nasopharyngeal cancer. Our model is based on the U-Net architecture, to which we add a Dilated Convolution Module, a Transformer Module, and a Residual Module. The new network can effectively overcome the restricted receptive field of plain convolutions and achieve global and local multi-scale feature fusion. In our experiments, the proposed network was trained and validated using 10-fold cross-validation on the records of 300 clinical patients. The results were evaluated using the Dice similarity coefficient (DSC) and the average symmetric surface distance (ASSD); the DSC and ASSD values are 0.852 and 0.544 mm, respectively. With the effective combination of the Dilated Convolution Module, Transformer Module, and Residual Module, we significantly improved the segmentation performance for the target region of NPC.
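To make the "Dilated Convolution Module" and "Residual Module" ideas concrete, here is a generic dilated residual block in PyTorch. It is only a sketch of the standard construction, not the DCTR U-Net implementation, and the channel counts are placeholders.

    import torch
    import torch.nn as nn

    class DilatedResidualBlock(nn.Module):
        # A residual block whose 3x3 convolutions are dilated, enlarging the
        # receptive field without extra downsampling.
        def __init__(self, channels, dilation=2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation, bias=False),
                nn.BatchNorm2d(channels),
            )
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(x + self.body(x))   # identity shortcut preserves gradient flow

    # e.g. feat = DilatedResidualBlock(64)(torch.randn(1, 64, 128, 128))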
Collapse
Affiliation(s)
- Yan Zeng
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Personnel Department, Hainan Medical University, Haikou, China
| | - PengHui Zeng
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
| | - ShaoDong Shen
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
| | - Wei Liang
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
| | - Jun Li
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
| | - Zhe Zhao
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
| | - Kun Zhang
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- School of Information Science and Technology, Hainan Normal University, Haikou, China
| | - Chong Shen
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
| |
Collapse
|
17
|
Zhou H, Li H, Chen S, Yang S, Ruan G, Liu L, Chen H. BSMM-Net: Multi-modal neural network based on bilateral symmetry for nasopharyngeal carcinoma segmentation. Front Hum Neurosci 2023; 16:1068713. [PMID: 36704094 PMCID: PMC9872196 DOI: 10.3389/fnhum.2022.1068713] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Accepted: 12/05/2022] [Indexed: 01/11/2023] Open
Abstract
Introduction Automatically and accurately delineating primary nasopharyngeal carcinoma (NPC) tumors in head magnetic resonance imaging (MRI) is crucial for patient staging and radiotherapy. Inspired by the bilateral symmetry of the head and the complementary information of different modalities, a multi-modal neural network named BSMM-Net is proposed for NPC segmentation. Methods First, a bilaterally symmetrical patch block (BSP) is used to crop the image and its bilaterally flipped copy into patches. The BSP improves the precision of locating NPC lesions and mimics how radiologists locate tumors in clinical practice by comparing the two sides of the head. Second, modality-specific and multi-modal fusion features (MSMFFs) are extracted by the proposed MSMFF encoder to fully exploit the complementary information of T1- and T2-weighted MRI. The MSMFFs are then fed into the base decoder to aggregate representative features and precisely delineate the NPC. The MSMFF is the output of the MSMFF encoder blocks, which consist of six modality-specific networks and one multi-modal fusion network; apart from T1 and T2, the other four modalities are generated from them by the BSP and the DT modal generation block. Third, an MSMFF decoder with a structure similar to the MSMFF encoder is deployed to supervise the encoder during training and ensure the validity of the MSMFF it produces. Finally, experiments are conducted on a dataset of 7633 samples collected from 745 patients. Results and discussion The global DICE, precision, recall, and IoU on the testing set are 0.82, 0.82, 0.86, and 0.72, respectively. The results show that the proposed model outperforms other state-of-the-art methods for NPC segmentation. In clinical practice, BSMM-Net can provide a precise delineation of NPC that can be used for radiotherapy planning.
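The bilaterally symmetrical patch idea can be pictured with the short PyTorch sketch below, which pairs each patch with the patch at the same position in a left-right flipped copy of the slice. It is a rough stand-in for the paper's BSP block; the patch size and stride are arbitrary choices here.

    import torch

    def bilateral_patches(image, patch_size=64, stride=64):
        # Pair every patch of a head slice with the patch at the same location
        # in its left-right mirrored copy, so a network can compare the two sides.
        mirrored = torch.flip(image, dims=[-1])           # flip along the left-right axis
        patches, mirrored_patches = [], []
        _, height, width = image.shape                    # expects a (C, H, W) slice
        for y in range(0, height - patch_size + 1, stride):
            for x in range(0, width - patch_size + 1, stride):
                patches.append(image[:, y:y + patch_size, x:x + patch_size])
                mirrored_patches.append(mirrored[:, y:y + patch_size, x:x + patch_size])
        return torch.stack(patches), torch.stack(mirrored_patches)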
Collapse
Affiliation(s)
- Haoyang Zhou
- School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, China
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, Guangxi, China
| | - Haojiang Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou, Guangdong, China
| | - Shuchao Chen
- School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, China
| | - Shixin Yang
- School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, China
| | - Guangying Ruan
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou, Guangdong, China
| | - Lizhi Liu
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou, Guangdong, China
| | - Hongbo Chen
- School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, China
| |
Collapse
|
18
|
Xia T, Huang G, Pun CM, Zhang W, Li J, Ling WK, Lin C, Yang Q. Multi-scale contextual semantic enhancement network for 3D medical image segmentation. Phys Med Biol 2022; 67. [PMID: 36317277 DOI: 10.1088/1361-6560/ac9e41] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 10/27/2022] [Indexed: 11/17/2022]
Abstract
Objective. Accurate and automatic segmentation of medical images is crucial for improving the efficiency of disease diagnosis and making treatment plans. Although methods based on convolutional neural networks have achieved excellent results in numerous medical image segmentation tasks, they still face challenges including drastic scale variations of lesions, blurred lesion boundaries, and class imbalance. Our objective is to design a segmentation framework named multi-scale contextual semantic enhancement network (3D MCSE-Net) to address these problems. Approach. The 3D MCSE-Net mainly consists of a multi-scale context pyramid fusion module (MCPFM), a triple feature adaptive enhancement module (TFAEM), and an asymmetric class correction loss (ACCL) function. Specifically, the MCPFM resolves the problem of unreliable predictions due to variable morphology and drastic scale variations of lesions by capturing the multi-scale global context of feature maps. Subsequently, the TFAEM overcomes the problem of blurred lesion boundaries caused by infiltrating growth and complex context by adaptively recalibrating and enhancing the multi-dimensional feature representation of suspicious regions. Moreover, the ACCL alleviates class imbalance by adjusting an asymmetric correction coefficient and a weighting factor. Main results. Our method is evaluated on the nasopharyngeal cancer tumor segmentation (NPCTS) dataset, the public dataset of the MICCAI 2017 liver tumor segmentation (LiTS) challenge, and the 3D image reconstruction for comparison of algorithm and database (3Dircadb) dataset to verify its effectiveness and generalizability. The experimental results show that the proposed components all have unique strengths and exhibit mutually reinforcing properties. More importantly, the proposed 3D MCSE-Net outperforms previous state-of-the-art methods for tumor segmentation on the NPCTS, LiTS, and 3Dircadb datasets. Significance. Our method addresses the effects of drastic scale variations of lesions, blurred lesion boundaries, and class imbalance, and improves tumor segmentation accuracy, which facilitates clinical diagnosis and treatment planning.
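The exact form of the ACCL is not given in the abstract; a Tversky-style loss, shown below in PyTorch, conveys the general idea of weighting false positives and false negatives asymmetrically so that a small tumor class is not swamped by background. The coefficients alpha and beta only loosely play the role of the asymmetric correction coefficient and weighting factor named above.

    import torch

    def asymmetric_tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
        # pred:   (N, ...) foreground probabilities in [0, 1]
        # target: (N, ...) binary ground-truth masks
        # alpha weights false positives, beta weights false negatives.
        pred, target = pred.flatten(1), target.flatten(1).float()
        tp = (pred * target).sum(dim=1)
        fp = (pred * (1 - target)).sum(dim=1)
        fn = ((1 - pred) * target).sum(dim=1)
        tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
        return (1 - tversky).mean()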
Collapse
Affiliation(s)
- Tingjian Xia
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
| | - Guoheng Huang
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
| | - Chi-Man Pun
- Department of Computer and Information Science, University of Macau, Macau 999078 SAR, People's Republic of China
| | - Weiwen Zhang
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
| | - Jiajian Li
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
| | - Wing-Kuen Ling
- School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
| | - Chao Lin
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, People's Republic of China
| | - Qi Yang
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, People's Republic of China
| |
Collapse
|
19
|
Cao X, Chen X, Lin ZC, Liang CX, Huang YY, Cai ZC, Li JP, Gao MY, Mai HQ, Li CF, Guo X, Lyu X. Add-on individualizing prediction of nasopharyngeal carcinoma using deep-learning based on MRI: A multicentre, validation study. iScience 2022; 25:104841. [PMID: 36034225 PMCID: PMC9399485 DOI: 10.1016/j.isci.2022.104841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Revised: 07/08/2022] [Accepted: 07/21/2022] [Indexed: 12/24/2022] Open
Abstract
In nasopharyngeal carcinoma, deep-learning-extracted signatures on MR images might be correlated with survival. In this study, we sought to develop an individualized model using deep-learning MRI signatures and clinical data to predict survival and to estimate the benefit of induction chemotherapy on the survival of patients with nasopharyngeal carcinoma. Two thousand ninety-seven patients from three independent hospitals were identified and randomly assigned. When the deep-learning signatures of the primary tumor and of clinically involved gross cervical lymph nodes extracted from MR images were added to the clinical data and TNM staging, the combined progression-free survival prediction model achieved better prediction performance. Its application lies in helping patients decide on treatment regimens: under the same conditions, as the MRI signature values increase, the survival benefit achieved by induction chemotherapy also increases. In nasopharyngeal carcinoma, these prediction models are the first to provide an individualized estimation of survival and to model the benefit of induction chemotherapy on survival.
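The "add-on" idea of combining deep-learning MRI signatures with clinical data and TNM staging can be sketched as a small fusion head. The PyTorch snippet below is purely illustrative (the layer sizes and single-score output are assumptions) and does not reproduce the authors' model.

    import torch
    import torch.nn as nn

    class CombinedRiskModel(nn.Module):
        # Concatenate deep-learning image signatures with clinical/TNM covariates
        # and map the joint vector to a single risk score.
        def __init__(self, n_signature, n_clinical, hidden=32):
            super().__init__()
            self.head = nn.Sequential(
                nn.Linear(n_signature + n_clinical, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, signature, clinical):
            return self.head(torch.cat([signature, clinical], dim=1)).squeeze(1)

    # e.g. risk = CombinedRiskModel(128, 10)(torch.randn(4, 128), torch.randn(4, 10))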
Collapse
Affiliation(s)
- Xun Cao
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Centre, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, China
- Department of Critical Care Medicine, Sun Yat-sen University Cancer Centre, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangzhou, China
| | - Xi Chen
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Centre, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, China
| | - Zhuo-Chen Lin
- Department of Medical Records, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Chi-Xiong Liang
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Centre, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, China
| | - Ying-Ying Huang
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Centre, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, China
| | - Zhuo-Chen Cai
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Centre, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, China
| | - Jian-Peng Li
- Department of Radiology, Dongguan People’s Hospital, Dongguan, China
| | - Ming-Yong Gao
- Department of Medical Imaging, The First People’s Hospital of Foshan, Foshan, China
| | - Hai-Qiang Mai
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Centre, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, China
| | - Chao-Feng Li
- Department of Information Technology, Sun Yat-sen University Cancer Centre, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangzhou, China
| | - Xiang Guo
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Centre, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, China
| | - Xing Lyu
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Centre, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, China
| |
Collapse
|
20
|
Martin RJ, Sharma U, Kaur K, Kadhim NM, Lamin M, Ayipeh CS. Multidimensional CNN-Based Deep Segmentation Method for Tumor Identification. BIOMED RESEARCH INTERNATIONAL 2022; 2022:5061112. [PMID: 36046444 PMCID: PMC9420592 DOI: 10.1155/2022/5061112] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/13/2022] [Revised: 07/18/2022] [Accepted: 07/23/2022] [Indexed: 11/18/2022]
Abstract
Weighted MR images of 421 patients with nasopharyngeal cancer were obtained at the head and neck level, and the tumors in the images were assessed by two expert doctors. The multimodal images and labels of 346 patients served as the training set, while those of the remaining 75 patients served as the independent test set. A convolutional neural network (CNN) was used for single-modal multidimensional information fusion and for multimodal multidimensional information fusion (MMMDF). The performance of the three models was compared, and the findings reveal that the multimodal multidimensional fusion model performs best, the two-modal multidimensional information fusion model second best, and the single-modal multidimensional information fusion model worst. In MR images of nasopharyngeal cancer, a convolutional network can therefore segment tumors precisely and efficiently.
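A minimal picture of multimodal fusion, assuming per-modality convolutional branches fused by channel concatenation, is sketched below in PyTorch; it only illustrates the fusion concept and is unrelated to the authors' MMMDF implementation.

    import torch
    import torch.nn as nn

    class TwoModalFusionCNN(nn.Module):
        # Each MR modality gets its own convolutional branch; the feature maps
        # are fused by channel concatenation before a 1x1 segmentation head.
        def __init__(self, n_classes=2):
            super().__init__()
            def branch():
                return nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                )
            self.branch_a, self.branch_b = branch(), branch()
            self.head = nn.Conv2d(32, n_classes, 1)

        def forward(self, modality_a, modality_b):
            fused = torch.cat([self.branch_a(modality_a), self.branch_b(modality_b)], dim=1)
            return self.head(fused)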
Collapse
Affiliation(s)
- R. John Martin
- Faculty of Computer Science and Information Technology, Jazan University, Saudi Arabia
| | - Uttam Sharma
- Department of Computer Science and Engineering, Gautam Buddha University, Greater Noida, India
| | - Kiranjeet Kaur
- Department of CSE, University Centre for Research & Development, Chandigarh University, Mohali, Punjab 140413, India
| | - Noor Mohammed Kadhim
- Department of Medical Instruments Engineering Techniques, Al-Farahidi University, Baghdad 10021, Iraq
| | - Madonna Lamin
- Computer Science and Engineering, ITM SLS Baroda University, Vadodara, 391510 Gujarat, India
| | | |
Collapse
|
21
|
Chen Y, Han G, Lin T, Liu X. CAFS: An Attention-Based Co-Segmentation Semi-Supervised Method for Nasopharyngeal Carcinoma Segmentation. SENSORS (BASEL, SWITZERLAND) 2022; 22:5053. [PMID: 35808548 PMCID: PMC9269783 DOI: 10.3390/s22135053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/28/2022] [Revised: 06/25/2022] [Accepted: 06/30/2022] [Indexed: 02/06/2023]
Abstract
Accurate segmentation of nasopharyngeal carcinoma is essential to its treatment effect. However, existing deep learning-based segmentation methods face several challenges. First, the acquisition of labeled data is difficult. Second, nasopharyngeal carcinoma is similar in appearance to the surrounding tissues. Third, its shape is complex. These challenges make segmentation of nasopharyngeal carcinoma difficult. This paper proposes a novel semi-supervised method named CAFS for automatic segmentation of nasopharyngeal carcinoma. CAFS addresses the above challenges through three mechanisms: a teacher-student cooperative segmentation mechanism, an attention mechanism, and a feedback mechanism. CAFS can accurately segment the cancer region using only a small amount of labeled nasopharyngeal carcinoma data. The average DSC value of CAFS is 0.8723 on the nasopharyngeal carcinoma segmentation task. Moreover, CAFS outperformed state-of-the-art nasopharyngeal carcinoma segmentation methods in the comparison experiment, achieving the highest values of DSC, Jaccard, and precision among the compared methods. In particular, the DSC value of CAFS is 7.42% higher than the highest DSC value among the state-of-the-art methods.
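The teacher-student cooperative mechanism can be illustrated with a generic mean-teacher training step: a supervised loss on the labeled batch plus a consistency loss that pulls the student toward the teacher's predictions on unlabeled images, with the teacher updated as an exponential moving average of the student. This PyTorch sketch omits CAFS's attention and feedback mechanisms and uses assumed names throughout.

    import torch

    def ema_update(teacher, student, momentum=0.99):
        # Teacher weights track an exponential moving average of the student weights.
        with torch.no_grad():
            for t, s in zip(teacher.parameters(), student.parameters()):
                t.mul_(momentum).add_(s, alpha=1.0 - momentum)

    def semi_supervised_step(student, teacher, labeled_batch, unlabeled_images,
                             optimizer, supervised_loss, consistency_weight=0.1):
        images, masks = labeled_batch
        loss = supervised_loss(student(images), masks)            # supervised term on labeled data
        with torch.no_grad():
            teacher_prob = torch.softmax(teacher(unlabeled_images), dim=1)
        student_prob = torch.softmax(student(unlabeled_images), dim=1)
        loss = loss + consistency_weight * ((student_prob - teacher_prob) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        ema_update(teacher, student)
        return loss.item()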
Collapse
Affiliation(s)
- Yitong Chen
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China; (Y.C.); (G.H.); (T.L.)
| | - Guanghui Han
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China; (Y.C.); (G.H.); (T.L.)
- School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450046, China
| | - Tianyu Lin
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China; (Y.C.); (G.H.); (T.L.)
| | - Xiujian Liu
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China; (Y.C.); (G.H.); (T.L.)
| |
Collapse
|
22
|
Meng M, Gu B, Bi L, Song S, Feng DD, Kim J. DeepMTS: Deep Multi-task Learning for Survival Prediction in Patients With Advanced Nasopharyngeal Carcinoma Using Pretreatment PET/CT. IEEE J Biomed Health Inform 2022; 26:4497-4507. [PMID: 35696469 DOI: 10.1109/jbhi.2022.3181791] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Nasopharyngeal Carcinoma (NPC) is a malignant epithelial cancer arising from the nasopharynx. Survival prediction is a major concern for NPC patients, as it provides early prognostic information to plan treatments. Recently, deep survival models based on deep learning have demonstrated the potential to outperform traditional radiomics-based survival prediction models. Deep survival models usually use image patches covering the whole target regions (e.g., the nasopharynx for NPC) or containing only segmented tumor regions as the input. However, the models using the whole target regions also include irrelevant background information, while the models using segmented tumor regions disregard potentially prognostic information outside the primary tumors (e.g., local lymph node metastasis and adjacent tissue invasion). In this study, we propose a 3D end-to-end Deep Multi-Task Survival model (DeepMTS) for joint survival prediction and tumor segmentation in advanced NPC from pretreatment PET/CT. Our novelty is the introduction of a hard-sharing segmentation backbone to guide the extraction of local features related to the primary tumors, which reduces the interference from irrelevant background information. In addition, we introduce a cascaded survival network to capture the prognostic information outside the primary tumors and further leverage the global tumor information (e.g., tumor size, shape, and location) derived from the segmentation backbone. Our experiments with two clinical datasets demonstrate that DeepMTS can consistently outperform traditional radiomics-based survival prediction models and existing deep survival models.
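The hard-sharing design can be sketched as one encoder feeding two heads: a voxel-wise segmentation head and a pooled survival head producing a single risk score per patient. The PyTorch toy below (tiny encoder, assumed channel counts) only illustrates hard parameter sharing, not DeepMTS itself or its cascaded survival network.

    import torch
    import torch.nn as nn

    class HardSharedMultiTaskNet(nn.Module):
        # One shared 3D encoder feeds both a voxel-wise segmentation head and a
        # global survival head that outputs a single risk score per patient.
        def __init__(self, in_channels=2, base=16):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv3d(in_channels, base, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            )
            self.seg_head = nn.Conv3d(base, 1, 1)                 # tumor mask logits
            self.surv_head = nn.Sequential(
                nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(base, 1)
            )

        def forward(self, x):                                     # x: (N, 2, D, H, W), e.g. PET and CT channels
            features = self.encoder(x)
            return self.seg_head(features), self.surv_head(features).squeeze(1)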
Collapse
|