1. Yuan N, Zhang Y, Lv K, Liu Y, Yang A, Hu P, Yu H, Han X, Guo X, Li J, Wang T, Lei B, Ma G. HCA-DAN: hierarchical class-aware domain adaptive network for gastric tumor segmentation in 3D CT images. Cancer Imaging 2024; 24:63. PMID: 38773670; PMCID: PMC11107051; DOI: 10.1186/s40644-024-00711-w.
Abstract
BACKGROUND Accurate segmentation of gastric tumors from CT scans provides useful image information for guiding the diagnosis and treatment of gastric cancer. However, automated gastric tumor segmentation from 3D CT images faces several challenges. The large variation in anisotropic spatial resolution limits the ability of 3D convolutional neural networks (CNNs) to learn features from different views. The background texture of gastric tumors is complex, and their size, shape, and intensity distribution are highly variable, which makes it difficult for deep learning methods to capture the tumor boundary. In particular, while multi-center datasets increase sample size and representativeness, they suffer from inter-center heterogeneity. METHODS In this study, we propose a new cross-center 3D tumor segmentation method named Hierarchical Class-Aware Domain Adaptive Network (HCA-DAN), which includes a new 3D neural network that efficiently bridges an Anisotropic neural network and a Transformer (AsTr) to extract multi-scale context features from CT images with anisotropic resolution, and a hierarchical class-aware domain alignment (HCADA) module that adaptively aligns these multi-scale context features across two domains by integrating a class attention map with class-specific information. We evaluate the proposed method on an in-house CT image dataset collected from four medical centers and validate its segmentation performance in both in-center and cross-center test scenarios. RESULTS Our baseline segmentation network (i.e., AsTr) achieves the best results compared with other 3D segmentation models, with mean Dice similarity coefficients (DSC) of 59.26%, 55.97%, 48.83%, and 67.28% in the four in-center test tasks, and 56.42%, 55.94%, 46.54%, and 60.62% in the four cross-center test tasks. In addition, the proposed cross-center segmentation network (i.e., HCA-DAN) outperforms other unsupervised domain adaptation methods, with DSCs of 58.36%, 56.72%, 49.25%, and 62.20% in the four cross-center test tasks. CONCLUSIONS Comprehensive experimental results demonstrate that the proposed method outperforms the compared methods on this multi-center database and is promising for routine clinical workflows.
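The HCADA module is described as weighting features with a class attention map before aligning them across domains. Below is a minimal, hypothetical PyTorch sketch of that general idea: per-class softmax attention gates the features fed to an adversarial domain discriminator. The module name, shapes, and discriminator design are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumption, not the authors' code): class-aware adversarial feature
# alignment. Per-class attention maps from the segmentation softmax weight the
# features passed to a domain discriminator.
import torch
import torch.nn as nn

class ClassAwareAligner(nn.Module):
    """Produces a domain logit from class-attention-weighted features."""
    def __init__(self, in_ch: int, n_classes: int):
        super().__init__()
        self.n_classes = n_classes
        # Small 3D domain discriminator over the concatenated weighted maps.
        self.disc = nn.Sequential(
            nn.Conv3d(in_ch * n_classes, 64, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, feats: torch.Tensor, seg_logits: torch.Tensor) -> torch.Tensor:
        # Class attention map: softmax over classes, one map per class.
        attn = torch.softmax(seg_logits, dim=1)                 # (B, C, D, H, W)
        weighted = [feats * attn[:, c:c + 1] for c in range(self.n_classes)]
        return self.disc(torch.cat(weighted, dim=1))            # domain logit

# Adversarial training would push source/target logits toward the same
# distribution, e.g. via nn.BCEWithLogitsLoss against domain labels.
feats = torch.randn(2, 16, 8, 32, 32)       # one scale of multi-scale features
seg_logits = torch.randn(2, 2, 8, 32, 32)   # tumor vs. background logits
aligner = ClassAwareAligner(in_ch=16, n_classes=2)
print(aligner(feats, seg_logits).shape)      # torch.Size([2, 1])
```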
Affiliation(s)
- Ning Yuan: Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Yongtao Zhang: School of Biomedical Engineering, Health Science Centers, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Kuan Lv: Peking University China-Japan Friendship School of Clinical Medicine, Beijing, China
- Yiyao Liu: School of Biomedical Engineering, Health Science Centers, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Aocai Yang: Department of Radiology, China-Japan Friendship Hospital, No. 2 East Yinghua Road, Chaoyang District, Beijing, 100029, China
- Pianpian Hu: Department of Radiology, China-Japan Friendship Hospital, No. 2 East Yinghua Road, Chaoyang District, Beijing, 100029, China
- Hongwei Yu: Department of Radiology, China-Japan Friendship Hospital, No. 2 East Yinghua Road, Chaoyang District, Beijing, 100029, China
- Xiaowei Han: Department of Radiology, The Affiliated Drum Tower Hospital of Nanjing University Medical School, Nanjing, China
- Xing Guo: Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Junfeng Li: Department of Medical Imaging, Heping Hospital Affiliated to Changzhi Medical College, Changzhi, China
- Tianfu Wang: School of Biomedical Engineering, Health Science Centers, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Baiying Lei: School of Biomedical Engineering, Health Science Centers, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China; AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Guangdong, China
- Guolin Ma: Department of Radiology, China-Japan Friendship Hospital, No. 2 East Yinghua Road, Chaoyang District, Beijing, 100029, China
2. Hu J, Cui Z, Zhang X, Zhang J, Ge Y, Zhang H, Lu Y, Shen D. Uncertainty-aware refinement framework for ovarian tumor segmentation in CECT volume. Med Phys 2024; 51:2678-2694. PMID: 37862556; DOI: 10.1002/mp.16795.
Abstract
BACKGROUND Ovarian cancer is a highly lethal gynecological disease. Accurate and automated segmentation of ovarian tumors in contrast-enhanced computed tomography (CECT) images is crucial in the radiotherapy treatment of ovarian cancer, enabling radiologists to evaluate cancer progression and develop timely therapeutic plans. However, automatic ovarian tumor segmentation is challenging due to factors such as an inhomogeneous background, ambiguous tumor boundaries, and foreground-background imbalance, all of which contribute to high predictive uncertainty for a segmentation model. PURPOSE To tackle these challenges, we propose an uncertainty-aware refinement framework that estimates and refines regions of high predictive uncertainty for accurate ovarian tumor segmentation in CECT images. METHODS We first employ an approximate Bayesian network to detect coarse regions of interest (ROIs) of both ovarian tumors and uncertain regions. These ROIs allow a subsequent segmentation network to narrow down the search area for tumors and prioritize uncertain regions, resulting in precise segmentation of ovarian tumors. Meanwhile, the framework integrates two guidance modules that learn two implicit functions capable of mapping query features, sampled according to their uncertainty, to organ or boundary manifolds, guiding the segmentation network to encode information from uncertain regions. RESULTS First, 367 CECT images were collected from one hospital. For the testing group of 77 cases, the Dice score, Jaccard index, recall, positive predictive value (PPV), 95% Hausdorff distance (HD95), and average symmetric surface distance (ASSD) are 86.31%, 73.93%, 83.95%, 86.03%, 15.17 mm, and 2.57 mm, respectively, all significantly better than those of the other state-of-the-art models. Visual comparison also shows that the compared methods produce more mis-segmentation than ours. Furthermore, our method achieves a Dice score at least 20% higher than those of the compared methods when tumor volumes are less than 20 cm³, indicating better recognition of small regions. In addition, 38 CECT images were collected from another hospital to form an external testing group, on which our approach consistently and significantly outperforms the compared methods across key evaluation metrics: Dice score (83.74%), Jaccard (69.55%), recall (82.12%), PPV (81.61%), HD95 (12.31 mm), and ASSD (2.32 mm). CONCLUSIONS Experimental results demonstrate that the framework significantly outperforms the compared state-of-the-art methods, with less under- and over-segmentation and better identification of small tumors. It has the potential for clinical application.
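The "approximate Bayesian network" used to flag uncertain regions is commonly realized with Monte Carlo dropout: keep dropout active at test time, average several stochastic forward passes, and take per-voxel predictive entropy as the uncertainty map. The sketch below shows that generic technique under those assumptions; the toy model and names are illustrative, not the authors' code.

```python
# MC-dropout uncertainty sketch (a common approximation of Bayesian inference;
# not necessarily the paper's exact formulation).
import torch
import torch.nn as nn

def mc_dropout_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 10):
    """Return mean foreground probability and per-voxel predictive entropy."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    mean = probs.mean(dim=0)
    eps = 1e-8  # numerical guard for log(0)
    entropy = -(mean * (mean + eps).log() + (1 - mean) * (1 - mean + eps).log())
    return mean, entropy  # high-entropy voxels mark regions to refine

# Toy stand-in for the coarse ROI network (illustrative only).
model = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.Dropout3d(0.5),
                      nn.Conv3d(8, 1, 1))
mean, entropy = mc_dropout_uncertainty(model, torch.randn(1, 1, 16, 64, 64))
print(mean.shape, float(entropy.max()))
```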
Affiliation(s)
- Jiaqi Hu: Zhejiang Provincial Key Laboratory of Precision Diagnosis and Therapy for Major Gynecological Diseases, Department of Gynecologic Oncology, Women's Hospital and Institute of Translational Medicine, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Zhiming Cui: School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Xiao Zhang: School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China; School of Information Science and Technology, Northwest University, Xi'an, China
- Jiadong Zhang: School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Yuyan Ge: School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China; Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Honghe Zhang: Department of Pathology, Research Unit of Intelligence Classification of Tumor Pathology and Precision Therapy, Chinese Academy of Medical Sciences, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Yan Lu: Zhejiang Provincial Key Laboratory of Precision Diagnosis and Therapy for Major Gynecological Diseases, Department of Gynecologic Oncology, Women's Hospital and Institute of Translational Medicine, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China; Cancer Center, Zhejiang University, Hangzhou, Zhejiang, China
- Dinggang Shen: School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
3. Ngo TKN, Yang SJ, Mao BH, Nguyen TKM, Ng QD, Kuo YL, Tsai JH, Saw SN, Tu TY. A deep learning-based pipeline for analyzing the influences of interfacial mechanochemical microenvironments on spheroid invasion using differential interference contrast microscopic images. Mater Today Bio 2023; 23:100820. PMID: 37810748; PMCID: PMC10558776; DOI: 10.1016/j.mtbio.2023.100820.
Abstract
Metastasis is the leading cause of cancer-related deaths. During this process, cancer cells are likely to navigate discrete tissue-tissue interfaces, enabling them to infiltrate and spread throughout the body. Three-dimensional (3D) spheroid modeling is receiving increasing attention due to its strengths in studying the invasive behavior of metastatic cancer cells. While microscopy is a conventional approach for investigating 3D invasion, post-invasion image analysis remains a significant, time-consuming challenge for researchers. In this study, we present an image-processing pipeline built on a deep learning (DL) model with an encoder-decoder architecture to assess and characterize the invasion dynamics of tumor spheroids. The developed models, equipped with feature extraction and measurement capabilities, can automatically segment both the invasive protrusions and the core region of spheroids situated within interfacial microenvironments with distinct mechanochemical factors. Our findings suggest that combining spheroid culture with DL-based image analysis enables identification of time-lapse migratory patterns of tumor spheroids above matrix-substrate interfaces, laying the foundation for delineating the mechanism of local invasion during cancer metastasis.
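For orientation, here is a toy encoder-decoder of the kind the pipeline describes, operating on 2D micrographs (DIC images are 2D) and predicting per-pixel classes. The class set (background, spheroid core, invasive protrusions), layer widths, and all names are assumptions for illustration, not the trained model.

```python
# Minimal encoder-decoder sketch (illustrative; not the published model).
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, n_classes: int = 3):  # assumed: background/core/protrusions
        super().__init__()
        self.enc = nn.Sequential(              # encoder: downsample, widen channels
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(              # decoder: upsample back to input size
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),       # per-pixel class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(x))

net = TinyEncoderDecoder()
print(net(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 3, 128, 128])
```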
Affiliation(s)
- Thi Kim Ngan Ngo: Department of Biomedical Engineering, College of Engineering, National Cheng Kung University, Tainan, 70101, Taiwan
- Sze Jue Yang: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia
- Bin-Hsu Mao: Department of Biomedical Engineering, College of Engineering, National Cheng Kung University, Tainan, 70101, Taiwan
- Thi Kim Mai Nguyen: Department of Biomedical Engineering, College of Engineering, National Cheng Kung University, Tainan, 70101, Taiwan
- Qi Ding Ng: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia
- Yao-Lung Kuo: Department of Surgery, College of Medicine, National Cheng Kung University, Tainan, 70101, Taiwan; Department of Surgery, National Cheng Kung University Hospital, Tainan, 70101, Taiwan
- Jui-Hung Tsai: Department of Internal Medicine, National Cheng Kung University Hospital, Tainan, 70101, Taiwan
- Shier Nee Saw: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia
- Ting-Yuan Tu: Department of Biomedical Engineering, College of Engineering, National Cheng Kung University, Tainan, 70101, Taiwan; Medical Device Innovation Center, National Cheng Kung University, Tainan, 70101, Taiwan
4. Yang C, Zhou Q, Li M, Xu L, Zeng Y, Liu J, Wei Y, Shi F, Chen J, Li P, Shu Y, Yang L, Shu J. MRI-based automatic identification and segmentation of extrahepatic cholangiocarcinoma using deep learning network. BMC Cancer 2023; 23:1089. PMID: 37950207; PMCID: PMC10636947; DOI: 10.1186/s12885-023-11575-x.
Abstract
BACKGROUND Accurate identification of extrahepatic cholangiocarcinoma (ECC) from an image is challenging because of its small size and complex background structure. Given the limitations of manual delineation, it is necessary to develop automated identification and segmentation methods for ECC. The aim of this study was to develop a deep learning approach for automatic identification and segmentation of ECC using MRI. METHODS We recruited 137 ECC patients from our hospital as the main dataset (C1) and an additional 40 patients from other hospitals as the external validation set (C2). All patients underwent axial T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and diffusion-weighted imaging (DWI). Manual delineations were performed and served as the ground truth. Next, we used 3D VB-Net to establish single-modality automatic identification and segmentation models based on T1WI (model 1), T2WI (model 2), and DWI (model 3) in the training cohort (80% of C1), and compared them with a combined model (model 4). Subsequently, the generalization capability of the best models was evaluated using the testing set (20% of C1) and the external validation set (C2). Finally, the performance of the developed models was further evaluated. RESULTS Model 3 showed the best identification performance in the training, testing, and external validation cohorts, with success rates of 0.980, 0.786, and 0.725, respectively. Furthermore, model 3 yielded average Dice similarity coefficients (DSC) of 0.922, 0.495, and 0.466 for automatic ECC segmentation in the training, testing, and external validation cohorts, respectively. CONCLUSION The DWI-based model identified and segmented ECC better than the T1WI- and T2WI-based models, which may guide clinical decisions and help determine prognosis.
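The DSC values reported above follow the standard definition DSC = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and ground-truth mask B. A minimal NumPy implementation for binary masks is shown below; the function name and toy masks are illustrative.

```python
# Standard Dice similarity coefficient for binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()          # |A ∩ B|
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1    # 4 foreground voxels
truth = np.zeros((4, 4), dtype=int); truth[1:4, 1:4] = 1  # 9 foreground voxels
print(round(dice(pred, truth), 3))  # 4 overlapping voxels -> 2*4/(4+9) ≈ 0.615
```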
Affiliation(s)
- Chunmei Yang: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Qin Zhou: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Mingdong Li: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Lulu Xu: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Yanyan Zeng: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Jiong Liu: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Ying Wei: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jing Chen: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Pinxiong Li: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Yue Shu: Department of Oncology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Lu Yang: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Jian Shu: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
5. Zhang Y, Yuan N, Liu B, Yang A, Yu H, Lv K, Luan J, Hu P, Lei H, Wang T, Ma G, Lei B. USBDAN: unsupervised scale-aware and boundary-aware domain adaptive network for gastric tumor segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38082801; DOI: 10.1109/embc40787.2023.10340877.
Abstract
Accurate segmentation of gastric tumors from computed tomography (CT) images provides useful image information for guiding the diagnosis and treatment of gastric cancer. Researchers typically collect datasets from multiple medical centers to increase sample size and representativeness, but this raises the issue of data heterogeneity. To this end, we propose a new cross-center 3D tumor segmentation method named unsupervised scale-aware and boundary-aware domain adaptive network (USBDAN), which includes a new 3D neural network that efficiently bridges an Anisotropic neural network and a Transformer (AsTr) to extract multi-scale features from CT images with anisotropic resolution, and a scale-aware and boundary-aware domain alignment (SaBaDA) module that adaptively aligns multi-scale features between two domains and enhances tumor boundary delineation using location-related information drawn from each sample across all domains. We evaluate the proposed method on an in-house CT image dataset collected from four medical centers. Our results demonstrate that the proposed method outperforms several state-of-the-art methods.
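Unsupervised domain alignment of this kind is frequently trained adversarially with a gradient reversal layer (DANN-style). The sketch below shows that generic mechanism only; there is no claim that USBDAN uses exactly this layer, and the discriminator and feature shapes are assumptions.

```python
# Generic gradient-reversal domain alignment sketch (DANN-style), for
# illustration of the technique class; not confirmed to match USBDAN.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)          # identity in the forward pass

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None  # flip gradients flowing to the encoder

def grad_reverse(x, lam: float = 1.0):
    return GradReverse.apply(x, lam)

disc = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))
feats = torch.randn(8, 32, requires_grad=True)   # pooled encoder features
domain_logit = disc(grad_reverse(feats))         # encoder learns to confuse disc
loss = nn.BCEWithLogitsLoss()(domain_logit, torch.ones(8, 1))
loss.backward()
print(feats.grad.abs().sum().item() > 0)         # True: reversed grads reach feats
```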
6. Shi X, Wang L, Li Y, Wu J, Huang H. GCLDNet: gastric cancer lesion detection network combining level feature aggregation and attention feature fusion. Front Oncol 2022; 12:901475. PMID: 36106104; PMCID: PMC9464831; DOI: 10.3389/fonc.2022.901475.
Abstract
Background Analysis of histopathological slices is the gold standard for diagnosing gastric cancer, but manual identification is time-consuming and relies heavily on the experience of pathologists. Artificial intelligence methods, particularly deep learning, can assist pathologists in finding cancerous tissues and enable automated detection. However, due to the variety of shapes and sizes of gastric cancer lesions, as well as many interfering factors, gastric cancer histopathological images (GCHIs) are highly complex, and accurately locating the lesion region is difficult. Traditional deep learning methods cannot effectively extract discriminative features because of their simple decoding schemes, so they cannot detect lesions accurately, and little research has been dedicated to detecting gastric cancer lesions. Methods We propose a gastric cancer lesion detection network (GCLDNet). First, GCLDNet uses a level feature aggregation structure in the decoder, which effectively fuses deep and shallow features of GCHIs. Second, an attention feature fusion module is introduced to accurately locate the lesion area; it merges attention features of different scales and obtains rich discriminative information focused on the lesion. Finally, the focal Tversky loss (FTL) is employed as the loss function to suppress false-negative predictions and mine difficult samples. Results Experimental results on the SEED and BOT GCHI datasets show that the DSCs of GCLDNet are 0.8265 and 0.8991, the accuracies (ACCs) are 0.8827 and 0.8949, the Jaccard indices (JIs) are 0.7092 and 0.8182, and the precisions (PREs) are 0.7820 and 0.8763, respectively. Conclusions Experimental results demonstrate the effectiveness of GCLDNet in detecting gastric cancer lesions. Compared with other state-of-the-art (SOTA) detection methods, GCLDNet achieves more satisfactory performance. This research can provide good auxiliary support for pathologists in clinical diagnosis.
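The focal Tversky loss is defined in the literature (Abraham & Khan, 2019) via the Tversky index TI = TP / (TP + α·FN + β·FP), with FTL = (1 − TI)^γ, where γ focuses training on hard examples and α > β penalizes false negatives (exponent conventions vary across implementations, e.g. some papers use 1/γ). A minimal PyTorch sketch follows; the hyperparameter values are illustrative, not those used by GCLDNet.

```python
# Focal Tversky loss sketch for binary foreground segmentation.
import torch

def focal_tversky_loss(probs, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-8):
    """probs, target: (B, H, W) foreground probabilities and binary masks."""
    tp = (probs * target).sum(dim=(1, 2))          # soft true positives
    fn = ((1 - probs) * target).sum(dim=(1, 2))    # soft false negatives
    fp = (probs * (1 - target)).sum(dim=(1, 2))    # soft false positives
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)  # Tversky index
    return ((1 - ti) ** gamma).mean()              # focal exponent mines hard cases

probs = torch.rand(2, 64, 64)
target = (torch.rand(2, 64, 64) > 0.5).float()
print(focal_tversky_loss(probs, target).item())
```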
Affiliation(s)
- Xu Shi: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Long Wang: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Yu Li: Department of Pathology, Chongqing University Cancer Hospital and Chongqing Cancer Institute and Chongqing Cancer Hospital, Chongqing, China
- Jian Wu (corresponding author): Head and Neck Cancer Center, Chongqing University Cancer Hospital and Chongqing Cancer Institute and Chongqing Cancer Hospital, Chongqing, China
- Hong Huang (corresponding author): Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China