1
Cai L, Huang S, Zhang Y, Lu J, Zhang Y. AttriMIL: Revisiting attention-based multiple instance learning for whole-slide pathological image classification from a perspective of instance attributes. Med Image Anal 2025; 103:103631. [PMID: 40381256] [DOI: 10.1016/j.media.2025.103631]
Abstract
Multiple instance learning (MIL) is a powerful approach for whole-slide pathological image (WSI) analysis, particularly suited for processing gigapixel-resolution images with slide-level labels. Recent attention-based MIL architectures have significantly advanced weakly supervised WSI classification, facilitating both clinical diagnosis and localization of disease-positive regions. However, these methods often face challenges in differentiating between instances, leading to tissue misidentification and a potential degradation in classification performance. To address these limitations, we propose AttriMIL, an attribute-aware multiple instance learning framework. By dissecting the computational flow of attention-based MIL models, we introduce a multi-branch attribute scoring mechanism that quantifies the pathological attributes of individual instances. Leveraging these quantified attributes, we further establish region-wise and slide-wise attribute constraints to dynamically model instance correlations both within and across slides during training. These constraints encourage the network to capture intrinsic spatial patterns and semantic similarities between image patches, thereby enhancing its ability to distinguish subtle tissue variations and sensitivity to challenging instances. To fully exploit the two constraints, we further develop a pathology adaptive learning technique to optimize pre-trained feature extractors, enabling the model to efficiently gather task-specific features. Extensive experiments on five public datasets demonstrate that AttriMIL consistently outperforms state-of-the-art methods across various dimensions, including bag classification accuracy, generalization ability, and disease-positive region localization. The implementation code is available at https://github.com/MedCAI/AttriMIL.
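For context on the attention-based MIL computation that this entry revisits, the following is a minimal, illustrative PyTorch sketch of gated attention pooling over patch embeddings (in the spirit of ABMIL-style aggregators). The feature dimension, hidden size, and class count are placeholder assumptions; this is not the AttriMIL implementation.

```python
import torch
import torch.nn as nn

class GatedAttentionMIL(nn.Module):
    """Minimal gated attention-based MIL head: scores each instance,
    softmax-normalizes the scores, and pools instance embeddings into
    a single bag (slide) representation for classification."""
    def __init__(self, feat_dim=1024, hidden_dim=256, n_classes=2):
        super().__init__()
        self.attn_v = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Tanh())
        self.attn_u = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Sigmoid())
        self.attn_w = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, instances):  # instances: (N, feat_dim), one bag per slide
        scores = self.attn_w(self.attn_v(instances) * self.attn_u(instances))  # (N, 1)
        weights = torch.softmax(scores, dim=0)                                 # (N, 1)
        bag_embedding = (weights * instances).sum(dim=0)                       # (feat_dim,)
        return self.classifier(bag_embedding), weights.squeeze(-1)

# Example: one slide represented as a bag of 500 placeholder patch embeddings.
bag = torch.randn(500, 1024)
logits, attention = GatedAttentionMIL()(bag)
```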
Affiliation(s)
- Linghan Cai
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China.
- Shenjin Huang
- Faculty of Computing, Harbin Institute of Technology, Harbin, 150001, China
- Ye Zhang
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Jinpeng Lu
- School of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Yongbing Zhang
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China.
2
Dang C, Qi Z, Xu T, Gu M, Chen J, Wu J, Lin Y, Qi X. Deep Learning-Powered Whole Slide Image Analysis in Cancer Pathology. Lab Invest 2025; 105:104186. [PMID: 40306572] [DOI: 10.1016/j.labinv.2025.104186]
Abstract
Pathology is the cornerstone of modern cancer care. With the advancement of precision oncology, the demand for histopathologic diagnosis and stratification of patients is increasing as personalized cancer therapy relies on accurate biomarker assessment. Recently, rapid development of whole slide imaging technology has enabled digitalization of traditional histologic slides at high resolution, holding promise to improve both the precision and efficiency of histopathologic evaluation. In particular, deep learning approaches, such as Convolutional Neural Network, Graph Convolutional Network, and Transformer, have shown great promise in enhancing the sensitivity and accuracy of whole slide image (WSI) analysis in cancer pathology because of their ability to handle high-dimensional and complex image data. The integration of deep learning models with WSIs enables us to explore and mine morphologic features beyond the visual perception of pathologists, which can help automate clinical diagnosis, assess histopathologic grade, predict clinical outcomes, and even discover novel morphologic biomarkers. In this review, we present a comprehensive framework for incorporating deep learning with WSIs, highlighting how deep learning-driven WSI analysis advances clinical tasks in cancer care. Furthermore, we critically discuss the opportunities and challenges of translating deep learning-based digital pathology into clinical practice, which should be considered to support personalized treatment of cancer patients.
Affiliation(s)
- Chengrun Dang
- School of Chemistry and Life Sciences, Suzhou University of Science and Technology, Suzhou, China
- Zhuang Qi
- School of Software, Shandong University, Jinan, China
- Tao Xu
- School of Chemistry and Life Sciences, Suzhou University of Science and Technology, Suzhou, China
- Mingkai Gu
- School of Chemistry and Life Sciences, Suzhou University of Science and Technology, Suzhou, China
- Jiajia Chen
- School of Chemistry and Life Sciences, Suzhou University of Science and Technology, Suzhou, China
- Jie Wu
- Department of Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China.
- Yuxin Lin
- Department of Urology, The First Affiliated Hospital of Soochow University, Suzhou, China.
- Xin Qi
- School of Chemistry and Life Sciences, Suzhou University of Science and Technology, Suzhou, China.
3
Jin C, Luo L, Lin H, Hou J, Chen H. HMIL: Hierarchical Multi-Instance Learning for Fine-Grained Whole Slide Image Classification. IEEE Trans Med Imaging 2025; 44:1796-1808. [PMID: 40030670] [DOI: 10.1109/tmi.2024.3520602]
Abstract
Fine-grained classification of whole slide images (WSIs) is essential in precision oncology, enabling precise cancer diagnosis and personalized treatment strategies. The core of this task involves distinguishing subtle morphological variations within the same broad category of gigapixel-resolution images, which presents a significant challenge. While the multi-instance learning (MIL) paradigm alleviates the computational burden of WSIs, existing MIL methods often overlook hierarchical label correlations, treating fine-grained classification as a flat multi-class classification task. To overcome these limitations, we introduce a novel hierarchical multi-instance learning (HMIL) framework. By aligning the inherent relationships between different levels of the label hierarchy at both the instance and bag levels, our approach provides a more structured and informative learning process. Specifically, HMIL incorporates a class-wise attention mechanism that aligns hierarchical information at both the instance and bag levels. Furthermore, we introduce supervised contrastive learning to enhance the discriminative capability for fine-grained classification and a curriculum-based dynamic weighting module to adaptively balance the hierarchical features during training. Extensive experiments on our large-scale cytology cervical cancer (CCC) dataset and two public histology datasets, BRACS and PANDA, demonstrate the state-of-the-art class-wise and overall performance of our HMIL framework. Our source code is available at https://github.com/ChengJin-git/HMIL.
4
Cui H, Guo Q, Xu J, Wu X, Cai C, Jiao Y, Ming W, Wen H, Wang X. Prediction of molecular subtypes for endometrial cancer based on hierarchical foundation model. Bioinformatics 2025; 41:btaf059. [PMID: 39932017] [PMCID: PMC11878776] [DOI: 10.1093/bioinformatics/btaf059]
Abstract
MOTIVATION Endometrial cancer is a prevalent gynecological malignancy that requires accurate identification of its molecular subtypes for effective diagnosis and treatment. Four molecular subtypes with different clinical outcomes have been identified: POLE mutation, mismatch repair deficient, p53 abnormal, and no specific molecular profile. However, determining these subtypes typically relies on expensive gene sequencing. To overcome this limitation, we propose a novel method that utilizes hematoxylin and eosin-stained whole slide images to predict endometrial cancer molecular subtypes. RESULTS Our approach leverages a hierarchical foundation model as a backbone, fine-tuned from the UNI computational pathology foundation model, to extract tissue embeddings at different scales. We have achieved promising results through extensive experimentation on the Fudan University Shanghai Cancer Center cohort (N = 364). Our model demonstrates a macro-average AUROC of 0.879 (95% CI, 0.853-0.904) in five-fold cross-validation. Compared with the current state-of-the-art methods for molecular subtype prediction in endometrial cancer, our method achieves higher predictive accuracy and better computational efficiency. Moreover, our method is highly reproducible, allowing for ease of implementation and widespread adoption. This study aims to address the cost and time constraints associated with traditional gene sequencing techniques. By providing a reliable and accessible alternative to gene sequencing, our method has the potential to revolutionize the field of endometrial cancer diagnosis and improve patient outcomes. AVAILABILITY AND IMPLEMENTATION The codes and data used for generating results in this study are available at https://github.com/HaoyuCui/hi-UNI for GitHub and https://doi.org/10.5281/zenodo.14627478 for Zenodo.
Affiliation(s)
- Haoyu Cui
- Jiangsu Key Laboratory of Intelligent Medical Image Computing, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Qinhao Guo
- Department of Gynecologic Oncology, Fudan University Shanghai Cancer Center, Shanghai 200032, China
- Jun Xu
- Jiangsu Key Laboratory of Intelligent Medical Image Computing, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Xiaohua Wu
- Department of Gynecologic Oncology, Fudan University Shanghai Cancer Center, Shanghai 200032, China
- Chengfei Cai
- Jiangsu Key Laboratory of Intelligent Medical Image Computing, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Yiping Jiao
- Jiangsu Key Laboratory of Intelligent Medical Image Computing, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Wenlong Ming
- Jiangsu Key Laboratory of Intelligent Medical Image Computing, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Hao Wen
- Department of Gynecologic Oncology, Fudan University Shanghai Cancer Center, Shanghai 200032, China
- Xiangxue Wang
- Jiangsu Key Laboratory of Intelligent Medical Image Computing, Nanjing University of Information Science and Technology, Nanjing 210044, China
5
Xie M, Geng Y, Zhang W, Li S, Dong Y, Wu Y, Tang H, Hong L. Multi-resolution consistency semi-supervised active learning framework for histopathology image classification. Expert Syst Appl 2025; 259:125266. [DOI: 10.1016/j.eswa.2024.125266]
6
Ramkumar M, Sarath Kumar R, Padmapriya R, Balu Mahandiran S. Improved DeTraC Binary Coyote Net-Based Multiple Instance Learning for Predicting Lymph Node Metastasis of Breast Cancer From Whole-Slide Pathological Images. Int J Med Robot 2024; 20:e70009. [PMID: 39545354] [DOI: 10.1002/rcs.70009]
Abstract
BACKGROUND Early detection of lymph node metastasis in breast cancer is vital for improving treatment outcomes and prognosis. METHODS This study introduces an Improved Decompose, Transfer, and Compose Binary Coyote Net-based Multiple Instance Learning (ImDeTraC-BCNet-MIL) method for predicting lymph node metastasis from Whole Slide Images (WSIs) using multiple instance learning. The method involves segmenting WSIs into patches using Otsu and double-dimensional clustering techniques. The developed multiple instance learning approach introduces a paradigm into computational pathology by shaping pathological data and constructing features. ImDeTraC-BCNet-MIL was utilised for feature generation during both training and testing to differentiate lymph node metastasis in WSIs. RESULTS The proposed model achieves the highest accuracy of 95.3% and 99.8%, precision values of 98% and 99.8%, and recall rates of 92.9% and 99.8% on the Camelyon16 and Camelyon17 datasets. CONCLUSIONS These findings underscore the effectiveness of ImDeTraC-BCNet-MIL in enhancing the early detection of lymph node metastasis in breast cancer.
Affiliation(s)
- M Ramkumar
- Electronics and Communication Engineering, Sri Krishna College of Engineering and Technology, Coimbatore, India
- R Sarath Kumar
- Electronics and Communication Engineering, Sri Krishna College of Engineering and Technology, Coimbatore, India
- R Padmapriya
- Electronics and Communication Engineering, Sri Aurobindo Centenary E.M High School, Tadipatri, India
- S Balu Mahandiran
- Mechanical Engineering, Sri Krishna College of Engineering and Technology, Coimbatore, India
7
Su Z, Rezapour M, Sajjad U, Niu S, Gurcan MN, Niazi MKK. Cross-Attention-Based Saliency Inference for Predicting Cancer Metastasis on Whole Slide Images. IEEE J Biomed Health Inform 2024; 28:7206-7216. [PMID: 39106145] [PMCID: PMC11863751] [DOI: 10.1109/jbhi.2024.3439499]
Abstract
Although multiple instance learning (MIL) methods are widely used for automatic tumor detection on whole slide images (WSI), they suffer from the extreme class imbalance in WSIs containing small tumors, where the tumor may include only a few isolated cells. For early detection, it is important that MIL algorithms can identify small tumors. Existing studies have attempted to address this issue using attention-based architectures and instance selection-based methodologies but have not produced significant improvements. This paper proposes cross-attention-based salient instance inference MIL (CASiiMIL), which involves a novel saliency-informed attention mechanism to identify small tumors (e.g., breast cancer lymph node micro-metastasis) on WSIs without needing any annotations. In addition to this new attention mechanism, we introduce a negative representation learning algorithm to facilitate the learning of saliency-informed attention weights for improved sensitivity on tumor WSIs. The proposed model outperforms the state-of-the-art MIL methods on two popular tumor metastasis detection datasets. The proposed approach demonstrates great cross-center generalizability, high accuracy in classifying WSIs with small tumor lesions, and excellent interpretability attributed to the saliency-informed attention weights. We expect that the proposed method will pave the way for training algorithms for early tumor detection on large datasets where acquiring fine-grained annotations is not practical.
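To illustrate the general idea of cross-attention-derived saliency described above, the sketch below lets a single learnable query attend over instance embeddings so that the attention weights act as per-instance saliency scores. The single-query design and the dimensions are assumptions for illustration only, not the CASiiMIL architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionSaliency(nn.Module):
    """Schematic cross-attention pooling: a learnable query attends over
    instance embeddings; the attention weights act as per-instance saliency
    and the attended vector is classified at the bag (slide) level."""
    def __init__(self, feat_dim=512, n_classes=2):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, feat_dim))
        self.key_proj = nn.Linear(feat_dim, feat_dim)
        self.value_proj = nn.Linear(feat_dim, feat_dim)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, instances):                                # (N, feat_dim)
        keys = self.key_proj(instances)
        values = self.value_proj(instances)
        scores = self.query @ keys.t() / keys.shape[-1] ** 0.5   # (1, N)
        saliency = torch.softmax(scores, dim=-1)                 # per-instance weights
        bag_vec = saliency @ values                              # (1, feat_dim)
        return self.classifier(bag_vec).squeeze(0), saliency.squeeze(0)

# Example: a slide with 800 placeholder patch embeddings.
logits, saliency = CrossAttentionSaliency()(torch.randn(800, 512))
```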
8
Budginaite E, Magee DR, Kloft M, Woodruff HC, Grabsch HI. Computational methods for metastasis detection in lymph nodes and characterization of the metastasis-free lymph node microarchitecture: A systematic-narrative hybrid review. J Pathol Inform 2024; 15:100367. [PMID: 38455864] [PMCID: PMC10918266] [DOI: 10.1016/j.jpi.2024.100367]
Abstract
Background Histological examination of tumor draining lymph nodes (LNs) plays a vital role in cancer staging and prognostication. However, as soon as an LN is classified as metastasis-free, no further investigation will be performed and thus, potentially clinically relevant information detectable in tumor-free LNs is currently not captured. Objective To systematically study and critically assess methods for the analysis of digitized histological LN images described in published research. Methods A systematic search was conducted in several public databases up to December 2023 using relevant search terms. Studies using brightfield light microscopy images of hematoxylin and eosin or immunohistochemically stained LN tissue sections aiming to detect and/or segment LNs, their compartments or metastatic tumor using artificial intelligence (AI) were included. Dataset, AI methodology, cancer type, and study objective were compared between articles. Results A total of 7201 articles were collected and 73 articles remained for detailed analyses after article screening. Of the remaining articles, 86% aimed at LN metastasis identification, 8% aimed at LN compartment segmentation, and the remainder focused on LN contouring. Furthermore, 78% of articles used patch classification and 22% used pixel segmentation models for analyses. Five out of six studies (83%) of metastasis-free LNs were performed on publicly unavailable datasets, making quantitative article comparison impossible. Conclusions Multi-scale models mimicking multiple microscopy zooms show promise for computational LN analysis. Large-scale datasets are needed to establish the clinical relevance of analyzing metastasis-free LNs in detail. Further research is needed to identify clinically interpretable metrics for LN compartment characterization.
Affiliation(s)
- Elzbieta Budginaite
- Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Department of Precision Medicine, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Maximilian Kloft
- Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Department of Internal Medicine, Justus-Liebig-University, Giessen, Germany
- Henry C. Woodruff
- Department of Precision Medicine, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Heike I. Grabsch
- Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Pathology and Data Analytics, Leeds Institute of Medical Research at St James’s, University of Leeds, Leeds, UK
9
Qiu C, Tang C, Tang Y, Su K, Chai X, Zhan Z, Niu X, Li J. RGS5+ lymphatic endothelial cells facilitate metastasis and acquired drug resistance of breast cancer through oxidative stress-sensing mechanism. Drug Resist Updat 2024; 77:101149. [PMID: 39306871] [DOI: 10.1016/j.drup.2024.101149]
Abstract
AIMS Oxidative stress, reflected by elevated reactive oxygen species (ROS) in the tumor ecosystem, is a hallmark of human cancers. The mechanisms by which oxidative stress regulates the metastatic ecosystem and resistance remain elusive. This study aimed to dissect the oxidative stress-sensing machinery during the evolution of early dissemination and acquired drug resistance in breast cancer. METHODS Here, we constructed a single-cell landscape of primary breast tumors and metastatic lymph nodes, and focused on the RGS5+ endothelial cell subpopulation in breast cancer metastasis and resistance. RESULTS We report RGS5 as a master regulator of oxidative stress sensing in endothelial cells. RGS5+ endothelial cells facilitated tumor-endothelial adhesion and transendothelial migration of breast cancer cells. Antioxidant treatment suppressed oxidative stress-induced RGS5 expression in endothelial cells, and prevented adhesion and transendothelial migration of cancer cells. RGS5-overexpressing HLECs displayed attenuated glycolysis and oxidative phosphorylation. Drug-resistant HLECs with RGS5 overexpression conferred acquired drug resistance on breast cancer cells. Importantly, genetic knockdown of RGS5 prevented tumor growth and lymph node metastasis. CONCLUSIONS Our work demonstrates that RGS5 in lymphatic endothelial cells senses oxidative stress to promote breast cancer lymph node metastasis and resistance, providing a novel insight into a potentially targetable oxidative stress-sensing machinery in breast cancer treatment.
Affiliation(s)
- Caixin Qiu
- Department of Gastrointestine and Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China
- Chaoyi Tang
- Department of Gastrointestine and Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China
- Yujun Tang
- Department of Gastrointestine and Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China
- Ka Su
- Department of Gastrointestine and Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China
- Xiao Chai
- Department of Gastrointestine and Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China
- Zexu Zhan
- Department of Gastrointestine and Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China
- Xing Niu
- China Medical University, Shenyang 110122, China; Experimental Center of BIOQGene, YuanDong International Academy of Life Sciences, 999077, Hong Kong, China.
- Jiehua Li
- Department of Gastrointestine and Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China.
10
Wang Z, Ma J, Gao Q, Bain C, Imoto S, Liò P, Cai H, Chen H, Song J. Dual-stream multi-dependency graph neural network enables precise cancer survival analysis. Med Image Anal 2024; 97:103252. [PMID: 38963973] [DOI: 10.1016/j.media.2024.103252]
Abstract
Histopathology image-based survival prediction aims to provide a precise assessment of cancer prognosis and can inform personalized treatment decision-making in order to improve patient outcomes. However, existing methods cannot automatically model the complex correlations between numerous morphologically diverse patches in each whole slide image (WSI), thereby preventing them from achieving a more profound understanding and inference of the patient status. To address this, here we propose a novel deep learning framework, termed dual-stream multi-dependency graph neural network (DM-GNN), to enable precise cancer patient survival analysis. Specifically, DM-GNN is structured with the feature updating and global analysis branches to better model each WSI as two graphs based on morphological affinity and global co-activating dependencies. As these two dependencies depict each WSI from distinct but complementary perspectives, the two designed branches of DM-GNN can jointly achieve the multi-view modeling of complex correlations between the patches. Moreover, DM-GNN is also capable of boosting the utilization of dependency information during graph construction by introducing the affinity-guided attention recalibration module as the readout function. This novel module offers increased robustness against feature perturbation, thereby ensuring more reliable and stable predictions. Extensive benchmarking experiments on five TCGA datasets demonstrate that DM-GNN outperforms other state-of-the-art methods and offers interpretable prediction insights based on the morphological depiction of high-attention patches. Overall, DM-GNN represents a powerful and auxiliary tool for personalized cancer prognosis from histopathology images and has great potential to assist clinicians in making personalized treatment decisions and improving patient outcomes.
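The morphological-affinity graph idea mentioned above can be illustrated with a simple k-nearest-neighbor graph built from patch embeddings; the cosine metric and neighborhood size below are assumptions, and this sketch is not the DM-GNN construction itself.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_affinity_graph(patch_embeddings, k=8):
    """Connect each patch to its k most similar patches (cosine distance) and
    return an edge list plus similarity-based edge weights, i.e. a simple
    morphological-affinity graph over the WSI patches."""
    nn_index = NearestNeighbors(n_neighbors=k + 1, metric="cosine")
    nn_index.fit(patch_embeddings)
    dist, idx = nn_index.kneighbors(patch_embeddings)   # (N, k+1), self included
    src = np.repeat(np.arange(len(patch_embeddings)), k)
    dst = idx[:, 1:].reshape(-1)                         # drop the self-neighbor
    weights = 1.0 - dist[:, 1:].reshape(-1)              # cosine similarity as weight
    return np.stack([src, dst]), weights

# Example: 1000 placeholder patch embeddings of dimension 768.
embeddings = np.random.rand(1000, 768)
edge_index, edge_weight = build_affinity_graph(embeddings)
```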
Affiliation(s)
- Zhikang Wang
- Xiangya Hospital, Central South University, Changsha, China; Biomedicine Discovery Institute and Department of Biochemistry and Molecular Biology, Monash University, Melbourne, Australia; Wenzhou Medical University-Monash Biomedicine Discovery Institute (BDI) Alliance in Clinical and Experimental Biomedicine, Wenzhou, China
- Jiani Ma
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China
- Qian Gao
- Xiangya Hospital, Central South University, Changsha, China
- Chris Bain
- Faculty of Information Technology, Monash University, Melbourne, Australia
- Seiya Imoto
- Human Genome Center, Institute of Medical Science, The University of Tokyo, Tokyo, Japan
- Pietro Liò
- Department of Computer Science and Technology, The University of Cambridge, Cambridge, United Kingdom
- Hongmin Cai
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Hao Chen
- Department of Computer Science and Engineering and Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Jiangning Song
- Biomedicine Discovery Institute and Department of Biochemistry and Molecular Biology, Monash University, Melbourne, Australia; Wenzhou Medical University-Monash Biomedicine Discovery Institute (BDI) Alliance in Clinical and Experimental Biomedicine, Wenzhou, China.
11
Huang Z, Zhang X, Ju Y, Zhang G, Chang W, Song H, Gao Y. Explainable breast cancer molecular expression prediction using multi-task deep-learning based on 3D whole breast ultrasound. Insights Imaging 2024; 15:227. [PMID: 39320560] [PMCID: PMC11424596] [DOI: 10.1186/s13244-024-01810-9]
Abstract
OBJECTIVES To noninvasively estimate three breast cancer biomarkers, estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2), and enhance performance and interpretability via multi-task deep learning. METHODS The study included 388 breast cancer patients who underwent 3D whole breast ultrasound system (3DWBUS) examinations at Xijing Hospital between October 2020 and September 2021. Two predictive models, a single-task and a multi-task, were developed; the former predicts biomarker expression, while the latter combines tumor segmentation with biomarker prediction to enhance interpretability. Performance evaluation included individual and overall prediction metrics, and DeLong's test was used for performance comparison. The models' attention regions were visualized using Grad-CAM++ technology. RESULTS All patients were randomly split into a training set (n = 240, 62%), a validation set (n = 60, 15%), and a test set (n = 88, 23%). In the individual evaluation of ER, PR, and HER2 expression prediction, the single-task and multi-task models achieved respective AUCs of 0.809 and 0.735 for ER, 0.688 and 0.767 for PR, and 0.626 and 0.697 for HER2, as observed in the test set. In the overall evaluation, the multi-task model demonstrated superior performance in the test set, achieving a higher macro AUC of 0.733, in contrast to 0.708 for the single-task model. The Grad-CAM++ method revealed that the multi-task model exhibited a stronger focus on diseased tissue areas, improving the interpretability of how the model worked. CONCLUSION Both models demonstrated impressive performance, with the multi-task model excelling in accuracy and offering improved interpretability on noninvasive 3DWBUS images using Grad-CAM++ technology. CRITICAL RELEVANCE STATEMENT The multi-task deep learning model exhibits effective prediction for breast cancer biomarkers, offering direct biomarker identification and improved clinical interpretability, potentially boosting the efficiency of targeted drug screening. KEY POINTS Tumoral biomarkers are paramount for determining breast cancer treatment. The multi-task model can improve prediction performance and interpretability in clinical practice. The 3D whole breast ultrasound system-based deep learning models excelled in predicting breast cancer biomarkers.
Affiliation(s)
- Zengan Huang
- School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, Guangdong, 518055, China
- Xin Zhang
- School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, Guangdong, 518055, China
- Yan Ju
- Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, No. 127 Changle West Road, Xi'an, 710032, China
- Ge Zhang
- Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, No. 127 Changle West Road, Xi'an, 710032, China
- Wanying Chang
- Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, No. 127 Changle West Road, Xi'an, 710032, China
- Hongping Song
- Department of Ultrasound, Xijing Hospital, Fourth Military Medical University, No. 127 Changle West Road, Xi'an, 710032, China.
- Yi Gao
- School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, Guangdong, 518055, China.
12
Zhang X, Liu C, Zhu H, Wang T, Du Z, Ding W. A universal multiple instance learning framework for whole slide image analysis. Comput Biol Med 2024; 178:108714. [PMID: 38889627] [DOI: 10.1016/j.compbiomed.2024.108714]
Abstract
BACKGROUND The emergence of the digital whole slide image (WSI) has driven the development of computational pathology. However, obtaining patch-level annotations is challenging and time-consuming due to the high resolution of WSIs, which limits the applicability of fully supervised methods. We aim to address the challenges related to patch-level annotations. METHODS We propose a universal framework for weakly supervised WSI analysis based on Multiple Instance Learning (MIL). To achieve effective aggregation of instance features, we design a feature aggregation module from multiple dimensions by considering feature distribution, instance correlation, and instance-level evaluation. First, we implement an instance-level standardization layer and a deep projection unit to improve the separation of instances in the feature space. Then, a self-attention mechanism is employed to explore dependencies between instances. Additionally, an instance-level pseudo-label evaluation method is introduced to enhance the available information during the weak supervision process. Finally, a bag-level classifier is used to obtain preliminary WSI classification results. To achieve even more accurate WSI label predictions, we have designed a key instance selection module that strengthens the learning of local features for instances. Combining the results from both modules leads to an improvement in WSI prediction accuracy. RESULTS Experiments conducted on Camelyon16, TCGA-NSCLC, SICAPv2, PANDA and classical MIL benchmark datasets demonstrate that our proposed method achieves competitive performance compared with recent methods, with a maximum improvement of 14.6% in terms of classification accuracy. CONCLUSION Our method can improve the classification accuracy of whole slide images in a weakly supervised way, and more accurately detect lesion areas.
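A compact sketch of two of the ingredients described above, per-bag instance standardization followed by self-attention among instances before pooling, is given below; the layer sizes and the use of a stock multi-head attention layer are illustrative assumptions rather than the paper's exact modules.

```python
import torch
import torch.nn as nn

class StandardizedSelfAttentionMIL(nn.Module):
    """Illustrative bag classifier: standardize instance features within the
    bag, let instances exchange information via self-attention, then mean-pool
    into a bag embedding for slide-level prediction."""
    def __init__(self, feat_dim=512, n_heads=4, n_classes=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, instances):                    # (N, feat_dim)
        # Per-bag (instance-level) standardization of features.
        x = (instances - instances.mean(0)) / (instances.std(0) + 1e-6)
        x = x.unsqueeze(0)                           # (1, N, feat_dim)
        x, _ = self.attn(x, x, x)                    # instance-to-instance dependencies
        bag_embedding = x.mean(dim=1)                # (1, feat_dim)
        return self.classifier(bag_embedding)

# Example: one bag of 300 placeholder instance embeddings.
logits = StandardizedSelfAttentionMIL()(torch.randn(300, 512))
```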
Affiliation(s)
- Xueqin Zhang
- College of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China; Shanghai Key Laboratory of Computer Software Evaluating and Testing, Shanghai 201112, China
- Chang Liu
- College of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China.
- Huitong Zhu
- College of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China
- Tianqi Wang
- College of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China
- Zunguo Du
- Department of Pathology, Huashan Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Weihong Ding
- Department of Urology, Huashan Hospital Affiliated to Fudan University, Shanghai, 200040, China.
13
Yan C, Sun J, Guan Y, Feng J, Liu H, Liu J. PhiHER2: phenotype-informed weakly supervised model for HER2 status prediction from pathological images. Bioinformatics 2024; 40:i79-i90. [PMID: 38940163] [PMCID: PMC11211833] [DOI: 10.1093/bioinformatics/btae236]
Abstract
MOTIVATION Human epidermal growth factor receptor 2 (HER2) status identification enables physicians to assess the prognosis risk and determine the treatment schedule for patients. In clinical practice, pathological slides serve as the gold standard, offering morphological information on cellular structure and tumoral regions. Computational analysis of pathological images has the potential to discover morphological patterns associated with HER2 molecular targets and achieve precise status prediction. However, pathological images typically have very high resolution, and HER2 expression in breast cancer (BC) images often manifests intratumoral heterogeneity. RESULTS We present a phenotype-informed weakly supervised multiple instance learning architecture (PhiHER2) for the prediction of HER2 status from pathological images of BC. Specifically, a hierarchical prototype clustering module is designed to identify representative phenotypes across whole slide images. These phenotype embeddings are then integrated into a cross-attention module, enhancing feature interaction and aggregation on instances. This yields a phenotype-based feature space that leverages the intratumoral morphological heterogeneity for HER2 status prediction. Extensive results demonstrate that PhiHER2 captures a better WSI-level representation through typical phenotype guidance and significantly outperforms existing methods on real-world datasets. Additionally, interpretability analyses of both phenotypes and WSIs provide explicit insights into the heterogeneity of morphological patterns associated with molecular HER2 status. AVAILABILITY AND IMPLEMENTATION Our model is available at https://github.com/lyotvincent/PhiHER2.
Affiliation(s)
- Chaoyang Yan
- College of Computer Science, Nankai University, Tianjin 300071, China
- Centre for Bioinformatics and Intelligent Medicine, Nankai University, Tianjin 300071, China
- Jialiang Sun
- College of Computer Science, Nankai University, Tianjin 300071, China
- Centre for Bioinformatics and Intelligent Medicine, Nankai University, Tianjin 300071, China
- Yiming Guan
- College of Computer Science, Nankai University, Tianjin 300071, China
- Centre for Bioinformatics and Intelligent Medicine, Nankai University, Tianjin 300071, China
- Jiuxin Feng
- College of Computer Science, Nankai University, Tianjin 300071, China
- Centre for Bioinformatics and Intelligent Medicine, Nankai University, Tianjin 300071, China
- Hong Liu
- The Second Surgical Department of Breast Cancer, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute & Hospital, Tianjin 300060, China
- Jian Liu
- College of Computer Science, Nankai University, Tianjin 300071, China
- Centre for Bioinformatics and Intelligent Medicine, Nankai University, Tianjin 300071, China
14
He Q, Ge S, Zeng S, Wang Y, Ye J, He Y, Li J, Wang Z, Guan T. Global attention based GNN with Bayesian collaborative learning for glomerular lesion recognition. Comput Biol Med 2024; 173:108369. [PMID: 38552283] [DOI: 10.1016/j.compbiomed.2024.108369]
Abstract
BACKGROUND Glomerular lesions reflect the onset and progression of renal disease. Pathological diagnoses are widely regarded as the definitive method for recognizing these lesions, as the deviations in histopathological structures closely correlate with impairments in renal function. METHODS Deep learning plays a crucial role in streamlining the laborious, challenging, and subjective task of recognizing glomerular lesions by pathologists. However, current methods treat pathology images as data in regular Euclidean space, limiting their ability to efficiently represent the complex local features and global connections. In response to this challenge, this paper proposes a graph neural network (GNN) that utilizes global attention pooling (GAP) to more effectively extract high-level semantic features from glomerular images. The model incorporates Bayesian collaborative learning (BCL), enhancing node feature fine-tuning and fusion during training. In addition, this paper adds a soft classification head to mitigate the semantic ambiguity associated with a purely hard classification. RESULTS This paper conducted extensive experiments on four glomerular datasets, comprising a total of 491 whole slide images (WSIs) and 9030 images. The results demonstrate that the proposed model achieves impressive F1 scores of 81.37%, 90.12%, 87.72%, and 98.68% on the four private datasets for glomerular lesion recognition. These scores surpass the performance of the other models used for comparison. Furthermore, the proposed model achieves an 85.61% F1 score on the publicly available BReAst Carcinoma Subtyping (BRACS) dataset, further demonstrating its superiority. CONCLUSION The proposed model not only facilitates precise recognition of glomerular lesions but also serves as a potent tool for diagnosing kidney diseases effectively. Furthermore, the framework and training methodology of the GNN can be adeptly applied to address various pathology image classification challenges.
Affiliation(s)
- Qiming He
- Department of Life and Health, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, Guangdong, China
- Shuang Ge
- Department of Life and Health, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, Guangdong, China; Peng Cheng Laboratory, Shenzhen, China
- Siqi Zeng
- Department of Life and Health, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, Guangdong, China; Greater Bay Area National Center of Technology Innovation, Guangzhou, China
- Yanxia Wang
- Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China; School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Jing Ye
- Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China; School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Yonghong He
- Department of Life and Health, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, Guangdong, China
- Jing Li
- Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China; School of Basic Medicine, Fourth Military Medical University, Xi'an, China.
- Zhe Wang
- Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China; School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Tian Guan
- Department of Life and Health, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, Guangdong, China
15
Chen RJ, Ding T, Lu MY, Williamson DFK, Jaume G, Song AH, Chen B, Zhang A, Shao D, Shaban M, Williams M, Oldenburg L, Weishaupt LL, Wang JJ, Vaidya A, Le LP, Gerber G, Sahai S, Williams W, Mahmood F. Towards a general-purpose foundation model for computational pathology. Nat Med 2024; 30:850-862. [PMID: 38504018] [PMCID: PMC11403354] [DOI: 10.1038/s41591-024-02857-3]
Abstract
Quantitative evaluation of tissue images is crucial for computational pathology (CPath) tasks, requiring the objective characterization of histopathological entities from whole-slide images (WSIs). The high resolution of WSIs and the variability of morphological features present significant challenges, complicating the large-scale annotation of data for high-performance applications. To address this challenge, current efforts have proposed the use of pretrained image encoders through transfer learning from natural image datasets or self-supervised learning on publicly available histopathology datasets, but have not been extensively developed and evaluated across diverse tissue types at scale. We introduce UNI, a general-purpose self-supervised model for pathology, pretrained using more than 100 million images from over 100,000 diagnostic H&E-stained WSIs (>77 TB of data) across 20 major tissue types. The model was evaluated on 34 representative CPath tasks of varying diagnostic difficulty. In addition to outperforming previous state-of-the-art models, we demonstrate new modeling capabilities in CPath such as resolution-agnostic tissue classification, slide classification using few-shot class prototypes, and disease subtyping generalization in classifying up to 108 cancer types in the OncoTree classification system. UNI advances unsupervised representation learning at scale in CPath in terms of both pretraining data and downstream evaluation, enabling data-efficient artificial intelligence models that can generalize and transfer to a wide range of diagnostically challenging tasks and clinical workflows in anatomic pathology.
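The few-shot class-prototype slide classification mentioned above can be illustrated with a nearest-centroid classifier over slide embeddings, sketched below; the embedding dimension and sample counts are placeholders, and this is not the UNI evaluation code.

```python
import numpy as np

def prototype_classify(support_embeddings, support_labels, query_embeddings):
    """Few-shot classification with class prototypes: average the L2-normalized
    support embeddings of each class into a prototype, then assign each query
    slide to the nearest prototype by cosine similarity."""
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    support, query = normalize(support_embeddings), normalize(query_embeddings)
    classes = np.unique(support_labels)
    prototypes = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    prototypes = normalize(prototypes)
    similarity = query @ prototypes.T            # (n_query, n_classes)
    return classes[similarity.argmax(axis=1)]

# Example: 4 labeled slides per class (few-shot), 10 query slides, 768-d embeddings.
rng = np.random.default_rng(0)
support = rng.normal(size=(8, 768))
labels = np.array([0] * 4 + [1] * 4)
predictions = prototype_classify(support, labels, rng.normal(size=(10, 768)))
```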
Affiliation(s)
- Richard J Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Tong Ding
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Ming Y Lu
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
- Drew F K Williamson
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Guillaume Jaume
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Andrew H Song
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Bowen Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Andrew Zhang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Daniel Shao
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Muhammad Shaban
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Mane Williams
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Lukas Oldenburg
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Luca L Weishaupt
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Judy J Wang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Anurag Vaidya
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Long Phi Le
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Georg Gerber
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Sharifa Sahai
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Systems Biology, Harvard University, Cambridge, MA, USA
- Walt Williams
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA.
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA.
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA.
16
Li Y, Liu J, Xu Z, Shang J, Wu S, Zhang M, Liu Y. Construction and validation of a nomogram for predicting the prognosis of patients with lymph node-positive invasive micropapillary carcinoma of the breast: based on SEER database and external validation cohort. Front Oncol 2023; 13:1231302. [PMID: 37954073] [PMCID: PMC10635422] [DOI: 10.3389/fonc.2023.1231302]
Abstract
Background Invasive micropapillary carcinoma (IMPC) of the breast is a rare subtype of breast cancer with a high incidence of aggressive clinical behavior, lymph node metastasis (LNM), and poor prognosis. In the present study, using the Surveillance, Epidemiology, and End Results (SEER) database, we analyzed the clinicopathological characteristics and prognostic factors of IMPC with LNM, and constructed a prognostic nomogram. Methods We retrospectively analyzed data for 487 breast IMPC patients with LNM in the SEER database from January 2010 to December 2015, and randomly divided these patients into a training cohort (70%) and an internal validation cohort (30%) for the construction and internal validation of the nomogram, respectively. In addition, 248 patients diagnosed with IMPC and LNM at the Fourth Hospital of Hebei Medical University from January 2010 to December 2019 were collected as an external validation cohort. Lasso regression, along with Cox regression, was used to screen risk factors. Furthermore, the discrimination, calibration, and clinical utility of the nomogram were assessed based on the concordance index (C-index), time-dependent receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). Results In summary, we identified six variables, namely molecular subtype of breast cancer, first malignant primary indicator, tumor grade, AJCC stage, radiotherapy, and chemotherapy, as independent prognostic factors for predicting the prognosis of IMPC patients with LNM (P < 0.05). Based on these factors, a nomogram was constructed for predicting the 3- and 5-year overall survival (OS) of patients. The nomogram achieved a C-index of 0.789 (95% CI: 0.759-0.819) in the training cohort, 0.775 (95% CI: 0.731-0.819) in the internal validation cohort, and 0.788 (95% CI: 0.756-0.820) in the external validation cohort. According to the calculated patient risk score, the patients were divided into a high-risk group and a low-risk group, and the survival prognosis of the two groups differed significantly (P < 0.0001). The time-dependent ROC curves, calibration curves, and DCA curves proved the superiority of the nomogram. Conclusions We have successfully constructed a nomogram that can predict the 3- and 5-year OS of IMPC patients with LNM and may assist clinicians in decision-making and personalized treatment planning.
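As a rough sketch of the modeling workflow behind such a nomogram, the snippet below fits a Cox proportional hazards model and checks discrimination with the concordance index using the lifelines package; the covariates and the toy cohort are invented placeholders and do not reproduce the SEER analysis.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Placeholder cohort: each row is a patient with encoded prognostic factors,
# follow-up time in months, and an overall-survival event indicator.
df = pd.DataFrame({
    "tumor_grade":  [1, 2, 3, 2, 3, 1, 3, 2, 1, 3],
    "chemotherapy": [1, 1, 0, 0, 1, 0, 0, 1, 1, 1],
    "time_months":  [72, 60, 12, 30, 24, 48, 9, 40, 66, 18],
    "event":        [0, 1, 1, 1, 0, 0, 1, 0, 1, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")

# Higher partial hazard means higher predicted risk, so it is negated for the C-index.
risk = cph.predict_partial_hazard(df)
c_index = concordance_index(df["time_months"], -risk, df["event"])
print(cph.summary[["coef", "exp(coef)", "p"]])
print(f"C-index: {c_index:.3f}")
```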
Affiliation(s)
- Yifei Li
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, China
- Jinzhao Liu
- The Second Department of Thyroid and Breast Surgery, Cangzhou Central Hospital, Cangzhou, China
- Zihang Xu
- College of Basic Medical Sciences, Hebei Medical University, Shijiazhuang, China
- Jiuyan Shang
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, China
- Si Wu
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, China
- Meng Zhang
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, China
- Yueping Liu
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, China