1. Zheng R, Wang X, Zhu L, Yan R, Li J, Wei Y, Zhang F, Du H, Guo L, He Y, Shi H, Han A. A deep learning method for predicting the origins of cervical lymph node metastatic cancer on digital pathological images. iScience 2024; 27:110645. [PMID: 39252964] [PMCID: PMC11381752] [DOI: 10.1016/j.isci.2024.110645]
Abstract
Metastatic cancer in cervical lymph nodes presents complex morphologies and poses significant challenges for doctors in determining its origin. We established a deep learning framework to predict the status of lymph nodes in patients with cervical lymphadenopathy (CLA) from hematoxylin and eosin (H&E)-stained slides. This retrospective study utilized 1,036 cervical lymph node biopsy specimens from the First Affiliated Hospital of Sun Yat-Sen University (FAHSYSU). A multiple-instance learning algorithm designed for key-region identification was applied, and cross-validation experiments were conducted on this dataset. Additionally, the model distinguished between primary lymphoma and metastatic cancer with high prediction accuracy. We also validated our model and other models on an external dataset; our model showed better generalization and achieved the best results on both internal and external datasets. This algorithm offers an approach for evaluating cervical lymph node status before surgery, significantly aiding physicians in preoperative diagnosis and treatment planning.
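The abstract does not detail the multiple-instance learning formulation, but a common setup treats each slide as a bag of patch embeddings and pools them with learned attention, so the attention weights double as a key-region map. A minimal NumPy sketch under those assumptions (names, shapes, and the attention form are illustrative, not the authors' code):

```python
import numpy as np

def attention_mil_pool(patches, V, w):
    """Score each patch embedding with a small attention network, softmax the
    scores across the bag, and return the attention-weighted slide embedding.
    Shapes: patches (n, d), V (d, h), w (h,)."""
    scores = np.tanh(patches @ V) @ w       # (n,) raw attention logits
    a = np.exp(scores - scores.max())
    a /= a.sum()                            # softmax over the bag's instances
    return a @ patches, a                   # (d,) bag embedding, (n,) weights

rng = np.random.default_rng(0)
patches = rng.normal(size=(8, 16))          # 8 patch embeddings of dimension 16
bag, attn = attention_mil_pool(patches,
                               rng.normal(size=(16, 4)),
                               rng.normal(size=4))
```

In this kind of pipeline only the slide-level label supervises training, and the learned weights `attn` highlight which patches drove the prediction, matching the abstract's emphasis on key-region identification.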
Affiliation(s)
- Runliang Zheng: Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong, China
- Xuenian Wang: Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong, China
- Lianghui Zhu: Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong, China
- Renao Yan: Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong, China
- Jiawen Li: Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong, China
- Yani Wei: Department of Pathology, the First Affiliated Hospital of Sun Yat-sen University, Shenzhen, Guangdong, China
- Fenfen Zhang: Department of Pathology, the First Affiliated Hospital of Sun Yat-sen University, Shenzhen, Guangdong, China
- Hong Du: Department of Pathology, Guangzhou First People's Hospital, South China University of Technology, Guangzhou, China
- Linlang Guo: Department of Pathology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yonghong He: Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong, China
- Huijuan Shi: Department of Pathology, the First Affiliated Hospital of Sun Yat-sen University, Shenzhen, Guangdong, China
- Anjia Han: Department of Pathology, the First Affiliated Hospital of Sun Yat-sen University, Shenzhen, Guangdong, China
2. He Q, Ge S, Zeng S, Wang Y, Ye J, He Y, Li J, Wang Z, Guan T. Global attention based GNN with Bayesian collaborative learning for glomerular lesion recognition. Comput Biol Med 2024; 173:108369. [PMID: 38552283] [DOI: 10.1016/j.compbiomed.2024.108369]
Abstract
BACKGROUND: Glomerular lesions reflect the onset and progression of renal disease. Pathological diagnoses are widely regarded as the definitive method for recognizing these lesions, as the deviations in histopathological structures closely correlate with impairments in renal function.
METHODS: Deep learning plays a crucial role in streamlining the laborious, challenging, and subjective task of recognizing glomerular lesions by pathologists. However, current methods treat pathology images as data in regular Euclidean space, limiting their ability to efficiently represent complex local features and global connections. In response to this challenge, this paper proposes a graph neural network (GNN) that utilizes global attention pooling (GAP) to more effectively extract high-level semantic features from glomerular images. The model incorporates Bayesian collaborative learning (BCL), enhancing node feature fine-tuning and fusion during training. In addition, this paper adds a soft classification head to mitigate the semantic ambiguity associated with a purely hard classification.
RESULTS: This paper conducted extensive experiments on four glomerular datasets, comprising a total of 491 whole slide images (WSIs) and 9030 images. The results demonstrate that the proposed model achieves F1 scores of 81.37%, 90.12%, 87.72%, and 98.68% on four private datasets for glomerular lesion recognition, surpassing the other models used for comparison. Furthermore, this paper employed the publicly available BReAst Carcinoma Subtyping (BRACS) dataset, reaching an 85.61% F1 score, to further demonstrate the superiority of the proposed model.
CONCLUSION: The proposed model not only facilitates precise recognition of glomerular lesions but also serves as a potent tool for diagnosing kidney diseases effectively. Furthermore, the framework and training methodology of the GNN can be adeptly applied to address various pathology image classification challenges.
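Global attention pooling, as used in graph classification generally, gates each node with a learned score and sums the gated node projections into one graph-level vector that a classification head can consume. A NumPy sketch of that generic readout (the gate/projection shapes are assumptions, not the paper's implementation):

```python
import numpy as np

def global_attention_pool(node_feats, gate_w, proj_w):
    """Gate each graph node with a learned sigmoid score, then sum the gated
    node projections into a single graph-level embedding.
    Shapes: node_feats (n, d), gate_w (d,), proj_w (d, h)."""
    gates = 1.0 / (1.0 + np.exp(-(node_feats @ gate_w)))         # (n,) in (0, 1)
    return (gates[:, None] * (node_feats @ proj_w)).sum(axis=0)  # (h,) readout

rng = np.random.default_rng(1)
nodes = rng.normal(size=(12, 8))    # 12 nodes (e.g., glomerular regions), dim 8
g = global_attention_pool(nodes, rng.normal(size=8), rng.normal(size=(8, 5)))
```

Because the readout is a sum over all nodes, it captures global context regardless of graph size, which is the property the abstract contrasts with purely local Euclidean representations.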
Affiliation(s)
- Qiming He: Department of Life and Health, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, Guangdong, China
- Shuang Ge: Department of Life and Health, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, Guangdong, China; Peng Cheng Laboratory, Shenzhen, China
- Siqi Zeng: Department of Life and Health, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, Guangdong, China; Greater Bay Area National Center of Technology Innovation, Guangzhou, China
- Yanxia Wang: Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China; School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Jing Ye: Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China; School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Yonghong He: Department of Life and Health, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, Guangdong, China
- Jing Li: Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China; School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Zhe Wang: Department of Pathology, State Key Laboratory of Cancer Biology, Xijing Hospital, Fourth Military Medical University, Xi'an, China; School of Basic Medicine, Fourth Military Medical University, Xi'an, China
- Tian Guan: Department of Life and Health, Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, Guangdong, China
3. Li J, Cheng J, Meng L, Yan H, He Y, Shi H, Guan T, Han A. DeepTree: Pathological Image Classification Through Imitating Tree-Like Strategies of Pathologists. IEEE Transactions on Medical Imaging 2024; 43:1501-1512. [PMID: 38090840] [DOI: 10.1109/tmi.2023.3341846]
Abstract
Digitization of pathological slides has promoted research on computer-aided diagnosis, in which artificial intelligence analysis of pathological images deserves attention. Deep learning techniques developed for natural images have been extended to computational pathology, but they seldom take into account prior knowledge in pathology, especially the process by which pathologists analyze lesion morphology. Inspired by pathologists' diagnostic decisions, we design DeepTree, a novel deep learning architecture based on tree-like strategies. Structured as a binary tree that imitates pathological diagnosis methods, it conditionally learns the correlations among tissue morphologies and optimizes its branches to further fine-tune performance. To validate and benchmark DeepTree, we build a dataset of frozen lung cancer tissues and design experiments on a public dataset of breast tumor subtypes and on our dataset. Results show that the tree-like architecture makes pathological image classification more accurate, transparent, and convincing, and that prior knowledge based on diagnostic strategies yields superior representation ability compared with alternative methods. Our proposed methodology helps improve pathologists' trust in artificial intelligence analysis and promotes the practical clinical application of pathology-assisted diagnosis.
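The binary-tree strategy can be pictured as a cascade of decisions, each internal node routing a sample left or right until a leaf emits the final class, much as a pathologist rules diagnoses in or out stepwise. A toy Python sketch (the node structure, feature names, and thresholds are invented for illustration; DeepTree's nodes are learned networks, not hand-set rules):

```python
class TreeNode:
    """Internal nodes hold a binary decision; leaves hold a class label."""
    def __init__(self, decide=None, left=None, right=None, label=None):
        self.decide, self.left, self.right, self.label = decide, left, right, label

    def predict(self, x):
        if self.label is not None:              # leaf: emit the final diagnosis
            return self.label
        branch = self.left if self.decide(x) else self.right
        return branch.predict(x)                # descend conditionally

# Hypothetical two-level cascade: benign vs. malignant, then subtype.
tree = TreeNode(
    decide=lambda x: x["atypia"] < 0.5,
    left=TreeNode(label="benign"),
    right=TreeNode(
        decide=lambda x: x["keratinization"] > 0.5,
        left=TreeNode(label="squamous carcinoma"),
        right=TreeNode(label="adenocarcinoma"),
    ),
)
result = tree.predict({"atypia": 0.9, "keratinization": 0.2})
```

The conditional structure is what makes the prediction transparent: the path a sample takes through the tree is itself an explanation of the decision.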
4. Lu MY, Chen B, Williamson DFK, Chen RJ, Liang I, Ding T, Jaume G, Odintsov I, Le LP, Gerber G, Parwani AV, Zhang A, Mahmood F. A visual-language foundation model for computational pathology. Nat Med 2024; 30:863-874. [PMID: 38504017] [PMCID: PMC11384335] [DOI: 10.1038/s41591-024-02856-4]
Abstract
The accelerated adoption of digital pathology and advances in deep learning have enabled the development of robust models for various pathology tasks across a diverse array of diseases and patient cohorts. However, model training is often difficult due to label scarcity in the medical domain, and a model's usage is limited by the specific task and disease for which it is trained. Additionally, most models in histopathology leverage only image data, a stark contrast to how humans teach each other and reason about histopathologic entities. We introduce CONtrastive learning from Captions for Histopathology (CONCH), a visual-language foundation model developed using diverse sources of histopathology images, biomedical text and, notably, over 1.17 million image-caption pairs through task-agnostic pretraining. Evaluated on a suite of 14 diverse benchmarks, CONCH can be transferred to a wide range of downstream tasks involving histopathology images and/or text, achieving state-of-the-art performance on histology image classification, segmentation, captioning, and text-to-image and image-to-text retrieval. CONCH represents a substantial leap over concurrent visual-language pretrained systems for histopathology, with the potential to directly facilitate a wide array of machine learning-based workflows requiring minimal or no further supervised fine-tuning.
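Contrastive pretraining on image-caption pairs, as in systems of this family, typically optimizes a symmetric InfoNCE objective: matched pairs sit on the diagonal of a batch similarity matrix, and cross-entropy is applied in both the image-to-text and text-to-image directions. A NumPy sketch of that generic loss (not CONCH's actual implementation; the temperature and shapes are assumptions):

```python
import numpy as np

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Average the image->text and text->image cross-entropies, where row i
    of the (b, b) cosine-similarity logit matrix should peak at column i."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature           # (b, b) scaled similarities

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)     # numerical stability
        p = np.exp(l)
        p /= p.sum(axis=1, keepdims=True)
        return -np.log(np.diag(p)).mean()        # matched pairs on diagonal

    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(2)
imgs = rng.normal(size=(4, 32))
loss_matched = symmetric_contrastive_loss(imgs, imgs)          # aligned pairs
loss_random = symmetric_contrastive_loss(imgs, rng.normal(size=(4, 32)))
```

Because the objective needs only paired images and captions, no class labels, it supports the task-agnostic pretraining the abstract describes, with zero-shot transfer via the shared embedding space.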
Affiliation(s)
- Ming Y Lu: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
- Bowen Chen: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Drew F K Williamson: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Richard J Chen: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Ivy Liang: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Tong Ding: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Guillaume Jaume: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Igor Odintsov: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Long Phi Le: Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Georg Gerber: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Anil V Parwani: Department of Pathology, Wexner Medical Center, Ohio State University, Columbus, OH, USA
- Andrew Zhang: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Faisal Mahmood: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA
5. Talaat FM, El-Sappagh S, Alnowaiser K, Hassan E. Improved prostate cancer diagnosis using a modified ResNet50-based deep learning architecture. BMC Med Inform Decis Mak 2024; 24:23. [PMID: 38267994] [PMCID: PMC10809762] [DOI: 10.1186/s12911-024-02419-0]
Abstract
Prostate cancer, the most common cancer in men, is influenced by age, family history, genetics, and lifestyle factors. Early detection of prostate cancer using screening methods improves outcomes, but the balance between overdiagnosis and early detection remains debated. Using deep learning (DL) algorithms for prostate cancer detection offers a promising solution for accurate and efficient diagnosis, particularly in cases where prostate imaging is challenging. In this paper, we propose a Prostate Cancer Detection Model (PCDM) for the automatic diagnosis of prostate cancer and demonstrate its clinical applicability for early detection and management in real-world healthcare environments. The PCDM is a modified ResNet50-based architecture that integrates Faster R-CNN and dual optimizers to improve detection performance. The model is trained on a large dataset of annotated medical images, and the experimental results show that it outperforms both ResNet50 and VGG19 architectures, achieving sensitivity, specificity, precision, and accuracy of 97.40%, 97.09%, 97.56%, and 95.24%, respectively.
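The abstract does not spell out what "dual optimizers" means here; one common reading is that different parameter groups get different update rules, for example a pretrained backbone updated conservatively while the detection head trains faster. A toy NumPy sketch of that idea on a least-squares surrogate loss (the parameter split, learning rates, and update rules are all invented for illustration):

```python
import numpy as np

def loss_and_grad(w, A, y):
    """Least-squares surrogate for a network loss: 0.5 * ||A w - y||^2."""
    r = A @ w - y
    return 0.5 * (r @ r), A.T @ r

rng = np.random.default_rng(3)
A, y = rng.normal(size=(20, 6)), rng.normal(size=20)
w = np.zeros(6)
vel = np.zeros(3)                          # momentum buffer for the "head" half
loss_start, _ = loss_and_grad(w, A, y)
for _ in range(300):
    _, g = loss_and_grad(w, A, y)
    w[:3] -= 0.01 * g[:3]                  # "backbone" params: plain SGD
    vel = 0.5 * vel + g[3:]                # "head" params: SGD with momentum
    w[3:] -= 0.01 * vel
loss_end, _ = loss_and_grad(w, A, y)       # loss drops toward the optimum
```

Splitting the update rules this way lets each parameter group converge at its own pace, which is the usual motivation for mixing optimizers in fine-tuning setups.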
Affiliation(s)
- Fatma M Talaat: Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh, 33516, Egypt; Faculty of Computer Science & Engineering, New Mansoura University, Gamasa, 35712, Egypt
- Shaker El-Sappagh: Faculty of Computer Science and Engineering, Galala University, Suez, 435611, Egypt; Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha, 13518, Egypt
- Khaled Alnowaiser: College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al Kharj, 11942, Saudi Arabia
- Esraa Hassan: Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh, 33516, Egypt