1. Lu K, Lin S, Xue K, Huang D, Ji Y. Optimized multiple instance learning for brain tumor classification using weakly supervised contrastive learning. Comput Biol Med 2025;191:110075. PMID: 40220594. DOI: 10.1016/j.compbiomed.2025.110075.
Abstract
Brain tumors greatly affect patients' quality of life, and accurate histopathological classification is crucial for prognosis. Multiple instance learning (MIL) has become the mainstream method for analyzing whole-slide images (WSIs). However, current MIL-based methods face several issues, including significant redundancy in the input and feature space, insufficient modeling of spatial relations between patches, and inadequate representation capability of the feature extractor. To address these limitations, we propose a new multiple instance learning framework with weakly supervised contrastive learning for brain tumor classification. Our framework consists of two parts: a cross-detection MIL aggregator (CDMIL) for brain tumor classification and a contrastive learning model based on pseudo-labels (PSCL) for optimizing the feature encoder. The CDMIL consists of three modules: an internal patch anchoring module (IPAM), a local structural learning module (LSLM), and a cross-detection module (CDM). Specifically, IPAM utilizes a probability distribution to generate representations of anchor samples, while LSLM extracts representations of the local structural information between anchor samples. These two representations are effectively fused in CDM. Additionally, we propose a bag-level contrastive loss so that different subtypes interact in the feature space. PSCL uses the samples and pseudo-labels anchored by IPAM to optimize the feature encoder, which in turn extracts better feature representations for training CDMIL. On benchmark tests with a self-collected dataset and a publicly available dataset, our method outperforms several existing state-of-the-art methods.
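The patch anchoring described above builds on the standard attention-based MIL aggregation, in which a small learned attention head scores each patch and the bag embedding is the attention-weighted sum of patch features. A minimal NumPy sketch of that generic aggregation step (not the authors' CDMIL; all parameter shapes here are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances, V, w):
    """Aggregate patch features into one bag embedding.

    instances: (n, d) patch features; V: (h, d) and w: (h,) attention params.
    Returns the bag embedding and per-patch attention weights.
    """
    scores = np.tanh(instances @ V.T) @ w   # (n,) raw attention scores
    alpha = softmax(scores)                 # weights sum to 1 over patches
    bag = alpha @ instances                 # (d,) weighted sum of patch features
    return bag, alpha

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))            # 8 patches, 16-d features
V = rng.normal(size=(4, 16))
w = rng.normal(size=4)
bag, alpha = attention_mil_pool(feats, V, w)
assert bag.shape == (16,) and abs(alpha.sum() - 1.0) < 1e-9
```

The attention weights double as an interpretability signal: patches with high `alpha` are the instances the bag-level decision leans on.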
Affiliation(s)
- Kaoyan Lu
- Key Laboratory of Atomic and Subatomic Structure and Quantum Control (Ministry of Education), Guangdong Basic Research Center of Excellence for Structure and Fundamental Interactions of Matter, School of Physics, South China Normal University, 378 Waihuan West Road, Panyu District, Guangzhou, 510006, Guangdong Province, China; Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, Guangdong-Hong Kong Joint Laboratory of Quantum Matter, South China Normal University, 378 Waihuan West Road, Panyu District, Guangzhou, 510006, Guangdong Province, China
- Shiyu Lin
- Key Laboratory of Atomic and Subatomic Structure and Quantum Control (Ministry of Education), Guangdong Basic Research Center of Excellence for Structure and Fundamental Interactions of Matter, School of Physics, South China Normal University, 378 Waihuan West Road, Panyu District, Guangzhou, 510006, Guangdong Province, China; Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, Guangdong-Hong Kong Joint Laboratory of Quantum Matter, South China Normal University, 378 Waihuan West Road, Panyu District, Guangzhou, 510006, Guangdong Province, China
- Kaiwen Xue
- School of Cyberspace Security, Beijing University of Posts and Telecommunications, 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Duoxi Huang
- The Third Affiliated Hospital of Southern Medical University, 183 Zhongshan Avenue West, Tianhe District, Guangzhou, 510630, Guangdong Province, China.
- Yanghong Ji
- Key Laboratory of Atomic and Subatomic Structure and Quantum Control (Ministry of Education), Guangdong Basic Research Center of Excellence for Structure and Fundamental Interactions of Matter, School of Physics, South China Normal University, 378 Waihuan West Road, Panyu District, Guangzhou, 510006, Guangdong Province, China; Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, Guangdong-Hong Kong Joint Laboratory of Quantum Matter, South China Normal University, 378 Waihuan West Road, Panyu District, Guangzhou, 510006, Guangdong Province, China.
2. Dai H, Li W, Wang Q, Cheng B. Multiple Instance Learning-Based Prediction of Blood-Brain Barrier Opening Outcomes Induced by Focused Ultrasound. IEEE Trans Biomed Eng 2025;72:1465-1472. PMID: 40030539. DOI: 10.1109/tbme.2024.3509533.
Abstract
OBJECTIVE: Targeted blood-brain barrier (BBB) opening using focused ultrasound (FUS) and micro/nanobubbles is a promising method for brain drug delivery. This study explores the feasibility of multiple instance learning (MIL) for accurate and fast prediction of FUS BBB opening outcomes. METHODS: FUS BBB opening experiments were conducted on 52 mice infused with SonoVue microbubbles or custom-made nanobubbles. Acoustic signals collected during the experiments were transformed into the frequency domain and used as the dataset. We propose a Simple Transformer-based model for BBB Opening Prediction (SimTBOP). By leveraging the self-attention mechanism, the model considers the contextual relationships between signals from different pulses in a treatment and aggregates this information to predict BBB opening outcomes. Multiple preprocessing methods were applied to evaluate the model under various conditions, and a visualization technique was employed to explain and interpret it. RESULTS: The proposed model achieved excellent prediction performance with an accuracy of 96.7%. Excluding absolute intensity information and retaining baseline noise did not affect performance or interpretability. The model trained on SonoVue data generalized well to nanobubble data and vice versa. Visualization results indicate that the model focuses on pulses with significant signals near the ultra-harmonic frequency. CONCLUSION: We demonstrate the feasibility of MIL for FUS BBB opening prediction. The proposed Transformer-based model exhibits outstanding performance, interpretability, and cross-agent generalization, providing a novel approach for FUS BBB opening prediction with clinical translation potential.
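The pulse-level self-attention idea is simple to sketch: each pulse's frequency spectrum attends to every other pulse in the treatment, and the context-mixed features are pooled into a single treatment-level representation. A toy single-head NumPy version (illustrative shapes only, not the SimTBOP implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_pool(X, Wq, Wk, Wv):
    """One self-attention layer over per-pulse spectra, then mean pooling.

    X: (n_pulses, d) frequency-domain features, one row per pulse.
    Returns a (d,)-shaped treatment-level representation.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))  # (n, n) pulse-to-pulse attention
    ctx = A @ V                                 # context-mixed pulse features
    return ctx.mean(axis=0)                     # aggregate over all pulses

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 8))                     # 6 pulses, 8 frequency bins
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
rep = self_attention_pool(X, Wq, Wk, Wv)
assert rep.shape == (8,)
```

A linear classifier on `rep` would then yield the binary opening/no-opening prediction; the attention matrix `A` is what a visualization step would inspect.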
3. Mai C, Wang Q, Mai Z, Qin C, Zeng J, Xie H, Xiao Y, Huang H, Chen W, Yan W, Yuan R. The application of multi-instance learning based on feature reconstruction and cross-mixing in the Gleason grading of prostate cancer from whole-slide images. Quant Imaging Med Surg 2025;15:3263-3284. PMID: 40235816. PMCID: PMC11994575. DOI: 10.21037/qims-24-1985.
Abstract
Background: Prostate cancer is a common malignancy in men, requiring accurate diagnosis and prognosis. The Gleason grading system remains the preferred method of evaluation and is critical to risk stratification and treatment strategy. However, analyzing whole-slide images (WSIs) is significantly challenging due to high pixel density, tumor heterogeneity, and the difficulty of acquiring precisely annotated data. This study developed a weakly supervised multiple instance learning (MIL)-based method for Gleason grading of prostate cancer pathology images, aiming to enhance tumor classification performance and provide more reliable support for clinical risk assessment and treatment planning. Methods: We developed a novel feature reconstruction and cross-mixing-based MIL (FRCM-MIL) method to enhance the accuracy of Gleason grading from WSIs. The method includes a spatial feature reconstruction module based on the wavelet transform (SFRM-WT), which incorporates frequency-domain information to extract more diverse features; a cross-attention module (CAM) to enhance feature interaction and fusion; and a confidence query aggregation module (CQAM) to consolidate input features into confidence-enhanced outputs. Results: The proposed method achieved an accuracy of 81.75% and an area under the curve (AUC) of 94.41% on the Peking Union Medical College Hospital (PUMCH) dataset, along with an accuracy of 67.24% and an AUC of 91.69% on the Prostate Cancer Grade Assessment Challenge (PANDA) dataset, outperforming existing state-of-the-art approaches. Conclusions: The FRCM-MIL model performs outstandingly on the Gleason grading task for prostate cancer WSIs, effectively distinguishing between grades. It has the potential to assist clinicians in formulating personalized chemotherapy and radiotherapy plans, ultimately improving treatment outcomes and demonstrating significant clinical value.
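The cross-attention fusion at the heart of such a module lets one feature stream query the other, so each patch in stream A gathers complementary information from stream B. A generic single-head cross-attention step, sketched in NumPy under illustrative shapes (not the FRCM-MIL code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Xa, Xb, Wq, Wk, Wv):
    """Stream A queries stream B: each row of Xa gathers features from Xb,
    added back residually so A's original content is preserved."""
    Q, K, V = Xa @ Wq, Xb @ Wk, Xb @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))  # (na, nb) attention map
    return Xa + A @ V                           # residual fusion of the streams

rng = np.random.default_rng(0)
Xa = rng.normal(size=(5, 8))    # e.g. spatial-domain patch features
Xb = rng.normal(size=(7, 8))    # e.g. wavelet/frequency-domain features
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
fused = cross_attention(Xa, Xb, Wq, Wk, Wv)
assert fused.shape == (5, 8)
```

Running the same step with the streams swapped gives the symmetric "cross-mixing" in which each branch is enriched by the other.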
Affiliation(s)
- Chaoyun Mai
- School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
- Qianwen Wang
- School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
- Zhipeng Mai
- Department of Urology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Chuanbo Qin
- School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
- Junying Zeng
- School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
- Hao Xie
- School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
- Yu Xiao
- Department of Pathology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Hongxing Huang
- Department of Urology, Zhongshan City People’s Hospital, Zhongshan, China
- Weitian Chen
- Department of Urology, Zhongshan City People’s Hospital, Zhongshan, China
- Weigang Yan
- Department of Urology, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing, China
- Runqiang Yuan
- Department of Urology, Zhongshan City People’s Hospital, Zhongshan, China
- Department of Urology, Zhongshan City People’s Hospital, Zhongshan, China
4. Wu K, Jiang Z, Tang K, Shi J, Xie F, Wang W, Wu H, Zheng Y. Pan-Cancer Histopathology WSI Pre-Training With Position-Aware Masked Autoencoder. IEEE Trans Med Imaging 2025;44:1610-1623. PMID: 40030469. DOI: 10.1109/tmi.2024.3513358.
Abstract
Large-scale pre-training models have promoted the development of histopathology image analysis. However, existing self-supervised methods for histopathology images primarily focus on learning patch features, and pre-training models specifically designed for WSI-level feature learning remain scarce. In this paper, we propose a novel self-supervised learning framework for pan-cancer WSI-level representation pre-training built on a position-aware masked autoencoder (PAMA). We further propose a position-aware cross-attention (PACA) module with a kernel reorientation (KRO) strategy and an anchor dropout (AD) mechanism: KRO captures the complete semantic structure and eliminates ambiguity in WSIs, while AD enhances the robustness and generalization of the model. We evaluated our method on 7 large-scale datasets from multiple organs for pan-cancer classification tasks. The results demonstrate the effectiveness and generalization of PAMA in discriminative WSI representation learning and pan-cancer WSI pre-training. Compared against 8 WSI analysis methods, PAMA is superior to the state-of-the-art methods. The code and checkpoints are available at https://github.com/WkEEn/PAMA.
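At the core of any masked-autoencoder pre-training scheme is the random masking of tokens (here, patch embeddings of a slide region): the encoder sees only the visible subset, and the decoder must reconstruct the rest. A minimal sketch of that masking step (generic MAE-style, not PAMA's position-aware variant):

```python
import numpy as np

def random_masking(tokens, mask_ratio, rng):
    """MAE-style random masking: keep a random subset of patch tokens.

    tokens: (n, d) patch embeddings. Returns the visible tokens plus the
    kept/masked index sets needed to restore order and score reconstruction.
    """
    n = tokens.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    perm = rng.permutation(n)          # random shuffle of token indices
    keep, masked = perm[:n_keep], perm[n_keep:]
    return tokens[keep], keep, masked

rng = np.random.default_rng(2)
tokens = rng.normal(size=(100, 32))    # 100 patch embeddings of a WSI region
visible, keep, masked = random_masking(tokens, 0.75, rng)
assert visible.shape == (25, 32) and len(masked) == 75
```

The pre-training loss is then a reconstruction error (e.g. mean squared error) computed only over the `masked` positions.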
5. Li J, Zhang Y, Shu W, Feng X, Wang Y, Yan P, Li X, Sha C, He M. M4: Multi-proxy multi-gate mixture of experts network for multiple instance learning in histopathology image analysis. Med Image Anal 2025;103:103561. PMID: 40198973. DOI: 10.1016/j.media.2025.103561.
Abstract
Multiple instance learning (MIL) has been successfully applied to whole slide image (WSI) analysis in computational pathology, enabling a wide range of prediction tasks from tumor subtyping to inferring genetic mutations and multi-omics biomarkers. However, existing MIL methods predominantly focus on single-task learning, which is not only inefficient overall but also overlooks inter-task relatedness. To address these issues, we propose an adapted architecture, the Multi-gate Mixture-of-experts with Multi-proxy for Multiple instance learning (M4), and apply it to the simultaneous prediction of multiple genetic mutations from WSIs. The M4 model has two main innovations: (1) a multi-gate mixture-of-experts strategy that predicts multiple genetic mutations simultaneously from a single WSI; and (2) a multi-proxy CNN construction in the expert and gate networks that effectively and efficiently captures patch-patch interactions. Our model achieved significant improvements across five tested TCGA datasets in comparison to current state-of-the-art single-task methods. The code is available at: https://github.com/Bigyehahaha/M4.
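The multi-gate mixture-of-experts idea underlying M4 is that all tasks share a pool of experts, but each task mixes the expert outputs with its own softmax gate. A stripped-down NumPy sketch with linear experts (illustrative only, not the authors' multi-proxy CNN version):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mmoe_forward(x, experts, gates):
    """Multi-gate mixture of experts: every task sees the same experts but
    mixes them with its own gate; a per-task head (omitted) would follow.

    x: (d,) input; experts: list of (d, h) matrices; gates: (n_tasks, d, n_experts).
    """
    expert_out = np.stack([x @ W for W in experts])  # (n_experts, h)
    task_reprs = []
    for g in gates:                                  # one gate per task
        w = softmax(x @ g)                           # (n_experts,) mixture weights
        task_reprs.append(w @ expert_out)            # (h,) task-specific mixture
    return task_reprs

rng = np.random.default_rng(3)
x = rng.normal(size=16)
experts = [rng.normal(size=(16, 8)) for _ in range(4)]
gates = rng.normal(size=(3, 16, 4))                  # 3 tasks share 4 experts
outs = mmoe_forward(x, experts, gates)
assert len(outs) == 3 and outs[0].shape == (8,)
```

Because only the gates (and heads) are task-specific, adding another mutation-prediction task reuses the shared experts rather than training a whole new model.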
Affiliation(s)
- Junyu Li
- Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, China.
- Ye Zhang
- Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, China; Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou, 310024, China.
- Wen Shu
- Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, China; College of Electrical and Information Engineering, Hunan University, Changsha, 410082, China
- Xiaobing Feng
- Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, China; College of Electrical and Information Engineering, Hunan University, Changsha, 410082, China
- Yingchun Wang
- Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, China; Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou, 310024, China
- Pengju Yan
- Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, China
- Xiaolin Li
- Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, China
- Chulin Sha
- Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, China.
- Min He
- Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, China; Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou, 310024, China; College of Electrical and Information Engineering, Hunan University, Changsha, 410082, China.
6. Wu Z, He H, Zhao X, Lin Z, Ye Y, Guo J, Hu W, Jiang X. Reimagining cancer tissue classification: a multi-scale framework based on multi-instance learning for whole slide image classification. Med Biol Eng Comput 2025. PMID: 40088255. DOI: 10.1007/s11517-025-03341-x.
Abstract
In cancer pathology diagnosis, analyzing Whole Slide Images (WSI) encounters challenges like invalid data, varying tissue features at different magnifications, and numerous hard samples. Multiple Instance Learning (MIL) is a powerful tool for addressing weakly supervised classification in WSI-based pathology diagnosis. However, existing MIL frameworks cannot simultaneously tackle these issues. To address these challenges, we propose an integrated recognition framework comprising three complementary components: a preprocessing selection method, an Efficient Feature Pyramid Network (EFPN) model for multi-instance learning, and a Similarity Focal Loss. The preprocessing selection method accurately identifies and selects representative image patches, effectively reducing invalid data interference and enhancing subsequent model training efficiency. The EFPN model, inspired by pathologists' diagnostic processes, captures different tissue features in WSI images by constructing a multi-scale feature pyramid, enhancing the model's ability to recognize tumor tissue features. Additionally, the Similarity Focal Loss further improves the model's discriminative power and generalization performance by focusing on hard samples and emphasizing classification boundary information. The test accuracy for binary tumor classification on the CAMELYON16 and two private datasets reached 93.58%, 84.74%, and 99.91%, respectively, all of which outperform existing techniques.
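The Similarity Focal Loss above extends the standard focal loss, which down-weights well-classified examples so that the gradient signal concentrates on hard samples near the classification boundary. The base binary focal loss can be sketched as follows (standard formulation, not the authors' similarity-weighted variant):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: the (1 - pt)**gamma factor shrinks the loss of
    easy examples, focusing training on hard, ambiguous patches.

    p: predicted probability of the positive class; y: label in {0, 1}.
    """
    pt = p if y == 1 else 1 - p          # probability of the true class
    a = alpha if y == 1 else 1 - alpha   # class-balance weight
    return -a * (1 - pt) ** gamma * np.log(pt)

# An easy positive (p = 0.95) contributes far less loss than a hard one (p = 0.3).
assert focal_loss(0.95, 1) < focal_loss(0.3, 1)
```

With `gamma = 0` and `alpha = 0.5` this reduces (up to a constant) to ordinary cross-entropy, which makes the hard-sample emphasis easy to ablate.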
Affiliation(s)
- Zixuan Wu
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, Guangdong, China
- Haiyong He
- Department of Neurosurgery, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, 510630, Guangdong, China
- Xiushun Zhao
- School of Intelligent Systems Engineering, Sun Yat-Sen University, Shenzhen, 510275, Guangdong, China
- Zhenghui Lin
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, Guangdong, China
- Yanyan Ye
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, Guangdong, China
- Jing Guo
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, Guangdong, China.
- Wanming Hu
- Department of Pathology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, Guangdong, China
- Xiaobing Jiang
- Department of Neurosurgery/Neuro-Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, Guangdong, China.
7. Herve Q, Ipek N, Verwaeren J, De Beer T. A deep learning approach to perform defect classification of freeze-dried product. Int J Pharm 2025;670:125127. PMID: 39756597. DOI: 10.1016/j.ijpharm.2024.125127.
Abstract
Cosmetic inspection of freeze-dried products is an important part of the post-manufacturing quality control process. Traditionally performed by human visual inspection, this method poses typical challenges and shortcomings that can be addressed with innovative techniques. While many cosmetic defects can occur, some are considered more critical than others because they can harm the patient or affect the drug's efficacy. With the rise of artificial intelligence and computer vision, faster and more reproducible quality control is possible, allowing real-time monitoring on a continuous manufacturing line. In this study, several continuously freeze-dried samples were prepared using formulations and process settings that deliberately lead to specific freeze-drying defects, alongside defect-free samples. Two approaches based on convolutional neural networks and capable of handling high-resolution images (a patch-based approach and multi-label classification) were developed and compared to select the optimal one. Additional visualization techniques were used to further enhance model understanding. The best approach achieved perfect precision and recall on critical defects, deciding on the acceptance or rejection of a vial in less than 50 ms.
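A patch-based approach makes high-resolution vial images tractable for CNNs: tile the image into fixed-size patches, classify each patch, and aggregate the per-class patch scores into a vial-level decision (a per-class maximum over patches is one common choice). A schematic sketch, with hypothetical sizes and probabilities:

```python
import numpy as np

def tile(image, patch):
    """Split a high-resolution image into non-overlapping square patches."""
    H, W = image.shape[:2]
    return [image[i:i + patch, j:j + patch]
            for i in range(0, H - patch + 1, patch)
            for j in range(0, W - patch + 1, patch)]

def aggregate(patch_probs):
    """Image-level multi-label decision: a defect class is flagged if any
    patch predicts it (per-class max over all patches)."""
    return np.max(patch_probs, axis=0)

img = np.zeros((512, 512))                  # hypothetical grayscale vial image
patches = tile(img, 128)
assert len(patches) == 16                   # a 4 x 4 grid of 128 px tiles
probs = np.array([[0.1, 0.9],               # hypothetical scores: 2 patches,
                  [0.2, 0.1]])              # 2 defect classes
assert np.allclose(aggregate(probs), [0.2, 0.9])
```

Max-aggregation reflects the inspection logic that a single defective region suffices to reject a vial; averaging would instead dilute small, localized defects.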
Affiliation(s)
- Quentin Herve
- Laboratory of Pharmaceutical Process Analytical Technology, Department of Pharmaceutical Analysis, Ghent University, 9000 Gent, Belgium.
- Nusret Ipek
- Department of Data Analysis and Mathematical Modelling, Ghent University, Coupure links 653, B-9000 Gent, Belgium
- Jan Verwaeren
- Department of Data Analysis and Mathematical Modelling, Ghent University, Coupure links 653, B-9000 Gent, Belgium
- Thomas De Beer
- Laboratory of Pharmaceutical Process Analytical Technology, Department of Pharmaceutical Analysis, Ghent University, 9000 Gent, Belgium.
8. Amjad U, Raza A, Fahad M, Farid D, Akhunzada A, Abubakar M, Beenish H. Context aware machine learning techniques for brain tumor classification and detection - A review. Heliyon 2025;11:e41835. PMID: 39906822. PMCID: PMC11791217. DOI: 10.1016/j.heliyon.2025.e41835.
Abstract
Background: Machine learning has tremendous potential in acute medical care, particularly for precise diagnosis, prediction, and classification of brain tumors. Malignant gliomas stand out among brain tumor types because of their aggressive growth and dismal prognosis. Recent advances in understanding the genetic abnormalities that underlie these tumors have shed light on their histopathological and biological characteristics, supporting better classification and prognosis. Objectives: This review aims to predict gene alterations and establish structured correlations among various tumor types, extending the prediction of genetic mutations and structures using the latest machine learning techniques. It focuses on multiple modalities of magnetic resonance imaging (MRI) and histopathology, utilizing convolutional neural networks (CNNs) for image processing and analysis. Methods: The review covers the most recent developments in MRI and histology image processing across multiple tumor classes, including glioma, meningioma, pituitary tumors, oligodendroglioma, and astrocytoma. It identifies challenges in tumor classification, segmentation, datasets, and modalities across various neural network architectures, and assesses CNN performance through a comparative analysis. K-means clustering is also applied to predict genetic structure, gene clusters, and molecular alterations for tumors of various types and grades. Results: CNN and KNN models, with their ability to extract salient features from image data, prove effective in tumor classification and segmentation, surmounting challenges in image analysis. The comparative analysis reveals that CNNs outperform other algorithms on publicly available datasets, suggesting their potential for precise tumor diagnosis and treatment planning.
Conclusion: Machine learning, especially CNN and SVM algorithms, demonstrates significant potential for accurate diagnosis and classification of brain tumors from imaging and histopathological data. Further advancements in this area hold promise for improving the accuracy and efficiency of intraoperative tumor diagnosis and treatment.
Affiliation(s)
- Usman Amjad
- NED University of Engineering and Technology, Karachi, Pakistan
- Asif Raza
- Sir Syed University of Engineering and Technology, Karachi, Pakistan
- Muhammad Fahad
- Karachi Institute of Economics and Technology, Karachi, Pakistan
- Adnan Akhunzada
- College of Computing and IT, University of Doha for Science and Technology, Qatar
- Muhammad Abubakar
- Muhammad Nawaz Shareef University of Engineering and Technology, Multan, Pakistan
- Hira Beenish
- Karachi Institute of Economics and Technology, Karachi, Pakistan
9. An Y, Lei Y, Huang Z, Liu Y, Huang M, Liu Z, Li W, Liang D, Huang W, Hu Z. MacNet: a mobile attention classification network combining convolutional neural network and transformer for the differentiation of cervical cancer. Quant Imaging Med Surg 2025;15:55-73. PMID: 39839018. PMCID: PMC11744112. DOI: 10.21037/qims-24-810.
Abstract
Background: Cervical cancer remains a critical global health issue, responsible for over 600,000 new cases and 300,000 deaths annually. Pathological imaging of cervical cancer is a crucial diagnostic tool. However, distinguishing specific areas of cellular differentiation remains challenging because of the lack of clear boundaries between cells at various stages of differentiation. To address the limitations of conventional clinical and deep learning (DL) methods, we developed a mobile attention classification network (MacNet) with multiscale features, aiming to increase the accuracy of differentiation classification and to quantitatively analyze cervical cancer cell differentiation. Methods: We investigated the application of MacNet for classifying non-background images into 3 stages of cervical cancer differentiation. Feature maps are processed by the Mobile Convolutional Neural Network with Mobile Attention (MCMA) module, which integrates mobile convolutional blocks and mobile attention blocks. MacNet harnesses the benefits of the image pyramid structure and the self-attention mechanism, enabling multiscale feature extraction that emulates clinical pathologists' analysis. The final prediction is generated by an adaptive fusion module that aggregates features into a unified output. Results: Comparative evaluations demonstrated that MacNet outperforms existing models, achieving the best classification accuracy (92.34%) among all compared DL-based models. Specifically, MacNet's accuracy was 2.62% higher than Inception Version 3, 7.9% higher than the vision transformer, 8.08% higher than the visual geometry group network, 3.21% higher than the Densely Connected Convolutional Network, 2.85% higher than the shifted window (Swin) transformer, 5.4% higher than Cross Stage Partial DarkNet, and 5.41% higher than the Residual Neural Network. MacNet also achieved superior recall, precision, and F1 score. Conclusions: We have proposed a lightweight neural network method that innovatively combines attention mechanisms with convolutional neural networks (CNNs) to efficiently utilize multiscale information from histopathological images. This integration enables precise quantitative display of the different stages of cervical cancer differentiation. Our method not only enhances diagnostic accuracy but also provides clinicians with a more effective tool for faster and more reliable diagnosis, representing a significant advancement in pathological imaging.
Affiliation(s)
- Yi An
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Yuanyuan Lei
- Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Zhenxing Huang
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yu Liu
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Meiyong Huang
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhou Liu
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Wenbo Li
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Dong Liang
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wenting Huang
- Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Zhanli Hu
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
10. Rahman T, Baras AS, Chellappa R. Evaluation of a Task-Specific Self-Supervised Learning Framework in Digital Pathology Relative to Transfer Learning Approaches and Existing Foundation Models. Mod Pathol 2025;38:100636. PMID: 39455029. DOI: 10.1016/j.modpat.2024.100636.
Abstract
An integral stage in typical digital pathology workflows involves deriving specific features from tiles extracted from a tessellated whole-slide image. Various computer vision neural network architectures, particularly ImageNet-pretrained ones, have been used extensively in this domain. This study critically analyzes multiple strategies for encoding tiles to understand the extent of transfer learning and identify the most effective approach. It categorizes neural network performance by 3 weight-initialization methods: random, ImageNet-based, and self-supervised learning. Additionally, we propose a framework based on task-specific self-supervised learning, which introduces a shallow feature extraction method employing a spatial-channel attention block to glean distinctive features optimized for histopathology intricacies. Across 2 downstream classification tasks (patch classification and weakly supervised whole-slide image classification) with diverse datasets, including colorectal cancer histology, Patch Camelyon, prostate cancer detection, The Cancer Genome Atlas, and CIFAR-10, our task-specific self-supervised encoding approach consistently outperforms other convolutional neural network-based encoders. These performance gains highlight the potential of task-specific, attention-based self-supervised training for tailoring feature extraction to histopathology, indicating a shift away from pretrained models originating outside the histopathology domain. Our study supports the idea that task-specific self-supervised learning enables domain-specific feature extraction, encouraging more focused analysis.
Affiliation(s)
- Tawsifur Rahman
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, Maryland
- Alexander S Baras
- Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Rama Chellappa
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, Maryland; Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland

11
Zhao W, Guo Z, Fan Y, Jiang Y, Yeung MCF, Yu L. Aligning knowledge concepts to whole slide images for precise histopathology image analysis. NPJ Digit Med 2024; 7:383. [PMID: 39738468 DOI: 10.1038/s41746-024-01411-2] [Received: 04/22/2024] [Accepted: 12/22/2024] [Indexed: 01/02/2025]
Abstract
Due to the large size and lack of fine-grained annotation, Whole Slide Images (WSIs) analysis is commonly approached as a Multiple Instance Learning (MIL) problem. However, previous studies only learn from training data, posing a stark contrast to how human clinicians teach each other and reason about histopathologic entities and factors. Here, we present a novel knowledge concept-based MIL framework, named ConcepPath, to fill this gap. Specifically, ConcepPath utilizes GPT-4 to induce reliable disease-specific human expert concepts from medical literature and incorporate them with a group of purely learnable concepts to extract complementary knowledge from training data. In ConcepPath, WSIs are aligned to these linguistic knowledge concepts by utilizing the pathology vision-language model as the basic building component. In the application of lung cancer subtyping, breast cancer HER2 scoring, and gastric cancer immunotherapy-sensitive subtyping tasks, ConcepPath significantly outperformed previous SOTA methods, which lacked the guidance of human expert knowledge.
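Aligning tiles to linguistic concepts with a pathology vision-language model reduces, at its core, to scoring image embeddings against concept text embeddings. A minimal sketch of that scoring step (ConcepPath's actual hierarchical aggregation is more involved; the function names here are illustrative):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def concept_scores(patch_emb, concept_embs):
    """Score one patch embedding against a set of concept text embeddings
    by cosine similarity, normalized to a distribution over concepts."""
    return softmax([cosine(patch_emb, c) for c in concept_embs])
```

In the full framework these per-concept scores, computed with a pathology vision-language encoder, are what ties patches to the GPT-4-induced expert concepts and the learnable concepts.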
Affiliation(s)
- Weiqin Zhao
- School of Computing and Data Science, The University of Hong Kong, Hong Kong SAR, China
- Ziyu Guo
- School of Computing and Data Science, The University of Hong Kong, Hong Kong SAR, China
- Yinshuang Fan
- School of Computing and Data Science, The University of Hong Kong, Hong Kong SAR, China
- Yuming Jiang
- School of Medicine, Wake Forest University, Winston-Salem, NC, USA
- Maximus C F Yeung
- Department of Pathology, The University of Hong Kong, Hong Kong SAR, China
- Lequan Yu
- School of Computing and Data Science, The University of Hong Kong, Hong Kong SAR, China

12
Vázquez-Lema D, Mosqueira-Rey E, Hernández-Pereira E, Fernandez-Lozano C, Seara-Romera F, Pombo-Otero J. Segmentation, classification and interpretation of breast cancer medical images using human-in-the-loop machine learning. Neural Comput Appl 2024. [DOI: 10.1007/s00521-024-10799-7] [Received: 05/22/2024] [Accepted: 11/12/2024] [Indexed: 01/04/2025]
13
Hashimoto N, Hanada H, Miyoshi H, Nagaishi M, Sato K, Hontani H, Ohshima K, Takeuchi I. Multimodal Gated Mixture of Experts Using Whole Slide Image and Flow Cytometry for Multiple Instance Learning Classification of Lymphoma. J Pathol Inform 2024; 15:100359. [PMID: 38322152 PMCID: PMC10844119 DOI: 10.1016/j.jpi.2023.100359] [Received: 08/02/2023] [Revised: 12/07/2023] [Accepted: 12/23/2023] [Indexed: 02/08/2024]
Abstract
In this study, we present a deep-learning-based multimodal classification method for lymphoma diagnosis in digital pathology, which utilizes a whole slide image (WSI) as the primary image data and flow cytometry (FCM) data as auxiliary information. In pathological diagnosis of malignant lymphoma, FCM serves as valuable auxiliary information during the diagnosis process, offering useful insights into predicting the major class (superclass) of subtypes. By incorporating both images and FCM data into the classification process, we can develop a method that mimics the diagnostic process of pathologists, enhancing the explainability. In order to incorporate the hierarchical structure between superclasses and their subclasses, the proposed method utilizes a network structure that effectively combines the mixture of experts (MoE) and multiple instance learning (MIL) techniques, where MIL is widely recognized for its effectiveness in handling WSIs in digital pathology. The MoE network in the proposed method consists of a gating network for superclass classification and multiple expert networks for (sub)class classification, specialized for each superclass. To evaluate the effectiveness of our method, we conducted experiments involving a six-class classification task using 600 lymphoma cases. The proposed method achieved a classification accuracy of 72.3%, surpassing the 69.5% obtained through the straightforward combination of FCM and images, as well as the 70.2% achieved by the method using only images. Moreover, the combination of multiple weights in the MoE and MIL allows for the visualization of specific cellular and tumor regions, resulting in a highly explanatory model that cannot be attained with conventional methods. It is anticipated that by targeting a larger number of classes and increasing the number of expert networks, the proposed method could be effectively applied to the real problem of lymphoma diagnosis.
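The gating-plus-experts arrangement described above can be sketched in a few lines: the gate produces a distribution over superclasses, and each expert's subclass distribution is mixed in proportion to its gate weight. This toy uses arbitrary callables for the networks and ignores the MIL aggregation that precedes it:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_predict(bag_feature, gate, experts):
    """Two-level lymphoma prediction: a gating network scores the
    superclasses, and each superclass's expert network scores its own
    subclasses; the final distribution over all subclasses concatenates the
    experts' softmax outputs weighted by the gate. `gate` and `experts`
    are stand-ins for the trained networks."""
    g = softmax(gate(bag_feature))
    combined = []
    for weight, expert in zip(g, experts):
        combined.extend(weight * p for p in softmax(expert(bag_feature)))
    return combined
```

Because each expert's softmax sums to 1 and the gate weights sum to 1, the concatenated output is itself a valid distribution over all subclasses.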
Affiliation(s)
- Noriaki Hashimoto
- RIKEN Center for Advanced Intelligence Project, Furo-cho, Chikusa-ku, Nagoya, 4648603, Japan
- Hiroyuki Hanada
- RIKEN Center for Advanced Intelligence Project, Furo-cho, Chikusa-ku, Nagoya, 4648603, Japan
- Hiroaki Miyoshi
- Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume, 8300011, Japan
- Miharu Nagaishi
- Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume, 8300011, Japan
- Kensaku Sato
- Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume, 8300011, Japan
- Hidekata Hontani
- Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, 4668555, Japan
- Koichi Ohshima
- Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume, 8300011, Japan
- Ichiro Takeuchi
- RIKEN Center for Advanced Intelligence Project, Furo-cho, Chikusa-ku, Nagoya, 4648603, Japan
- Department of Mechanical Systems Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 4648603, Japan

14
Ahmadvand P, Farahani H, Farnell D, Darbandsari A, Topham J, Karasinska J, Nelson J, Naso J, Jones SJM, Renouf D, Schaeffer DF, Bashashati A. A Deep Learning Approach for the Identification of the Molecular Subtypes of Pancreatic Ductal Adenocarcinoma Based on Whole Slide Pathology Images. Am J Pathol 2024; 194:2302-2312. [PMID: 39222907 DOI: 10.1016/j.ajpath.2024.08.006] [Received: 04/05/2024] [Revised: 08/12/2024] [Accepted: 08/19/2024] [Indexed: 09/04/2024]
Abstract
Delayed diagnosis and treatment resistance result in high pancreatic ductal adenocarcinoma (PDAC) mortality rates. Identifying molecular subtypes can improve treatment, but current methods are costly and time-consuming. In this study, deep learning models were used to identify histologic features that classify PDAC molecular subtypes based on routine hematoxylin-eosin-stained histopathologic slides. A total of 97 histopathology slides associated with resectable PDAC from The Cancer Genome Atlas project were used to train a deep learning model, and its performance was tested on needle biopsy material from 44 patients (110 slides) in a locally annotated cohort. The model achieved balanced accuracies of 96.19% and 83.03% in identifying the classical and basal subtypes of PDAC in The Cancer Genome Atlas and the local cohort, respectively. This study provides a promising method to cost-effectively and rapidly classify PDAC molecular subtypes based on routine hematoxylin-eosin-stained slides, potentially leading to more effective clinical management of this disease.
Affiliation(s)
- Pouya Ahmadvand
- School of Biomedical Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- Hossein Farahani
- School of Biomedical Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- David Farnell
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, British Columbia, Canada; Vancouver General Hospital, Vancouver, British Columbia, Canada
- Amirali Darbandsari
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- James Topham
- Pancreas Centre BC, Vancouver, British Columbia, Canada
- Jessica Nelson
- British Columbia Cancer Research Center, Vancouver, British Columbia, Canada
- Julia Naso
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Steven J M Jones
- Michael Smith Genome Sciences Center, British Columbia Cancer Research Center, Vancouver, British Columbia, Canada
- Daniel Renouf
- Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- David F Schaeffer
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, British Columbia, Canada; Vancouver General Hospital, Vancouver, British Columbia, Canada; Pancreas Centre BC, Vancouver, British Columbia, Canada
- Ali Bashashati
- School of Biomedical Engineering, University of British Columbia, Vancouver, British Columbia, Canada; Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, British Columbia, Canada

15
Kang H, Kim M, Ko YS, Cho Y, Yi MY. WISE: Efficient WSI selection for active learning in histopathology. Comput Med Imaging Graph 2024; 118:102455. [PMID: 39481146 DOI: 10.1016/j.compmedimag.2024.102455] [Received: 05/20/2024] [Revised: 10/06/2024] [Accepted: 10/18/2024] [Indexed: 11/02/2024]
Abstract
Deep neural network (DNN) models have been applied to a wide variety of medical image analysis tasks, often with performance outcomes that match those of medical doctors. However, given that even minor errors in a model can impact patients' lives, it is critical that these models are continuously improved. Hence, active learning (AL) has garnered attention as an effective and sustainable strategy for enhancing DNN models for the medical domain. Extant AL research in histopathology has primarily focused on patch datasets derived from whole-slide images (WSIs), a standard form of cancer diagnostic images obtained from a high-resolution scanner. However, this approach has failed to address the selection of WSIs, which can impede the performance improvement of deep learning models and increase the number of WSIs needed to achieve the target performance. This study introduces a WSI-level AL method, termed WSI-informative selection (WISE). WISE is designed to select informative WSIs using a newly formulated WSI-level class distance metric. This method aims to identify diverse and uncertain cases of WSIs, thereby contributing to model performance enhancement. WISE demonstrates state-of-the-art performance across the Colon and Stomach datasets, collected in the real world, as well as the public DigestPath dataset, significantly reducing the required number of WSIs by more than threefold compared to the one-pool dataset setting, which has been dominantly used in the field.
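A WSI-level informativeness score of the kind WISE formalizes can be illustrated with a generic margin-based stand-in (the paper's actual class distance metric differs; this only shows how slide-level uncertainty drives selection):

```python
def wsi_uncertainty(patch_probs):
    """Slide-level uncertainty as the mean top-1/top-2 probability margin
    over a slide's patches (smaller margin = more uncertain). A generic
    stand-in for WISE's WSI-level class distance metric."""
    margins = []
    for probs in patch_probs:
        top = sorted(probs, reverse=True)
        margins.append(top[0] - top[1])
    return sum(margins) / len(margins)

def select_wsis(slides, k):
    """Return the ids of the k most uncertain slides, i.e. the next
    candidates to send for expert annotation."""
    ranked = sorted(slides, key=lambda s: wsi_uncertainty(s["patch_probs"]))
    return [s["id"] for s in ranked[:k]]
```

A full acquisition step would also enforce diversity among the selected slides, which is the other half of what WISE optimizes.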
Affiliation(s)
- Hyeongu Kang
- Graduate School of Data Science, Department of Industrial and Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
- Mujin Kim
- Graduate School of Data Science, Department of Industrial and Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
- Young Sin Ko
- Pathology Center, Seegene Medical Foundation, Seoul, South Korea
- Yesung Cho
- Graduate School of Data Science, Department of Industrial and Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
- Mun Yong Yi
- Graduate School of Data Science, Department of Industrial and Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea

16
Gonzalez R, Saha A, Campbell CJ, Nejat P, Lokker C, Norgan AP. Seeing the random forest through the decision trees. Supporting learning health systems from histopathology with machine learning models: Challenges and opportunities. J Pathol Inform 2024; 15:100347. [PMID: 38162950 PMCID: PMC10755052 DOI: 10.1016/j.jpi.2023.100347] [Received: 08/21/2023] [Revised: 10/06/2023] [Accepted: 11/01/2023] [Indexed: 01/03/2024]
Abstract
This paper discusses some overlooked challenges faced when working with machine learning models for histopathology and presents a novel opportunity to support "Learning Health Systems" with them. Initially, the authors elaborate on these challenges after separating them according to their mitigation strategies: those that need innovative approaches, time, or future technological capabilities and those that require a conceptual reappraisal from a critical perspective. Then, a novel opportunity to support "Learning Health Systems" by integrating hidden information extracted by ML models from digitized histopathology slides with other healthcare big data is presented.
Affiliation(s)
- Ricardo Gonzalez
- DeGroote School of Business, McMaster University, Hamilton, Ontario, Canada
- Division of Computational Pathology and Artificial Intelligence, Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, United States
- Ashirbani Saha
- Department of Oncology, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Escarpment Cancer Research Institute, McMaster University and Hamilton Health Sciences, Hamilton, Ontario, Canada
- Clinton J.V. Campbell
- William Osler Health System, Brampton, Ontario, Canada
- Department of Pathology and Molecular Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Peyman Nejat
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, United States
- Cynthia Lokker
- Health Information Research Unit, Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada
- Andrew P. Norgan
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, United States

17
Huang YY, Chu WT. Learnable Context in Multiple Instance Learning for Whole Slide Image Classification and Segmentation. J Imaging Inform Med 2024:10.1007/s10278-024-01302-8. [PMID: 39495442 DOI: 10.1007/s10278-024-01302-8] [Received: 05/23/2024] [Revised: 09/04/2024] [Accepted: 09/25/2024] [Indexed: 11/05/2024]
Abstract
Multiple instance learning (MIL) has become a cornerstone in whole slide image (WSI) analysis. In this paradigm, a WSI is conceptualized as a bag of instances. Instance features are extracted by a feature extractor, and then a feature aggregator fuses these instance features into a bag representation. In this paper, we advocate that both feature extraction and aggregation can be enhanced by considering the context or correlation between instances. We learn contextual features between instances, and then fuse contextual features with instance features to enhance instance representations. For feature aggregation, we observe performance instability particularly when disease-positive instances are only a minor fraction of the WSI. We introduce a self-attention mechanism to discover correlation among instances and foster more effective bag representations. Through comprehensive testing, we have demonstrated that the proposed method outperforms existing WSI classification methods by 1 to 4% classification accuracy, based on the Camelyon16 and the TCGA-NSCLC datasets. The proposed method also outperforms the most recent weakly supervised WSI segmentation method by 0.6 in terms of the Dice coefficient, based on the Camelyon16 dataset.
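The self-attention aggregation motif, in its simplest attention-pooling form, weights each instance before fusing so that a handful of disease-positive patches can dominate the bag representation even when they are a small fraction of the slide. A scalar-scored sketch, with `score_fn` standing in for the learned attention network:

```python
import math

def attention_mil_pool(instances, score_fn):
    """Fuse instance feature vectors into one bag representation using
    softmax attention weights. `score_fn` maps an instance to a scalar
    relevance score; a trained model would compute this from learned
    projections (and, in the paper's case, from inter-instance context)."""
    scores = [score_fn(x) for x in instances]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(instances[0])
    bag = [sum(w * x[d] for w, x in zip(weights, instances)) for d in range(dim)]
    return bag, weights
```

The weights double as an interpretability signal: high-attention tiles indicate where the model believes the lesion is, which is also how weakly supervised segmentation maps are often derived.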
Affiliation(s)
- Wei-Ta Chu
- National Cheng Kung University, Tainan, Taiwan

18
Wang H, Luo L, Wang F, Tong R, Chen YW, Hu H, Lin L, Chen H. Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Bag-Level Classifier is a Good Instance-Level Teacher. IEEE Trans Med Imaging 2024; 43:3964-3976. [PMID: 38781068 DOI: 10.1109/tmi.2024.3404549] [Indexed: 05/25/2024]
Abstract
Multiple Instance Learning (MIL) has demonstrated promise in Whole Slide Image (WSI) classification. However, a major challenge persists due to the high computational cost associated with processing these gigapixel images. Existing methods generally adopt a two-stage approach, comprising a non-learnable feature embedding stage and a classifier training stage. Though it can greatly reduce memory consumption by using a fixed feature embedder pre-trained on other domains, such a scheme also results in a disparity between the two stages, leading to suboptimal classification accuracy. To address this issue, we propose that a bag-level classifier can be a good instance-level teacher. Based on this idea, we design Iteratively Coupled Multiple Instance Learning (ICMIL) to couple the embedder and the bag classifier at a low cost. ICMIL initially fixes the patch embedder to train the bag classifier, followed by fixing the bag classifier to fine-tune the patch embedder. The refined embedder can then generate better representations in return, leading to a more accurate classifier for the next iteration. To realize more flexible and more effective embedder fine-tuning, we also introduce a teacher-student framework to efficiently distill the category knowledge in the bag classifier to help the instance-level embedder fine-tuning. Intensive experiments were conducted on four distinct datasets to validate the effectiveness of ICMIL. The experimental results consistently demonstrated that our method significantly improves the performance of existing MIL backbones, achieving state-of-the-art results. The code and the organized datasets can be accessed at: https://github.com/Dootmaan/ICMIL/tree/confidence-based.
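Stripped of the models themselves, ICMIL's contribution is a training schedule. Its control flow can be shown with placeholder callables (the real `fit`/`tune` steps are full training loops, and the paper adds a teacher-student distillation inside the embedder stage):

```python
def icmil_rounds(embedder, classifier, fit_classifier, tune_embedder, rounds=3):
    """The ICMIL coupling scheme as control flow: alternately freeze the
    patch embedder to train the bag classifier, then freeze the classifier
    to fine-tune the embedder, so each stage improves the other. All four
    arguments are placeholders for real models and training procedures."""
    for _ in range(rounds):
        classifier = fit_classifier(embedder, classifier)  # embedder frozen
        embedder = tune_embedder(embedder, classifier)     # classifier frozen
    return embedder, classifier
```

The point of the alternation is that neither stage ever trains against a stale partner for more than one round, closing the embedder/classifier disparity the abstract describes.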
19
Diniz E, Santini T, Helmet K, Aizenstein HJ, Ibrahim TS. Cross-modality image translation of 3 Tesla Magnetic Resonance Imaging to 7 Tesla using Generative Adversarial Networks. medRxiv 2024:2024.10.16.24315609. [PMID: 39484249 PMCID: PMC11527090 DOI: 10.1101/2024.10.16.24315609] [Indexed: 11/03/2024]
Abstract
The rapid advancements in magnetic resonance imaging (MRI) technology have precipitated a new paradigm wherein cross-modality data translation across diverse imaging platforms, field strengths, and different sites is increasingly challenging. This issue is particularly accentuated when transitioning from 3 Tesla (3T) to 7 Tesla (7T) MRI systems. This study proposes a novel solution to these challenges using generative adversarial networks (GANs), specifically the CycleGAN architecture, to create synthetic 7T images from 3T data. Employing a dataset of 1112 and 490 unpaired 3T and 7T MR images, respectively, we trained a 2-dimensional (2D) CycleGAN model, evaluating its performance on a paired dataset of 22 participants scanned at 3T and 7T. Independent testing on 22 distinct participants affirmed the model's proficiency in accurately predicting various tissue types, encompassing cerebral spinal fluid, gray matter, and white matter. Our approach provides a reliable and efficient methodology for synthesizing 7T images, achieving a median Dice of 6.82%, 7.63%, and 4.85% for Cerebral Spinal Fluid (CSF), Gray Matter (GM), and White Matter (WM), respectively, in the testing dataset, thereby significantly aiding in harmonizing heterogeneous datasets. Furthermore, it delineates the potential of GANs in amplifying the contrast-to-noise ratio (CNR) from 3T, potentially enhancing the diagnostic capability of the images. While acknowledging the risk of model overfitting, our research underscores a promising progression towards harnessing the benefits of 7T MR systems in research investigations while preserving compatibility with existent 3T MR data. This work was previously presented at the ISMRM 2021 conference (Diniz, Helmet, Santini, Aizenstein, & Ibrahim, 2021).
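The cycle-consistency idea that makes unpaired 3T/7T training possible is compact enough to state directly. A scalar toy of the L1 cycle term (real CycleGAN training adds two adversarial losses and operates on image tensors, not numbers):

```python
def cycle_consistency_loss(batch_3t, g_3to7, f_7to3):
    """Mean absolute error between each (toy, scalar) 3T image and its
    3T -> 7T -> 3T reconstruction. In CycleGAN this L1 term is what lets
    *unpaired* 3T/7T scans supervise both generators: a good pair of
    mappings must approximately invert each other."""
    total = 0.0
    for x in batch_3t:
        total += abs(f_7to3(g_3to7(x)) - x)
    return total / len(batch_3t)
```

A symmetric term for the 7T -> 3T -> 7T direction is added in practice, so neither generator can cheat by collapsing its inputs.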
Affiliation(s)
- Eduardo Diniz
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pennsylvania, United States
- Tales Santini
- Department of Bioengineering, University of Pittsburgh, Pennsylvania, United States
- Karim Helmet
- Department of Bioengineering, University of Pittsburgh, Pennsylvania, United States
- Department of Psychiatry, University of Pittsburgh, Pennsylvania, United States
- Howard J. Aizenstein
- Department of Bioengineering, University of Pittsburgh, Pennsylvania, United States
- Department of Psychiatry, University of Pittsburgh, Pennsylvania, United States
- Tamer S. Ibrahim
- Department of Bioengineering, University of Pittsburgh, Pennsylvania, United States

20
Kwak S, Akbari H, Garcia JA, Mohan S, Dicker Y, Sako C, Matsumoto Y, Nasrallah MP, Shalaby M, O’Rourke DM, Shinohara RT, Liu F, Badve C, Barnholtz-Sloan JS, Sloan AE, Lee M, Jain R, Cepeda S, Chakravarti A, Palmer JD, Dicker AP, Shukla G, Flanders AE, Shi W, Woodworth GF, Davatzikos C. Predicting peritumoral glioblastoma infiltration and subsequent recurrence using deep-learning-based analysis of multi-parametric magnetic resonance imaging. J Med Imaging (Bellingham) 2024; 11:054001. [PMID: 39220048 PMCID: PMC11363410 DOI: 10.1117/1.jmi.11.5.054001] [Received: 04/19/2024] [Revised: 07/16/2024] [Accepted: 08/06/2024] [Indexed: 09/04/2024]
Abstract
Purpose Glioblastoma (GBM) is the most common and aggressive primary adult brain tumor. The standard treatment approach is surgical resection to target the enhancing tumor mass, followed by adjuvant chemoradiotherapy. However, malignant cells often extend beyond the enhancing tumor boundaries and infiltrate the peritumoral edema. Traditional supervised machine learning techniques hold potential in predicting tumor infiltration extent but are hindered by the extensive resources needed to generate expertly delineated regions of interest (ROIs) for training models on tissue most and least likely to be infiltrated. Approach We developed a method combining expert knowledge and training-based data augmentation to automatically generate numerous training examples, enhancing the accuracy of our model for predicting tumor infiltration through predictive maps. Such maps can be used for targeted supra-total surgical resection and other therapies that might benefit from intensive yet well-targeted treatment of infiltrated tissue. We apply our method to preoperative multi-parametric magnetic resonance imaging (mpMRI) scans from a subset of 229 patients of a multi-institutional consortium (Radiomics Signatures for Precision Diagnostics) and test the model on subsequent scans with pathology-proven recurrence. Results Leave-one-site-out cross-validation was used to train and evaluate the tumor infiltration prediction model using initial pre-surgical scans, comparing the generated prediction maps with follow-up mpMRI scans confirming recurrence through post-resection tissue analysis. Performance was measured by voxel-wise odds ratios (ORs) across six institutions: University of Pennsylvania (OR: 9.97), Ohio State University (OR: 14.03), Case Western Reserve University (OR: 8.13), New York University (OR: 16.43), Thomas Jefferson University (OR: 8.22), and Rio Hortega (OR: 19.48).
Conclusions The proposed model demonstrates that mpMRI analysis using deep learning can predict infiltration in the peri-tumoral brain region for GBM patients without needing to train a model using expert ROI drawings. Results for each institution demonstrate the model's generalizability and reproducibility.
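The voxel-wise odds ratio used to score the prediction maps comes from a 2x2 table of predicted-infiltrated vs. recurred voxel counts. A sketch (the zero-cell correction is a common convention we add for safety, not necessarily the paper's exact procedure):

```python
def voxel_odds_ratio(tp, fp, fn, tn):
    """Odds ratio from a 2x2 voxel count table: predicted-infiltrated vs.
    not, crossed with recurred vs. not. OR > 1 means voxels flagged by the
    infiltration map went on to recur disproportionately often. The 0.5
    (Haldane-Anscombe) correction guards against division by a zero cell."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = tp + 0.5, fp + 0.5, fn + 0.5, tn + 0.5
    return (tp * tn) / (fp * fn)
```

Computed per institution over all voxels in the held-out site, this is the kind of statistic behind the OR values quoted in the Results above.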
Affiliation(s)
- Sunwoo Kwak
- University of Pennsylvania, Perelman School of Medicine, Department of Radiology, Philadelphia, Pennsylvania, United States
- University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Philadelphia, Pennsylvania, United States
- Hamed Akbari
- Santa Clara University, School of Engineering, Department of Bioengineering, Santa Clara, California, United States
- Jose A. Garcia
- University of Pennsylvania, Perelman School of Medicine, Department of Radiology, Philadelphia, Pennsylvania, United States
- University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Philadelphia, Pennsylvania, United States
- Suyash Mohan
- University of Pennsylvania, Perelman School of Medicine, Department of Radiology, Philadelphia, Pennsylvania, United States
- University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Philadelphia, Pennsylvania, United States
- Yehuda Dicker
- Columbia University, School of Engineering, Department of Computer Science, New York, United States
- Chiharu Sako
- University of Pennsylvania, Perelman School of Medicine, Department of Radiology, Philadelphia, Pennsylvania, United States
- University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Philadelphia, Pennsylvania, United States
- Yuji Matsumoto
- University of Pennsylvania, Perelman School of Medicine, Department of Radiology, Philadelphia, Pennsylvania, United States
- MacLean P. Nasrallah
- University of Pennsylvania, Perelman School of Medicine, Department of Radiology, Philadelphia, Pennsylvania, United States
- University of Pennsylvania, Perelman School of Medicine, Department of Pathology and Laboratory Medicine, Philadelphia, Pennsylvania, United States
- Mahmoud Shalaby
- Mercy Catholic Medical Center, Department of Radiology, Philadelphia, Pennsylvania, United States
- Donald M. O’Rourke
- University of Pennsylvania, Perelman School of Medicine, Department of Neurosurgery, Philadelphia, Pennsylvania, United States
- Russel T. Shinohara
- University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Philadelphia, Pennsylvania, United States
- University of Pennsylvania, Perelman School of Medicine, Department of Biostatistics and Epidemiology, Philadelphia, Pennsylvania, United States
- Fang Liu
- University of Pennsylvania, Perelman School of Medicine, Department of Biostatistics and Epidemiology, Philadelphia, Pennsylvania, United States
- Chaitra Badve
- Case Western Reserve University, University Hospitals Cleveland Medical Center, Department of Radiology, Cleveland, Ohio, United States
- Jill S. Barnholtz-Sloan
- National Cancer Institute, Center for Biomedical Informatics and Information Technology, Division of Cancer Epidemiology and Genetics, Bethesda, Maryland, United States
- Andrew E. Sloan
- Piedmont Healthcare, Division of Neuroscience, Atlanta, Georgia, United States
- Matthew Lee
- NYU Grossman School of Medicine, Department of Radiology, New York, United States
- Rajan Jain
- NYU Grossman School of Medicine, Department of Radiology, New York, United States
- NYU Grossman School of Medicine, Department of Neurosurgery, New York, United States
- Arnab Chakravarti
- Ohio State University Wexner Medical Center, Department of Radiation Oncology, Columbus, Ohio, United States
- Joshua D. Palmer
- Ohio State University Wexner Medical Center, Department of Radiation Oncology, Columbus, Ohio, United States
- Adam P. Dicker
- Thomas Jefferson University, Sidney Kimmel Cancer Center, Department of Radiation Oncology, Philadelphia, Pennsylvania, United States
- Gaurav Shukla
- Thomas Jefferson University, Sidney Kimmel Cancer Center, Department of Radiation Oncology, Philadelphia, Pennsylvania, United States
- Adam E. Flanders
- Thomas Jefferson University, Sidney Kimmel Cancer Center, Department of Radiation Oncology, Philadelphia, Pennsylvania, United States
- Wenyin Shi
- Thomas Jefferson University, Sidney Kimmel Cancer Center, Department of Radiation Oncology, Philadelphia, Pennsylvania, United States
- Graeme F. Woodworth
- University of Maryland, School of Medicine, Department of Neurosurgery, Baltimore, Maryland, United States
- Christos Davatzikos
- University of Pennsylvania, Perelman School of Medicine, Department of Radiology, Philadelphia, Pennsylvania, United States
- University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Philadelphia, Pennsylvania, United States

21
Koga R, Koide S, Tanaka H, Taguchi K, Kugler M, Yokota T, Ohshima K, Miyoshi H, Nagaishi M, Hashimoto N, Takeuchi I, Hontani H. A study of criteria for grading follicular lymphoma using a cell type classifier from pathology images based on complementary-label learning. Micron 2024; 184:103663. [PMID: 38843576 DOI: 10.1016/j.micron.2024.103663] [Received: 04/04/2024] [Revised: 05/21/2024] [Accepted: 05/21/2024] [Indexed: 06/30/2024]
Abstract
We propose a criterion for grading follicular lymphoma that is consistent with the intuitive evaluation, which is conducted by experienced pathologists. A criterion for grading follicular lymphoma is defined by the World Health Organization (WHO) based on the number of centroblasts and centrocytes within the field of view. However, the WHO criterion is not often used in clinical practice because it is impractical for pathologists to visually identify the cell type of each cell and count the number of centroblasts and centrocytes. Hence, based on the widespread use of digital pathology, we make it practical to identify and count the cell type by using image processing and then construct a criterion for grading based on the number of cells. Here, the problem is that labeling the cell type is not easy even for experienced pathologists. To alleviate this problem, we build a new dataset for cell type classification, which contains the pathologists' confusion records during labeling, and we construct the cell type classifier using complementary-label learning from this dataset. Then we propose a criterion based on the composition ratio of cell types that is consistent with the pathologists' grading. Our experiments demonstrate that the classifier can accurately identify cell types and the proposed criterion is more consistent with the pathologists' grading than the current WHO criterion.
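Complementary-label learning turns a record like "this cell is not a centroblast" into a training signal. The core loss can be stated in one line (a simplification of the estimators used in the complementary-label literature, shown for a single labeled example):

```python
import math

def complementary_label_loss(probs, not_class):
    """Loss for a complementary label ('this cell is NOT class k'): the
    negative log-probability that the prediction avoids class k. Minimizing
    it pushes probability mass away from the ruled-out class, which is
    exactly the supervision a pathologist's confusion record provides
    ('not a centroblast, unsure otherwise')."""
    return -math.log(max(1.0 - probs[not_class], 1e-12))
```

This is why the confusion records collected during labeling are usable data rather than noise: every rejected class constrains the classifier, even when the true class was never settled.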
Affiliation(s)
- Ryoichi Koga
- Department of Computer Science, Gokiso-cho, Showa-ku, Nagoya-shi, Aichi 466-8555, Japan
- Shingo Koide
- Department of Computer Science, Gokiso-cho, Showa-ku, Nagoya-shi, Aichi 466-8555, Japan
- Hiromu Tanaka
- Department of Computer Science, Gokiso-cho, Showa-ku, Nagoya-shi, Aichi 466-8555, Japan
- Kei Taguchi
- Department of Computer Science, Gokiso-cho, Showa-ku, Nagoya-shi, Aichi 466-8555, Japan
- Mauricio Kugler
- Department of Computer Science, Gokiso-cho, Showa-ku, Nagoya-shi, Aichi 466-8555, Japan
- Tatsuya Yokota
- Department of Computer Science, Gokiso-cho, Showa-ku, Nagoya-shi, Aichi 466-8555, Japan
- Koichi Ohshima
- Department of Pathology, 67 Asahi-cho, Kurume-shi, Fukuoka 830-0011, Japan
- Hiroaki Miyoshi
- Department of Pathology, 67 Asahi-cho, Kurume-shi, Fukuoka 830-0011, Japan
- Miharu Nagaishi
- Department of Pathology, 67 Asahi-cho, Kurume-shi, Fukuoka 830-0011, Japan
- Noriaki Hashimoto
- RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ichiro Takeuchi
- RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Department of Mechanical Systems Engineering, Furo-cho, Chikusa-ku, Nagoya-shi, Aichi 464-8601, Japan
- Hidekata Hontani
- Department of Computer Science, Gokiso-cho, Showa-ku, Nagoya-shi, Aichi 466-8555, Japan.
22
Aftab R, Yan Q, Zhao J, Yong G, Huajie Y, Urrehman Z, Mohammad Khalid F. Neighborhood attention transformer multiple instance learning for whole slide image classification. Front Oncol 2024; 14:1389396. [PMID: 39267847 PMCID: PMC11390382 DOI: 10.3389/fonc.2024.1389396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2024] [Accepted: 06/20/2024] [Indexed: 09/15/2024] Open
Abstract
Introduction Pathologists rely on whole slide images (WSIs) to diagnose cancer by identifying tumor cells and subtypes. Deep learning models, particularly weakly supervised ones, classify WSIs from image tiles but may produce false positives and false negatives because tumors are heterogeneous: both cancerous and healthy cells can proliferate in patterns that extend beyond individual tiles, so tile-level errors propagate into inaccurate tumor-level classifications. Methods To address this limitation, we introduce NATMIL (Neighborhood Attention Transformer Multiple Instance Learning), which utilizes the Neighborhood Attention Transformer to incorporate contextual dependencies among WSI tiles. By integrating the broader tissue context into multiple instance learning, NATMIL improves the accuracy of tumor classification and reduces the errors associated with isolated tile analysis. Results We conducted a quantitative analysis to evaluate NATMIL's performance against other weakly supervised algorithms. When applied to subtyping non-small cell lung cancer (NSCLC) and lymph node (LN) tumors, NATMIL demonstrated superior accuracy, achieving 89.6% on the Camelyon dataset and 88.1% on the TCGA-LUSC dataset and outperforming existing methods. These results underscore NATMIL's potential as a robust tool for improving the precision of cancer diagnosis using WSIs. Discussion Our findings demonstrate that NATMIL significantly improves tumor classification accuracy by reducing errors associated with isolated tile analysis. The integration of contextual dependencies enhances the precision of cancer diagnosis using WSIs, highlighting NATMIL's potential as a robust tool in pathology.
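To illustrate the core idea behind neighborhood attention over WSI tiles (a minimal sketch, not the authors' implementation; the function name, shapes, and single-head dot-product formulation are assumptions):

```python
import numpy as np

def neighborhood_attention_pool(feats, coords, radius=1):
    """Aggregate tile features with attention restricted to grid neighbors.

    feats:  (N, D) tile embeddings
    coords: (N, 2) integer grid positions of the tiles on the slide
    radius: Chebyshev distance defining the attention window
    """
    # Tile i may attend to tile j only if j lies within its window.
    d = np.abs(coords[:, None, :] - coords[None, :, :]).max(axis=-1)
    mask = d <= radius
    scores = feats @ feats.T / np.sqrt(feats.shape[1])
    scores = np.where(mask, scores, -np.inf)
    # Row-wise softmax over the allowed neighbors only.
    scores -= scores.max(axis=1, keepdims=True)
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)
    context = w @ feats                # context-enriched tile features
    return context.mean(axis=0)        # naive slide-level embedding
```

Restricting the softmax to a spatial window is what distinguishes this from vanilla self-attention: distant tiles contribute zero weight, so each tile's representation reflects its local tissue context.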
Affiliation(s)
- Rukhma Aftab
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, Shanxi, China
- Qiang Yan
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, Shanxi, China
- School of Software, North University of China, Taiyuan, Shanxi, China
- Juanjuan Zhao
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, Shanxi, China
- Gao Yong
- Department of Respiratory and Critical Care Medicine, Sinopharm Tongmei General Hospital, Datong, Shanxi, China
- Yue Huajie
- First Hospital of Shanxi Medical University, Shanxi Medical University, Taiyuan, Shanxi, China
- Zia Urrehman
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, Shanxi, China
- Faizi Mohammad Khalid
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, Shanxi, China
23
Yang X, Zhang R, Yang Y, Zhang Y, Chen K. PathEX: Make good choice for whole slide image extraction. PLoS One 2024; 19:e0304702. [PMID: 39208135 PMCID: PMC11361590 DOI: 10.1371/journal.pone.0304702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2024] [Accepted: 05/17/2024] [Indexed: 09/04/2024] Open
Abstract
BACKGROUND The tile-based approach has been widely used for slide-level predictions in whole slide image (WSI) analysis. However, the irregular shapes and variable dimensions of tumor regions pose challenges for the process. To address this issue, we proposed PathEX, a framework that integrates intersection over tile (IoT) and background over tile (BoT) algorithms to extract tile images around boundaries of annotated regions while excluding the blank tile images within these regions. METHODS We developed PathEX, which incorporated IoT and BoT into tile extraction, for training a classification model in CAM (239 WSIs) and PAIP (40 WSIs) datasets. By adjusting the IoT and BoT parameters, we generated eight training sets and corresponding models for each dataset. The performance of PathEX was assessed on the testing set comprising 13,076 tile images from 48 WSIs of CAM dataset and 6,391 tile images from 10 WSIs of PAIP dataset. RESULTS PathEX could extract tile images around boundaries of annotated region differently by adjusting the IoT parameter, while exclusion of blank tile images within annotated regions achieved by setting the BoT parameter. As adjusting IoT from 0.1 to 1.0, and 1-BoT from 0.0 to 0.5, we got 8 train sets. Experimentation revealed that set C demonstrates potential as the most optimal candidate. Nevertheless, a combination of IoT values ranging from 0.2 to 0.5 and 1-BoT values ranging from 0.2 to 0.5 also yielded favorable outcomes. CONCLUSIONS In this study, we proposed PathEX, a framework that integrates IoT and BoT algorithms for tile image extraction at the boundaries of annotated regions while excluding blank tiles within these regions. Researchers can conveniently set the thresholds for IoT and BoT to facilitate tile image extraction in their own studies. The insights gained from this research provide valuable guidance for tile image extraction in digital pathology applications.
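A rough sketch of IoT/BoT-style tile filtering (the function name and default thresholds are illustrative assumptions, not PathEX's actual code):

```python
def keep_tile(tile_box, annotation_area_in_tile, background_area_in_tile,
              iot_threshold=0.3, bot_threshold=0.5):
    """Decide whether to keep a tile, in the spirit of PathEX's two rules.

    IoT (intersection over tile): fraction of the tile covered by the
    annotated region; high enough coverage keeps boundary tiles.
    BoT (background over tile): fraction of the tile that is blank
    background; too much background discards blank tiles inside regions.
    """
    x0, y0, x1, y1 = tile_box
    tile_area = (x1 - x0) * (y1 - y0)
    iot = annotation_area_in_tile / tile_area
    bot = background_area_in_tile / tile_area
    return iot >= iot_threshold and bot <= bot_threshold
```

In the paper's parameterization the background rule is expressed as 1-BoT; here `bot_threshold` plays the complementary role.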
Affiliation(s)
- Xinda Yang
- Renmin University of China School of Information, Beijing, P.R. China
- Ranze Zhang
- Breast Tumor Center, Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Breast Tumor Center, Sun Yat-sen Breast Tumor Hospital, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yuan Yang
- Department of Research and Development, Health Data (Beijing) Technology Co., Ltd, Guangzhou, Guangdong, P.R. China
- Yu Zhang
- Renmin University of China School of Information, Beijing, P.R. China
- Kai Chen
- Breast Tumor Center, Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Breast Tumor Center, Sun Yat-sen Breast Tumor Hospital, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Artificial Intelligence Lab, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
24
He T, Shi S, Liu Y, Zhu L, Wei Y, Zhang F, Shi H, He Y, Han A. Pathology diagnosis of intraoperative frozen thyroid lesions assisted by deep learning. BMC Cancer 2024; 24:1069. [PMID: 39210289 PMCID: PMC11363383 DOI: 10.1186/s12885-024-12849-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2024] [Accepted: 08/26/2024] [Indexed: 09/04/2024] Open
Abstract
BACKGROUND Thyroid cancer is a common thyroid malignancy. Most thyroid lesions require intraoperative frozen pathology diagnosis, which provides important information for precision surgery. As digital whole slide images (WSIs) have developed, deep learning methods for histopathological classification of the thyroid gland (paraffin sections) have achieved outstanding results. Our current study aims to clarify whether deep learning can assist pathology diagnosis of intraoperative frozen thyroid lesions. METHODS We propose an artificial intelligence-assisted diagnostic system for frozen thyroid lesions that applies prior knowledge in a cascade: a binary judgment of whether the lesion is cancerous, followed by a four-way judgment of the cancer type, categorizing frozen thyroid lesions into five categories: papillary thyroid carcinoma, medullary thyroid carcinoma, anaplastic thyroid carcinoma, follicular thyroid tumor, and non-cancerous lesion. We obtained 4409 frozen digital pathology sections (WSIs) of thyroid from the First Affiliated Hospital of Sun Yat-sen University (SYSUFH) to train and test the model, and performance was validated by six-fold cross-validation; 101 papillary microcarcinoma sections were used to validate the system's sensitivity, and 1388 thyroid WSIs were used for evaluation on an external dataset. The deep learning models were compared on several metrics, including accuracy, F1 score, recall, precision, and AUC (area under the curve). RESULTS We developed the first deep learning-based frozen thyroid diagnostic classifier for histopathological WSI classification of papillary carcinoma, medullary carcinoma, follicular tumor, anaplastic carcinoma, and non-carcinoma lesions. On test slides, the system had an accuracy of 0.9459, a precision of 0.9475, and an AUC of 0.9955. On the papillary carcinoma test slides, the system accurately predicted even lesions as small as 2 mm in diameter. With the acceleration component, tile processing completed in 346.12 s and visual inference predictions were obtained in 98.61 s, meeting the time requirements for intraoperative diagnosis. Our study employs a deep learning approach for high-precision classification of intraoperative frozen thyroid lesions in the clinical setting, with potential clinical implications for assisting pathologists and for precision surgery of thyroid lesions.
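The two-stage decision described in the abstract (binary cancer/non-cancer judgment, then a four-way subtype call) can be sketched as follows; the probability inputs, threshold, and names are hypothetical stand-ins for the system's model outputs:

```python
SUBTYPES = ["papillary thyroid carcinoma", "medullary thyroid carcinoma",
            "anaplastic thyroid carcinoma", "follicular thyroid tumor"]

def cascade_diagnose(p_cancer, subtype_probs, threshold=0.5):
    """Two-stage cascade classification of a frozen thyroid slide.

    p_cancer:      stage-1 probability that the slide is cancerous
    subtype_probs: stage-2 probabilities over the four cancer subtypes
    """
    if p_cancer < threshold:
        return "non-cancerous lesion"
    # Only cancerous slides reach the four-way subtype decision.
    best = max(range(len(SUBTYPES)), key=lambda i: subtype_probs[i])
    return SUBTYPES[best]
```

The cascade encodes the prior knowledge that the easier binary decision should gate the harder subtype decision, so stage 2 never sees clearly benign slides.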
MESH Headings
- Humans
- Deep Learning
- Thyroid Neoplasms/pathology
- Thyroid Neoplasms/diagnosis
- Thyroid Neoplasms/surgery
- Frozen Sections
- Thyroid Cancer, Papillary/pathology
- Thyroid Cancer, Papillary/diagnosis
- Thyroid Cancer, Papillary/surgery
- Carcinoma, Papillary/pathology
- Carcinoma, Papillary/surgery
- Carcinoma, Papillary/diagnosis
- Adenocarcinoma, Follicular/pathology
- Adenocarcinoma, Follicular/diagnosis
- Adenocarcinoma, Follicular/surgery
- Thyroid Gland/pathology
- Thyroid Gland/surgery
- Carcinoma, Neuroendocrine/pathology
- Carcinoma, Neuroendocrine/diagnosis
- Carcinoma, Neuroendocrine/surgery
- Female
- Male
- Middle Aged
- Adult
- Intraoperative Period
- Thyroid Carcinoma, Anaplastic/pathology
- Thyroid Carcinoma, Anaplastic/diagnosis
- Thyroid Carcinoma, Anaplastic/surgery
Affiliation(s)
- Tingting He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, Guangdong, China
- Shanshan Shi
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, Guangdong, China
- Yiqing Liu
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, Guangdong, China
- Lianghui Zhu
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, Guangdong, China
- Yani Wei
- Department of Pathology, the First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Fenfen Zhang
- Department of Pathology, the First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Huijuan Shi
- Department of Pathology, the First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China.
- Yonghong He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, Guangdong, China.
- Anjia Han
- Department of Pathology, the First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China.
25
Hussain I, Boza J, Lukande R, Ayanga R, Semeere A, Cesarman E, Martin J, Maurer T, Erickson D. Automated detection of Kaposi sarcoma-associated herpesvirus infected cells in immunohistochemical images of skin biopsies. RESEARCH SQUARE 2024:rs.3.rs-4736178. [PMID: 39184072 PMCID: PMC11343169 DOI: 10.21203/rs.3.rs-4736178/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/27/2024]
Abstract
Immunohistochemical (IHC) staining for the antigen of Kaposi sarcoma-associated herpesvirus (KSHV), latency-associated nuclear antigen (LANA), is helpful in diagnosing Kaposi sarcoma (KS). A challenge, however, lies in distinguishing anti-LANA-positive cells from morphologically similar brown counterparts. In this work, we demonstrate a framework for automated localization and quantification of LANA positivity in whole slide images (WSI) of skin biopsies, leveraging weakly supervised multiple instance learning (MIL) while reducing false positive predictions by introducing a novel morphology-based slide aggregation method. Our framework generates interpretable heatmaps, offering insights into precise anti-LANA-positive cell localization within WSIs and a quantitative value for the percentage of positive tiles, which may assist with histological subtyping. We trained and tested our framework with an anti-LANA-stained KS pathology dataset prepared by pathologists in the United States from skin biopsies of KS-suspected patients investigated in Uganda. We achieved an area under the receiver operating characteristic curve (AUC) of 0.99 with a sensitivity and specificity of 98.15% and 96.00% in predicting anti-LANA-positive WSIs in a test dataset. We believe the framework holds promise for automated detection of LANA in skin biopsies, which may be especially impactful in resource-limited areas that lack trained pathologists.
26
Scalco R, Oliveira LC, Lai Z, Harvey DJ, Abujamil L, DeCarli C, Jin LW, Chuah CN, Dugger BN. Machine learning quantification of Amyloid-β deposits in the temporal lobe of 131 brain bank cases. Acta Neuropathol Commun 2024; 12:134. [PMID: 39154006 PMCID: PMC11330038 DOI: 10.1186/s40478-024-01827-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2024] [Accepted: 06/18/2024] [Indexed: 08/19/2024] Open
Abstract
Accurate and scalable quantification of amyloid-β (Aβ) pathology is crucial for deeper disease phenotyping and furthering research in Alzheimer Disease (AD). This multidisciplinary study addresses current limitations in neuropathology by leveraging a machine learning (ML) pipeline to perform granular quantification of Aβ deposits and assess their distribution in the temporal lobe. Utilizing 131 whole-slide images from consecutively autopsied cases at the University of California Davis Alzheimer Disease Research Center, our objectives were threefold: (1) validate an automatic workflow for Aβ deposit quantification in white matter (WM) and gray matter (GM); (2) define the distributions of different Aβ deposit types in GM and WM; and (3) investigate correlates of Aβ deposits with dementia status and the presence of mixed pathology. Our methodology highlights the robustness and efficacy of the ML pipeline, demonstrating proficiency akin to expert evaluation. We provide comprehensive insights into the quantification and distribution of Aβ deposits in the temporal GM and WM, revealing a progressive increase in tandem with the severity of established diagnostic criteria (NIA-AA). We also present correlations of Aβ load with clinical diagnosis as well as the presence or absence of mixed pathology. This study introduces a reproducible workflow, showcasing the practical use of ML approaches in the field of neuropathology and the use of the output data for correlative analyses. Acknowledging limitations, such as potential biases in the ML model and current ML classifications, we propose avenues for future research to refine and expand the methodology. We hope to contribute to the broader landscape of neuropathology advancements, ML applications, and precision medicine, paving the way for deep phenotyping of AD brain cases and establishing a foundation for further advancements in neuropathological research.
Affiliation(s)
- Rebeca Scalco
- Department of Pathology and Laboratory Medicine, University of California Davis, 4645 2nd Ave. 3400a research building III, Sacramento, CA, 95817, USA
- Institute of Animal Pathology, Vetsuisse Faculty, University of Bern, Länggassstrasse 122, 3012 Bern, Switzerland
- Luca C Oliveira
- Department of Pathology and Laboratory Medicine, University of California Davis, 4645 2nd Ave. 3400a research building III, Sacramento, CA, 95817, USA
- Department of Electrical and Computer Engineering, University of California Davis, Davis, CA, USA
- Zhengfeng Lai
- Department of Pathology and Laboratory Medicine, University of California Davis, 4645 2nd Ave. 3400a research building III, Sacramento, CA, 95817, USA
- Department of Electrical and Computer Engineering, University of California Davis, Davis, CA, USA
- Danielle J Harvey
- Department of Pathology and Laboratory Medicine, University of California Davis, 4645 2nd Ave. 3400a research building III, Sacramento, CA, 95817, USA
- Department of Public Health Sciences, University of California Davis, School of Medicine, Sacramento, CA, USA
- Lana Abujamil
- Department of Pathology and Laboratory Medicine, University of California Davis, 4645 2nd Ave. 3400a research building III, Sacramento, CA, 95817, USA
- Charles DeCarli
- Department of Neurology, University of California Davis, School of Medicine, Sacramento, CA, USA
- Lee-Way Jin
- Department of Pathology and Laboratory Medicine, University of California Davis, 4645 2nd Ave. 3400a research building III, Sacramento, CA, 95817, USA
- Chen-Nee Chuah
- Department of Electrical and Computer Engineering, University of California Davis, Davis, CA, USA
- Brittany N Dugger
- Department of Pathology and Laboratory Medicine, University of California Davis, 4645 2nd Ave. 3400a research building III, Sacramento, CA, 95817, USA.
27
Bazargani R, Fazli L, Gleave M, Goldenberg L, Bashashati A, Salcudean S. Multi-scale relational graph convolutional network for multiple instance learning in histopathology images. Med Image Anal 2024; 96:103197. [PMID: 38805765 DOI: 10.1016/j.media.2024.103197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2024] [Revised: 04/11/2024] [Accepted: 05/02/2024] [Indexed: 05/30/2024]
Abstract
Graph convolutional neural networks have shown significant potential in natural and histopathology images. However, their use has been studied only at a single magnification, or across multiple magnifications with either homogeneous graphs or only different node types. In order to leverage multi-magnification information and improve message passing with graph convolutional networks, we handle a different embedding space at each magnification by introducing the Multi-Scale Relational Graph Convolutional Network (MS-RGCN) as a multiple instance learning method. We model histopathology image patches and their relations with neighboring patches and with patches at other scales (i.e., magnifications) as a graph. We define separate message-passing neural networks based on node and edge types to pass information between the different magnification embedding spaces. We experiment on prostate cancer histopathology images to predict grade groups based on features extracted from patches. We also compare our MS-RGCN with multiple state-of-the-art methods, with evaluations on several source and held-out datasets. Our method outperforms the state-of-the-art on all of the datasets and image types, consisting of tissue microarrays, whole-mount slide regions, and whole-slide images. Through an ablation study, we test and show the value of the pertinent design features of the MS-RGCN.
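A toy version of typed message passing in the spirit of MS-RGCN, where each edge type (e.g. same-scale vs. cross-scale neighbor) uses its own transformation matrix so each magnification keeps its own embedding space. This is a sketch under assumed dense-matrix conventions, not the authors' code:

```python
import numpy as np

def rgcn_layer(h, edges_by_type, weights_by_type, self_weight):
    """One relational message-passing step with per-edge-type weights.

    h:               (N, D) node features (image patches)
    edges_by_type:   {edge_type: list of (src, dst)} directed edges
    weights_by_type: {edge_type: (D, D) matrix for that message type}
    self_weight:     (D, D) matrix applied to each node's own features
    """
    out = h @ self_weight
    for etype, edges in edges_by_type.items():
        w = weights_by_type[etype]
        # Count incoming edges of this type per node for mean aggregation.
        deg = np.zeros(h.shape[0])
        for _, dst in edges:
            deg[dst] += 1
        for src, dst in edges:
            out[dst] += (h[src] @ w) / deg[dst]
    return np.maximum(out, 0.0)  # ReLU nonlinearity
```

Because `weights_by_type` is keyed by relation, messages from a patch's same-magnification neighbors and from its coarser/finer counterparts are transformed differently before aggregation.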
Affiliation(s)
- Roozbeh Bazargani
- Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, BC V6T 1Z4, Canada.
- Ladan Fazli
- The Vancouver Prostate Centre, 2660 Oak St, Vancouver, BC V6H 3Z6, Canada; Department of Urologic Sciences, University of British Columbia, 2775 Laurel Street, Vancouver, BC V5Z 1M9, Canada
- Martin Gleave
- The Vancouver Prostate Centre, 2660 Oak St, Vancouver, BC V6H 3Z6, Canada; Department of Urologic Sciences, University of British Columbia, 2775 Laurel Street, Vancouver, BC V5Z 1M9, Canada
- Larry Goldenberg
- The Vancouver Prostate Centre, 2660 Oak St, Vancouver, BC V6H 3Z6, Canada; Department of Urologic Sciences, University of British Columbia, 2775 Laurel Street, Vancouver, BC V5Z 1M9, Canada
- Ali Bashashati
- School of Biomedical Engineering, University of British Columbia, 2222 Health Sciences Mall, Vancouver, BC V6T 1Z3, Canada; Department of Pathology & Laboratory Medicine, University of British Columbia, 2211 Wesbrook Mall, Vancouver, BC V6T 1Z7, Canada.
- Septimiu Salcudean
- Electrical and Computer Engineering, University of British Columbia, 2332 Main Mall, Vancouver, BC V6T 1Z4, Canada; School of Biomedical Engineering, University of British Columbia, 2222 Health Sciences Mall, Vancouver, BC V6T 1Z3, Canada.
28
Javed S, Mahmood A, Qaiser T, Werghi N, Rajpoot N. Unsupervised mutual transformer learning for multi-gigapixel Whole Slide Image classification. Med Image Anal 2024; 96:103203. [PMID: 38810517 DOI: 10.1016/j.media.2024.103203] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2023] [Revised: 03/30/2024] [Accepted: 05/13/2024] [Indexed: 05/31/2024]
Abstract
The classification of gigapixel Whole Slide Images (WSIs) is an important task in the emerging area of computational pathology. There has been a surge of interest in deep learning models for WSI classification with clinical applications such as cancer detection or prediction of cellular mutations. Most supervised methods require expensive and labor-intensive manual annotations by expert pathologists. Weakly supervised Multiple Instance Learning (MIL) methods have recently demonstrated excellent performance; however, they still require large-scale slide-level labeled training datasets that require a careful inspection of each slide by an expert pathologist. In this work, we propose a fully unsupervised WSI classification algorithm based on mutual transformer learning. The instances (i.e., patches) from gigapixel WSIs are transformed into a latent space and then inverse-transformed to the original space. Using the transformation loss, pseudo labels are generated and cleaned using a transformer label cleaner. The proposed transformer-based pseudo-label generator and cleaner modules mutually train each other iteratively in an unsupervised manner. A discriminative learning mechanism is introduced to improve normal versus cancerous instance labeling. In addition to the unsupervised learning, we demonstrate the effectiveness of the proposed framework for weakly supervised learning and cancer subtype classification as downstream analysis. Extensive experiments on four publicly available datasets show better performance of the proposed algorithm compared to the existing state-of-the-art methods.
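The reconstruction-based pseudo-labeling idea (transform instances to a latent space, inverse-transform back, and threshold the transformation loss) can be sketched as below; the encoder/decoder interfaces and the quantile cutoff are illustrative assumptions, not the paper's transformer modules:

```python
def pseudo_labels_from_reconstruction(instances, encode, decode, quantile=0.8):
    """Assign instance pseudo-labels from reconstruction error.

    Instances that the transform pair reconstructs poorly (error above the
    chosen quantile) are flagged with label 1 (e.g. tumor candidates);
    the rest get label 0. encode/decode are the forward and inverse
    transforms, supplied by the caller.
    """
    errors = []
    for x in instances:
        x_hat = decode(encode(x))
        errors.append(sum((a - b) ** 2 for a, b in zip(x, x_hat)))
    cutoff = sorted(errors)[int(quantile * (len(errors) - 1))]
    return [1 if e > cutoff else 0 for e in errors]
```

In the paper these raw pseudo-labels are further refined by a transformer label cleaner; the sketch stops at the initial assignment step.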
Affiliation(s)
- Sajid Javed
- Department of Computer Science, Khalifa University of Science and Technology, Abu Dhabi, P.O. Box 127788, United Arab Emirates.
- Arif Mahmood
- Department of Computer Science, Information Technology University, Lahore, Pakistan.
- Talha Qaiser
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
- Naoufel Werghi
- Department of Computer Science, Khalifa University of Science and Technology, Abu Dhabi, P.O. Box 127788, United Arab Emirates
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, Walsgrave, Coventry, CV2 2DX, UK; The Alan Turing Institute, London, NW1 2DB, UK
29
Shigeyasu Y, Harada S, Yoshizawa A, Terada K, Bise R. Diameter-based pseudo labeling for pathological image segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-4. [PMID: 40039994 DOI: 10.1109/embc53108.2024.10782204] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/06/2025]
Abstract
This paper proposes a semi- and weakly supervised pathological image segmentation method that effectively leverages the tumor long-diameter information routinely recorded in clinical practice as weak supervision. By leveraging the tumor diameter, the proposed method can accurately identify candidate tumor regions for pseudo-label selection. The accurate pseudo labels improve segmentation performance. Experimental results demonstrate the effectiveness of our method, which achieved the best performance among the comparative methods.
30
Du X, Guo J, Xing Z, Liu M, Xu Z, Ruan C, Wen Y, Wang Y, Cui L, Li H. Hard example mining in Multi-Instance Learning for Whole-Slide Image Classification. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-4. [PMID: 40039124 DOI: 10.1109/embc53108.2024.10782609] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/06/2025]
Abstract
Multiple instance learning (MIL) has shown superior performance in the classification of whole-slide images (WSIs). The implementation of MIL for WSI classification typically involves two components: a feature extractor, which extracts features from patches, and an MIL aggregator, which generates WSI features from the patch features and contributes to the final classification of WSIs. MIL aggregators often employ a specific MIL classification module. To ensure interactive optimization of the feature extractor and the MIL aggregator, existing state-of-the-art methods select patches based on attention scores to optimize the feature extractor. However, they predominantly focus on easy-to-classify instances, leading to inadequate capability in discriminating hard-to-classify instances. In this paper, we introduce a novel multiple instance learning method, HPA-MIL (Hard Pseudo-label Assignment), which directly mines hard instances through pseudo-label assignment. Our experiments demonstrate that HPA-MIL achieves an AUC of 0.9523 on the TCGA NSCLC dataset, outperforming all compared state-of-the-art methods.
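A minimal sketch of attention-guided instance pseudo-labeling that keeps the ambiguous "hard" middle band for training rather than discarding it (illustrative only; HPA-MIL's actual assignment rule is not specified in the abstract):

```python
def assign_instance_pseudo_labels(attention, bag_label, k_easy=2):
    """Split a bag's instances by attention score.

    attention: per-instance attention scores from the MIL aggregator
    bag_label: the bag (slide) label, e.g. 1 for tumor
    Top-attended instances inherit the bag label; the least-attended
    instances are assumed negative; the middle band is returned as the
    "hard" instances to be mined for feature-extractor training.
    """
    order = sorted(range(len(attention)), key=lambda i: attention[i],
                   reverse=True)
    labels = {i: bag_label for i in order[:k_easy]}   # confident positives
    for i in order[-k_easy:]:
        labels[i] = 0                                 # assumed negatives
    hard = order[k_easy:-k_easy]                      # ambiguous middle band
    return labels, hard
```

The point of contrast with attention-only selection is that the ambiguous band is surfaced explicitly instead of being ignored.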
31
Li Y, Van Alsten SC, Lee DN, Kim T, Calhoun BC, Perou CM, Wobker SE, Marron JS, Hoadley KA, Troester MA. Visual Intratumor Heterogeneity and Breast Tumor Progression. Cancers (Basel) 2024; 16:2294. [PMID: 39001357 PMCID: PMC11240824 DOI: 10.3390/cancers16132294] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2024] [Revised: 06/13/2024] [Accepted: 06/18/2024] [Indexed: 07/16/2024] Open
Abstract
High intratumoral heterogeneity is thought to be a poor prognostic indicator. However, the source of heterogeneity may also be important, as genomic heterogeneity is not always reflected in histologic or 'visual' heterogeneity. We aimed to develop a predictor of histologic heterogeneity and evaluate its association with outcomes and molecular heterogeneity. We used VGG16 to train an image classifier to identify unique, patient-specific visual features in 1655 breast tumors (5907 core images) from the Carolina Breast Cancer Study (CBCS). Extracted features for images, as well as the epithelial and stromal image components, were hierarchically clustered, and visual heterogeneity was defined as a greater distance between images from the same patient. We assessed the association between visual heterogeneity, clinical features, and DNA-based molecular heterogeneity using generalized linear models, and we used Cox models to estimate the association between visual heterogeneity and tumor recurrence. Basal-like and ER-negative tumors were more likely to have low visual heterogeneity, as were the tumors from younger and Black women. Less heterogeneous tumors had a higher risk of recurrence (hazard ratio = 1.62, 95% confidence interval = 1.22-2.16), and were more likely to come from patients whose tumors were comprised of only one subclone or had a TP53 mutation. Associations were similar regardless of whether the image was based on stroma, epithelium, or both. Histologic heterogeneity adds complementary information to commonly used molecular indicators, with low heterogeneity predicting worse outcomes. Future work integrating multiple sources of heterogeneity may provide a more comprehensive understanding of tumor progression.
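The notion of visual heterogeneity as a distance between images from the same patient can be illustrated with a simple mean pairwise feature distance (a sketch only; the study uses hierarchical clustering of VGG16-extracted features rather than this exact measure):

```python
import math

def visual_heterogeneity(core_features):
    """Mean pairwise Euclidean distance between feature vectors of the
    cores imaged for one patient; larger values indicate a tumor whose
    cores look less alike, i.e. higher visual heterogeneity."""
    n = len(core_features)
    if n < 2:
        return 0.0  # a single core has no within-patient spread
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += math.dist(core_features[i], core_features[j])
            pairs += 1
    return total / pairs
```

Any per-core embedding (e.g. features from a pretrained CNN) can be plugged in; the measure itself is model-agnostic.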
Collapse
Affiliation(s)
- Yao Li
- Department of Statistics and Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (Y.L.); (T.K.); (J.S.M.)
- Sarah C. Van Alsten
- Department of Epidemiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- Dong Neuck Lee
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- Taebin Kim
- Department of Statistics and Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (Y.L.); (T.K.); (J.S.M.)
- Benjamin C. Calhoun
- Department of Pathology and Laboratory Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (B.C.C.); (C.M.P.); (S.E.W.)
- UNC Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- Charles M. Perou
- Department of Pathology and Laboratory Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (B.C.C.); (C.M.P.); (S.E.W.)
- UNC Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- Department of Genetics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Sara E. Wobker
- Department of Pathology and Laboratory Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (B.C.C.); (C.M.P.); (S.E.W.)
- UNC Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- J. S. Marron
- Department of Statistics and Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (Y.L.); (T.K.); (J.S.M.)
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- UNC Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- Katherine A. Hoadley
- UNC Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- Department of Genetics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Melissa A. Troester
- Department of Epidemiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- Department of Pathology and Laboratory Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (B.C.C.); (C.M.P.); (S.E.W.)
- UNC Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
32
Zulqarnain F, Zhao X, Setchell KD, Sharma Y, Fernandes P, Srivastava S, Shrivastava A, Ehsan L, Jain V, Raghavan S, Moskaluk C, Haberman Y, Denson LA, Mehta K, Iqbal NT, Rahman N, Sadiq K, Ahmad Z, Idress R, Iqbal J, Ahmed S, Hotwani A, Umrani F, Amadi B, Kelly P, Brown DE, Moore SR, Ali SA, Syed S. Machine-learning-based integrative 'omics analyses reveal immunologic and metabolic dysregulation in environmental enteric dysfunction. iScience 2024; 27:110013. [PMID: 38868190 PMCID: PMC11167436 DOI: 10.1016/j.isci.2024.110013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2023] [Revised: 02/18/2024] [Accepted: 05/14/2024] [Indexed: 06/14/2024] Open
Abstract
Environmental enteric dysfunction (EED) is a subclinical enteropathy challenging to diagnose due to an overlap of tissue features with other inflammatory enteropathies. EED subjects (n = 52) from Pakistan, controls (n = 25), and a validation EED cohort (n = 30) from Zambia were used to develop a machine-learning-based image analysis classification model. We extracted histologic feature representations from the Pakistan EED model and correlated them to transcriptomics and clinical biomarkers. In-silico metabolic network modeling was used to characterize alterations in metabolic flux between EED and controls and validated using untargeted lipidomics. Genes encoding beta-ureidopropionase, CYP4F3, and epoxide hydrolase 1 correlated to numerous tissue feature representations. Fatty acid and glycerophospholipid metabolism-related reactions showed altered flux. Increased phosphatidylcholine, lysophosphatidylcholine (LPC), and ether-linked LPCs, and decreased ester-linked LPCs were observed in the duodenal lipidome of Pakistan EED subjects, while plasma levels of glycine-conjugated bile acids were significantly increased. Together, these findings elucidate a multi-omic signature of EED.
Affiliation(s)
- Xueheng Zhao
- Cincinnati Children’s Hospital Medical Center, University of Cincinnati School of Medicine, Cincinnati, OH, USA
- Kenneth D.R. Setchell
- Cincinnati Children’s Hospital Medical Center, University of Cincinnati School of Medicine, Cincinnati, OH, USA
- Yash Sharma
- University of Virginia, Charlottesville, VA, USA
- Varun Jain
- University of Virginia, Charlottesville, VA, USA
- Yael Haberman
- Cincinnati Children’s Hospital Medical Center, University of Cincinnati School of Medicine, Cincinnati, OH, USA
- Lee A. Denson
- Cincinnati Children’s Hospital Medical Center, University of Cincinnati School of Medicine, Cincinnati, OH, USA
- Khyati Mehta
- Cincinnati Children’s Hospital Medical Center, University of Cincinnati School of Medicine, Cincinnati, OH, USA
- Paul Kelly
- University Teaching Hospital, Lusaka, Zambia
- Queen Mary University of London, London, UK
- Sana Syed
- University of Virginia, Charlottesville, VA, USA
- Aga Khan University, Karachi, Pakistan
33
Kondejkar T, Al-Heejawi SMA, Breggia A, Ahmad B, Christman R, Ryan ST, Amal S. Multi-Scale Digital Pathology Patch-Level Prostate Cancer Grading Using Deep Learning: Use Case Evaluation of DiagSet Dataset. Bioengineering (Basel) 2024; 11:624. [PMID: 38927860 PMCID: PMC11200755 DOI: 10.3390/bioengineering11060624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2024] [Revised: 06/03/2024] [Accepted: 06/12/2024] [Indexed: 06/28/2024] Open
Abstract
Prostate cancer remains a prevalent health concern, emphasizing the critical need for early diagnosis and precise treatment strategies to mitigate mortality rates. The accurate prediction of cancer grade is paramount for timely interventions. This paper introduces an approach to prostate cancer grading, framing it as a classification problem. Leveraging ResNet models on multi-scale patch-level digital pathology and the DiagSet dataset, the proposed method demonstrates notable success, achieving an accuracy of 0.999 in identifying clinically significant prostate cancer. The study contributes to the evolving landscape of cancer diagnostics, offering a promising avenue for improved grading accuracy and, consequently, more effective treatment planning. By integrating innovative deep learning techniques with comprehensive datasets, our approach represents a step forward in the pursuit of personalized and targeted cancer care.
Affiliation(s)
- Tanaya Kondejkar
- College of Engineering, Northeastern University, Boston, MA 02115, USA; (T.K.); (S.M.A.A.-H.)
- Anne Breggia
- MaineHealth Institute for Research, Scarborough, ME 04074, USA;
- Bilal Ahmad
- Maine Medical Center, Portland, ME 04102, USA; (B.A.); (R.C.); (S.T.R.)
- Robert Christman
- Maine Medical Center, Portland, ME 04102, USA; (B.A.); (R.C.); (S.T.R.)
- Stephen T. Ryan
- Maine Medical Center, Portland, ME 04102, USA; (B.A.); (R.C.); (S.T.R.)
- Saeed Amal
- The Roux Institute, Department of Bioengineering, College of Engineering, Northeastern University, Boston, MA 02115, USA
34
Juan Ramon A, Parmar C, Carrasco-Zevallos OM, Csiszer C, Yip SSF, Raciti P, Stone NL, Triantos S, Quiroz MM, Crowley P, Batavia AS, Greshock J, Mansi T, Standish KA. Development and deployment of a histopathology-based deep learning algorithm for patient prescreening in a clinical trial. Nat Commun 2024; 15:4690. [PMID: 38824132 PMCID: PMC11144215 DOI: 10.1038/s41467-024-49153-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Accepted: 05/24/2024] [Indexed: 06/03/2024] Open
Abstract
Accurate identification of genetic alterations in tumors, such as Fibroblast Growth Factor Receptor, is crucial for treatment with targeted therapies; however, molecular testing can delay patient care due to the time and tissue required. Successful development, validation, and deployment of an AI-based, biomarker-detection algorithm could reduce screening cost and accelerate patient recruitment. Here, we develop a deep-learning algorithm using >3000 H&E-stained whole slide images from patients with advanced urothelial cancers, optimized for high sensitivity to avoid ruling out trial-eligible patients. The algorithm is validated on a dataset of 350 patients, achieving an area under the curve of 0.75, specificity of 31.8% at 88.7% sensitivity, and a projected 28.7% reduction in molecular testing. We successfully deploy the system in a non-interventional study comprising 89 global study clinical sites and demonstrate its potential to prioritize/deprioritize molecular testing resources and provide substantial cost savings in the drug development and clinical settings.
Affiliation(s)
- Albert Juan Ramon
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, San Diego, CA, USA.
- Chaitanya Parmar
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, San Diego, CA, USA
- Carlos Csiszer
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Titusville, NJ, USA
- Stephen S F Yip
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Cambridge, MA, USA
- Patricia Raciti
- Janssen R&D, LLC, a Johnson & Johnson Company. Oncology, Spring House, PA, USA
- Nicole L Stone
- Janssen R&D, LLC, a Johnson & Johnson Company. Oncology, Spring House, PA, USA
- Spyros Triantos
- Janssen R&D, LLC, a Johnson & Johnson Company. Oncology, Spring House, PA, USA
- Michelle M Quiroz
- Janssen R&D, LLC, a Johnson & Johnson Company. Oncology, Spring House, PA, USA
- Patrick Crowley
- Janssen R&D, LLC, a Johnson & Johnson Company. Global Development, High Wycombe, UK
- Ashita S Batavia
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Titusville, NJ, USA
- Joel Greshock
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Spring House, PA, USA
- Tommaso Mansi
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Titusville, NJ, USA
- Kristopher A Standish
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, San Diego, CA, USA
35
Yilmaz F, Brickman A, Najdawi F, Yakirevich E, Egger R, Resnick MB. Advancing Artificial Intelligence Integration Into the Pathology Workflow: Exploring Opportunities in Gastrointestinal Tract Biopsies. J Transl Med 2024; 104:102043. [PMID: 38431118 DOI: 10.1016/j.labinv.2024.102043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2023] [Revised: 02/14/2024] [Accepted: 02/26/2024] [Indexed: 03/05/2024] Open
Abstract
This review aims to present a comprehensive overview of the current landscape of artificial intelligence (AI) applications in the analysis of tubular gastrointestinal biopsies. These publications cover a spectrum of conditions, ranging from inflammatory ailments to malignancies. Moving beyond the conventional diagnosis based on hematoxylin and eosin-stained whole-slide images, the review explores additional implications of AI, including its involvement in interpreting immunohistochemical results, molecular subtyping, and the identification of cellular spatial biomarkers. Furthermore, the review examines how AI can contribute to enhancing the quality and control of diagnostic processes, introducing new workflow options, and addressing the limitations and caveats associated with current AI platforms in this context.
Affiliation(s)
- Fazilet Yilmaz
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
- Arlen Brickman
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
- Fedaa Najdawi
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
- Evgeny Yakirevich
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
- Murray B Resnick
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island.
36
Deng R, Cui C, Remedios LW, Bao S, Womick RM, Chiron S, Li J, Roland JT, Lau KS, Liu Q, Wilson KT, Wang Y, Coburn LA, Landman BA, Huo Y. Cross-scale multi-instance learning for pathological image diagnosis. Med Image Anal 2024; 94:103124. [PMID: 38428271 PMCID: PMC11016375 DOI: 10.1016/j.media.2024.103124] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Revised: 02/16/2024] [Accepted: 02/26/2024] [Indexed: 03/03/2024]
Abstract
Analyzing high resolution whole slide images (WSIs) with regard to information across multiple scales poses a significant challenge in digital pathology. Multi-instance learning (MIL) is a common solution for working with high resolution images by classifying bags of objects (i.e. sets of smaller image patches). However, such processing is typically performed at a single scale (e.g., 20× magnification) of WSIs, disregarding the vital inter-scale information that is key to diagnoses by human pathologists. In this study, we propose a novel cross-scale MIL algorithm to explicitly aggregate inter-scale relationships into a single MIL network for pathological image diagnosis. The contribution of this paper is three-fold: (1) A novel cross-scale MIL (CS-MIL) algorithm that integrates the multi-scale information and the inter-scale relationships is proposed; (2) A toy dataset with scale-specific morphological features is created and released to examine and visualize differential cross-scale attention; (3) Superior performance on both in-house and public datasets is demonstrated by our simple cross-scale MIL strategy. The official implementation is publicly available at https://github.com/hrlblab/CS-MIL.
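The bag-of-patches aggregation this abstract describes can be illustrated with a minimal attention-pooling sketch. This is not the authors' CS-MIL implementation (which is at the linked GitHub repository); it is a generic NumPy illustration in which the attention vector is random rather than learned, and the scale names and feature dimension are invented for the example:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(bags):
    """Aggregate per-scale patch embeddings into one slide-level vector.

    bags: dict mapping a scale name (e.g. "5x", "20x") to an
    (n_patches, d) array of patch features at that magnification.
    Returns a single (d,) bag representation.
    """
    rng = np.random.default_rng(0)
    d = next(iter(bags.values())).shape[1]
    # The attention vector would be learned in a real MIL network;
    # it is random here purely for illustration.
    w = rng.normal(size=d)
    pooled = []
    for scale, feats in sorted(bags.items()):
        scores = feats @ w        # per-patch attention logits
        attn = softmax(scores)    # weights sum to 1 within the bag
        pooled.append(attn @ feats)  # attention-weighted patch average
    # Cross-scale step, simplified to a mean of per-scale bag vectors.
    return np.mean(pooled, axis=0)

# Two bags of patch features from two magnifications (toy values).
bags = {"5x": np.ones((4, 8)), "20x": np.zeros((6, 8))}
vec = attention_mil_pool(bags)
print(vec.shape)  # (8,)
```

The key property shown is that the bag representation has a fixed size regardless of how many patches each scale contributes, which is what lets a single classifier head operate on whole slides of varying extent.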
Affiliation(s)
- Can Cui
- Vanderbilt University, Nashville, TN 37215, USA
- R Michael Womick
- The University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Sophie Chiron
- Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Jia Li
- Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Joseph T Roland
- Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Ken S Lau
- Vanderbilt University, Nashville, TN 37215, USA
- Qi Liu
- Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Keith T Wilson
- Vanderbilt University Medical Center, Nashville, TN 37232, USA; Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN 37212, USA
- Yaohong Wang
- Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Lori A Coburn
- Vanderbilt University Medical Center, Nashville, TN 37232, USA; Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN 37212, USA
- Bennett A Landman
- Vanderbilt University, Nashville, TN 37215, USA; Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Yuankai Huo
- Vanderbilt University, Nashville, TN 37215, USA.
37
Zhao Y, Wang W, Ji Y, Guo Y, Duan J, Liu X, Yan D, Liang D, Li W, Zhang Z, Li ZC. Computational Pathology for Prediction of Isocitrate Dehydrogenase Gene Mutation from Whole Slide Images in Adult Patients with Diffuse Glioma. THE AMERICAN JOURNAL OF PATHOLOGY 2024; 194:747-758. [PMID: 38325551 DOI: 10.1016/j.ajpath.2024.01.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/16/2023] [Revised: 01/14/2024] [Accepted: 01/19/2024] [Indexed: 02/09/2024]
Abstract
Isocitrate dehydrogenase gene (IDH) mutation is one of the most important molecular markers of glioma. Accurate detection of IDH status is a crucial step for integrated diagnosis of adult-type diffuse gliomas. Herein, a clustering-based hybrid of a convolutional neural network and a vision transformer deep learning model was developed to detect IDH mutation status from annotation-free hematoxylin and eosin-stained whole slide pathologic images of 2275 adult patients with diffuse gliomas. For comparison, a pure convolutional neural network, a pure vision transformer, and a classic multiple-instance learning model were also assessed. The hybrid model achieved an area under the receiver operating characteristic curve of 0.973 in the validation set and 0.953 in the external test set, outperforming the other models. The hybrid model also discriminated between difficult subgroups with different IDH status but shared histologic features, achieving areas under the receiver operating characteristic curve ranging from 0.850 to 0.985 in the validation and test sets. These data suggest that the proposed hybrid model has the potential to be used as a computational pathology tool for preliminary rapid detection of IDH mutation from whole slide images in adult patients with diffuse gliomas.
Affiliation(s)
- Yuanshen Zhao
- Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Weiwei Wang
- Department of Pathology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Yuchen Ji
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Yang Guo
- Department of Neurosurgery, Henan Provincial Hospital, Zhengzhou, China
- Jingxian Duan
- Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xianzhi Liu
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Dongming Yan
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Dong Liang
- Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; The Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China; National Innovation Center for Advanced Medical Devices, Shenzhen, China
- Wencai Li
- Department of Pathology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Zhenyu Zhang
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China.
- Zhi-Cheng Li
- Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; The Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China; National Innovation Center for Advanced Medical Devices, Shenzhen, China.
38
Wang Y, Zhang W, Chen L, Xie J, Zheng X, Jin Y, Zheng Q, Xue Q, Li B, He C, Chen H, Li Y. Development of an Interpretable Deep Learning Model for Pathological Tumor Response Assessment After Neoadjuvant Therapy. Biol Proced Online 2024; 26:10. [PMID: 38632527 PMCID: PMC11022344 DOI: 10.1186/s12575-024-00234-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2024] [Accepted: 03/22/2024] [Indexed: 04/19/2024] Open
Abstract
BACKGROUND Neoadjuvant therapy followed by surgery has become the standard of care for locally advanced esophageal squamous cell carcinoma (ESCC), and accurate pathological response assessment is critical for assessing therapeutic efficacy. However, it can be laborious, and inconsistency between observers may occur. Hence, we aim to develop an interpretable deep-learning model for efficient pathological response assessment following neoadjuvant therapy in ESCC. METHODS This retrospective study analyzed 337 ESCC resection specimens from 2020-2021 at the Pudong-Branch (Cohort 1) and 114 from 2021-2022 at the Puxi-Branch (External Cohort 2) of Fudan University Shanghai Cancer Center. Whole slide images (WSIs) from these two cohorts were generated using different scanning machines to test the ability of the model in handling color variations. Four pathologists independently assessed the pathological response. The senior pathologists annotated tumor beds and residual tumor percentages on WSIs to determine consensus labels. Furthermore, 1850 image patches were randomly extracted from Cohort 1 WSIs and binarily classified for tumor viability. A deep-learning model employing knowledge distillation was developed to automatically classify positive patches for each WSI and estimate the viable residual tumor percentages. Spatial heatmaps were output for model explanations and visualizations. RESULTS The approach achieved high concordance with the pathologist consensus, with an R^2 of 0.8437, a RAcc_0.1 of 0.7586, and a RAcc_0.3 of 0.9885, comparable to two senior pathologists (R^2 of 0.9202/0.9619, RAcc_0.1 of 0.8506/0.9425, RAcc_0.3 of 1.000/1.000) and surpassing two junior pathologists (R^2 of 0.5592/0.5474, RAcc_0.1 of 0.5287/0.5287, RAcc_0.3 of 0.9080/0.9310). Visualizations enabled the localization of residual viable tumor to augment microscopic assessment. CONCLUSION This work illustrates deep learning's potential for assisting pathological response assessment.
Spatial heatmaps and patch examples provide intuitive explanations of model predictions, engendering clinical trust and adoption (Code and data will be available at https://github.com/WinnieLaugh/ESCC_Percentage once the paper has been conditionally accepted). Integrating interpretable computational pathology could help enhance the efficiency and consistency of tumor response assessment and empower precise oncology treatment decisions.
Affiliation(s)
- Yichen Wang
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China, 200032
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China, 200032
- Wenhua Zhang
- Shanghai Aitrox Technology Corporation Limited, Shanghai, China
- Department of Future Technology, Shanghai University, Shanghai, China
- Lijun Chen
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China, 200032
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China, 200032
- Jun Xie
- Shanghai Aitrox Technology Corporation Limited, Shanghai, China
- Xuebin Zheng
- Shanghai Aitrox Technology Corporation Limited, Shanghai, China
- Yan Jin
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China, 200032
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China, 200032
- Qiang Zheng
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China, 200032
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China, 200032
- Qianqian Xue
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China, 200032
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China, 200032
- Bin Li
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China, 200032
- Department of Thoracic Surgery, Fudan University Shanghai Cancer Center, Shanghai, China
- Chuan He
- Shanghai Aitrox Technology Corporation Limited, Shanghai, China
- Haiquan Chen
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China, 200032
- Department of Thoracic Surgery, Fudan University Shanghai Cancer Center, Shanghai, China
- Yuan Li
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China, 200032.
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China, 200032.
39
Jin D, Liang S, Shmatko A, Arnold A, Horst D, Grünewald TGP, Gerstung M, Bai X. Teacher-student collaborated multiple instance learning for pan-cancer PDL1 expression prediction from histopathology slides. Nat Commun 2024; 15:3063. [PMID: 38594278 PMCID: PMC11004138 DOI: 10.1038/s41467-024-46764-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2023] [Accepted: 03/08/2024] [Indexed: 04/11/2024] Open
Abstract
Programmed cell death ligand 1 (PDL1), as an important biomarker, is quantified by immunohistochemistry (IHC) with few established histopathological patterns. Deep learning aids in histopathological assessment, yet heterogeneity and lacking spatially resolved annotations challenge precise analysis. Here, we present a weakly supervised learning approach using bulk RNA sequencing for PDL1 expression prediction from hematoxylin and eosin (H&E) slides. Our method extends the multiple instance learning paradigm with the teacher-student framework, which assigns dynamic pseudo-labels for intra-slide heterogeneity and retrieves unlabeled instances using temporal ensemble model distillation. The approach, evaluated on 12,299 slides across 20 solid tumor types, achieves a weighted average area under the curve of 0.83 on fresh-frozen and 0.74 on formalin-fixed specimens for 9 tumors with PDL1 as an established biomarker. Our method predicts PDL1 expression patterns, validated by IHC on 20 slides, offering insights into histologies relevant to PDL1. This demonstrates the potential of deep learning in identifying diverse histological patterns for molecular changes from H&E images.
Affiliation(s)
- Darui Jin
- Image Processing Center, Beihang University, Beijing, 102206, China
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Shen Yuan Honors College, Beihang University, Beijing, 100191, China
- Shangying Liang
- Image Processing Center, Beihang University, Beijing, 102206, China
- Artem Shmatko
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Alexander Arnold
- Charité - Universitätsmedizin Berlin, Institute of Pathology, 10117, Berlin, Germany
- David Horst
- Charité - Universitätsmedizin Berlin, Institute of Pathology, 10117, Berlin, Germany
- German Cancer Consortium (DKTK), partner site Berlin, a partnership between DKFZ and Charité-Universitätsmedizin Berlin, Berlin, Germany
- Thomas G P Grünewald
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany.
- Division of Translational Pediatric Sarcoma Research, German Cancer Research Center (DKFZ), German Cancer Consortium (DKTK), Heidelberg, Germany.
- Hopp Children's Cancer Center (KiTZ) Heidelberg, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and Heidelberg University Hospital, Heidelberg, Germany.
- Moritz Gerstung
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany.
- Xiangzhi Bai
- Image Processing Center, Beihang University, Beijing, 102206, China.
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China.
- Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100083, China.
40
Bao LX, Luo ZM, Zhu XL, Xu YY. Automated identification of protein expression intensity and classification of protein cellular locations in mouse brain regions from immunofluorescence images. Med Biol Eng Comput 2024; 62:1105-1119. [PMID: 38150111 DOI: 10.1007/s11517-023-02985-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Accepted: 11/28/2023] [Indexed: 12/28/2023]
Abstract
Knowledge of protein expression in mammalian brains at regional and cellular levels can facilitate understanding of protein functions and associated diseases. As the mouse brain is typical of mammalian brains in cell types and structure, several studies have been conducted to analyze protein expression in mouse brains. However, labeling protein expression experimentally is costly and time-consuming. Therefore, automated models that can accurately recognize protein expression are needed. Here, we constructed machine learning models to automatically annotate the protein expression intensity and cellular location in different mouse brain regions from immunofluorescence images. The brain regions and sub-regions were segmented through learning image features using an autoencoder and then performing K-means clustering and registration to align with the anatomical references. The protein expression intensities for those segmented structures were computed on the basis of the statistics of the image pixels, and patch-based weakly supervised methods and multi-instance learning were used to classify the cellular locations. Results demonstrated that the models achieved high accuracy in the expression intensity estimation, and the F1 score of the cellular location prediction was 74.5%. This work established an automated pipeline for analyzing mouse brain images and provided a foundation for further study of protein expression and functions.
Affiliation(s)
- Lin-Xia Bao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510623, China
- Zhuo-Ming Luo
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510623, China
- Xi-Liang Zhu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510623, China
- Ying-Ying Xu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China.
- Guangdong Provincial Key Laboratory of Medical Imaging Processing, Southern Medical University, Guangzhou, 510515, China.
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510623, China.
41
Vanea C, Džigurski J, Rukins V, Dodi O, Siigur S, Salumäe L, Meir K, Parks WT, Hochner-Celnikier D, Fraser A, Hochner H, Laisk T, Ernst LM, Lindgren CM, Nellåker C. Mapping cell-to-tissue graphs across human placenta histology whole slide images using deep learning with HAPPY. Nat Commun 2024; 15:2710. [PMID: 38548713 PMCID: PMC10978962 DOI: 10.1038/s41467-024-46986-2] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 03/15/2024] [Indexed: 04/01/2024] Open
Abstract
Accurate placenta pathology assessment is essential for managing maternal and newborn health, but the placenta's heterogeneity and temporal variability pose challenges for histology analysis. To address this issue, we developed the 'Histology Analysis Pipeline.PY' (HAPPY), a deep learning hierarchical method for quantifying the variability of cells and micro-anatomical tissue structures across placenta histology whole slide images. HAPPY differs from patch-based features or segmentation approaches by following an interpretable biological hierarchy, representing cells and cellular communities within tissues at a single-cell resolution across whole slide images. We present a set of quantitative metrics from healthy term placentas as a baseline for future assessments of placenta health and we show how these metrics deviate in placentas with clinically significant placental infarction. HAPPY's cell and tissue predictions closely replicate those from independent clinical experts and placental biology literature.
Collapse
Affiliation(s)
- Claudia Vanea
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK.
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK.
| | | | | | - Omri Dodi
- Faculty of Medicine, Hadassah Hebrew University Medical Center, Jerusalem, Israel
| | - Siim Siigur
- Department of Pathology, Tartu University Hospital, Tartu, Estonia
| | - Liis Salumäe
- Department of Pathology, Tartu University Hospital, Tartu, Estonia
| | - Karen Meir
- Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
| | - W Tony Parks
- Department of Laboratory Medicine & Pathobiology, University of Toronto, Toronto, Canada
| | | | - Abigail Fraser
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- MRC Integrative Epidemiology Unit at the University of Bristol, Bristol, UK
| | - Hagit Hochner
- Braun School of Public Health, Hebrew University of Jerusalem, Jerusalem, Israel
| | - Triin Laisk
- Institute of Genomics, University of Tartu, Tartu, Estonia
| | - Linda M Ernst
- Department of Pathology and Laboratory Medicine, NorthShore University HealthSystem, Chicago, USA
- Department of Pathology, University of Chicago Pritzker School of Medicine, Chicago, USA
| | - Cecilia M Lindgren
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
- Centre for Human Genetics, Nuffield Department, University of Oxford, Oxford, UK
- Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Nuffield Department of Population Health, University of Oxford, Oxford, UK
| | - Christoffer Nellåker
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK.
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK.
| |
Collapse
|
42
|
Aalam SW, Ahanger AB, Masoodi TA, Bhat AA, Akil ASAS, Khan MA, Assad A, Macha MA, Bhat MR. Deep learning-based identification of esophageal cancer subtypes through analysis of high-resolution histopathology images. Front Mol Biosci 2024; 11:1346242. [PMID: 38567100 PMCID: PMC10985197 DOI: 10.3389/fmolb.2024.1346242] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Accepted: 02/23/2024] [Indexed: 04/04/2024] Open
Abstract
Esophageal cancer (EC) remains a significant health challenge globally, with increasing incidence and high mortality rates. Despite advances in treatment, there remains a need for improved diagnostic methods and understanding of disease progression. This study addresses the significant challenges in the automatic classification of EC, particularly in distinguishing its primary subtypes, adenocarcinoma and squamous cell carcinoma, using histopathology images. Traditional histopathological diagnosis, while the gold standard, is subject to subjectivity and human error and imposes a substantial burden on pathologists. In response to these challenges, this study proposes a binary classification system for detecting EC subtypes. The system leverages deep learning techniques and tissue-level labels for enhanced accuracy. We utilized 59 high-resolution histopathological images from The Cancer Genome Atlas (TCGA) Esophageal Carcinoma dataset (TCGA-ESCA). These images were preprocessed, segmented into patches, and analyzed using a pre-trained ResNet101 model for feature extraction. For classification, we employed five machine learning classifiers, namely a Support Vector Classifier (SVC), Logistic Regression (LR), a Decision Tree (DT), AdaBoost (AD), and a Random Forest (RF), together with a Feed-Forward Neural Network (FFNN). The classifiers were evaluated based on their prediction accuracy on the test dataset, yielding results of 0.88 (SVC and LR), 0.64 (DT and AD), 0.82 (RF), and 0.94 (FFNN). Notably, the FFNN classifier achieved the highest Area Under the Curve (AUC) score of 0.92, indicating its superior performance, followed closely by SVC and LR, with a score of 0.87. The suggested approach holds promise as a decision-support tool for pathologists, particularly in regions with limited resources and expertise.
The timely and precise detection of EC subtypes through this system can substantially enhance the likelihood of successful treatment, ultimately leading to reduced mortality rates in patients with this aggressive cancer.
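The features-then-classifiers pipeline this entry describes (CNN patch embeddings fed to conventional classifiers) can be sketched roughly as follows; the synthetic feature matrix stands in for ResNet101 patch embeddings, and all names and labels are illustrative, not the authors' code or data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Stand-in for ResNet101 patch embeddings: 200 patches, 64-D features,
# two classes (e.g., adenocarcinoma vs. squamous cell carcinoma).
X = rng.normal(size=(200, 64))
y = (X[:, :4].sum(axis=1) > 0).astype(int)  # weakly separable toy labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Compare several conventional classifiers on the held-out patches.
scores = {}
for name, clf in {
    "SVC": SVC(),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),
}.items():
    clf.fit(X_tr, y_tr)
    scores[name] = clf.score(X_te, y_te)
print(scores)
```

In practice the embeddings would come from forward passes of a frozen ResNet101 over tissue patches, and model selection would use cross-validation rather than a single split.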
Collapse
Affiliation(s)
- Syed Wajid Aalam
- Department of Computer Science, Islamic University of Science and Technology, Awantipora, India
| | - Abdul Basit Ahanger
- Department of Computer Science, Islamic University of Science and Technology, Awantipora, India
| | - Tariq A. Masoodi
- Human Immunology Department, Research Branch, Sidra Medicine, Doha, Qatar
| | - Ajaz A. Bhat
- Department of Human Genetics-Precision Medicine in Diabetes, Obesity and Cancer Program, Sidra Medicine, Doha, Qatar
| | - Ammira S. Al-Shabeeb Akil
- Department of Human Genetics-Precision Medicine in Diabetes, Obesity and Cancer Program, Sidra Medicine, Doha, Qatar
| | | | - Assif Assad
- Department of Computer Science and Engineering, Islamic University of Science and Technology, Awantipora, India
| | - Muzafar A. Macha
- Watson-Crick Centre for Molecular Medicine, Islamic University of Science and Technology, Awantipora, India
| | - Muzafar Rasool Bhat
- Department of Computer Science, Islamic University of Science and Technology, Awantipora, India
| |
Collapse
|
43
|
Wang Y, Rahman A, Duggar WN, Thomas TV, Roberts PR, Vijayakumar S, Jiao Z, Bian L, Wang H. A gradient mapping guided explainable deep neural network for extracapsular extension identification in 3D head and neck cancer computed tomography images. Med Phys 2024; 51:2007-2019. [PMID: 37643447 DOI: 10.1002/mp.16680] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2022] [Revised: 07/13/2023] [Accepted: 08/03/2023] [Indexed: 08/31/2023] Open
Abstract
BACKGROUND Diagnosis and treatment management for head and neck squamous cell carcinoma (HNSCC) is guided by routine diagnostic head and neck computed tomography (CT) scans to identify tumor and lymph node features. Extracapsular extension (ECE) is a strong predictor of survival outcomes in patients with HNSCC. It is essential to detect the occurrence of ECE, as it changes staging and treatment planning for patients. Current clinical ECE detection relies on visual identification and pathologic confirmation conducted by clinicians. However, manual annotation of the lymph node region is a required data preprocessing step in most current machine learning-based ECE diagnosis studies. PURPOSE In this paper, we propose a Gradient Mapping Guided Explainable Network (GMGENet) framework to perform ECE identification automatically without requiring annotated lymph node region information. METHODS The gradient-weighted class activation mapping (Grad-CAM) technique is applied to guide the deep learning algorithm to focus on the regions that are highly related to ECE. The proposed framework includes an extractor and a classifier. In a joint training process, informative volumes of interest (VOIs) are extracted by the extractor without labeled lymph node region information, and the classifier learns the pattern to classify the extracted VOIs into ECE positive and negative. RESULTS The proposed methods were trained and evaluated using cross-validation. GMGENet achieved test accuracy and area under the curve (AUC) of 92.2% and 89.3%, respectively. GMGENetV2 achieved 90.3% accuracy and 91.7% AUC in the test. The results were compared with different existing models and further confirmed and explained by generating ECE probability heatmaps via the Grad-CAM technique. The presence or absence of ECE has been analyzed and correlated with ground truth histopathological findings.
CONCLUSIONS The proposed deep network can learn meaningful patterns to identify ECE without providing lymph node contours. The introduced ECE heatmaps will contribute to the clinical implementation of the proposed model and reveal unknown features to radiologists. The outcome of this study is expected to promote the implementation of explainable artificial intelligence-assisted ECE detection.
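The Grad-CAM weighting at the core of the guidance step above can be illustrated with a minimal NumPy sketch. The random arrays stand in for convolutional activations and class-score gradients; a real implementation would obtain both from a trained network's backward pass.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM: weight each feature map by the spatial mean of its
    gradient, sum the weighted maps, and pass through ReLU.
    Both inputs have shape (C, H, W)."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k, one per channel
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
acts = rng.random((8, 16, 16))        # stand-in conv activations
grads = rng.normal(size=(8, 16, 16))  # stand-in gradients of the class score
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (16, 16)
```

The resulting heatmap highlights spatial regions whose activations push the class score up, which is the signal GMGENet uses to localize VOIs without lymph node contours.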
Collapse
Affiliation(s)
- Yibin Wang
- Department of Industrial and Systems Engineering, Mississippi State University, Mississippi State, Mississippi, USA
| | - Abdur Rahman
- Department of Industrial and Systems Engineering, Mississippi State University, Mississippi State, Mississippi, USA
| | - William Neil Duggar
- Department of Radiation Oncology, University of Mississippi Medical Center, Jackson, Mississippi, USA
| | - Toms V Thomas
- Department of Radiation Oncology, University of Mississippi Medical Center, Jackson, Mississippi, USA
| | - Paul Russell Roberts
- Department of Radiation Oncology, University of Mississippi Medical Center, Jackson, Mississippi, USA
| | - Srinivasan Vijayakumar
- Department of Radiation Oncology, University of Mississippi Medical Center, Jackson, Mississippi, USA
| | - Zhicheng Jiao
- Warren Alpert Medical School, Brown University, Providence, Rhode Island, USA
| | - Linkan Bian
- Department of Industrial and Systems Engineering, Mississippi State University, Mississippi State, Mississippi, USA
| | - Haifeng Wang
- Department of Industrial and Systems Engineering, Mississippi State University, Mississippi State, Mississippi, USA
- Department of Radiation Oncology, University of Mississippi Medical Center, Jackson, Mississippi, USA
| |
Collapse
|
44
|
Dimitriou N, Arandjelović O, Harrison DJ. Magnifying Networks for Histopathological Images with Billions of Pixels. Diagnostics (Basel) 2024; 14:524. [PMID: 38472996 DOI: 10.3390/diagnostics14050524] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2024] [Revised: 02/25/2024] [Accepted: 02/26/2024] [Indexed: 03/14/2024] Open
Abstract
Among the other benefits conferred by the shift from traditional to digital pathology is the potential to use machine learning for diagnosis, prognosis, and personalization. A major challenge in the realization of this potential emerges from the extremely large size of digitized images, which are often in excess of 100,000 × 100,000 pixels. In this paper, we tackle this challenge head-on by diverging from the existing approaches in the literature, which rely on splitting the original images into small patches, and introducing magnifying networks (MagNets). By using an attention mechanism, MagNets identify the regions of the gigapixel image that benefit from an analysis on a finer scale. This process is repeated, resulting in an attention-driven coarse-to-fine analysis of only a small portion of the information contained in the original whole-slide images. Importantly, this is achieved using minimal ground truth annotation, namely, using only global, slide-level labels. The results from our tests on the publicly available Camelyon16 and Camelyon17 datasets demonstrate the effectiveness of MagNets, as well as the proposed optimization framework, in the task of whole-slide image classification. Importantly, MagNets process at least five times fewer patches from each whole-slide image than any of the existing end-to-end approaches.
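The attention-driven coarse-to-fine selection can be caricatured as picking the top-k attended regions of a coarse view and re-analyzing only those at finer magnification. The attention map below is random, standing in for a learned attention module; the selection logic is the illustrative part.

```python
import numpy as np

def select_regions(attn_map: np.ndarray, k: int):
    """Return grid coordinates of the k highest-attention cells of a
    coarse view; a MagNet-style model would then magnify and re-analyze
    only those regions, skipping the rest of the gigapixel slide."""
    flat = attn_map.ravel()
    idx = np.argpartition(flat, -k)[-k:]
    return [np.unravel_index(i, attn_map.shape) for i in idx]

rng = np.random.default_rng(0)
coarse_attn = rng.random((8, 8))  # toy attention over an 8x8 grid of regions
regions = select_regions(coarse_attn, k=3)
print(len(regions))  # 3
```

Repeating this selection at each magnification level yields the coarse-to-fine traversal described in the abstract, touching only a small fraction of the slide.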
Collapse
Affiliation(s)
- Neofytos Dimitriou
- Maritime Digitalisation Centre, Cyprus Marine and Maritime Institute, Larnaca 6300, Cyprus
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
| | - Ognjen Arandjelović
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
| | - David J Harrison
- School of Medicine, University of St Andrews, St Andrews KY16 9TF, UK
- NHS Lothian Pathology, Division of Laboratory Medicine, Royal Infirmary of Edinburgh, Edinburgh EH16 4SA, UK
| |
Collapse
|
45
|
Gadermayr M, Tschuchnig M. Multiple instance learning for digital pathology: A review of the state-of-the-art, limitations & future potential. Comput Med Imaging Graph 2024; 112:102337. [PMID: 38228020 DOI: 10.1016/j.compmedimag.2024.102337] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Revised: 12/04/2023] [Accepted: 01/09/2024] [Indexed: 01/18/2024]
Abstract
Digital whole slide images contain an enormous amount of information, providing strong motivation for the development of automated image analysis tools. Deep neural networks in particular show high potential for various tasks in the field of digital pathology. However, a limitation is that typical deep learning algorithms require (manual) annotations in addition to large amounts of image data to enable effective training. Multiple instance learning provides a powerful tool for training deep neural networks in scenarios without fully annotated data. These methods are particularly effective in digital pathology because labels for whole slide images are often captured routinely, whereas labels for patches, regions, or pixels are not. This potential has resulted in a considerable number of publications, with the vast majority published in the last four years. Besides the availability of digitized data and strong motivation from the medical perspective, the availability of powerful graphics processing units has accelerated progress in this field. In this paper, we provide an overview of widely and effectively used concepts of (deep) multiple instance learning approaches and recent advancements. We also critically discuss remaining challenges as well as future potential.
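A common deep MIL building block covered by such reviews is attention-based instance pooling (in the style of Ilse et al., 2018), where a slide is a "bag" of patch embeddings and a learned attention distribution weights each instance's contribution to the bag representation. A minimal NumPy sketch, with random parameters standing in for learned weights:

```python
import numpy as np

def attention_mil_pool(instances: np.ndarray, V: np.ndarray, w: np.ndarray):
    """Attention-based MIL pooling:
    a_i = softmax_i(w^T tanh(V h_i));  bag = sum_i a_i * h_i.
    instances: (N, D) patch embeddings; V: (L, D); w: (L,)."""
    scores = w @ np.tanh(V @ instances.T)  # (N,) unnormalized attention
    scores = scores - scores.max()         # numerically stable softmax
    attn = np.exp(scores) / np.exp(scores).sum()
    bag = attn @ instances                 # (D,) slide-level embedding
    return bag, attn

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 32))        # 50 patch embeddings of one slide (a "bag")
V = rng.normal(size=(16, 32)) * 0.1  # toy stand-in for learned projection
w = rng.normal(size=16)              # toy stand-in for learned attention vector
bag_repr, attn = attention_mil_pool(H, V, w)
print(bag_repr.shape, round(float(attn.sum()), 6))
```

Only the slide-level label supervises training; the attention weights fall out as a by-product and are often inspected as a rough patch-importance map.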
Collapse
Affiliation(s)
- Michael Gadermayr
- Department of Information Technologies and Digitalisation, Salzburg University of Applied Sciences, Austria.
| | - Maximilian Tschuchnig
- Department of Information Technologies and Digitalisation, Salzburg University of Applied Sciences, Austria; Department of Artificial Intelligence and Human Interfaces, University of Salzburg, Austria
| |
Collapse
|
46
|
Elazab N, Gab-Allah WA, Elmogy M. A multi-class brain tumor grading system based on histopathological images using a hybrid YOLO and RESNET networks. Sci Rep 2024; 14:4584. [PMID: 38403597 PMCID: PMC10894864 DOI: 10.1038/s41598-024-54864-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2023] [Accepted: 02/17/2024] [Indexed: 02/27/2024] Open
Abstract
Gliomas are primary brain tumors arising from glial cells. Classification and grading of these cancers are crucial for prognosis and treatment planning. Deep learning (DL) can potentially improve the digital pathology investigation of brain tumors. In this paper, we developed a technique for visualizing a predictive tumor grading model on histopathology images to help guide doctors by highlighting characteristics and heterogeneity in predictions. The proposed technique is a hybrid model based on YOLOv5 and ResNet50. The function of YOLOv5 is to localize and classify the tumor in large histopathological whole slide images (WSIs). The suggested technique incorporates ResNet into the feature extraction of the YOLOv5 framework, and the detection results show that our hybrid network is effective for identifying brain tumors from histopathological images. Next, we estimate the glioma grades using the extreme gradient boosting classifier. The high-dimensional characteristics and nonlinear interactions present in histopathology images are well handled by this classifier. DL techniques have been used in previous computer-aided diagnosis systems for brain tumor diagnosis. However, by combining the YOLOv5 and ResNet50 architectures into a hybrid model specifically designed for accurate tumor localization and predictive grading within histopathological WSIs, our study presents a new approach that advances the field. By utilizing the advantages of both models, this integration goes beyond traditional techniques to produce improved tumor localization accuracy and thorough feature extraction. Additionally, our method ensures stable training dynamics and strong model performance by integrating ResNet50 into the YOLOv5 framework, addressing concerns about gradient explosion. The proposed technique is tested using The Cancer Genome Atlas dataset. During the experiments, our model outperforms other standard methods on the same dataset.
Our results indicate that the proposed hybrid model substantially impacts tumor subtype discrimination between low-grade glioma (LGG) II and LGG III. With 97.2% accuracy, 97.8% precision, 98.6% sensitivity, and a Dice similarity coefficient of 97%, the proposed model performs well in classifying four grades. These results outperform current approaches for identifying LGG from high-grade glioma and provide competitive performance in classifying four categories of glioma in the literature.
Collapse
Affiliation(s)
- Naira Elazab
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
| | - Wael A Gab-Allah
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
| | - Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt.
| |
Collapse
|
47
|
Wang R, Qiu Y, Wang T, Wang M, Jin S, Cong F, Zhang Y, Xu H. MIHIC: a multiplex IHC histopathological image classification dataset for lung cancer immune microenvironment quantification. Front Immunol 2024; 15:1334348. [PMID: 38370413 PMCID: PMC10869447 DOI: 10.3389/fimmu.2024.1334348] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2023] [Accepted: 01/09/2024] [Indexed: 02/20/2024] Open
Abstract
Background Immunohistochemistry (IHC) is a widely used laboratory technique for cancer diagnosis, which selectively binds specific antibodies to target proteins in tissue samples and then makes the bound proteins visible through chemical staining. Deep learning approaches have the potential to be employed in quantifying the tumor immune micro-environment (TIME) in digitized IHC histological slides. However, publicly available IHC datasets explicitly collected for in-depth TIME analysis are lacking. Method In this paper, a Multiplex IHC Histopathological Image Classification (MIHIC) dataset is created based on manual annotations by pathologists, which is publicly available for exploring deep learning models to quantify variables associated with the TIME in lung cancer. The MIHIC dataset comprises a total of 309,698 multiplex IHC stained histological image patches, encompassing seven distinct tissue types: Alveoli, Immune cells, Necrosis, Stroma, Tumor, Other, and Background. Using the MIHIC dataset, we conduct a series of experiments that utilize both convolutional neural networks (CNNs) and transformer models to benchmark IHC stained histological image classification. We finally quantify lung cancer immune microenvironment variables by using the top-performing model on tissue microarray (TMA) cores, which are subsequently used to predict patients' survival outcomes. Result Experiments show that transformer models tend to provide slightly better performance than CNN models in histological image classification, with both types of models achieving a highest accuracy of 0.811 on the MIHIC testing dataset. The automatically quantified TIME variables, which reflect proportions of immune cells over stroma and tumor over tissue core, show prognostic value for overall survival of lung cancer patients.
Conclusion To the best of our knowledge, MIHIC is the first publicly available lung cancer IHC histopathological dataset that includes images with 12 different IHC stains, meticulously annotated by multiple pathologists across 7 distinct categories. This dataset holds significant potential for researchers to explore novel techniques for quantifying the TIME and advancing our understanding of the interactions between the immune system and tumors.
Collapse
Affiliation(s)
- Ranran Wang
- Affiliated Cancer Hospital, Dalian University of Technology, Dalian, China
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, China
| | - Yusong Qiu
- Department of Pathology, Liaoning Cancer Hospital and Institute, Shenyang, China
| | - Tong Wang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, China
| | - Mingkang Wang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, China
| | - Shan Jin
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, China
| | - Fengyu Cong
- Affiliated Cancer Hospital, Dalian University of Technology, Dalian, China
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, China
- Key Laboratory of Integrated Circuit and Biomedical Electronic System, Dalian University of Technology, Dalian, Liaoning, China
- Faculty of Information Technology, University of Jyvaskyla, Jyvaskyla, Finland
| | - Yong Zhang
- Department of Pathology, Liaoning Cancer Hospital and Institute, Shenyang, China
| | - Hongming Xu
- Affiliated Cancer Hospital, Dalian University of Technology, Dalian, China
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, China
- Key Laboratory of Integrated Circuit and Biomedical Electronic System, Dalian University of Technology, Dalian, Liaoning, China
| |
Collapse
|
48
|
Kwak S, Akbari H, Garcia JA, Mohan S, Davatzikos C. Fully automatic mpMRI analysis using deep learning predicts peritumoral glioblastoma infiltration and subsequent recurrence. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2024; 12926:129261N. [PMID: 38742150 PMCID: PMC11089715 DOI: 10.1117/12.3001752] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2024]
Abstract
Glioblastoma (GBM) is the most aggressive and common adult brain tumor. The standard treatments typically include maximal surgical resection, followed by adjuvant radiotherapy and chemotherapy. However, the efficacy of these treatments is often limited, as the tumor frequently infiltrates the surrounding brain tissue, extending beyond the radiologically defined margins. This infiltration contributes to the high recurrence rate and poor prognosis associated with GBM, necessitating advanced methods for early and accurate detection of tumor infiltration. Despite the great promise traditional supervised machine learning shows in predicting tumor infiltration beyond resectable margins, these methods are heavily reliant on expert-drawn Regions of Interest (ROIs), which are used to construct multi-variate models of the different Magnetic Resonance (MR) signal characteristics associated with tumor infiltration. This process is both time-consuming and resource-intensive. Addressing this limitation, our study proposes a novel integration of fully automatic methods for generating ROIs with deep learning algorithms to create predictive maps of tumor infiltration. This approach uses pre-operative multi-parametric MRI (mpMRI) scans, encompassing T1, T1Gd, T2, T2-FLAIR, and ADC sequences, to fully leverage the knowledge from previously drawn ROIs. Subsequently, a patch-based Convolutional Neural Network (CNN) model is trained on these automatically generated ROIs to predict areas of potential tumor infiltration. The performance of this model was evaluated using a leave-one-out cross-validation approach, and the generated predictive maps were binarized for comparison against post-recurrence mpMRI scans. The model demonstrates robust predictive capability, evidenced by an average cross-validated accuracy of 0.87, specificity of 0.88, and sensitivity of 0.90.
Notably, the odds ratio of 8.62 indicates that regions identified as high-risk on the predictive map were significantly more likely to exhibit tumor recurrence than low-risk regions. The proposed method demonstrates that fully automatic mpMRI analysis using deep learning can successfully predict tumor infiltration in the peritumoral region for GBM patients while bypassing the intensive requirement for expert-drawn ROIs.
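The leave-one-out evaluation described above can be sketched as follows; the toy feature vectors stand in for mpMRI-derived inputs and the binary label for infiltration status, and the classifier is a simple stand-in, not the study's patch-based CNN.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))   # one toy feature vector per patient
y = (X[:, 0] > 0).astype(int)  # toy infiltration label

# Each patient is held out once; the model trains on all the others.
correct = 0
loo = LeaveOneOut()
for train_idx, test_idx in loo.split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
accuracy = correct / len(X)
print(accuracy)
```

Leave-one-out is a natural choice for small patient cohorts like this one, since it uses every subject for both training and testing while never mixing the two for the same subject.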
Collapse
Affiliation(s)
- Sunwoo Kwak
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA 19104
| | - Hamed Akbari
- Department of Bioengineering, Santa Clara University, Santa Clara, CA, USA 95053
| | - Jose A Garcia
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA 19104
| | - Suyash Mohan
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA 19104
| | - Christos Davatzikos
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA 19104
| |
Collapse
|
49
|
Acharya V, Choi D, Yener B, Beamer G. Prediction of Tuberculosis From Lung Tissue Images of Diversity Outbred Mice Using Jump Knowledge Based Cell Graph Neural Network. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2024; 12:17164-17194. [PMID: 38515959 PMCID: PMC10956573 DOI: 10.1109/access.2024.3359989] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 03/23/2024]
Abstract
Tuberculosis (TB), primarily affecting the lungs, is caused by the bacterium Mycobacterium tuberculosis and poses a significant health risk. Detecting acid-fast bacilli (AFB) in stained samples is critical for TB diagnosis. Whole Slide (WS) Imaging allows for digitally examining these stained samples. However, current deep-learning approaches to analyzing large-sized whole slide images (WSIs) often employ patch-wise analysis, potentially missing the complex spatial patterns observed in the granuloma that are essential for accurate TB classification. To address this limitation, we propose an approach that models cell characteristics and interactions as a graph, capturing both cell-level information and the overall tissue micro-architecture. This method differs from related cell graph-based works, which rely on sparsity/density-based edge thresholds in cell graph construction, by emphasizing a biologically informed threshold determination instead. We introduce a cell graph-based jumping knowledge neural network (CG-JKNN) that operates on cell graphs whose edge thresholds are selected based on the length of the mycobacteria's cords and the size of the activated macrophage nucleus, reflecting the actual biological interactions observed in the tissue. The primary process involves training a Convolutional Neural Network (CNN) to segment AFBs and macrophage nuclei, followed by converting large (42,831 × 41,159 pixels) lung histology images into cell graphs in which each node represents an activated macrophage nucleus or AFB and their interactions are denoted as edges. To enhance the interpretability of our model, we employ Integrated Gradients and Shapley Additive Explanations (SHAP). Our analysis incorporated a combination of 33 graph metrics and 20 cell morphology features.
In terms of traditional machine learning models, Extreme Gradient Boosting (XGBoost) was the best performer, achieving an F1 score of 0.9813 and an Area under the Precision-Recall Curve (AUPRC) of 0.9848 on the test set. Among graph-based models, our CG-JKNN was the top performer, attaining an F1 score of 0.9549 and an AUPRC of 0.9846 on the held-out test set. The integration of graph-based and morphological features proved highly effective, with CG-JKNN and XGBoost showing promising results in classifying instances into AFB and activated macrophage nucleus. The features identified as significant by our models closely align with the criteria used by pathologists in practice, highlighting the clinical applicability of our approach. Future work will explore knowledge distillation techniques and graph-level classification into distinct TB progression categories.
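Constructing a cell graph by connecting detections that lie within a fixed distance threshold can be sketched in plain NumPy; the coordinates and threshold below are toy stand-ins, not the study's biologically calibrated values (which derive from cord length and nucleus size).

```python
import numpy as np

def build_cell_graph(centroids: np.ndarray, max_edge_um: float):
    """Connect two detected objects (e.g., AFB or macrophage nucleus
    centroids) with an edge when their separation is at most
    max_edge_um. Returns a sorted list of (i, j) index pairs, i < j."""
    # Pairwise Euclidean distances between all centroids.
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    # Upper triangle (k=1) avoids self-loops and duplicate pairs.
    i, j = np.where(np.triu(d <= max_edge_um, k=1))
    return list(zip(i.tolist(), j.tolist()))

rng = np.random.default_rng(0)
cells = rng.uniform(0, 100, size=(30, 2))  # toy centroid coordinates (micrometers)
edges = build_cell_graph(cells, max_edge_um=20.0)
print(len(edges))
```

A GNN such as the CG-JKNN would then operate on this node/edge structure, with per-node morphology features attached; for gigapixel slides a spatial index (e.g., a k-d tree) would replace the dense distance matrix.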
Collapse
Affiliation(s)
| | - Diana Choi
- Cummings School of Veterinary Medicine, Tufts University, North Grafton, MA 02155, USA
| | - Bülent Yener
- Rensselaer Polytechnic Institute, Troy, NY 12180, USA
| | - Gillian Beamer
- Research Pathology, Aiforia Technologies, Cambridge, MA 02142, USA
- Texas Biomedical Research Institute, San Antonio, TX 78227, USA
| |
Collapse
|
50
|
Kang J, Le VNT, Lee DW, Kim S. Diagnosing oral and maxillofacial diseases using deep learning. Sci Rep 2024; 14:2497. [PMID: 38291068 PMCID: PMC10827796 DOI: 10.1038/s41598-024-52929-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2023] [Accepted: 01/25/2024] [Indexed: 02/01/2024] Open
Abstract
The classification and localization of odontogenic lesions from panoramic radiographs are challenging tasks due to the positional biases and class imbalances of the lesions. To address these challenges, a novel neural network, DOLNet, is proposed that uses mutually influencing hierarchical attention across different image scales to jointly learn the global representation of the entire jaw and the local discrepancy between normal tissue and lesions. The proposed approach uses local attention to learn representations within a patch. From the patch-level representations, we generate inter-patch, i.e., global, attention maps to represent the positional prior of lesions in the whole image. Global attention enables the reciprocal calibration of patch-level representations by considering non-local information from other patches, thereby improving the generation of whole-image-level representations. To address class imbalances, we propose an effective data augmentation technique that involves merging lesion crops with normal images, thereby synthesizing new abnormal cases for effective model training. Our approach outperforms recent studies, enhancing classification performance by up to 42.4% and 44.2% in recall and F1 scores, respectively, and ensuring robust lesion localization with respect to lesion size variations and positional biases. Our approach further outperforms human expert clinicians in classification by 10.7% and 10.8% in recall and F1 score, respectively.
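The lesion-merging augmentation can be sketched as a simple paste operation; the images, sizes, and positions below are toy stand-ins, and a real implementation would blend the crop boundary and constrain the paste location anatomically.

```python
import numpy as np

def paste_lesion(normal_img: np.ndarray, lesion_crop: np.ndarray,
                 top: int, left: int) -> np.ndarray:
    """Synthesize an abnormal training case by pasting a lesion crop
    into a copy of a normal radiograph at the given position."""
    out = normal_img.copy()
    h, w = lesion_crop.shape[:2]
    out[top:top + h, left:left + w] = lesion_crop
    return out

normal = np.zeros((128, 256), dtype=np.uint8)    # toy "normal" panoramic image
lesion = np.full((32, 32), 200, dtype=np.uint8)  # toy lesion crop
aug = paste_lesion(normal, lesion, top=48, left=100)
print(aug[48:80, 100:132].mean())  # 200.0: region now holds the lesion
```

Generating such synthetic abnormal cases at varied positions counteracts both the class imbalance and the positional bias the abstract highlights.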
Collapse
Affiliation(s)
| | - Van Nhat Thang Le
- Faculty of Odonto-Stomatology, Hue University of Medicine and Pharmacy, Hue University, Hue, 49120, Vietnam
| | - Dae-Woo Lee
- The Department of Pediatric Dentistry, Jeonbuk National University, Jeonju, 54896, Korea.
- Biomedical Research Institute of Jeonbuk National University Hospital, Jeonbuk National University, Jeonju, 54896, Korea.
- Research Institute of Clinical Medicine of Jeonbuk National University, Jeonju, 54896, Korea.
| | - Sungchan Kim
- The Department of Computer Science and Artificial Intelligence, Jeonbuk National University, Jeonju, 54896, Korea.
- Center for Advanced Image Information Technology, Jeonbuk National University, Jeonju, 54896, Korea.
| |
Collapse
|