1. Li X, Lin Z, Qiu C, Zhang Y, Lei C, Shen S, Zhang W, Lai C, Li W, Huang H, Qiu T. Transfer learning drives automatic HER2 scoring on HE-stained WSIs for breast cancer: a multi-cohort study. Breast Cancer Res 2025;27:62. PMID: 40269991; PMCID: PMC12020254; DOI: 10.1186/s13058-025-02008-7.
Abstract
BACKGROUND Streamlining the clinical procedure for human epidermal growth factor receptor 2 (HER2) examination is challenging. Previous studies neglected the intra-class variability within both HER2-positive and -negative groups and lacked multi-cohort validation. To address this deficiency, this study collected data from multiple cohorts to develop a robust model for HER2 scoring using only hematoxylin and eosin (H&E)-stained whole slide images (WSIs).
METHODS A total of 578 WSIs were collected from five cohorts, comprising three public and two private datasets. Each WSI underwent adaptive scale cropping. The transfer-learning-based probabilistic aggregation (TL-PA) model was compared with multi-instance learning (MIL)-based models, all trained on Cohort A and validated on Cohorts B-D. The model with superior performance was further evaluated in the neoadjuvant therapy (NAT) cohort. Scoring performance was assessed using the area under the receiver operating characteristic curve (AUC). Correlations between model scores and specific grades (HER2 levels, pathological complete response (pCR) status, and residual cancer burden (RCB) grades) were evaluated using Spearman rank correlation and Dunn's test. Patch analysis was performed with manually defined features.
RESULTS For HER2 scoring, TL-PA significantly outperformed the MIL-based models, achieving robust AUCs in four validation cohorts (Cohort A: 0.75; Cohort B: 0.75; Cohort C: 0.77; Cohort D: 0.77). Correlation analysis confirmed a moderate association between model scores and reader-defined HER2-IHC status (Spearman coefficient = 0.37, P = 0.001) as well as RCB grades (Spearman coefficient = 0.45, P = 0.0006). In the NAT cohort, with non-pCR as the positive label, the AUC was 0.77. Patch analysis revealed a core-to-peritumoral decrease in predicted probability as malignancy spread outward from the lesion's core.
CONCLUSION TL-PA shows robust generalization for HER2 scoring with minimal data; however, it still inadequately captures intra-class variability. This indicates that future deep-learning efforts should incorporate more detailed annotations to better align the model's focus with the reasoning of pathologists.
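The abstract does not reproduce the TL-PA aggregation itself; as a loose sketch of the general idea of turning per-patch malignancy probabilities into one slide-level score (the top-fraction averaging scheme, function name, and parameters here are our assumptions, not the authors' method):

```python
def aggregate_slide_score(patch_probs, top_fraction=0.1):
    """Hypothetical probabilistic aggregation: average the most
    confident patch probabilities to produce a slide-level score."""
    if not patch_probs:
        raise ValueError("need at least one patch probability")
    ranked = sorted(patch_probs, reverse=True)  # most suspicious first
    k = max(1, int(len(ranked) * top_fraction))  # how many to keep
    return sum(ranked[:k]) / k
```

A slide with mostly benign patches but a few high-probability ones would still score high under this scheme, which is one common rationale for top-fraction pooling over plain averaging.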
Affiliation(s)
- Xiaoping Li: Breast Department, Jiangmen Central Hospital, Jiangmen, China
- Chaoran Qiu: Breast Department, Jiangmen Central Hospital, Jiangmen, China
- Yiwen Zhang: Breast Department, Jiangmen Central Hospital, Jiangmen, China
- Chuqian Lei: Breast Department, Jiangmen Central Hospital, Jiangmen, China
- Shaofei Shen: Shanxi Key Lab for Modernization of TCVM, College of Life Science, Shanxi Agricultural University, Taiyuan, 030000, Shanxi, China
- Weibin Zhang: Department of Pathology, Jiangmen Central Hospital, Jiangmen, China
- Chan Lai: Radiology Department, Jiangmen Central Hospital, Jiangmen, China
- Weiwen Li: Breast Department, Jiangmen Central Hospital, Jiangmen, China
- Hui Huang: Department of Breast Surgery, Jiangmen Maternity and Child Health Care Hospital, Jiangmen, 529000, Guangdong, China
- Tian Qiu: Wuyi University, 99 Yinbin Avenue, Jiangmen, 529000, Guangdong, China
2. Du J, Shi J, Sun D, Wang Y, Liu G, Chen J, Wang W, Zhou W, Zheng Y, Wu H. Machine learning prediction of HER2-low expression in breast cancers based on hematoxylin-eosin-stained slides. Breast Cancer Res 2025;27:57. PMID: 40251691; PMCID: PMC12008878; DOI: 10.1186/s13058-025-01998-8.
Abstract
BACKGROUND Treatment with HER2-targeted therapies is recommended for HER2-positive breast cancer patients with HER2 gene amplification or protein overexpression. Interestingly, recent clinical trials of novel HER2-targeted therapies demonstrated promising efficacy in HER2-low breast cancers, raising the prospect of extending HER2-targeted treatment to a HER2-low category (immunohistochemistry (IHC) score of 1+ or 2+ with non-amplified in situ hybridization), which necessitates accurate detection and evaluation of HER2 expression in tumors. Traditionally, HER2 protein levels are routinely assessed by IHC in clinical practice, which is not only time-consuming and costly but also technically challenging for many basic hospitals in developing countries. Therefore, directly predicting HER2 expression from hematoxylin-eosin (HE) staining would be of significant clinical value, and machine learning may be a potent technology for achieving this goal.
METHODS In this study, we developed an artificial intelligence (AI) classification model that uses whole slide images of HE-stained slides to automatically assess HER2 status.
RESULTS The publicly available TCGA-BRCA dataset and an in-house USTC-BC dataset were used to evaluate our AI model against the state-of-the-art method SlideGraph+ in terms of accuracy (ACC), the area under the receiver operating characteristic curve (AUC), and F1 score. Overall, our AI model achieved superior HER2-scoring performance on both datasets, with AUCs of 0.795 ± 0.028 and 0.688 ± 0.008 on the USTC-BC and TCGA-BRCA datasets, respectively. In addition, we visualized the model's outputs with attention heatmaps, supporting its interpretability.
CONCLUSION Our AI model directly predicts HER2 expression from HE images with strong interpretability and achieves better ACC particularly in HER2-low breast cancers, providing an economical and efficient method for AI evaluation of HER2 status. It has the potential to assist pathologists in improving diagnosis and assessing biomarkers for companion diagnostics.
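Every study in this digest reports the area under the receiver operating characteristic curve (AUC). A minimal reference implementation via its rank interpretation, the probability that a randomly chosen positive outranks a randomly chosen negative, with ties counted half:

```python
def roc_auc(labels, scores):
    """AUC from binary labels (1 = positive) and real-valued scores,
    computed over all positive/negative pairs."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(n^2) form is fine for illustration; production code would typically use a library routine such as scikit-learn's `roc_auc_score`.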
Affiliation(s)
- Jun Du: Department of Pathology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China; Intelligent Pathology Institute, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China
- Jun Shi: School of Software, Hefei University of Technology, Hefei, 230601, China
- Dongdong Sun: School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230601, Anhui, China
- Yifei Wang: School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230601, Anhui, China
- Guanfeng Liu: School of Life Sciences, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230027, Anhui, China
- Jingru Chen: School of Life Sciences, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230027, Anhui, China
- Wei Wang: Department of Pathology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China; Intelligent Pathology Institute, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China
- Wenchao Zhou: Intelligent Pathology Institute, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China; Department of Pathology, Centre for Leading Medicine and Advanced Technologies of IHM, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, Anhui, China
- Yushan Zheng: School of Engineering Medicine, Beijing Advanced Innovation Center on Biomedical Engineering, Beihang University, Beijing, 100191, China
- Haibo Wu: Department of Pathology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China; Intelligent Pathology Institute, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China; Department of Pathology, Centre for Leading Medicine and Advanced Technologies of IHM, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, Anhui, China
3. Chyrmang G, Barua B, Bora K, Ahmed GN, Das AK, Kakoti L, Lemos B, Mallik S. Self-HER2Net: A generative self-supervised framework for HER2 classification in IHC histopathology of breast cancer. Pathol Res Pract 2025;270:155961. PMID: 40245674; DOI: 10.1016/j.prp.2025.155961.
Abstract
Breast cancer is a significant global health concern, and precise identification of proteins such as human epidermal growth factor receptor 2 (HER2) in cancer cells via immunohistochemistry (IHC) is pivotal for treatment decisions. HER2 overexpression is evaluated through HER2 scoring on a scale from 0 to 3+ based on staining patterns and intensity. Recent efforts have sought to automate HER2 scoring using image processing and AI techniques; however, existing methods follow supervised learning paradigms and therefore require large manually annotated datasets. We therefore propose a generative self-supervised learning (SSL) framework, "Self-HER2Net", for HER2 score classification, reducing the dependence on large manually annotated datasets by leveraging the best-performing of four novel generative self-supervised tasks that we propose. The first two SSL tasks, HER2hsl and HER2hsv, are domain-agnostic; the other two, HER2dab and HER2hae, are domain-specific, targeting staining patterns and intensity representation. Our approach was evaluated under different budget scenarios (2%, 15%, and 100% labeled data) and on an out-of-distribution test. For tile-level assessment, HER2hsv achieved the best performance, with an AUC-ROC of 0.965 ± 0.037. Our self-supervised learning approach shows potential for application in scenarios with limited annotated data for HER2 analysis.
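The abstract does not spell out how the HER2hsv pretext task is formulated; one plausible reading, a color-space prediction target that forces the network to model staining hue and intensity without labels, can be sketched with the standard library (the pixel-level formulation and function name below are our assumptions, not the authors' design):

```python
import colorsys

def hsv_target(rgb_pixels):
    """Hypothetical regression target for an HER2hsv-style pretext
    task: given RGB pixels (floats in [0, 1]), the network would be
    trained to predict their HSV rendition."""
    return [colorsys.rgb_to_hsv(r, g, b) for r, g, b in rgb_pixels]
```

A domain-specific variant (HER2dab/HER2hae-style) would presumably swap HSV for a stain-deconvolution space such as DAB or hematoxylin channels.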
Affiliation(s)
- Genevieve Chyrmang: Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, India
- Barun Barua: Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, India
- Kangkana Bora: Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, India
- Gazi N Ahmed: North East Cancer Hospital and Research Institute, Guwahati, Assam, India
- Anup Kr Das: Arya Wellness Centre, Guwahati, Assam, India
- Bernardo Lemos: Department of Environmental Health, Harvard T H Chan School of Public Health, Boston, MA 02115, USA; Department of Pharmacology & Toxicology, University of Arizona, AZ 85721, USA
- Saurav Mallik: Department of Environmental Health, Harvard T H Chan School of Public Health, Boston, MA 02115, USA; Department of Pharmacology & Toxicology, University of Arizona, AZ 85721, USA
4. Shi J, Sun D, Jiang Z, Du J, Wang W, Zheng Y, Wu H. Weakly supervised multi-modal contrastive learning framework for predicting the HER2 scores in breast cancer. Comput Med Imaging Graph 2025;121:102502. PMID: 39919535; DOI: 10.1016/j.compmedimag.2025.102502.
Abstract
Human epidermal growth factor receptor 2 (HER2) is an important biomarker for prognosis and prediction of treatment response in breast cancer (BC). HER2 scoring is typically performed by pathologists through microscopic observation of immunohistochemistry (IHC) images, which is labor-intensive and subject to inter-observer bias. Most existing methods use hand-crafted features or deep learning models on a single modality (hematoxylin and eosin (H&E) or IHC) to predict HER2 scores through supervised or weakly supervised learning; consequently, information from different modalities is not effectively integrated into feature learning, although such integration could improve HER2 scoring performance. In this paper, we propose a novel weakly supervised multi-modal contrastive learning (WSMCL) framework to predict HER2 scores in BC at the whole slide image (WSI) level. It leverages multi-modal (H&E and IHC) joint learning under the weak supervision of the WSI label to achieve HER2 score prediction. Specifically, patch features within H&E and IHC WSIs are extracted separately, and multi-head self-attention (MHSA) is used to explore the global dependencies of the patches within each modality. The patch features corresponding to the top-k and bottom-k attention scores generated by MHSA in each modality are selected as candidates for multi-modal joint learning. In particular, a multi-modal attentive contrastive learning (MACL) module is designed to guarantee the semantic alignment of candidate features from different modalities. Extensive experiments demonstrate that the proposed WSMCL achieves better HER2 scoring performance than the state-of-the-art methods. The code is available at https://github.com/HFUT-miaLab/WSMCL.
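The top-k/bottom-k candidate selection described above can be illustrated independently of the MHSA backbone; this simplified stand-in (not the authors' code) just ranks patches by a scalar attention score and returns the indices of the k most- and k least-attended patches:

```python
def select_candidates(attn_scores, k=2):
    """Return (top_k_indices, bottom_k_indices) from per-patch
    attention scores; top indices are ordered highest first,
    bottom indices lowest first."""
    order = sorted(range(len(attn_scores)), key=attn_scores.__getitem__)
    return order[-k:][::-1], order[:k]
```

In the full framework these selected patch features from each modality would then feed the contrastive alignment module; here they are just index lists.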
Affiliation(s)
- Jun Shi: School of Software, Hefei University of Technology, Hefei, 230601, Anhui Province, China
- Dongdong Sun: School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230601, Anhui Province, China
- Zhiguo Jiang: Image Processing Center, School of Astronautics, Beihang University, Beijing, 102206, China
- Jun Du: Department of Pathology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China; Intelligent Pathology Institute, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China
- Wei Wang: Department of Pathology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China; Intelligent Pathology Institute, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China
- Yushan Zheng: School of Engineering Medicine, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Haibo Wu: Department of Pathology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China; Intelligent Pathology Institute, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China; Department of Pathology, Centre for Leading Medicine and Advanced Technologies of IHM, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, Anhui Province, China
5. Hussain I, Boza J, Lukande R, Ayanga R, Semeere A, Cesarman E, Martin J, Maurer T, Erickson D. Automated Detection of Kaposi Sarcoma-Associated Herpesvirus-Infected Cells in Immunohistochemical Images of Skin Biopsies. JCO Glob Oncol 2025;11:e2400536. PMID: 40239145; DOI: 10.1200/go-24-00536.
Abstract
PURPOSE Immunohistochemical staining for the antigen of Kaposi sarcoma (KS)-associated herpesvirus, latency-associated nuclear antigen (LANA), is helpful in diagnosing KS. A challenge lies in distinguishing anti-LANA-positive cells from morphologically similar brown counterparts. This work aims to develop an automated framework for localization and quantification of LANA positivity in whole-slide images (WSIs) of skin biopsies.
METHODS The proposed framework leverages weakly supervised multiple-instance learning (MIL) to reduce false-positive predictions, and a novel morphology-based slide aggregation method is introduced to improve accuracy. The framework generates interpretable heatmaps for cell localization and reports the percentage of positive tiles. It was trained and tested on a KS pathology dataset prepared from skin biopsies of KS-suspected patients in Uganda.
RESULTS The MIL framework achieved an area under the receiver operating characteristic curve of 0.99, with a sensitivity of 98.15% and a specificity of 96.00% in predicting anti-LANA-positive WSIs on a test dataset.
CONCLUSION The framework shows promise for automated detection of LANA in skin biopsies, offering a reliable and accurate tool for identifying anti-LANA-positive cells. It may be especially impactful in resource-limited areas that lack trained pathologists, potentially improving diagnostic capabilities in settings with limited access to expert analysis.
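The quantitative output described above, a percentage of positive tiles plus a slide-level call, can be illustrated with a toy aggregator (the threshold values are placeholders; the paper's actual rule is morphology-based and not reproduced here):

```python
def slide_positivity(tile_probs, threshold=0.5, min_positive_frac=0.02):
    """Toy slide-level aggregation: count tiles whose predicted
    anti-LANA probability clears `threshold`, report the positive
    fraction, and call the slide positive if that fraction exceeds
    `min_positive_frac`."""
    positive = sum(p >= threshold for p in tile_probs)
    frac = positive / len(tile_probs)
    return frac, frac >= min_positive_frac
```

The point of a fraction-based call rather than a single max-probability tile is robustness to isolated false-positive tiles, which is the failure mode the paper's morphology-based aggregation targets.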
Affiliation(s)
- Iftak Hussain: Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY
- Juan Boza: Meinig School of Biomedical Engineering, Cornell University, Ithaca, NY
- Robert Lukande: Department of Pathology, Makerere University College of Health Sciences, Kampala, Uganda
- Racheal Ayanga: Infectious Diseases Institute, Makerere University College of Health Sciences, Kampala, Uganda
- Aggrey Semeere: Infectious Diseases Institute, Makerere University College of Health Sciences, Kampala, Uganda
- Ethel Cesarman: Pathology and Laboratory Medicine, Weill Cornell Medical College, New York, NY
- Jeffrey Martin: Department of Epidemiology and Biostatistics, University of California, San Francisco, CA
- Toby Maurer: Department of Dermatology, Indiana University School of Medicine, Indianapolis, IN
- David Erickson: Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY; Meinig School of Biomedical Engineering, Cornell University, Ithaca, NY
6. Liu A, Zhang J, Li T, Zheng D, Ling Y, Lu L, Zhang Y, Cai J. Explainable attention-enhanced heuristic paradigm for multi-view prognostic risk score development in hepatocellular carcinoma. Hepatol Int 2025. PMID: 40089963; DOI: 10.1007/s12072-025-10793-8.
Abstract
PURPOSE Existing prognostic staging systems depend on expensive manual feature extraction by pathologists, potentially overlooking latent patterns critical for prognosis, or use black-box deep learning models, limiting clinical acceptance. This study introduces a novel deep-learning-assisted paradigm that complements existing approaches by generating interpretable, multi-view risk scores to stratify prognostic risk in hepatocellular carcinoma (HCC) patients.
METHODS 510 HCC patients from an internal dataset (SYSUCC) were enrolled as training and validation cohorts to develop the Hybrid Deep Score (HDS). The Attention Activator (ATAT) was designed to heuristically identify tissues with high prognostic risk, and a multi-view risk-scoring system based on ATAT established HDS from the microscopic to the macroscopic level. HDS was also validated on an external testing cohort (TCGA-LIHC) of 341 HCC patients. Prognostic significance was assessed using Cox regression and the concordance index (c-index).
RESULTS The ATAT heuristically identified regions where necrosis, lymphocytes, and tumor tissue converge, focusing particularly on their junctions in high-risk patients. From this, three independent risk factors were developed: microscopic morphological, co-localization, and deep global indicators, which were concatenated and fed into a neural network to generate the final HDS for each patient. The HDS demonstrated competitive hazard ratios (HR 3.24, 95% confidence interval (CI) 1.91-5.43 in SYSUCC; HR 2.34, 95% CI 1.58-3.47 in TCGA-LIHC) and c-index values (0.751 in SYSUCC; 0.729 in TCGA-LIHC) for disease-free survival (DFS). Furthermore, integrating HDS into existing clinical staging systems allows more refined stratification, enabling the identification of potential high-risk patients within low-risk groups.
CONCLUSION This novel paradigm, from identifying high-risk tissues to constructing prognostic risk scores, offers fresh insights into HCC research. The integration of HDS complements the existing clinical staging system by facilitating more detailed stratification of disease-free survival and overall survival (OS).
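Prognostic performance above is summarized with the concordance index; a compact, didactic version of Harrell's c-index over comparable pairs (our own implementation for illustration, not the study's code):

```python
def concordance_index(times, events, risks):
    """Harrell's c-index: among pairs where one patient has an
    observed event and a shorter time, the higher-risk patient
    should fail first. Ties in risk count half. `events` is 1 for
    an observed event, 0 for censoring."""
    num = den = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair is comparable only if i's event is observed
            # and occurs strictly before j's time
            if events[i] and times[i] < times[j]:
                den += 1
                if risks[i] > risks[j]:
                    num += 1.0
                elif risks[i] == risks[j]:
                    num += 0.5
    return num / den
```

A c-index of 0.5 corresponds to random risk ordering and 1.0 to perfect ordering, which frames the reported 0.751 (SYSUCC) and 0.729 (TCGA-LIHC) values.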
Affiliation(s)
- Anran Liu: Department of Health Technology and Informatics, Hong Kong Polytechnic University, 11 Yuk Choi Road, Hong Kong SAR, China
- Jiang Zhang: Department of Health Technology and Informatics, Hong Kong Polytechnic University, 11 Yuk Choi Road, Hong Kong SAR, China
- Tong Li: Division of Computational & Data Sciences, Washington University in St. Louis, One Brookings Drive, St. Louis, MO, 63130, USA
- Danyang Zheng: Department of Anesthesiology, First Affiliated Hospital of Sun Yat-sen University, No. 58 Zhongshan Road 2, Guangzhou, 510060, Guangdong, China
- Yihong Ling: Department of Pathology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, 651 Dongfeng East Road, Guangzhou, 510060, Guangdong, China
- Lianghe Lu: Department of Liver Surgery, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, 651 Dongfeng East Road, Guangzhou, 510060, Guangdong, China
- Yuanpeng Zhang: Department of Health Technology and Informatics, Hong Kong Polytechnic University, 11 Yuk Choi Road, Hong Kong SAR, China
- Jing Cai: Department of Health Technology and Informatics, Hong Kong Polytechnic University, 11 Yuk Choi Road, Hong Kong SAR, China
7. Chyrmang G, Bora K, Das AK, Ahmed GN, Kakoti L. Insights into AI advances in immunohistochemistry for effective breast cancer treatment: a literature review of ER, PR, and HER2 scoring. Curr Med Res Opin 2025;41:115-134. PMID: 39705612; DOI: 10.1080/03007995.2024.2445142.
Abstract
Breast cancer is a significant health challenge, and accurate, timely diagnosis is critical to effective treatment. Immunohistochemistry (IHC) staining is a widely used technique for evaluating breast cancer markers, but manual scoring is time-consuming and subject to variability. With the rise of artificial intelligence (AI), there is increasing interest in using machine learning and deep learning approaches to automate the scoring of ER, PR, and HER2 biomarkers in IHC-stained images. This narrative literature review focuses on AI-based techniques for automated scoring of breast cancer markers in IHC-stained images, specifically Allred, histochemical score (H-score), and HER2 scoring. We identify the current state-of-the-art approaches, challenges, and future research prospects for this area of study, aiming to contribute to the ultimate goal of improving the accuracy and efficiency of breast cancer diagnosis and treatment.
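Of the scoring systems this review covers, the H-score has a simple closed form: each staining intensity level (0 to 3) is weighted by the percentage of cells staining at that level, giving a 0-300 range. A direct transcription of that formula:

```python
def h_score(pct_by_intensity):
    """Histochemical score: sum over intensity levels (0-3) of
    level x percentage of cells at that level. `pct_by_intensity`
    maps intensity level -> percent of cells; percentages must
    cover all cells (sum to 100)."""
    if abs(sum(pct_by_intensity.values()) - 100.0) > 1e-6:
        raise ValueError("percentages must sum to 100")
    return sum(level * pct for level, pct in pct_by_intensity.items())
```

For example, a tumor with 10% unstained, 20% weak, 30% moderate, and 40% strong staining scores 0 + 20 + 60 + 120 = 200.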
Affiliation(s)
- Genevieve Chyrmang: Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, India
- Kangkana Bora: Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, India
- Anup Kr Das: Arya Wellness Centre, Guwahati, Assam, India
- Gazi N Ahmed: North East Cancer Hospital and Research Institute, Guwahati, Assam, India
8. Raza M, Awan R, Bashir RMS, Qaiser T, Rajpoot NM. Dual attention model with reinforcement learning for classification of histology whole-slide images. Comput Med Imaging Graph 2024;118:102466. PMID: 39579453; DOI: 10.1016/j.compmedimag.2024.102466.
Abstract
Digital whole slide images (WSIs) are generally captured at microscopic resolution and encompass extensive spatial data (several billion pixels per image). Directly feeding these images to deep learning models is computationally intractable due to memory constraints, while downsampling the WSIs risks information loss. Alternatively, splitting the WSIs into smaller patches (or tiles) may lose important contextual information. In this paper, we propose a novel dual attention approach consisting of two main components, both inspired by the visual examination process of a pathologist. The first, a soft attention model, processes a low-magnification view of the WSI to identify relevant regions of interest (ROIs), followed by a custom sampling method that extracts diverse and spatially distinct image tiles from the selected ROIs. The second component, a hard attention classification model, further extracts a sequence of multi-resolution glimpses from each tile for classification. Since hard attention is non-differentiable, we train this component with reinforcement learning to predict the glimpse locations. This allows the model to focus on essential regions instead of processing the entire tile, aligning with a pathologist's way of diagnosis. The two components are trained end-to-end using a joint loss function. The proposed model was evaluated on two WSI-level classification problems: human epidermal growth factor receptor 2 (HER2) scoring on breast cancer histology images and prediction of the Intact/Loss status of two mismatch repair (MMR) biomarkers in colorectal cancer histology images. We show that the proposed model achieves performance better than or comparable to the state-of-the-art methods while processing less than 10% of the WSI at the highest magnification and reducing the time required to infer the WSI-level label by more than 75%. The code is available on GitHub.
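A glimpse, as used by the hard-attention component, is essentially a crop around a predicted location; a toy extractor over a nested-list image shows the mechanics (the border-clamping behavior and function signature are our assumptions, not the paper's implementation):

```python
def glimpse(image, cy, cx, size):
    """Extract one square `size` x `size` glimpse centered near
    (cy, cx) from a 2-D image (list of rows), clamping the window
    so it stays inside the image borders."""
    h, w = len(image), len(image[0])
    y0 = min(max(cy - size // 2, 0), h - size)  # clamp top edge
    x0 = min(max(cx - size // 2, 0), w - size)  # clamp left edge
    return [row[x0:x0 + size] for row in image[y0:y0 + size]]
```

In the full model, a sequence of such crops at several magnifications would be fed to the classifier, with the (cy, cx) locations chosen by the reinforcement-learned policy.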
Affiliation(s)
- Manahil Raza: Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Ruqayya Awan: Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Talha Qaiser: Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Nasir M Rajpoot: Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; The Alan Turing Institute, London, United Kingdom
9. Rehman ZU, Ahmad Fauzi MF, Wan Ahmad WSHM, Abas FS, Cheah PL, Chiew SF, Looi LM. Deep-Learning-Based Approach in Cancer-Region Assessment from HER2-SISH Breast Histopathology Whole Slide Images. Cancers (Basel) 2024;16:3794. PMID: 39594748; PMCID: PMC11593209; DOI: 10.3390/cancers16223794.
Abstract
Fluorescence in situ hybridization (FISH) is widely regarded as the gold standard for evaluating human epidermal growth factor receptor 2 (HER2) status in breast cancer; however, it poses challenges such as the need for specialized training and signal degradation from dye quenching. Silver-enhanced in situ hybridization (SISH) serves as an automated alternative, employing permanent staining suitable for bright-field microscopy. Determining HER2 status involves distinguishing between "Amplified" and "Non-Amplified" regions by assessing HER2 and centromere 17 (CEN17) signals in SISH-stained slides. This study is the first to leverage deep learning for classifying Normal, Amplified, and Non-Amplified regions within HER2-SISH whole slide images (WSIs), which are notably more complex to analyze than hematoxylin and eosin (H&E)-stained slides. Our approach consists of a two-stage process: first, we evaluate deep-learning models on annotated image regions, and then we apply the most effective model to WSIs for regional identification and localization. Pseudo-color maps representing each class are then overlaid, and the WSIs are reconstructed with these mapped regions. Using a private dataset of HER2-SISH breast cancer slides digitized at 40× magnification, we achieved a patch-level classification accuracy of 99.9% and a generalization accuracy of 78.8% by applying transfer learning with a Vision Transformer (ViT) model. Robustness was further evaluated through k-fold cross-validation, yielding an average accuracy of 98%, with metrics reported alongside 95% confidence intervals for statistical reliability. This method shows significant promise for clinical application, particularly in assessing HER2 expression status in HER2-SISH histopathology images, providing an automated solution that can aid pathologists in efficiently identifying HER2-amplified regions and thus enhancing diagnostic outcomes for breast cancer treatment.
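The k-fold protocol mentioned above partitions samples into k disjoint folds, each serving once as the held-out set; a minimal round-robin split (the paper's exact fold construction, e.g. whether it stratifies by class or by slide, is not given, so this is an illustrative scheme):

```python
def k_fold_indices(n, k=5):
    """Partition indices 0..n-1 into k disjoint folds by round-robin
    assignment; fold i is held out in iteration i while the rest
    form the training set."""
    folds = [[] for _ in range(k)]
    for i in range(n):
        folds[i % k].append(i)
    return folds
```

For WSI work, folds are typically built per slide (or per patient) rather than per patch, so that patches from one slide never appear in both training and test sets.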
Collapse
Affiliation(s)
- Zaka Ur Rehman
- Faculty of Engineering, Multimedia University, Cyberjaya 63100, Malaysia
- Wan Siti Halimatul Munirah Wan Ahmad
- Faculty of Engineering, Multimedia University, Cyberjaya 63100, Malaysia
- Institute for Research, Development and Innovation, IMU University, Bukit Jalil, Kuala Lumpur 57000, Malaysia
- Fazly Salleh Abas
- Faculty of Engineering and Technology, Multimedia University, Bukit Beruang, Melaka 75450, Malaysia
- Phaik-Leng Cheah
- Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
- Seow-Fan Chiew
- Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
- Lai-Meng Looi
- Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
10
Hussain I, Boza J, Lukande R, Ayanga R, Semeere A, Cesarman E, Martin J, Maurer T, Erickson D. Automated detection of Kaposi sarcoma-associated herpesvirus infected cells in immunohistochemical images of skin biopsies. RESEARCH SQUARE 2024:rs.3.rs-4736178. [PMID: 39184072 PMCID: PMC11343169 DOI: 10.21203/rs.3.rs-4736178/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/27/2024]
Abstract
Immunohistochemical (IHC) staining for the antigen of Kaposi sarcoma-associated herpesvirus (KSHV), latency-associated nuclear antigen (LANA), is helpful in diagnosing Kaposi sarcoma (KS). A challenge, however, lies in distinguishing anti-LANA-positive cells from morphologically similar brown counterparts. In this work, we demonstrate a framework for automated localization and quantification of LANA positivity in whole slide images (WSI) of skin biopsies, leveraging weakly supervised multiple instance learning (MIL) while reducing false positive predictions by introducing a novel morphology-based slide aggregation method. Our framework generates interpretable heatmaps, offering insights into precise anti-LANA-positive cell localization within WSIs, and a quantitative value for the percentage of positive tiles, which may assist with histological subtyping. We trained and tested our framework with an anti-LANA-stained KS pathology dataset prepared by pathologists in the United States from skin biopsies of KS-suspected patients investigated in Uganda. We achieved an area under the receiver operating characteristic curve (AUC) of 0.99, with a sensitivity of 98.15% and a specificity of 96.00%, in predicting anti-LANA-positive WSIs in a test dataset. We believe the framework holds promise for automated detection of LANA in skin biopsies, which may be especially impactful in resource-limited areas that lack trained pathologists.
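The slide-level quantities mentioned in the abstract reduce to simple statistics over per-tile probabilities. A minimal, generic sketch (not the authors' morphology-based aggregation; the 0.5 threshold is an assumption) of the positive-tile percentage alongside classic max-pooling MIL aggregation:

```python
def positive_tile_fraction(tile_probs, threshold=0.5):
    """Fraction of tiles called positive, i.e. the slide-level
    'percentage of positive tiles'. Threshold 0.5 is an assumption."""
    return sum(p > threshold for p in tile_probs) / len(tile_probs)

def slide_score_max_pool(tile_probs):
    """Classic weakly supervised MIL aggregation: the slide is scored by
    its most positive tile (standard max-pooling, not the paper's method)."""
    return max(tile_probs)

tile_probs = [0.02, 0.10, 0.97, 0.60, 0.03]
print(positive_tile_fraction(tile_probs))  # 0.4
print(slide_score_max_pool(tile_probs))    # 0.97
```

Max-pooling is the usual weakly supervised baseline; the paper's contribution is replacing this naive aggregation with a morphology-aware one to suppress false positives.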
11
Selcuk SY, Yang X, Bai B, Zhang Y, Li Y, Aydin M, Unal AF, Gomatam A, Guo Z, Angus DM, Kolodney G, Atlan K, Haran TK, Pillar N, Ozcan A. Automated HER2 Scoring in Breast Cancer Images Using Deep Learning and Pyramid Sampling. BME FRONTIERS 2024; 5:0048. [PMID: 39045139 PMCID: PMC11265840 DOI: 10.34133/bmef.0048] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2024] [Accepted: 06/14/2024] [Indexed: 07/25/2024] Open
Abstract
Objective and Impact Statement: Human epidermal growth factor receptor 2 (HER2) is a critical protein in cancer cell growth that signifies the aggressiveness of breast cancer (BC) and helps predict its prognosis. Here, we introduce a deep learning-based approach utilizing pyramid sampling for the automated classification of HER2 status in immunohistochemically (IHC) stained BC tissue images. Introduction: Accurate assessment of IHC-stained tissue slides for HER2 expression levels is essential for both treatment guidance and understanding of cancer mechanisms. Nevertheless, the traditional workflow of manual examination by board-certified pathologists encounters challenges, including inter- and intra-observer inconsistency and extended turnaround times. Methods: Our deep learning-based method analyzes morphological features at various spatial scales, efficiently managing the computational load and facilitating a detailed examination of cellular and larger-scale tissue-level details. Results: This approach addresses the tissue heterogeneity of HER2 expression by providing a comprehensive view, leading to a blind-testing classification accuracy of 84.70% on a dataset of 523 core images from tissue microarrays. Conclusion: This automated system, proving reliable as an adjunct pathology tool, has the potential to enhance diagnostic precision and evaluation speed, and might substantially impact cancer treatment planning.
Affiliation(s)
- Sahan Yoruc Selcuk
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Xilin Yang
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Musa Aydin
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Aras Firat Unal
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Aditya Gomatam
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Zhen Guo
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Darrow Morgan Angus
- Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, CA, USA
- Karine Atlan
- Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
12
Fu X, Ma W, Zuo Q, Qi Y, Zhang S, Zhao Y. Application of machine learning for high-throughput tumor marker screening. Life Sci 2024; 348:122634. [PMID: 38685558 DOI: 10.1016/j.lfs.2024.122634] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2024] [Revised: 03/26/2024] [Accepted: 04/10/2024] [Indexed: 05/02/2024]
Abstract
High-throughput sequencing and multiomics technologies have allowed increasing numbers of biomarkers to be mined and used for disease diagnosis, risk stratification, efficacy assessment, and prognosis prediction. However, the large number and complexity of tumor markers make screening them a substantial challenge. Machine learning (ML) offers new and effective ways to solve the screening problem. ML goes beyond mere data processing and is instrumental in recognizing intricate patterns within data. ML also has a crucial role in modeling dynamic changes associated with diseases. Used together, ML techniques have been included in automatic pipelines for tumor marker screening, thereby enhancing the efficiency and accuracy of the screening process. In this review, we discuss the general processes and common ML algorithms, and highlight recent applications of ML in tumor marker screening of genomic, transcriptomic, proteomic, and metabolomic data of patients with various types of cancers. Finally, the challenges and future prospects of the application of ML in tumor therapy are discussed.
Affiliation(s)
- Xingxing Fu
- Key Laboratory of Biotechnology and Bioresources Utilization of Ministry of Education, Dalian Minzu University, Dalian 116600, China
- Wanting Ma
- Key Laboratory of Biotechnology and Bioresources Utilization of Ministry of Education, Dalian Minzu University, Dalian 116600, China
- Qi Zuo
- Key Laboratory of Biotechnology and Bioresources Utilization of Ministry of Education, Dalian Minzu University, Dalian 116600, China
- Yanfei Qi
- Centenary Institute, The University of Sydney, Sydney, NSW 2050, Australia
- Shubiao Zhang
- Key Laboratory of Biotechnology and Bioresources Utilization of Ministry of Education, Dalian Minzu University, Dalian 116600, China
- Yinan Zhao
- Key Laboratory of Biotechnology and Bioresources Utilization of Ministry of Education, Dalian Minzu University, Dalian 116600, China
13
Ye Q, Yang H, Lin B, Wang M, Song L, Xie Z, Lu Z, Feng Q, Zhao Y. Automatic detection, segmentation, and classification of primary bone tumors and bone infections using an ensemble multi-task deep learning framework on multi-parametric MRIs: a multi-center study. Eur Radiol 2024; 34:4287-4299. [PMID: 38127073 DOI: 10.1007/s00330-023-10506-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2023] [Revised: 10/09/2023] [Accepted: 10/29/2023] [Indexed: 12/23/2023]
Abstract
OBJECTIVES To develop an ensemble multi-task deep learning (DL) framework for automatic and simultaneous detection, segmentation, and classification of primary bone tumors (PBTs) and bone infections based on multi-parametric MRI from multiple centers. METHODS This retrospective study divided 749 patients with PBTs or bone infections from two hospitals into a training set (N = 557), an internal validation set (N = 139), and an external validation set (N = 53). The ensemble framework was constructed using T1-weighted images (T1WI), T2-weighted images (T2WI), and clinical characteristics for binary (PBTs/bone infections) and three-category (benign/intermediate/malignant PBTs) classification. The detection and segmentation performances were evaluated using Intersection over Union (IoU) and the Dice score. The classification performance was evaluated using the receiver operating characteristic (ROC) curve and compared with radiologist interpretations. RESULTS On the external validation set, the single T1WI-based and T2WI-based multi-task models obtained IoUs of 0.71 ± 0.25/0.65 ± 0.30 for detection and Dice scores of 0.75 ± 0.26/0.70 ± 0.33 for segmentation. The framework achieved AUCs of 0.959 (95%CI, 0.955-1.000)/0.900 (95%CI, 0.773-1.000) and accuracies of 90.6% (95%CI, 79.7-95.9%)/78.3% (95%CI, 58.1-90.3%) for the binary/three-category classification. Meanwhile, for the three-category classification, the performance of the framework was superior to that of three junior radiologists (accuracy: 65.2%, 69.6%, and 69.6%, respectively) and comparable to that of two senior radiologists (accuracy: 78.3% and 78.3%). CONCLUSION The MRI-based ensemble multi-task framework shows promising performance in automatically and simultaneously detecting, segmenting, and classifying PBTs and bone infections, outperforming the junior radiologists.
CLINICAL RELEVANCE STATEMENT Compared with junior radiologists, the ensemble multi-task deep learning framework effectively improves differential diagnosis for patients with primary bone tumors or bone infections. This finding may help physicians make treatment decisions and enable timely treatment of patients. KEY POINTS • The ensemble framework fusing multi-parametric MRI and clinical characteristics effectively improves the classification ability of single-modality models. • The ensemble multi-task deep learning framework performed well in detecting, segmenting, and classifying primary bone tumors and bone infections. • The ensemble framework achieves an optimal classification performance superior to junior radiologists' interpretations, assisting the clinical differential diagnosis of primary bone tumors and bone infections.
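The IoU and Dice figures quoted above are standard overlap metrics between a predicted mask and the reference annotation; for binary masks they are related by Dice = 2·IoU/(1 + IoU). A minimal illustration on toy pixel sets (not the study's data or code):

```python
def iou(pred, target):
    """Intersection over Union between two binary masks given as sets of pixel coordinates."""
    union = pred | target
    return len(pred & target) / len(union) if union else 1.0

def dice(pred, target):
    """Dice score: 2*|A∩B| / (|A| + |B|)."""
    total = len(pred) + len(target)
    return 2 * len(pred & target) / total if total else 1.0

pred = {(0, 0), (0, 1), (1, 0), (1, 1)}   # predicted lesion pixels
target = {(0, 1), (1, 1), (2, 1)}         # reference annotation
print(round(iou(pred, target), 3))   # 0.4
print(round(dice(pred, target), 3))  # 0.571
```

The identity Dice = 2·IoU/(1 + IoU) explains why Dice scores always read slightly higher than the corresponding IoUs, as in the results above (0.75 vs. 0.71, 0.70 vs. 0.65).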
Affiliation(s)
- Qiang Ye
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China
- Hening Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Bomiao Lin
- Department of Radiology, ZhuJiang Hospital of Southern Medical University, Guangzhou, China
- Menghong Wang
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China
- Liwen Song
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China
- Zhuoyao Xie
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China
- Zixiao Lu
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Yinghua Zhao
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China
14
Dimitriou N, Arandjelović O, Harrison DJ. Magnifying Networks for Histopathological Images with Billions of Pixels. Diagnostics (Basel) 2024; 14:524. [PMID: 38472996 DOI: 10.3390/diagnostics14050524] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2024] [Revised: 02/25/2024] [Accepted: 02/26/2024] [Indexed: 03/14/2024] Open
Abstract
Amongst the other benefits conferred by the shift from traditional to digital pathology is the potential to use machine learning for diagnosis, prognosis, and personalization. A major challenge in the realization of this potential emerges from the extremely large size of digitized images, which are often in excess of 100,000 × 100,000 pixels. In this paper, we tackle this challenge head-on by diverging from the existing approaches in the literature, which rely on splitting the original images into small patches, and introducing magnifying networks (MagNets). By using an attention mechanism, MagNets identify the regions of the gigapixel image that benefit from an analysis on a finer scale. This process is repeated, resulting in an attention-driven coarse-to-fine analysis of only a small portion of the information contained in the original whole-slide images. Importantly, this is achieved using minimal ground truth annotation, namely, only global, slide-level labels. The results from our tests on the publicly available Camelyon16 and Camelyon17 datasets demonstrate the effectiveness of MagNets, as well as of the proposed optimization framework, in the task of whole-slide image classification. Importantly, MagNets process at least five times fewer patches from each whole-slide image than any of the existing end-to-end approaches.
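The attention-driven coarse-to-fine idea can be caricatured as ranking coarse grid cells by attention score and zooming only into the top few. A hypothetical sketch for illustration only; the actual MagNets selection is learned end-to-end and differs in detail:

```python
def select_for_zoom(attention, k=2):
    """Return the k coarse grid cells with the highest attention scores;
    only these regions are re-examined at a finer magnification.
    `attention` maps (row, col) -> score. Illustrative sketch only."""
    return sorted(attention, key=attention.get, reverse=True)[:k]

# Toy 2x2 coarse attention map over a gigapixel slide
coarse_attention = {(0, 0): 0.05, (0, 1): 0.90, (1, 0): 0.30, (1, 1): 0.75}
print(select_for_zoom(coarse_attention))  # [(0, 1), (1, 1)]
```

Applied recursively, such a selection visits only a small fraction of the slide, which is the source of the patch-count savings reported above.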
Affiliation(s)
- Neofytos Dimitriou
- Maritime Digitalisation Centre, Cyprus Marine and Maritime Institute, Larnaca 6300, Cyprus
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- Ognjen Arandjelović
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- David J Harrison
- School of Medicine, University of St Andrews, St Andrews KY16 9TF, UK
- NHS Lothian Pathology, Division of Laboratory Medicine, Royal Infirmary of Edinburgh, Edinburgh EH16 4SA, UK
15
Rodríguez-Candela Mateos M, Azmat M, Santiago-Freijanes P, Galán-Moya EM, Fernández-Delgado M, Aponte RB, Mosquera J, Acea B, Cernadas E, Mayán MD. Software BreastAnalyser for the semi-automatic analysis of breast cancer immunohistochemical images. Sci Rep 2024; 14:2995. [PMID: 38316810 PMCID: PMC10844656 DOI: 10.1038/s41598-024-53002-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Accepted: 01/25/2024] [Indexed: 02/07/2024] Open
Abstract
Breast cancer is the most diagnosed cancer worldwide and represents the fifth cause of cancer mortality globally. It is a highly heterogeneous disease that comprises various molecular subtypes, often diagnosed by immunohistochemistry. This technique is widely employed in basic, translational and pathological anatomy research, where it can support oncological diagnosis, therapeutic decisions and biomarker discovery. Nevertheless, its evaluation is often qualitative, raising the need for accurate quantitation methodologies. We present the software BreastAnalyser, a valuable and reliable tool to automatically measure the area of 3,3'-diaminobenzidine tetrahydrochloride (DAB)-brown-stained proteins detected by immunohistochemistry. BreastAnalyser also automatically counts cell nuclei and classifies them according to their DAB-brown-staining level. This is performed using sophisticated segmentation algorithms that consider intrinsic image variability and save image normalization time. BreastAnalyser has a clean, friendly and intuitive interface that allows users to supervise the quantitations, to annotate images and to unify the experts' criteria. BreastAnalyser was validated on representative human breast cancer immunohistochemistry images detecting various antigens. According to the automatic processing, the DAB-brown area was almost perfectly recognized, with the average difference between true and computed DAB-brown percentages below 0.7 points for all sets. The detection of nuclei allowed proper cell-density relativization of the brown signal for comparison between patients. BreastAnalyser obtained a score of 85.5 on the system usability scale questionnaire, which means that the tool is perceived as excellent by the experts.
In the biomedical context, the connexin43 (Cx43) protein was found to be significantly downregulated in human core needle invasive breast cancer samples compared to normal breast, with a trend to decrease as subtype malignancy increased. Higher Cx43 protein levels were significantly associated with lower cancer recurrence risk in Oncotype DX-tested luminal B HER2- breast cancer tissues. BreastAnalyser and the annotated images are publicly available at https://citius.usc.es/transferencia/software/breastanalyser for research purposes.
Affiliation(s)
- Marina Rodríguez-Candela Mateos
- Institute of Biomedical Research of A Coruña (INIBIC), Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Maria Azmat
- CiTIUS - Centro Singular de Investigación en Tecnoloxías Intelixentes da USC, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
- Paz Santiago-Freijanes
- Institute of Biomedical Research of A Coruña (INIBIC), Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Department of Pathology, Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Eva María Galán-Moya
- Physiology and Cell Dynamics, Centro Regional de Investigaciones Biomédicas (CRIB) and Faculty of Nursing, Universidad de Castilla-La Mancha, Albacete, Spain
- Grupo Mixto de Oncología Traslacional UCLM-GAI Albacete, Universidad de Castilla-La Mancha, Servicio de Salud de Castilla-La Mancha, Ciudad Real, Spain
- Manuel Fernández-Delgado
- CiTIUS - Centro Singular de Investigación en Tecnoloxías Intelixentes da USC, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
- Rosa Barbella Aponte
- Anatomic Pathology Unit, Hospital General Universitario de Albacete, Albacete, Spain
- Joaquín Mosquera
- Institute of Biomedical Research of A Coruña (INIBIC), Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Breast Unit, Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Benigno Acea
- Institute of Biomedical Research of A Coruña (INIBIC), Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Breast Unit, Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Eva Cernadas
- CiTIUS - Centro Singular de Investigación en Tecnoloxías Intelixentes da USC, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
- María D Mayán
- Institute of Biomedical Research of A Coruña (INIBIC), Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- CELLCOM Research Group, Biomedical Research Center (CINBIO) and Institute of Biomedical Research of Ourense-Pontevedra-Vigo (IBI), University of Vigo, Edificio Olimpia Valencia, Campus Universitario Lagoas Marcosende, 36310, Pontevedra, Spain
16
Mirimoghaddam MM, Majidpour J, Pashaei F, Arabalibeik H, Samizadeh E, Roshan NM, Rashid TA. HER2GAN: Overcome the Scarcity of HER2 Breast Cancer Dataset Based on Transfer Learning and GAN Model. Clin Breast Cancer 2024; 24:53-64. [PMID: 37926662 DOI: 10.1016/j.clbc.2023.09.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2023] [Revised: 09/06/2023] [Accepted: 09/24/2023] [Indexed: 11/07/2023]
Abstract
INTRODUCTION Immunohistochemistry (IHC) is crucial for breast cancer diagnosis, classification, and individualized treatment. IHC is used to measure the expression levels of hormone receptors (estrogen and progesterone receptors), human epidermal growth factor receptor 2 (HER2), and other biomarkers, which inform treatment decisions and prognosis. Evaluating breast cancer scores on IHC slides is challenging because of the structural and morphological variability of the tissue and the scarcity of relevant data. Several recent studies have utilized machine learning and deep learning techniques to address these issues. MATERIALS AND METHODS This paper introduces a new approach based on supervised deep learning. A GAN-based model is proposed for generating high-quality HER2 images and for identifying and classifying HER2 levels. Using transfer learning methodologies, the original and generated images were evaluated. RESULTS AND CONCLUSION All models were trained and evaluated on publicly accessible and private datasets, respectively. The InceptionV3 and InceptionResNetV2 models achieved a high accuracy of 93% when the combined generated and original images were used for training and testing, demonstrating the exceptional quality of the details in the synthesized images.
Affiliation(s)
- Jafar Majidpour
- Department of Computer Science, University of Raparin, Rania, Iraq
- Fakhereh Pashaei
- Radiation Sciences Research Center (RSRC), Aja University of Medical Sciences, Tehran, Iran
- Hossein Arabalibeik
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
- Esmaeil Samizadeh
- Department of Pathology, School of Medicine and Imam Reza Hospital, AJA University of Medical Sciences, Tehran, Iran
- Tarik A Rashid
- Computer Science and Engineering Department, University of Kurdistan Hewlêr, Erbil, Iraq
17
Liu Y, Zhen T, Fu Y, Wang Y, He Y, Han A, Shi H. AI-Powered Segmentation of Invasive Carcinoma Regions in Breast Cancer Immunohistochemical Whole-Slide Images. Cancers (Basel) 2023; 16:167. [PMID: 38201594 PMCID: PMC10778369 DOI: 10.3390/cancers16010167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Revised: 12/24/2023] [Accepted: 12/26/2023] [Indexed: 01/12/2024] Open
Abstract
AIMS The automation of quantitative evaluation for breast immunohistochemistry (IHC) plays a crucial role in reducing the workload of pathologists and enhancing the objectivity of diagnoses. However, current methods face challenges in achieving fully automated immunohistochemistry quantification due to the complexity of segmenting the tumor area into distinct ductal carcinoma in situ (DCIS) and invasive carcinoma (IC) regions. Moreover, the quantitative analysis of immunohistochemistry requires a specific focus on invasive carcinoma regions. METHODS AND RESULTS In this study, we propose an innovative approach to automatically identify invasive carcinoma regions in breast cancer immunohistochemistry whole-slide images (WSIs). Our method leverages a neural network that combines multi-scale morphological features with boundary features, enabling precise segmentation of invasive carcinoma regions without the need for additional H&E and P63 staining slides. In addition, we introduced an advanced semi-supervised learning algorithm, allowing efficient training of the model using unlabeled data. To evaluate the effectiveness of our approach, we constructed a dataset consisting of 618 IHC-stained WSIs from 170 cases, including four types of staining (ER, PR, HER2, and Ki-67). Notably, the model demonstrated an impressive intersection over union (IoU) score exceeding 80% on the test set. Furthermore, to ascertain the practical utility of our model in IHC quantitative evaluation, we constructed a fully automated Ki-67 scoring system based on the model's predictions. Comparative experiments convincingly demonstrated that our system exhibited high consistency with the scores given by experienced pathologists. CONCLUSIONS Our developed model excels in accurately distinguishing between DCIS and invasive carcinoma regions in breast cancer immunohistochemistry WSIs. This method paves the way for a clinically available, fully automated immunohistochemistry quantitative scoring system.
Affiliation(s)
- Yiqing Liu
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Tiantian Zhen
- Department of Pathology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, China
- Yuqiu Fu
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Yizhi Wang
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Yonghong He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Anjia Han
- Department of Pathology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, China
- Huijuan Shi
- Department of Pathology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, China
18
Zheng T, Chen W, Li S, Quan H, Zou M, Zheng S, Zhao Y, Gao X, Cui X. Learning how to detect: A deep reinforcement learning method for whole-slide melanoma histopathology images. Comput Med Imaging Graph 2023; 108:102275. [PMID: 37567046 DOI: 10.1016/j.compmedimag.2023.102275] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2023] [Revised: 07/18/2023] [Accepted: 07/22/2023] [Indexed: 08/13/2023]
Abstract
Cutaneous melanoma represents one of the most life-threatening malignancies. Histopathological image analysis serves as a vital tool for early melanoma detection. Deep neural network (DNN) models are frequently employed to aid pathologists in enhancing the efficiency and accuracy of diagnoses. However, due to the paucity of well-annotated, high-resolution, whole-slide histopathology image (WSI) datasets, WSIs are typically fragmented into numerous patches during the model training and testing stages. This process disregards the inherent interconnectedness among patches, potentially impeding the models' performance. Additionally, the presence of excess, non-contributing patches extends processing times and introduces substantial computational burdens. To mitigate these issues, we draw inspiration from the clinical decision-making processes of dermatopathologists to propose an innovative, weakly supervised deep reinforcement learning framework, titled Fast medical decision-making in melanoma histopathology images (FastMDP-RL). This framework expedites model inference by reducing the number of irrelevant patches identified within WSIs. FastMDP-RL integrates two DNN-based agents: the search agent (SeAgent) and the decision agent (DeAgent). The SeAgent initiates actions, steered by the image features observed in the current viewing field at various magnifications. Simultaneously, the DeAgent provides labeling probabilities for each patch. We utilize multi-instance learning (MIL) to construct a teacher-guided model (MILTG), serving a dual purpose: rewarding the SeAgent and guiding the DeAgent. Our evaluations were conducted using two melanoma datasets: the publicly accessible TCIA-CM dataset and the proprietary MELSC dataset. Our experimental findings affirm FastMDP-RL's ability to expedite inference and accurately predict WSIs, even in the absence of pixel-level annotations. 
Moreover, our research investigates the WSI-based interactive environment, encompassing the design of agents, state and reward functions, and feature extractors suitable for melanoma tissue images. This investigation offers valuable insights and references for researchers engaged in related studies. The code is available at: https://github.com/titizheng/FastMDP-RL.
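The two-agent search/decision loop described above can be caricatured in a few lines. This is an illustrative sketch only, not the authors' FastMDP-RL implementation: `run_episode`, the greedy search policy, and the use of teacher-style probabilities as rewards are all simplifying assumptions.

```python
# Toy sketch of the search-agent / decision-agent idea: the "SeAgent"
# greedily chooses which unvisited patch to inspect next, the "DeAgent"
# per-patch probability stands in for both label and reward, and the
# slide score is estimated from the visited patches alone.
def run_episode(patch_probs, budget):
    """patch_probs: per-patch tumor probabilities; budget: max patches
    to visit. Returns (visited indices, mean probability over them)."""
    visited = []
    remaining = list(range(len(patch_probs)))
    for _ in range(min(budget, len(patch_probs))):
        nxt = max(remaining, key=lambda i: patch_probs[i])  # SeAgent step
        remaining.remove(nxt)
        visited.append(nxt)                                 # DeAgent labels it
    if not visited:
        return [], 0.0
    slide_score = sum(patch_probs[i] for i in visited) / len(visited)
    return visited, slide_score
```

With a budget of 2 on probabilities [0.1, 0.9, 0.5, 0.2], only the two most informative patches are visited, so the slide-level estimate comes from a fraction of the tissue — the speed-up the abstract describes.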
Affiliation(s)
- Tingting Zheng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Weixing Chen
- Shenzhen College of Advanced Technology, University of the Chinese Academy of Sciences, Beijing, China
- Shuqin Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Hao Quan
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Mingchen Zou
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Song Zheng
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xinghua Gao
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
19
Pham MD, Balezo G, Tilmant C, Petit S, Salmon I, Hadj SB, Fick RHJ. Interpretable HER2 scoring by evaluating clinical guidelines through a weakly supervised, constrained deep learning approach. Comput Med Imaging Graph 2023; 108:102261. [PMID: 37356357 DOI: 10.1016/j.compmedimag.2023.102261]
Abstract
The evaluation of Human Epidermal growth factor Receptor-2 (HER2) expression is an important prognostic biomarker for breast cancer treatment selection. However, HER2 scoring has notoriously high interobserver variability due to stain variations between centers and the need to visually estimate the staining intensity in specific percentages of tumor area. In this paper, focusing on the interpretability of HER2 scoring by a pathologist, we propose a semi-automatic, two-stage deep learning approach that directly evaluates the clinical HER2 guidelines defined by the American Society of Clinical Oncology/College of American Pathologists (ASCO/CAP). In the first stage, we segment the invasive tumor over the user-indicated Region of Interest (ROI). In the second stage, we classify the tumor tissue into four HER2 classes. For the classification stage, we use weakly supervised, constrained optimization to find a model that classifies cancerous patches such that the tumor surface percentage meets the guideline specification of each HER2 class. We end the second stage by freezing the model and refining its output logits in a supervised way against the slide labels of the training set. To ensure the quality of our dataset's labels, we conducted a multi-pathologist HER2 scoring consensus. For doubtful cases where no consensus was found, our model can help by interpreting its output HER2 class percentages. We achieve an F1-score of 0.78 on the test set while keeping our model interpretable for the pathologist, hopefully contributing to interpretable AI models in digital pathology.
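The guideline-driven slide-level decision this abstract describes can be made concrete with a small rule function. The class names and the uniform 10% cut-off below are simplifications of the ASCO/CAP membrane-staining rules (e.g. 3+ requires strong complete staining in more than 10% of tumor cells), not the paper's trained model.

```python
# Map patch-class fractions of tumor surface to a slide-level HER2 IHC
# score. Class labels ('strong_complete', 'weak_complete',
# 'faint_incomplete') and the flat 10% threshold are illustrative
# simplifications of the ASCO/CAP percentage rules.
def her2_slide_score(fractions):
    """fractions: dict mapping patch class -> fraction of tumor surface
    (0..1). Returns an IHC score string: '3+', '2+', '1+', or '0'."""
    if fractions.get("strong_complete", 0.0) > 0.10:
        return "3+"
    # Complete intense staining in <=10% of tumor is still equivocal (2+)
    if fractions.get("weak_complete", 0.0) > 0.10 or \
       fractions.get("strong_complete", 0.0) > 0.0:
        return "2+"
    if fractions.get("faint_incomplete", 0.0) > 0.10:
        return "1+"
    return "0"
```

The point of the paper's constrained optimization is that the patch classifier is trained so that its predicted fractions, fed through a rule of this shape, reproduce the slide-level labels.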
Affiliation(s)
- Manh-Dan Pham
- Tribun Health, 2 Rue du Capitaine Scott, 75015 Paris, France
- Saïma Ben Hadj
- Tribun Health, 2 Rue du Capitaine Scott, 75015 Paris, France
- Rutger H J Fick
- Tribun Health, 2 Rue du Capitaine Scott, 75015 Paris, France
20
Bashir RMS, Shephard AJ, Mahmood H, Azarmehr N, Raza SEA, Khurram SA, Rajpoot NM. A digital score of peri-epithelial lymphocytic activity predicts malignant transformation in oral epithelial dysplasia. J Pathol 2023; 260:431-442. [PMID: 37294162 PMCID: PMC10952946 DOI: 10.1002/path.6094]
Abstract
Oral squamous cell carcinoma (OSCC) is amongst the most common cancers, with more than 377,000 new cases worldwide each year. OSCC prognosis remains poor, related to cancer presentation at a late stage, indicating the need for early detection to improve patient prognosis. OSCC is often preceded by a premalignant state known as oral epithelial dysplasia (OED), which is diagnosed and graded using subjective histological criteria, leading to variability and prognostic unreliability. In this work, we propose a deep learning approach for the development of prognostic models for malignant transformation and their association with clinical outcomes in histology whole slide images (WSIs) of OED tissue sections. We train a weakly supervised method on OED cases (n = 137) with malignant transformation (n = 50) and a mean malignant transformation time of 6.51 years (±5.35 SD). Stratified five-fold cross-validation achieved an average area under the receiver operating characteristic curve (AUROC) of 0.78 for predicting malignant transformation in OED. Hotspot analysis revealed various features of nuclei in the epithelium and peri-epithelial tissue to be significant prognostic factors for malignant transformation, including the count of peri-epithelial lymphocytes (PELs) (p < 0.05), epithelial layer nuclei count (NC) (p < 0.05), and basal layer NC (p < 0.05). Progression-free survival (PFS) analyses using the epithelial layer NC (p < 0.05, C-index = 0.73), basal layer NC (p < 0.05, C-index = 0.70), and PELs count (p < 0.05, C-index = 0.73) all showed association of these features with a high risk of malignant transformation in our univariate analysis. Our work shows the application of deep learning for the prognostication and prediction of PFS of OED for the first time and offers potential to aid patient management. Further evaluation and testing on multi-centre data are required for validation and translation to clinical practice.
Affiliation(s)
- Adam J Shephard
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Hanya Mahmood
- Academic Unit of Oral & Maxillofacial Surgery, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Neda Azarmehr
- Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Syed Ali Khurram
- Academic Unit of Oral & Maxillofacial Surgery, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Nasir M Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
21
Kutluer N, Solmaz OA, Yamacli V, Eristi B, Eristi H. Classification of breast tumors by using a novel approach based on deep learning methods and feature selection. Breast Cancer Res Treat 2023:10.1007/s10549-023-06970-8. [PMID: 37210703 DOI: 10.1007/s10549-023-06970-8]
Abstract
PURPOSE Cancer is one of the most insidious diseases, and the most important factor in overcoming it is early diagnosis and detection. Histopathological images are used to determine whether a tissue is cancerous and, if so, the type of cancer. Through examination of tissue images by expert personnel, the cancer type and stage of the tissue can be determined. However, this process costs both time and energy and is subject to personnel-related inspection errors. With the increased use of computer-based decision methods over the last decades, it has become more efficient and accurate to detect and classify cancerous tissues with computer-aided systems. METHODS While classical image processing methods were used for cancer-type detection in early studies, advanced deep learning methods based on recurrent neural networks and convolutional neural networks have been used more recently. In this paper, popular deep learning methods such as ResNet-50, GoogLeNet, InceptionV3, and MobileNetV2 are employed together with a novel feature selection method in order to classify cancer type on a local binary-class dataset and the multi-class BACH dataset. RESULTS The classification performance of the proposed feature-selection-augmented deep learning methods is 98.89% on the local binary-class dataset and 92.17% on the BACH dataset, better than most results reported in the literature. CONCLUSION The findings on both datasets indicate that the proposed methods can detect and classify the cancerous type of a tissue with high accuracy and efficiency.
Affiliation(s)
- Nizamettin Kutluer
- Private Doğu Anadolu Hospital, Clinic of General Surgery, Elazig, Turkey
- Ozgen Arslan Solmaz
- Department of Pathology, Elazığ Fethi Sekin City Hospital, University of Health Sciences, Elazig, Turkey
- Volkan Yamacli
- Computer Engineering Department, Engineering Faculty, Mersin University, Mersin, Turkey
- Belkis Eristi
- Electrical and Energy Department, Vocational School of Technical Sciences, Mersin University, Mersin, Turkey
- Huseyin Eristi
- Electrical and Electronics Engineering Department, Engineering Faculty, Mersin University, Mersin, Turkey
22
Che Y, Ren F, Zhang X, Cui L, Wu H, Zhao Z. Immunohistochemical HER2 Recognition and Analysis of Breast Cancer Based on Deep Learning. Diagnostics (Basel) 2023; 13:263. [PMID: 36673073 PMCID: PMC9858188 DOI: 10.3390/diagnostics13020263]
Abstract
Breast cancer is one of the common malignant tumors in women and seriously endangers women's life and health. The human epidermal growth factor receptor 2 (HER2) protein is responsible for the division and growth of healthy breast cells. The overexpression of the HER2 protein is generally evaluated by immunohistochemistry (IHC). The IHC evaluation criteria mainly include three indices: staining intensity, circumferential membrane staining pattern, and proportion of positive cells. Manually scoring HER2 IHC images is error-prone, variable, and time-consuming work. To solve these problems, this study proposes an automated method for scoring whole-slide images (WSIs) of HER2 slides based on a deep learning network. A total of 95 HER2 pathological slides from September 2021 to December 2021 were included. The average patch-level precision and F1 score were 95.77% and 83.09%, respectively. The overall accuracy of automated scoring for slide-level classification was 97.9%. The proposed method showed excellent specificity for all IHC 0 and 3+ slides and most 1+ and 2+ slides. The integrated evaluation method also performs better than using the staining result alone.
Affiliation(s)
- Yuxuan Che
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Jinfeng Laboratory, Chongqing 401329, China
- Fei Ren
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Xueyuan Zhang
- Beijing Zhijian Life Technology Co., Ltd., Beijing 100036, China
- Li Cui
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Huanwen Wu
- Department of Pathology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Ze Zhao
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
23
Using Whole Slide Gray Value Map to Predict HER2 Expression and FISH Status in Breast Cancer. Cancers (Basel) 2022; 14:cancers14246233. [PMID: 36551720 PMCID: PMC9777488 DOI: 10.3390/cancers14246233]
Abstract
Accurate detection of HER2 expression through immunohistochemistry (IHC) is of great clinical significance in the treatment of breast cancer. However, manual interpretation of HER2 is challenging, due to the interobserver variability among pathologists. We sought to explore a deep learning method to predict HER2 expression level and gene status based on a Whole Slide Image (WSI) of the HER2 IHC section. When applied to 228 invasive breast carcinoma of no special type (IBC-NST) DAB-stained slides, our GrayMap+ convolutional neural network (CNN) model accurately classified HER2 IHC level with mean accuracy 0.952 ± 0.029 and predicted HER2 FISH status with mean accuracy 0.921 ± 0.029. Our result also demonstrated strong consistency in HER2 expression score between our system and experienced pathologists (intraclass correlation coefficient (ICC) = 0.903, Cohen's κ = 0.875). The discordant cases were found to be largely caused by high intra-tumor staining heterogeneity in the HER2 IHC group and low copy number in the HER2 FISH group.
24
Shen Y, Shen D, Ke J. Identify Representative Samples by Conditional Random Field of Cancer Histology Images. IEEE Trans Med Imaging 2022; 41:3835-3848. [PMID: 35951579 DOI: 10.1109/tmi.2022.3198526]
Abstract
Pathology analysis is crucial to precise cancer diagnoses and to the succeeding treatment plan as well. To detect abnormality in histopathology images with prevailing patch-based convolutional neural networks (CNNs), contextual information often serves as a powerful cue. However, as whole-slide images (WSIs) are characterized by intense morphological heterogeneity and extensive tissue scale, a straightforward visual span to a larger context may not well capture the information closely associated with the focal patch. In this paper, we propose a novel pixel-offset-based patch-localization method to identify highly representative tissues, with a CNN backbone. A Pathology Deformable Conditional Random Field (PDCRF) is proposed to learn the offsets and weights of neighboring contexts in a spatially adaptive manner, in order to search for highly representative patches. A CNN trained on the localized patches is then capable of consistently reaching superior classification outcomes for histology images. Overall, the proposed method achieves state-of-the-art performance, improving test classification accuracy over the baseline by 1.15-2.60%, 0.78-1.78%, and 1.47-2.18% on the TCGA public datasets TCGA-STAD, TCGA-COAD, and TCGA-READ, respectively. It also achieves 88.95% test accuracy and 0.920 test AUC on Camelyon16. To show the effectiveness of the proposed framework on downstream tasks, we take a further step by incorporating an active learning model, which noticeably reduces the number of manual annotations PDCRF needs to reach parity with a patch-based histology classifier.
25
Alzoubi I, Bao G, Zheng Y, Wang X, Graeber MB. Artificial intelligence techniques for neuropathological diagnostics and research. Neuropathology 2022. [PMID: 36443935 DOI: 10.1111/neup.12880]
Abstract
Artificial intelligence (AI) research began in theoretical neurophysiology, and the resulting classical paper on the McCulloch-Pitts mathematical neuron was written in a psychiatry department almost 80 years ago. However, the application of AI in digital neuropathology is still in its infancy. Rapid progress is now being made, which prompted this article. Human brain diseases represent distinct system states that fall outside the normal spectrum. Many differ not only in functional but also in structural terms, and the morphology of abnormal nervous tissue forms the traditional basis of neuropathological disease classifications. However, only a few countries have the medical specialty of neuropathology, and, given the sheer number of newly developed histological tools that can be applied to the study of brain diseases, a tremendous shortage of qualified hands and eyes at the microscope is obvious. Similarly, in neuroanatomy, human observers no longer have the capacity to process the vast amounts of connectomics data. Therefore, it is reasonable to assume that advances in AI technology and, especially, whole-slide image (WSI) analysis will greatly aid neuropathological practice. In this paper, we discuss machine learning (ML) techniques that are important for understanding WSI analysis, such as traditional ML and deep learning, introduce a recently developed neuropathological AI termed PathoFusion, and present thoughts on some of the challenges that must be overcome before the full potential of AI in digital neuropathology can be realized.
Affiliation(s)
- Islam Alzoubi
- School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
- Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
- Yuqi Zheng
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
- Manuel B. Graeber
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
26
Lian C, Liu M, Wang L, Shen D. Multi-Task Weakly-Supervised Attention Network for Dementia Status Estimation With Structural MRI. IEEE Trans Neural Netw Learn Syst 2022; 33:4056-4068. [PMID: 33656999 PMCID: PMC8413399 DOI: 10.1109/tnnls.2021.3055772]
Abstract
Accurate prediction of clinical scores (of neuropsychological tests) based on noninvasive structural magnetic resonance imaging (MRI) helps understand the pathological stage of dementia (e.g., Alzheimer's disease (AD)) and forecast its progression. Existing machine/deep learning approaches typically preselect dementia-sensitive brain locations for MRI feature extraction and model construction, potentially leading to undesired heterogeneity between different stages and degraded prediction performance. Besides, these methods usually rely on prior anatomical knowledge (e.g., brain atlas) and time-consuming nonlinear registration for the preselection of brain locations, thereby ignoring individual-specific structural changes during dementia progression because all subjects share the same preselected brain regions. In this article, we propose a multi-task weakly-supervised attention network (MWAN) for the joint regression of multiple clinical scores from baseline MRI scans. Three sequential components are included in MWAN: 1) a backbone fully convolutional network for extracting MRI features; 2) a weakly supervised dementia attention block for automatically identifying subject-specific discriminative brain locations; and 3) an attention-aware multitask regression block for jointly predicting multiple clinical scores. The proposed MWAN is an end-to-end and fully trainable deep learning model in which dementia-aware holistic feature learning and multitask regression model construction are integrated into a unified framework. Our MWAN method was evaluated on two public AD data sets for estimating clinical scores of mini-mental state examination (MMSE), clinical dementia rating sum of boxes (CDRSB), and AD assessment scale cognitive subscale (ADAS-Cog). Quantitative experimental results demonstrate that our method produces superior regression performance compared with state-of-the-art methods. 
Importantly, qualitative results indicate that the dementia-sensitive brain locations automatically identified by our MWAN method well retain individual specificities and are biologically meaningful.
27
Zheng T, Zheng S, Wang K, Quan H, Bai Q, Li S, Qi R, Zhao Y, Cui X, Gao X. Automatic CD30 scoring method for whole slide images of primary cutaneous CD30 + lymphoproliferative diseases. J Clin Pathol 2022; 76:jclinpath-2022-208344. [PMID: 35863885 DOI: 10.1136/jcp-2022-208344]
Abstract
AIMS Deep-learning methods for scoring biomarkers are an active research topic. However, the superior performance of many studies relies on large datasets collected from clinical samples. In addition, there are fewer studies on immunohistochemical marker assessment for dermatological diseases. Accordingly, we developed a convolutional-neural-network-based method for scoring CD30 that requires only a small number of primary cutaneous CD30+ lymphoproliferative disorder samples, and used this method to evaluate other biomarkers. METHODS A multipatch spatial attention mechanism and a conditional random field algorithm were used to fully fuse tumour tissue characteristics on immunohistochemical slides and alleviate the feature deficit caused by few samples. We trained and tested on 28 CD30 immunohistochemical whole slide images (WSIs), evaluated them with performance indices, and compared the results with the diagnoses of senior dermatologists. Finally, the model's performance was further demonstrated on the publicly available Yale HER2 cohort. RESULTS Compared with the diagnoses by senior dermatologists, this method can better locate the tumour area and reduce the misdiagnosis rate. The prediction of CD3 and Ki-67 validated the model's ability to identify other biomarkers. CONCLUSIONS In this study, using only a few immunohistochemical WSIs, our model can accurately identify CD30, CD3 and Ki-67 markers. In addition, the model could be applied to additional tumour identification tasks to aid pathologists in diagnosis and benefit clinical evaluation.
Affiliation(s)
- Tingting Zheng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Song Zheng
- Department of Dermatology, The First Hospital of China Medical University, Shenyang, Liaoning, China
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Heping District, Liaoning Province, China
- NHC Key Laboratory of Immunodermatology, Heping District, Liaoning Province, China
- Ke Wang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Hao Quan
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Qun Bai
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Shuqin Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Ruiqun Qi
- Department of Dermatology, The First Hospital of China Medical University, Shenyang, Liaoning, China
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Heping District, Liaoning Province, China
- NHC Key Laboratory of Immunodermatology, Heping District, Liaoning Province, China
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Heping District, Liaoning Province, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Xinghua Gao
- Department of Dermatology, The First Hospital of China Medical University, Shenyang, Liaoning, China
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Heping District, Liaoning Province, China
- NHC Key Laboratory of Immunodermatology, Heping District, Liaoning Province, China
28
Shen Y, Ke J. Sampling Based Tumor Recognition in Whole-Slide Histology Image With Deep Learning Approaches. IEEE/ACM Trans Comput Biol Bioinform 2022; 19:2431-2441. [PMID: 33630739 DOI: 10.1109/tcbb.2021.3062230]
Abstract
Histopathological identification of tumor tissue is one of the routine pathological diagnoses for pathologists. Recently, computational pathology has been successfully served by a variety of deep learning-based applications. Nevertheless, efficient, spatially correlated processing of individual patches has always attracted attention in whole-slide image (WSI) analysis. In this paper, we propose a high-throughput system to precisely detect tumor regions in colorectal cancer histology slides. We train a deep convolutional neural network (CNN) model and design a Monte Carlo (MC) adaptive sampling method to estimate the most representative patches in a WSI. Two conditional random field (CRF) models are designed, namely the correction CRF and the prediction CRF, which are integrated to model spatial dependencies between patches. We use three datasets of colorectal cancer from The Cancer Genome Atlas (TCGA) to evaluate the performance of the system. The overall diagnostic time can be reduced by 56.7 to 71.7 percent on slides with varying tumor distributions, together with an increase in classification accuracy.
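A minimal sketch of the Monte Carlo sampling idea (hypothetical, not the paper's estimator): patches are drawn with probability proportional to a CNN confidence map, so the most representative regions are inspected first and the slide label can be estimated from a sample instead of from every patch.

```python
import random

# Weighted sampling of patch indices without replacement: repeatedly draw
# one index with probability proportional to its confidence, then remove
# it from the pool. `confidence` stands in for a CNN's per-patch scores.
def mc_sample_patches(confidence, n_samples, seed=0):
    """confidence: per-patch scores (non-negative); returns a list of
    sampled patch indices, most-representative regions favored."""
    rng = random.Random(seed)
    pool = list(range(len(confidence)))
    weights = list(confidence)
    chosen = []
    for _ in range(min(n_samples, len(pool))):
        idx = rng.choices(range(len(pool)), weights=weights, k=1)[0]
        chosen.append(pool.pop(idx))
        weights.pop(idx)
    return chosen
```

Zero-confidence patches are never drawn, which is the source of the claimed time savings: the classifier only ever touches a representative subset.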
29
Qaiser T, Lee CY, Vandenberghe M, Yeh J, Gavrielides MA, Hipp J, Scott M, Reischl J. Usability of deep learning and H&E images predict disease outcome-emerging tool to optimize clinical trials. NPJ Precis Oncol 2022; 6:37. [PMID: 35705792 PMCID: PMC9200764 DOI: 10.1038/s41698-022-00275-7]
Abstract
Understanding factors that impact prognosis for cancer patients has high clinical relevance for treatment decisions and monitoring of the disease outcome. Advances in artificial intelligence (AI) and digital pathology offer an exciting opportunity to capitalize on the use of whole slide images (WSIs) of hematoxylin and eosin (H&E) stained tumor tissue for objective prognosis and prediction of response to targeted therapies. AI models often require hand-delineated annotations for effective training, which may not be readily available for larger data sets. In this study, we investigated whether AI models can be trained without region-level annotations and solely on patient-level survival data. We present a weakly supervised survival convolutional neural network (WSS-CNN) approach equipped with a visual attention mechanism for predicting overall survival. The inclusion of visual attention provides insights into regions of the tumor microenvironment with pathological interpretation, which may improve our understanding of the disease pathomechanism. We performed this analysis on two independent, multi-center patient data sets of lung carcinoma (publicly available data) and bladder urothelial carcinoma. We perform univariable and multivariable analyses and show that WSS-CNN features are prognostic of overall survival in both tumor indications. The presented results highlight the significance of computational pathology algorithms for predicting prognosis using H&E stained images alone and underpin the use of computational methods to improve the efficiency of clinical trial studies.
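Attention-based pooling of patch features, as used by weakly supervised WSI models of this kind, can be sketched in a few lines. The functional form below follows the widely used attention-MIL formulation; the dimensions and weights are illustrative assumptions, not the WSS-CNN implementation.

```python
import numpy as np

# Attention pooling over patch embeddings: score each patch with
# w . tanh(V^T h_i), softmax the scores into attention weights, and
# return the attention-weighted slide-level embedding. The weights
# indicate which tissue regions drove the prediction.
def attention_pool(H, V, w):
    """H: (n_patches, d) patch embeddings; V: (d, k); w: (k,).
    Returns (slide embedding of shape (d,), attention weights (n_patches,))."""
    scores = np.tanh(H @ V) @ w          # one scalar per patch
    scores = scores - scores.max()       # numerical stability
    a = np.exp(scores) / np.exp(scores).sum()
    z = a @ H                            # weighted combination of patches
    return z, a
```

Because the slide prediction is a weighted sum, the attention weights `a` double as the heatmap the abstract describes: high-weight patches are the ones the survival model attends to.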
Affiliation(s)
- Talha Qaiser
- Precision Medicine and Biosamples, Oncology R&D, AstraZeneca, Cambridge, UK
- Joe Yeh
- AetherAI, Taipei City, Taiwan
- Jason Hipp
- Early Oncology, Oncology R&D, AstraZeneca, Cambridge, UK
- Marietta Scott
- Precision Medicine and Biosamples, Oncology R&D, AstraZeneca, Cambridge, UK
- Joachim Reischl
- Precision Medicine and Biosamples, Oncology R&D, AstraZeneca, Cambridge, UK
30
Han Z, Lan J, Wang T, Hu Z, Huang Y, Deng Y, Zhang H, Wang J, Chen M, Jiang H, Lee RG, Gao Q, Du M, Tong T, Chen G. A Deep Learning Quantification Algorithm for HER2 Scoring of Gastric Cancer. Front Neurosci 2022; 16:877229. [PMID: 35706692 PMCID: PMC9190202 DOI: 10.3389/fnins.2022.877229]
Abstract
Gastric cancer is the third most common cause of cancer-related death in the world. Human epidermal growth factor receptor 2 (HER2) positivity defines an important subtype of gastric cancer, which can provide significant diagnostic information for gastric cancer pathologists. However, pathologists usually use a semi-quantitative assessment method to assign HER2 scores for gastric cancer by repeatedly comparing hematoxylin and eosin (H&E) whole slide images (WSIs) with their HER2 immunohistochemical WSIs one by one under the microscope. It is a repetitive, tedious, and highly subjective process. Additionally, WSIs have billions of pixels per image, which poses computational challenges to Computer-Aided Diagnosis (CAD) systems. This study proposes a deep learning algorithm for HER2 quantification in gastric cancer. Different from other studies that use convolutional neural networks for extracting feature maps or pre-processing on WSIs, we propose a novel automatic HER2 scoring framework. To accelerate computation, we use a re-parameterization scheme to separate the training model from the deployment model, which significantly speeds up the inference process. To the best of our knowledge, this is the first study to provide a deep learning quantification algorithm for HER2 scoring of gastric cancer to assist the pathologist's diagnosis. Experimental results demonstrate the effectiveness of our proposed method, with an accuracy of 0.94 for HER2 score prediction.
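The re-parameterization idea mentioned in this abstract is commonly realized RepVGG-style: parallel 3x3, 1x1, and identity branches used during training are algebraically folded into a single 3x3 convolution for deployment. The sketch below (ignoring batch normalization) is an illustrative assumption, since the paper's exact scheme is not given here.

```python
import numpy as np

# Fold a 1x1 branch and an identity branch into a 3x3 kernel: a 1x1 conv
# is a 3x3 conv whose only nonzero tap is the center, and the identity is
# a 3x3 conv with 1.0 at the center of each channel's own kernel. After
# fusion, inference runs a single convolution instead of three branches.
def fuse_branches(k3, k1, channels):
    """k3: (C, C, 3, 3) kernels; k1: (C, C, 1, 1) kernels.
    Returns one (C, C, 3, 3) kernel equivalent to k3 + pad(k1) + identity."""
    fused = k3.copy()
    fused[:, :, 1, 1] += k1[:, :, 0, 0]   # 1x1 branch -> center tap
    for c in range(channels):             # identity branch -> own center tap
        fused[c, c, 1, 1] += 1.0
    return fused
```

Because convolution is linear in its kernel, the fused network is numerically equivalent to the three-branch training network while doing a single pass at deployment, which is where the reported inference speed-up comes from.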
Affiliation(s)
- Zixin Han: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Junlin Lan: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Tao Wang: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Ziwei Hu: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Yuxiu Huang: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Yanglin Deng: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Hejun Zhang: Department of Pathology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, China
- Jianchao Wang: Department of Pathology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, China
- Musheng Chen: Department of Pathology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, China
- Haiyan Jiang: Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China; College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China
- Ren-Guey Lee: Department of Electronic Engineering, National Taipei University of Technology, Taipei, Taiwan
- Qinquan Gao: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China; Imperial Vision Technology, Fuzhou, China
- Ming Du: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Tong Tong: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China; Imperial Vision Technology, Fuzhou, China
- Gang Chen: College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China; Fujian Provincial Key Laboratory of Translational Cancer Medicine, Fuzhou, China
31
Tewary S, Mukhopadhyay S. AutoIHCNet: CNN architecture and decision fusion for automated HER2 scoring. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108572] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
32
Garberis I, Andre F, Lacroix-Triki M. L’intelligence artificielle pourrait-elle intervenir dans l’aide au diagnostic des cancers du sein ? – L’exemple de HER2 [Could artificial intelligence help in the diagnosis of breast cancers? The example of HER2]. Bull Cancer 2022; 108:11S35-11S45. [PMID: 34969514 DOI: 10.1016/s0007-4551(21)00635-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
HER2 is an important prognostic and predictive biomarker in breast cancer. Its detection makes it possible to define which patients will benefit from a targeted treatment. While assessment of HER2 status by immunohistochemistry into positive vs negative categories is well implemented and reproducible, the introduction of a new "HER2-low" category could raise concerns about its scoring and reproducibility. We herein describe the current HER2 testing methods and the application of innovative machine learning techniques to improve these determinations, as well as the main challenges and opportunities related to the implementation of digital pathology in the up-and-coming AI era.
Affiliation(s)
- Ingrid Garberis: Inserm UMR 981, Gustave Roussy Cancer Campus, Villejuif, France; Université Paris-Saclay, 94270 Le Kremlin-Bicêtre, France
- Fabrice Andre: Inserm UMR 981, Gustave Roussy Cancer Campus, Villejuif, France; Université Paris-Saclay, 94270 Le Kremlin-Bicêtre, France; Département d'oncologie médicale, Gustave-Roussy, Villejuif, France
- Magali Lacroix-Triki: Inserm UMR 981, Gustave Roussy Cancer Campus, Villejuif, France; Département d'anatomie et cytologie pathologiques, Gustave-Roussy, Villejuif, France
33
McGenity C, Wright A, Treanor D. AIM in Surgical Pathology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
34
Rakha EA, Vougas K, Tan PH. Digital Technology in Diagnostic Breast Pathology and Immunohistochemistry. Pathobiology 2021; 89:334-342. [DOI: 10.1159/000521149] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Accepted: 11/11/2021] [Indexed: 11/19/2022] Open
Abstract
Digital technology has been used in the field of diagnostic breast pathology and immunohistochemistry (IHC) for decades. Examples include automated tissue processing and staining, digital data processing, storage and management, voice recognition systems, and digital technology-based production of antibodies and other IHC reagents. However, the recent application of whole slide imaging technology and artificial intelligence (AI)-based tools has attracted a lot of attention. The use of AI tools in breast pathology is discussed only briefly here, as it is covered in other reviews. We present the main applications of digital technology in IHC: automation of IHC staining, the use of image analysis systems and computer vision technology to interpret IHC staining, and the use of AI-based tools to predict marker expression from haematoxylin and eosin-stained digitized images.
35
Liu J, Zheng Q, Mu X, Zuo Y, Xu B, Jin Y, Wang Y, Tian H, Yang Y, Xue Q, Huang Z, Chen L, Gu B, Hou X, Shen L, Guo Y, Li Y. Automated tumor proportion score analysis for PD-L1 (22C3) expression in lung squamous cell carcinoma. Sci Rep 2021; 11:15907. [PMID: 34354151 PMCID: PMC8342621 DOI: 10.1038/s41598-021-95372-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 07/21/2021] [Indexed: 01/10/2023] Open
Abstract
Programmed cell death ligand-1 (PD-L1) expression assessed by immunohistochemistry (IHC) assays is a predictive marker of response to anti-PD-1/PD-L1 therapy. With the growing use of anti-PD-1/PD-L1 inhibitor drugs, quantitative assessment of PD-L1 expression has become a new task for pathologists. Manually counting PD-L1-positive stained tumor cells is a subjective and time-consuming process. In this paper, we developed a new computer-aided Automated Tumor Proportion Scoring System (ATPSS) to determine the comparability of image analysis with pathologist scores. A three-stage process was performed using both image processing and deep learning techniques to mimic the actual diagnostic workflow of pathologists. We conducted a multi-reader, multi-case study to evaluate the agreement between pathologists and ATPSS. Fifty-one surgically resected lung squamous cell carcinomas were prepared and stained using the Dako PD-L1 (22C3) assay, and six pathologists with different experience levels were involved in this study. The TPS predicted by the proposed model had a high and statistically significant correlation with sub-specialty pathologists' scores, with a Mean Absolute Error (MAE) of 8.65 (95% confidence interval (CI): 6.42-10.90) and a Pearson Correlation Coefficient (PCC) of 0.9436 ([Formula: see text]), and the performance on PD-L1-positive cases achieved by our method surpassed that of non-subspecialty and trainee pathologists. These experimental results indicate that the proposed automated system can be a powerful tool to improve pathologists' PD-L1 TPS assessment.
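For orientation, the tumor proportion score itself and the MAE agreement metric reported in the abstract can be written out in a few lines. This is a generic sketch with made-up cell counts, not the ATPSS pipeline:

```python
def tumor_proportion_score(n_positive_tumor_cells, n_tumor_cells):
    """TPS = stained viable tumor cells / all viable tumor cells, as a percentage."""
    if n_tumor_cells == 0:
        raise ValueError("no tumor cells detected")
    return 100.0 * n_positive_tumor_cells / n_tumor_cells

def mean_absolute_error(predicted, reference):
    """MAE between automated and pathologist TPS values over a case set."""
    assert len(predicted) == len(reference)
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

print(tumor_proportion_score(450, 1000))                 # 45.0
print(mean_absolute_error([45.0, 10.0], [50.0, 12.0]))   # 3.5
```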
Affiliation(s)
- Jingxin Liu: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Histo Pathology Diagnostic Center, Shanghai, China
- Qiang Zheng: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Xiao Mu: Histo Pathology Diagnostic Center, Shanghai, China
- Yanfei Zuo: Histo Pathology Diagnostic Center, Shanghai, China
- Bo Xu: Histo Pathology Diagnostic Center, Shanghai, China
- Yan Jin: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Yue Wang: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Hua Tian: Department of Pathology, Yangzhou Jiangdu People's Hospital, Yangzhou, China
- Yongguo Yang: Department of Pathology, Yangzhou Jiangdu People's Hospital, Yangzhou, China
- Qianqian Xue: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Ziling Huang: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Lijun Chen: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Bin Gu: Histo Pathology Diagnostic Center, Shanghai, China
- Xianxu Hou: Computer Vision Institute, School of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
- Linlin Shen: Computer Vision Institute, School of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China; AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen, China
- Yan Guo: Histo Pathology Diagnostic Center, Shanghai, China
- Yuan Li: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
36
Zhou SK, Le HN, Luu K, V Nguyen H, Ayache N. Deep reinforcement learning in medical imaging: A literature review. Med Image Anal 2021; 73:102193. [PMID: 34371440 DOI: 10.1016/j.media.2021.102193] [Citation(s) in RCA: 57] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 05/22/2021] [Accepted: 07/20/2021] [Indexed: 12/29/2022]
Abstract
Deep reinforcement learning (DRL) augments the reinforcement learning framework, which learns a sequence of actions that maximizes the expected reward, with the representative power of deep neural networks. Recent works have demonstrated the great potential of DRL in medicine and healthcare. This paper presents a literature review of DRL in medical imaging. We start with a comprehensive tutorial of DRL, including the latest model-free and model-based algorithms. We then cover existing DRL applications for medical imaging, which are roughly divided into three main categories: (i) parametric medical image analysis tasks including landmark detection, object/lesion detection, registration, and view plane localization; (ii) solving optimization tasks including hyperparameter tuning, selecting augmentation strategies, and neural architecture search; and (iii) miscellaneous applications including surgical gesture segmentation, personalized mobile health intervention, and computational model personalization. The paper concludes with discussions of future perspectives.
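As a reminder of the framework this review builds on, the "sequence of actions that maximizes the expected reward" is learned, in the simplest tabular case, by repeatedly nudging a Q-value toward the reward plus the discounted best next value. The state and action names below are purely illustrative:

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Toy example: taking "right" in s0 leads to s1, whose best action is worth 1.0.
Q = {"s0": {"left": 0.0, "right": 0.0}, "s1": {"left": 1.0, "right": 0.0}}
q_update(Q, "s0", "right", 0.0, "s1")
print(round(Q["s0"]["right"], 4))  # 0.09
```

Deep RL replaces the table `Q` with a neural network, which is what enables the medical-imaging applications surveyed here.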
Affiliation(s)
- S Kevin Zhou: Medical Imaging, Robotics, and Analytic Computing Laboratory and Engineering (MIRACLE) Center, School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China; Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, China
- Khoa Luu: CSCE Department, University of Arkansas, US
37
Yue M, Zhang J, Wang X, Yan K, Cai L, Tian K, Niu S, Han X, Yu Y, Huang J, Han D, Yao J, Liu Y. Can AI-assisted microscope facilitate breast HER2 interpretation? A multi-institutional ring study. Virchows Arch 2021; 479:443-449. [PMID: 34279719 DOI: 10.1007/s00428-021-03154-x] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 06/20/2021] [Accepted: 07/03/2021] [Indexed: 11/26/2022]
Abstract
The levels of human epidermal growth factor receptor-2 (HER2) protein and gene expression in breast cancer are essential factors in judging the prognosis of breast cancer patients. Several investigations have shown high intraobserver and interobserver variability in the evaluation of HER2 staining by visual examination. In this study, we propose an artificial intelligence (AI)-assisted microscope to improve the accuracy and reliability of HER2 assessment. Our AI-assisted microscope consists of a conventional microscope equipped with a cell-level classification-based HER2 scoring algorithm and an augmented reality module that lets pathologists obtain AI results in real time. We organized a three-round ring study of 50 cases of infiltrating duct carcinoma, not otherwise specified (NOS), without neoadjuvant treatment, and recruited 33 pathologists from 6 hospitals. In the first ring study (RS1), the pathologists read 50 HER2 whole-slide images (WSIs) through an online system. After a 2-week washout period, they read the HER2 slides using a conventional microscope in RS2. After another 2-week washout period, the pathologists used our AI-assisted microscope for assisted interpretation in RS3. The consistency and accuracy of HER2 assessment with the AI-assisted microscope were significantly improved (p < 0.001) over those obtained using a conventional microscope or online WSI review. Specifically, our AI-assisted microscope improved the precision of immunohistochemistry (IHC) 3+ and 2+ scoring while maintaining the recall of fluorescence in situ hybridization (FISH)-positive results among IHC 2+ cases. Also, the average acceptance rate of AI results across all pathologists was 0.90, demonstrating that the pathologists agreed with most AI scoring results.
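Inter-reader consistency in a ring study like this is commonly summarized with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is a generic two-rater implementation with illustrative HER2 scores, not necessarily the statistic used in the paper:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters on the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Chance agreement: probability both raters pick the same label independently.
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["3+", "2+", "2+", "1+", "0", "3+"]   # rater A's HER2 scores (illustrative)
b = ["3+", "2+", "1+", "1+", "0", "3+"]   # rater B's HER2 scores
print(round(cohens_kappa(a, b), 3))        # 0.778
```

Kappa of 1 means perfect agreement; 0 means agreement no better than chance, which is why it is preferred over raw percent agreement for score distributions like these.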
MESH Headings
- Artificial Intelligence
- Automation, Laboratory
- Biomarkers, Tumor/analysis
- Biomarkers, Tumor/genetics
- Breast Neoplasms/chemistry
- Breast Neoplasms/genetics
- Breast Neoplasms/pathology
- Carcinoma, Ductal, Breast/chemistry
- Carcinoma, Ductal, Breast/genetics
- Carcinoma, Ductal, Breast/pathology
- China
- Female
- Humans
- Image Interpretation, Computer-Assisted
- Immunohistochemistry
- In Situ Hybridization, Fluorescence
- Microscopy/instrumentation
- Observer Variation
- Predictive Value of Tests
- Receptor, ErbB-2/analysis
- Receptor, ErbB-2/genetics
- Reproducibility of Results
- Retrospective Studies
Affiliation(s)
- Meng Yue: Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, 050011, Hebei, China
- Jun Zhang: Tencent AI Lab, Nanshan District, Tencent Binhai Building, No. 33, Haitian Second Road, Shenzhen, 518054, Guangdong, China
- Xinran Wang: Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, 050011, Hebei, China
- Kezhou Yan: Tencent AI Lab, Nanshan District, Tencent Binhai Building, No. 33, Haitian Second Road, Shenzhen, 518054, Guangdong, China
- Lijing Cai: Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, 050011, Hebei, China
- Kuan Tian: Tencent AI Lab, Nanshan District, Tencent Binhai Building, No. 33, Haitian Second Road, Shenzhen, 518054, Guangdong, China
- Shuyao Niu: Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, 050011, Hebei, China
- Xiao Han: Tencent AI Lab, Nanshan District, Tencent Binhai Building, No. 33, Haitian Second Road, Shenzhen, 518054, Guangdong, China
- Yongqiang Yu: Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, 050011, Hebei, China
- Junzhou Huang: Tencent AI Lab, Nanshan District, Tencent Binhai Building, No. 33, Haitian Second Road, Shenzhen, 518054, Guangdong, China
- Dandan Han: Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, 050011, Hebei, China
- Jianhua Yao: Tencent AI Lab, Nanshan District, Tencent Binhai Building, No. 33, Haitian Second Road, Shenzhen, 518054, Guangdong, China
- Yueping Liu: Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, 050011, Hebei, China
38
Huo Y, Deng R, Liu Q, Fogo AB, Yang H. AI applications in renal pathology. Kidney Int 2021; 99:1309-1320. [PMID: 33581198 PMCID: PMC8154730 DOI: 10.1016/j.kint.2021.01.015] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Revised: 01/09/2021] [Accepted: 01/13/2021] [Indexed: 12/20/2022]
Abstract
The explosive growth of artificial intelligence (AI) technologies, especially deep learning methods, has been translated at revolutionary speed to efforts in AI-assisted healthcare. New applications of AI to renal pathology have recently become available, driven by the successful AI deployments in digital pathology. However, synergetic developments of renal pathology and AI require close interdisciplinary collaborations between computer scientists and renal pathologists. Computer scientists should understand that not every AI innovation is translatable to renal pathology, while renal pathologists should capture high-level principles of the relevant AI technologies. Herein, we provide an integrated review on current and possible future applications in AI-assisted renal pathology, by including perspectives from computer scientists and renal pathologists. First, the standard stages, from data collection to analysis, in full-stack AI-assisted renal pathology studies are reviewed. Second, representative renal pathology-optimized AI techniques are introduced. Last, we review current clinical AI applications, as well as promising future applications with the recent advances in AI.
Affiliation(s)
- Yuankai Huo: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Ruining Deng: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Quan Liu: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Agnes B Fogo: Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Haichun Yang: Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
39
Tewary S, Mukhopadhyay S. HER2 Molecular Marker Scoring Using Transfer Learning and Decision Level Fusion. J Digit Imaging 2021; 34:667-677. [PMID: 33742331 PMCID: PMC8329150 DOI: 10.1007/s10278-021-00442-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2020] [Revised: 01/13/2021] [Accepted: 03/01/2021] [Indexed: 01/28/2023] Open
Abstract
Immunohistochemical (IHC) assessment of the marker human epidermal growth factor receptor 2 (HER2) is used in the prognostic evaluation of breast cancer. Accurate assessment of HER2-stained tissue samples is essential for therapeutic decision making. In routine clinical settings, expert pathologists assess the HER2-stained tissue slide under a microscope and score it manually based on prior experience. Manual scoring is time consuming, tedious, and often prone to inter-observer variation among pathologists. With recent advances in computer vision and deep learning, medical image analysis has received significant attention. A number of deep learning architectures have been proposed for classifying different image groups, and these networks are also used for transfer learning to classify other image classes. In the presented study, several transfer learning architectures are used for HER2 scoring. Five pre-trained architectures, viz. VGG16, VGG19, ResNet50, MobileNetV2, and NASNetMobile, with the fully connected layers reduced to a 3-class classification head, were compared and then used to score stained tissue sample images via statistical voting with the mode operator. The HER2 Challenge dataset from the University of Warwick is used in this study. A total of 2130 image patches were extracted to generate the training dataset from 300 training images corresponding to 30 training cases. The output model was then tested on 800 new test image patches from 100 test images acquired from 10 test cases (different from the training cases). The transfer learning models showed significant accuracy, with VGG19 performing best on the test images: the accuracy is 93%, increasing to 98% for image-based scoring using the statistical voting mechanism. The output demonstrates a capable quantification pipeline for automated HER2 score generation.
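The image-level aggregation step the abstract describes, statistical voting with the mode operator over patch-level predictions, can be sketched directly; the class labels below are illustrative:

```python
from statistics import mode

def image_score_from_patches(patch_scores):
    """Aggregate patch-level HER2 class predictions (e.g. '0/1+', '2+', '3+')
    into a single image-level score by majority vote (the mode)."""
    return mode(patch_scores)

patches = ["2+", "3+", "2+", "2+", "0/1+", "2+", "3+", "2+"]
print(image_score_from_patches(patches))  # 2+
```

Voting explains why image-level accuracy (98%) exceeds patch-level accuracy (93%): an image is scored correctly as long as the correct class wins the vote, even if a minority of patches is misclassified.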
Affiliation(s)
- Suman Tewary: School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur, India; Computational Instrumentation, CSIR-Central Scientific Instruments Organisation, Chandigarh, India
- Sudipta Mukhopadhyay: Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur, India
40
Srinidhi CL, Ciga O, Martel AL. Deep neural network models for computational histopathology: A survey. Med Image Anal 2021; 67:101813. [PMID: 33049577 PMCID: PMC7725956 DOI: 10.1016/j.media.2020.101813] [Citation(s) in RCA: 255] [Impact Index Per Article: 63.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 05/12/2020] [Accepted: 08/09/2020] [Indexed: 12/14/2022]
Abstract
Histopathological images contain rich phenotypic information that can be used to monitor underlying mechanisms contributing to disease progression and patient survival outcomes. Recently, deep learning has become the mainstream methodological choice for analyzing and interpreting histology images. In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis. From the survey of over 130 papers, we review the field's progress based on the methodological aspect of different machine learning strategies such as supervised, weakly supervised, unsupervised, transfer learning and various other sub-variants of these methods. We also provide an overview of deep learning based survival models that are applicable for disease-specific prognosis tasks. Finally, we summarize several existing open datasets and highlight critical challenges and limitations with current deep learning approaches, along with possible avenues for future research.
Affiliation(s)
- Chetan L Srinidhi: Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
- Ozan Ciga: Department of Medical Biophysics, University of Toronto, Canada
- Anne L Martel: Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
41
Shi J, Wang R, Zheng Y, Jiang Z, Zhang H, Yu L. Cervical cell classification with graph convolutional network. Comput Methods Programs Biomed 2021; 198:105807. [PMID: 33130497 DOI: 10.1016/j.cmpb.2020.105807] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/26/2020] [Accepted: 10/12/2020] [Indexed: 05/26/2023]
Abstract
BACKGROUND AND OBJECTIVE Cervical cell classification has important clinical significance in early-stage cervical cancer screening. In contrast with conventional classification methods that depend on hand-crafted or engineered features, a Convolutional Neural Network (CNN) generally classifies cervical cells via learned deep features. However, the latent correlations among images may be ignored during CNN feature learning, which limits the representation ability of CNN features.
METHODS We propose a novel cervical cell classification method based on a Graph Convolutional Network (GCN). It explores the potential relationships among cervical cell images to improve classification performance. The CNN features of all cervical cell images are first clustered, preliminarily revealing the intrinsic relationships among images. To further capture the underlying correlations among clusters, a graph structure is constructed. GCN is then applied to propagate node dependencies and yield a relation-aware feature representation. The GCN features are finally incorporated to enhance the discriminative ability of the CNN features.
RESULTS Experiments on the public cervical cell image dataset SIPaKMeD, from the International Conference on Image Processing 2018, demonstrate the feasibility and effectiveness of the proposed method. In addition, we introduce a large-scale Motic liquid-based cytology image dataset, which provides a large amount of data and some novel cell types with important clinical significance and staining differences, and thus presents a great challenge for cervical cell classification. We evaluate the proposed method under two conditions, consistent staining and different staining. Experimental results show that our method outperforms existing state-of-the-art methods according to the quantitative metrics (accuracy, sensitivity, specificity, F-measure, and confusion matrices).
CONCLUSIONS Exploring the intrinsic relationships of cervical cells contributes significant improvements to cervical cell classification. The relation-aware features generated by GCN effectively strengthen the representational power of CNN features. The proposed method achieves better classification performance and can potentially be used in automatic screening systems for cervical cytology.
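The propagation step at the heart of a GCN layer, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), can be sketched in numpy. The toy adjacency and features below are illustrative, not taken from the paper:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    A: (n, n) adjacency over graph nodes, H: (n, d) node features,
    W: (d, d_out) learnable weights. Self-loops are added before normalization.
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

A = np.array([[0., 1.], [1., 0.]])   # two connected nodes
H = np.array([[1., 0.], [0., 1.]])   # one-hot node features
W = np.eye(2)                        # identity weights for illustration
out = gcn_layer(A, H, W)
assert np.allclose(out, 0.5)         # each node averages its own and its neighbour's features
```

This neighbour-averaging is what makes the output features "relation-aware": each node's representation mixes in the representations of the nodes it is connected to.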
Affiliation(s)
- Jun Shi: School of Software, Hefei University of Technology, Hefei 230601, China
- Ruoyu Wang: School of Software, Hefei University of Technology, Hefei 230601, China
- Yushan Zheng: Image Processing Center, School of Astronautics, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Beijing Key Laboratory of Digital Media, Beihang University, Beijing, 100191, China
- Zhiguo Jiang: Image Processing Center, School of Astronautics, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Beijing Key Laboratory of Digital Media, Beihang University, Beijing, 100191, China
- Haopeng Zhang: Image Processing Center, School of Astronautics, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Beijing Key Laboratory of Digital Media, Beihang University, Beijing, 100191, China
- Lanlan Yu: Motic (Xiamen) Medical Diagnostic Systems Co. Ltd., Xiamen 361101, China
42
Feng M, Chen J, Xiang X, Deng Y, Zhou Y, Zhang Z, Zheng Z, Bao J, Bu H. An Advanced Automated Image Analysis Model for Scoring of ER, PR, HER-2 and Ki-67 in Breast Carcinoma. IEEE Access 2021; 9:108441-108451. [DOI: 10.1109/access.2020.3011294] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/30/2023]
43
AIM in Surgical Pathology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_278-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
44
An end-to-end breast tumour classification model using context-based patch modelling - A BiLSTM approach for image classification. Comput Med Imaging Graph 2020; 87:101838. [PMID: 33340945 DOI: 10.1016/j.compmedimag.2020.101838] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2020] [Revised: 10/31/2020] [Accepted: 11/29/2020] [Indexed: 11/20/2022]
Abstract
Researchers working on computational analysis of Whole Slide Images (WSIs) in histopathology have primarily resorted to patch-based modelling because of the large resolution of each WSI, which makes it computationally infeasible to feed a WSI directly into a machine learning model. However, because of this patch-based analysis, most current methods fail to exploit the underlying spatial relationship among the patches. In our work, we integrate this spatial relationship along with feature-based correlation among the patches extracted from a particular tumorous region. The tumour regions extracted from WSIs have arbitrary dimensions, ranging from 195 to 20,570 pixels in width and from 226 to 17,290 pixels in height. For the classification task, we use BiLSTMs to model both forward and backward contextual relationships. Using an RNN-based model also removes the sequence-size limitation, allowing variable-size images to be modelled within a deep learning framework. We further incorporate the effect of spatial continuity by exploring different scanning techniques for sampling patches. To establish the efficiency of our approach, we trained and tested our model on two datasets, microscopy images and WSI tumour regions, both published by the ICIAR BACH Challenge 2018. We compared our results with the top five teams in the BACH challenge and achieved the top accuracy of 90% on the microscopy image dataset. For the WSI tumour region dataset, we compared classification results with state-of-the-art deep learning networks such as ResNet, DenseNet and InceptionV3 using a maximum-voting technique, and achieved the highest accuracy of 84%. We found that BiLSTMs over CNN features perform much better at modelling patches into an end-to-end image classification network.
Additionally, the variable-dimension WSI tumour regions were classified without any need for resizing. This suggests that our method is independent of tumour image size and can process large images without losing resolution detail.
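The abstract's point about scanning techniques that preserve spatial continuity can be illustrated with a toy snake (boustrophedon) scan, which orders a patch grid so that consecutive patches in the sequence are always spatially adjacent. This is only an illustrative sketch of the idea, not the paper's implementation; the function name and grid shape are assumptions.

```python
def snake_scan(rows, cols):
    """Order patch-grid coordinates so consecutive patches are adjacent.

    Reading a grid row by row left-to-right breaks spatial continuity at
    every row boundary; reversing every other row (a boustrophedon or
    "snake" scan) keeps each step exactly one patch away from the last,
    which matters when the patch sequence feeds a BiLSTM.
    """
    order = []
    for r in range(rows):
        row = [(r, c) for c in range(cols)]
        if r % 2 == 1:
            row.reverse()  # alternate direction on odd rows
        order.extend(row)
    return order

# A 3x3 patch grid scanned in snake order:
# (0,0)(0,1)(0,2) -> (1,2)(1,1)(1,0) -> (2,0)(2,1)(2,2)
path = snake_scan(3, 3)
```

Because the sequence length is simply `rows * cols`, the same ordering works for tumour regions of any size, which is what lets an RNN consume variable-dimension images.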
|
45
|
Tewary S, Arun I, Ahmed R, Chatterjee S, Mukhopadhyay S. AutoIHC-Analyzer: computer-assisted microscopy for automated membrane extraction/scoring in HER2 molecular markers. J Microsc 2020; 281:87-96. [PMID: 32803890 DOI: 10.1111/jmi.12955] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2020] [Revised: 08/06/2020] [Accepted: 08/12/2020] [Indexed: 11/30/2022]
Abstract
Human epidermal growth factor receptor 2 (HER2) is one of the widely used immunohistochemical (IHC) markers for prognostic evaluation in patients with breast cancer. Accurate quantification of the cell membrane is essential for HER2 scoring in therapeutic decision making. In modern laboratory practice, an expert pathologist visually assesses the HER2-stained tissue sample under a bright-field microscope to evaluate the cell membrane. This manual assessment is time-consuming, tedious and quite often subject to interobserver variability. Further, the growing number of patients places an increasing burden on pathologists. To address these challenges, there is an urgent need for a rapid HER2 cell membrane extraction method. The proposed study develops an automated IHC scoring system, termed AutoIHC-Analyzer, for automated cell membrane extraction followed by HER2 molecular expression assessment from stained tissue images. A series of image processing steps automatically extracts the stained cells and membrane region, followed by automatic assessment of complete and broken membrane. Finally, a set of features is used to classify the tissue under observation into quantitative scores of 0/1+, 2+ and 3+. On a set of surgically excised HER2-stained tissues obtained from a collaborating hospital, AutoIHC-Analyzer and the publicly available open-source ImmunoMembrane software were compared on 90 randomly acquired images against the scores of an expert pathologist, with significant correlation observed (r = 0.9448, p < 0.001 and r = 0.8521, p < 0.001, respectively). The output shows promising quantification in automated scoring. LAY DESCRIPTION: In cancer prognosis among patients with breast cancer, human epidermal growth factor receptor 2 (HER2) is used as an immunohistochemical (IHC) biomarker.
The correct assessment of HER2 guides therapeutic decision making. In routine practice, the stained tissue sample is observed under a bright-field microscope and expert pathologists score the sample as a negative (0/1+), equivocal (2+) or positive (3+) case. The scoring follows standard guidelines relating complete and broken cell membrane as well as the intensity of staining at the membrane boundary. Such evaluation is time-consuming, tedious and quite often subject to interobserver variability. To assist in rapid HER2 cell membrane assessment, the proposed study develops an automated IHC scoring system, termed AutoIHC-Analyzer, for automated cell membrane extraction followed by HER2 molecular expression assessment from stained tissue images. The input image is preprocessed using a modified white-patch method, and the CMYK and RGB colour spaces are used to extract the haematoxylin (negatively stained cells) and diaminobenzidine (DAB) stain observed in the tumour cell membrane. Segmentation and postprocessing create masks for each stain channel. The membrane mask is then quantified as complete or broken using skeletonisation and morphological operations. A set of features was assessed for classification from 180 training images: the complete-to-broken membrane ratio; the amount of stain, measured as the area of the Blue and Saturation channels relative to image size; the DAB-to-haematoxylin ratio from the segmented masks; and the average R, G and B values from the five largest blobs in the segmented DAB-masked image. These features were used to train an SVM classifier with a Gaussian kernel under 5-fold cross-validation, reaching a training accuracy of 88.3%. The model was then applied to 90 unseen test images, and the final labelling of stained cells and HER2 scores (0/1+, 2+ and 3+) was compared with the ground truth, that is, the expert pathologists' scores from the collaborating hospital.
The test images were also fed to the ImmunoMembrane software for comparative assessment. The results from AutoIHC-Analyzer and ImmunoMembrane were compared with the expert pathologists' scores, showing significant agreement by Pearson's correlation coefficient (r = 0.9448, p < 0.001 and r = 0.8521, p < 0.001, respectively). The results from AutoIHC-Analyzer show promising quantitative assessment of HER2 scoring.
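Two of the membrane-oriented features described above reduce to simple ratios over pixel counts and binary masks. The sketch below is a hypothetical toy version, not the authors' code: the function name, the flat 0/1 lists standing in for segmented DAB and haematoxylin channels, and the pixel counts are all illustrative assumptions.

```python
def membrane_features(complete_px, broken_px, dab_mask, haem_mask):
    """Toy versions of two AutoIHC-style features.

    complete_px / broken_px: pixel counts of complete vs. broken
    membrane skeletons; dab_mask / haem_mask: flat 0/1 lists standing
    in for the segmented DAB (membrane stain) and haematoxylin
    (nuclei) channel masks. max(..., 1) guards against division by
    zero on empty masks.
    """
    complete_to_broken = complete_px / max(broken_px, 1)
    dab_to_haem = sum(dab_mask) / max(sum(haem_mask), 1)
    return complete_to_broken, dab_to_haem

ratio, stain = membrane_features(
    complete_px=800, broken_px=200,
    dab_mask=[1, 1, 1, 0, 0, 0, 0, 0],
    haem_mask=[1, 1, 0, 0, 0, 0, 0, 0],
)
# ratio -> 4.0 (mostly complete membranes)
# stain -> 1.5 (more DAB than haematoxylin signal)
```

In the paper, feature vectors of this kind are what the Gaussian-kernel SVM consumes to separate the 0/1+, 2+ and 3+ classes.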
Affiliation(s)
- Suman Tewary
- School of Medical Science & Technology, IIT Kharagpur, Kharagpur, West Bengal, India; Computational Instrumentation Division, CSIR-CSIO, Chandigarh, India
- Indu Arun
- Tata Medical Center, New Town, Rajarhat, Kolkata, West Bengal, India
- Rosina Ahmed
- Tata Medical Center, New Town, Rajarhat, Kolkata, West Bengal, India
- Sanjoy Chatterjee
- Tata Medical Center, New Town, Rajarhat, Kolkata, West Bengal, India
- Sudipta Mukhopadhyay
- Electronics and Electrical Communication Engineering, IIT Kharagpur, Kharagpur, West Bengal, India
|
46
|
La Barbera D, Polónia A, Roitero K, Conde-Sousa E, Della Mea V. Detection of HER2 from Haematoxylin-Eosin Slides Through a Cascade of Deep Learning Classifiers via Multi-Instance Learning. J Imaging 2020; 6:82. [PMID: 34460739 PMCID: PMC8321042 DOI: 10.3390/jimaging6090082] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Revised: 08/14/2020] [Accepted: 08/18/2020] [Indexed: 12/28/2022] Open
Abstract
Breast cancer is the most frequently diagnosed cancer in women. Correct identification of the HER2 receptor is of major importance when dealing with breast cancer: over-expression of HER2 is associated with aggressive clinical behaviour, and HER2-targeted therapy yields a significant improvement in overall survival. In this work, we employ a pipeline based on a cascade of deep neural network classifiers and multi-instance learning to detect the presence of HER2 from Haematoxylin-Eosin slides, partly mimicking the pathologist's behaviour by first recognizing cancer and then evaluating HER2. Our results show that the proposed system achieves good overall effectiveness. Furthermore, the system design is amenable to further improvements that can easily be deployed to increase its effectiveness.
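The cascade idea ("first recognize cancer, then evaluate HER2") can be sketched as a two-stage filter over a bag of patches. This is a minimal stand-in, not the paper's networks: the function name, the lambda stubs in place of trained CNN classifiers, and the 0.5/0.85 thresholds are all illustrative assumptions.

```python
def cascade_her2(patches, cancer_clf, her2_clf, cancer_thr=0.5):
    """Two-stage cascade over a bag of patch representations.

    Stage 1 keeps only the patches the cancer classifier flags as
    tumour; stage 2 scores HER2 on those patches and aggregates by
    mean probability, mirroring the pathologist-like "find cancer
    first, then grade HER2" pipeline. cancer_clf / her2_clf are any
    callables returning a probability in [0, 1].
    """
    tumour = [p for p in patches if cancer_clf(p) >= cancer_thr]
    if not tumour:
        return 0.0  # no tumour evidence -> HER2-negative slide score
    return sum(her2_clf(p) for p in tumour) / len(tumour)

# Toy run with threshold stubs in place of trained CNNs:
score = cascade_her2(
    patches=[0.9, 0.8, 0.2],          # stand-ins for patch features
    cancer_clf=lambda p: p,           # "probability" = the value itself
    her2_clf=lambda p: 1.0 if p > 0.85 else 0.0,
)
# score -> 0.5: two tumour patches kept, one of them HER2-positive
```

In the multi-instance setting, only the slide-level label is known, so an aggregation of patch scores like this mean is what produces the supervised signal.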
Affiliation(s)
- David La Barbera
- Department of Mathematics, Computer Science and Physics, University of Udine, 33100 Udine, Italy
- António Polónia
- Department of Pathology, Ipatimup Diagnostics, Institute of Molecular Pathology and Immunology, University of Porto, 4169-007 Porto, Portugal
- i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4169-007 Porto, Portugal
- Kevin Roitero
- Department of Mathematics, Computer Science and Physics, University of Udine, 33100 Udine, Italy
- Eduardo Conde-Sousa
- i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4169-007 Porto, Portugal
- INEB—Instituto de Engenharia Biomédica, Universidade do Porto, 4169-007 Porto, Portugal
- Vincenzo Della Mea
- Department of Mathematics, Computer Science and Physics, University of Udine, 33100 Udine, Italy
|
47
|
Xu B, Liu J, Hou X, Liu B, Garibaldi J, Ellis IO, Green A, Shen L, Qiu G. Attention by Selection: A Deep Selective Attention Approach to Breast Cancer Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1930-1941. [PMID: 31880545 DOI: 10.1109/tmi.2019.2962013] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Deep learning approaches are widely applied to histopathological image analysis due to the impressive levels of performance achieved. However, when dealing with high-resolution histopathological images, using the original image as input to a deep learning model is computationally expensive, while downsampling it to a low resolution incurs information loss. Hard-attention approaches have emerged that select possible lesion regions from images to avoid processing the full original image. However, these approaches usually take a long time to converge under weak guidance, and low-value patches may be passed to the classifier. To overcome this problem, we propose a deep selective attention approach that selects valuable regions of the original images for classification. In our approach, a decision network decides where to crop and whether a cropped patch is necessary for classification. The selected patches are then used to train the classification network, which in turn provides feedback to the decision network to update its selection policy. With this co-evolution training strategy, our approach achieves a fast convergence rate and high classification accuracy. Evaluated on a public breast cancer histopathological image database, our approach demonstrates superior performance compared to state-of-the-art deep learning approaches, achieving approximately 98% classification accuracy while taking only 50% of the training time of the previous hard-attention approach.
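The decision network's role, picking which crops are worth the classifier's time, can be reduced to a ranking step. The sketch below is a hypothetical simplification of that idea, not the authors' method: the function name, the stub saliency table standing in for learned crop values, and the fixed budget are all assumptions.

```python
def select_crops(candidates, value_fn, budget):
    """Pick the `budget` most valuable crop locations.

    A minimal stand-in for the decision network: value_fn scores each
    candidate crop (any callable), and only the top-scoring crops are
    forwarded to the classifier, so low-value regions never cost
    training time. In the real system these scores would come from a
    learned policy updated with classifier feedback.
    """
    ranked = sorted(candidates, key=value_fn, reverse=True)
    return ranked[:budget]

# Toy run: crops keyed by (row, col), valued by a stub saliency table.
saliency = {(0, 0): 0.1, (0, 1): 0.9, (1, 0): 0.6, (1, 1): 0.3}
chosen = select_crops(list(saliency), saliency.get, budget=2)
# chosen -> [(0, 1), (1, 0)], the two highest-saliency crops
```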
|
48
|
A High-Throughput Tumor Location System with Deep Learning for Colorectal Cancer Histopathology Image. Artif Intell Med 2020. [DOI: 10.1007/978-3-030-59137-3_24] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
|
49
|
Dimitriou N, Arandjelović O, Caie PD. Deep Learning for Whole Slide Image Analysis: An Overview. Front Med (Lausanne) 2019; 6:264. [PMID: 31824952 PMCID: PMC6882930 DOI: 10.3389/fmed.2019.00264] [Citation(s) in RCA: 139] [Impact Index Per Article: 23.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Accepted: 10/29/2019] [Indexed: 12/15/2022] Open
Abstract
The widespread adoption of whole slide imaging has increased the demand for effective and efficient gigapixel image analysis. Deep learning is at the forefront of computer vision, showcasing significant improvements over previous methodologies in visual understanding. However, whole slide images have billions of pixels and suffer from high morphological heterogeneity as well as from different types of artifacts. Collectively, these impede the conventional use of deep learning. For the clinical translation of deep learning solutions to become a reality, these challenges need to be addressed. In this paper, we review interdisciplinary work on training deep neural networks with whole slide images and highlight the different ideas underlying these methodologies.
Affiliation(s)
- Neofytos Dimitriou
- School of Computer Science, University of St Andrews, St Andrews, United Kingdom
| | - Ognjen Arandjelović
- School of Computer Science, University of St Andrews, St Andrews, United Kingdom
| | - Peter D Caie
- School of Medicine, University of St Andrews, St Andrews, United Kingdom
| |
|