1
Dutta K, Pal D, Li S, Shyam C, Shoghi KI. Corr-A-Net: Interpretable Attention-Based Correlated Feature Learning framework for predicting of HER2 Score in Breast Cancer from H&E Images. medRxiv 2025:2025.04.22.25326227. PMID: 40313277; PMCID: PMC12045401; DOI: 10.1101/2025.04.22.25326227.
Abstract
Human epidermal growth factor receptor 2 (HER2) expression is a critical biomarker for assessing breast cancer (BC) severity and guiding targeted anti-HER2 therapies. The standard method for measuring HER2 expression is manual assessment of IHC slides by pathologists, which is both time-intensive and prone to inter- and intra-observer variability. To address these challenges, we developed an interpretable deep-learning pipeline, the Correlational Attention Neural Network (Corr-A-Net), to predict HER2 score from H&E images. Each prediction is accompanied by a confidence score generated by a surrogate confidence-estimation network trained with an incentivized mechanism. The shared correlated representations generated by the attention mechanism of Corr-A-Net achieved the best predictive accuracy of 0.93 and an AUC-ROC of 0.98. The correlated representations also achieved the highest mean effective confidence (MEC) score of 0.85, indicating robust confidence estimation for each prediction. Corr-A-Net could substantially facilitate prediction of HER2 status directly from H&E images.
2
Du J, Shi J, Sun D, Wang Y, Liu G, Chen J, Wang W, Zhou W, Zheng Y, Wu H. Machine learning prediction of HER2-low expression in breast cancers based on hematoxylin-eosin-stained slides. Breast Cancer Res 2025; 27:57. PMID: 40251691; PMCID: PMC12008878; DOI: 10.1186/s13058-025-01998-8.
Abstract
BACKGROUND Treatment with HER2-targeted therapies is recommended for HER2-positive breast cancer patients with HER2 gene amplification or protein overexpression. Interestingly, recent clinical trials of novel HER2-targeted therapies demonstrated promising efficacy in HER2-low breast cancers, raising the prospect of extending HER2-targeted treatment to a HER2-low category (immunohistochemistry (IHC) score of 1+ or 2+ with non-amplified in-situ hybridization), which necessitates accurate detection and evaluation of HER2 expression in tumors. Traditionally, HER2 protein levels are routinely assessed by IHC in clinical practice, which requires substantial time and financial investment and is technically challenging for many basic hospitals in developing countries. Therefore, directly predicting HER2 expression from hematoxylin-eosin (HE) staining would be of significant clinical value, and machine learning is a potent technology for achieving this goal. METHODS In this study, we developed an artificial intelligence (AI) classification model that uses whole slide images of HE-stained slides to automatically assess HER2 status. RESULTS The publicly available TCGA-BRCA dataset and an in-house USTC-BC dataset were used to evaluate our AI model against the state-of-the-art method SlideGraph+ in terms of accuracy (ACC), area under the receiver operating characteristic curve (AUC), and F1 score. Overall, our AI model achieved the superior performance on both datasets, with AUCs of 0.795 ± 0.028 and 0.688 ± 0.008 on the USTC-BC and TCGA-BRCA datasets, respectively. In addition, we visualized the results of our AI model with attention heatmaps, demonstrating its strong interpretability.
CONCLUSION Our AI model directly predicts HER2 expression from HE images with strong interpretability and achieves better ACC particularly in HER2-low breast cancers. It provides a method for AI evaluation of HER2 status, helps perform HER2 evaluation economically and efficiently, and has the potential to assist pathologists in diagnosis and biomarker assessment for companion diagnostics.
Affiliation(s)
- Jun Du
- Department of Pathology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China
- Intelligent Pathology Institute, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China
- Jun Shi
- School of Software, Hefei University of Technology, Hefei, 230601, China
- Dongdong Sun
- School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230601, Anhui, China
- Yifei Wang
- School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230601, Anhui, China
- Guanfeng Liu
- School of Life Sciences, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230027, Anhui, China
- Jingru Chen
- School of Life Sciences, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230027, Anhui, China
- Wei Wang
- Department of Pathology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China
- Intelligent Pathology Institute, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China
- Wenchao Zhou
- Intelligent Pathology Institute, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China
- Department of Pathology, Centre for Leading Medicine and Advanced Technologies of IHM, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, Anhui, China
- Yushan Zheng
- School of Engineering Medicine, Beijing Advanced Innovation Center on Biomedical Engineering, Beihang University, Beijing, 100191, China
- Haibo Wu
- Department of Pathology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China
- Intelligent Pathology Institute, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui, China
- Department of Pathology, Centre for Leading Medicine and Advanced Technologies of IHM, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, Anhui, China
3
Chyrmang G, Barua B, Bora K, Ahmed GN, Das AK, Kakoti L, Lemos B, Mallik S. Self-HER2Net: A generative self-supervised framework for HER2 classification in IHC histopathology of breast cancer. Pathol Res Pract 2025; 270:155961. PMID: 40245674; DOI: 10.1016/j.prp.2025.155961.
Abstract
Breast cancer is a significant global health concern, and precise identification of proteins such as human epidermal growth factor receptor 2 (HER2) in cancer cells via immunohistochemistry (IHC) is pivotal for treatment decisions. HER2 overexpression is evaluated through HER2 scoring on a scale from 0 to 3+ based on staining patterns and intensity. Recent efforts have automated HER2 scoring using image processing and AI techniques, but existing methods follow supervised learning paradigms and therefore require large manually annotated datasets. We therefore propose a generative self-supervised learning (SSL) framework, "Self-HER2Net", for HER2 score classification, which reduces dependence on large manually annotated datasets by leveraging the best performing of four novel generative self-supervised tasks that we propose: two domain-agnostic tasks (HER2hsl and HER2hsv) and two domain-specific tasks (HER2dab and HER2hae) that capture staining patterns and intensity representations. Our approach is evaluated under different annotation-budget scenarios (2%, 15%, and 100% labeled data) and on an out-of-distribution test. For tile-level assessment, HER2hsv achieved the best performance, with an AUC-ROC of 0.965 ± 0.037. Our self-supervised learning approach shows potential for scenarios with limited annotated data for HER2 analysis.
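As an illustration of the generative pretext tasks described above, a colour-space task in the spirit of HER2hsv can be sketched as deriving the training target from the input tile itself, so no manual HER2 labels are needed. The function name and toy tile below are hypothetical; the paper's actual task design may differ:

```python
import colorsys

def hsv_pretext_pair(rgb_tile):
    """Build an (input, target) pair for a generative colour-space
    pretext task: the model must reconstruct the HSV representation
    of its own RGB input, so the label is derived from the image itself.
    rgb_tile: nested list of (r, g, b) floats in [0, 1]."""
    target = [[colorsys.rgb_to_hsv(r, g, b) for (r, g, b) in row]
              for row in rgb_tile]
    return rgb_tile, target

# A 1x2 toy "tile": pure red and mid grey.
tile = [[(1.0, 0.0, 0.0), (0.5, 0.5, 0.5)]]
x, y = hsv_pretext_pair(tile)
# Red maps to full saturation/value; grey is unsaturated.
```

A network trained to predict `y` from `x` learns colour and intensity structure without annotations, which can then be transferred to HER2 scoring with a small labeled set.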
Affiliation(s)
- Genevieve Chyrmang
- Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, India.
- Barun Barua
- Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, India
- Kangkana Bora
- Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, India
- Gazi N Ahmed
- North East Cancer Hospital and Research Institute, Guwahati, Assam, India
- Anup Kr Das
- Arya Wellness Centre, Guwahati, Assam, India
- Bernardo Lemos
- Department of Environmental Health, Harvard T H Chan School of Public Health, Boston, MA 02115, USA; Department of Pharmacology & Toxicology, University of Arizona, AZ 85721, USA
- Saurav Mallik
- Department of Environmental Health, Harvard T H Chan School of Public Health, Boston, MA 02115, USA; Department of Pharmacology & Toxicology, University of Arizona, AZ 85721, USA
4
Shi J, Sun D, Jiang Z, Du J, Wang W, Zheng Y, Wu H. Weakly supervised multi-modal contrastive learning framework for predicting the HER2 scores in breast cancer. Comput Med Imaging Graph 2025; 121:102502. PMID: 39919535; DOI: 10.1016/j.compmedimag.2025.102502.
Abstract
Human epidermal growth factor receptor 2 (HER2) is an important biomarker for prognosis and prediction of treatment response in breast cancer (BC). HER2 scoring is typically performed by pathologists through microscopic observation of immunohistochemistry (IHC) images, which is labor-intensive and subject to observational bias among pathologists. Most existing methods use hand-crafted features or deep learning models on a single modality (hematoxylin and eosin (H&E) or IHC) to predict HER2 scores through supervised or weakly supervised learning; consequently, information from different modalities, which could improve HER2 scoring performance, is not effectively integrated into feature learning. In this paper, we propose a novel weakly supervised multi-modal contrastive learning (WSMCL) framework to predict HER2 scores in BC at the whole slide image (WSI) level. It leverages multi-modal (H&E and IHC) joint learning under the weak supervision of the WSI label to achieve HER2 score prediction. Specifically, patch features are extracted from the H&E and IHC WSIs, and multi-head self-attention (MHSA) is used to explore the global dependencies of the patches within each modality. The patch features corresponding to the top-k and bottom-k attention scores generated by MHSA in each modality are selected as candidates for multi-modal joint learning. In particular, a multi-modal attentive contrastive learning (MACL) module is designed to guarantee semantic alignment of the candidate features from different modalities. Extensive experiments demonstrate that WSMCL achieves better HER2 scoring performance than the state-of-the-art methods. The code is available at https://github.com/HFUT-miaLab/WSMCL.
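The top-k/bottom-k candidate selection step described above can be sketched as follows; all names and shapes are illustrative, and the actual WSMCL implementation is in the linked repository:

```python
import numpy as np

def select_candidates(features, attn_scores, k=2):
    """Pick the patch features with the k highest and k lowest attention
    scores, mimicking the per-modality candidate selection applied
    before contrastive alignment.
    features: (n_patches, dim) array; attn_scores: (n_patches,) array."""
    order = np.argsort(attn_scores)          # ascending by score
    bottom_idx = order[:k]                   # least-attended patches
    top_idx = order[-k:]                     # most-attended patches
    idx = np.concatenate([top_idx, bottom_idx])
    return features[idx], idx

feats = np.arange(12, dtype=float).reshape(6, 2)    # 6 patches, dim 2
scores = np.array([0.1, 0.9, 0.3, 0.8, 0.05, 0.4])
cand, idx = select_candidates(feats, scores, k=2)
# top-2 are patches 3 and 1; bottom-2 are patches 4 and 0.
```

The selected candidates from the H&E and IHC branches would then be fed to the contrastive (MACL) objective.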
Affiliation(s)
- Jun Shi
- School of Software, Hefei University of Technology, Hefei, 230601, Anhui Province, China
- Dongdong Sun
- School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230601, Anhui Province, China
- Zhiguo Jiang
- Image Processing Center, School of Astronautics, Beihang University, Beijing, 102206, China
- Jun Du
- Department of Pathology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China; Intelligent Pathology Institute, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China
- Wei Wang
- Department of Pathology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China; Intelligent Pathology Institute, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China
- Yushan Zheng
- School of Engineering Medicine, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Haibo Wu
- Department of Pathology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China; Intelligent Pathology Institute, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China; Department of Pathology, Centre for Leading Medicine and Advanced Technologies of IHM, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, Anhui Province, China
5
Öttl M, Steenpass J, Wilm F, Qiu J, Rübner M, Lang-Schwarz C, Taverna C, Tava F, Hartmann A, Huebner H, Beckmann MW, Fasching PA, Maier A, Erber R, Breininger K. Fully automatic HER2 tissue segmentation for interpretable HER2 scoring. J Pathol Inform 2025; 17:100435. PMID: 40236564; PMCID: PMC11999220; DOI: 10.1016/j.jpi.2025.100435.
Abstract
Breast cancer is the most common cancer in women, with HER2 (human epidermal growth factor receptor 2) overexpression playing a critical role in regulating cell growth and division. HER2 status, assessed according to established scoring guidelines, provides important information for treatment selection, but the complexity of the task leads to variability between human raters. In this work, we propose a fully automated, interpretable HER2 scoring pipeline based on pixel-level semantic segmentation, designed to align with clinical guidelines. Using polygon annotations, our method balances annotation effort with the ability to capture fine-grained details and larger structures, such as non-invasive tumor tissue. To enhance HER2 segmentation, we propose a Wasserstein Dice loss that models class relationships, ensuring robust segmentation and HER2 scoring performance. Additionally, based on observations of pathologists' behavior in clinical practice, we propose a calibration step for the scoring rules, which improves the accuracy and consistency of automated HER2 scoring. Our approach achieves an F1 score of 0.832 on HER2 scoring, demonstrating its effectiveness. This work establishes a potent segmentation pipeline that can be further leveraged to analyze HER2 expression in breast cancer tissue.
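A Wasserstein Dice loss weights each confusion between classes by an inter-class distance, so mistaking two clinically similar tissue classes costs less than mistaking dissimilar ones. Below is a minimal numpy sketch of the underlying expected-cost error term only; the three-class distance matrix is invented for illustration, and the paper's full Dice-style aggregation over this cost is not reproduced:

```python
import numpy as np

def wasserstein_error(probs, labels, M):
    """Expected misclassification cost per pixel: sum_l p(l) * M[true, l].
    probs: (n_pixels, n_classes) softmax outputs.
    labels: (n_pixels,) integer ground-truth classes.
    M: (n_classes, n_classes) class-distance matrix with zero diagonal."""
    return (probs * M[labels]).sum(axis=1)

# Illustrative 3-class setup: background, non-invasive tumor, invasive
# tumor. Confusing the two tumor classes (cost 0.5) is cheaper than
# confusing tumor with background (cost 1.0). Values are invented.
M = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.5],
              [1.0, 0.5, 0.0]])
probs = np.array([[1.0, 0.0, 0.0],    # background predicted correctly
                  [0.0, 0.0, 1.0]])   # non-invasive predicted as invasive
labels = np.array([0, 1])
err = wasserstein_error(probs, labels, M)   # -> [0.0, 0.5]
```

Plugging this cost into a Dice-style overlap term yields a loss that is zero for perfect predictions and penalizes confusions in proportion to their clinical distance.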
Affiliation(s)
- Mathias Öttl
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Jana Steenpass
- Institute of Pathology, University Hospital Erlangen, Erlangen, Germany
- Frauke Wilm
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Jingna Qiu
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Matthias Rübner
- Department of Gynecology and Obstetrics, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Cecilia Taverna
- Surgical Pathology Unit, Azienda Sanitaria Locale, Presidio Ospedaliero, Ospedale San Giacomo, Novi Ligure, Italy
- Francesca Tava
- Surgical Pathology Unit, Azienda Sanitaria Locale, Presidio Ospedaliero, Ospedale San Giacomo, Novi Ligure, Italy
- Arndt Hartmann
- Institute of Pathology, University Hospital Erlangen, Erlangen, Germany
- Hanna Huebner
- Department of Gynecology and Obstetrics, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Matthias W. Beckmann
- Department of Gynecology and Obstetrics, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Peter A. Fasching
- Department of Gynecology and Obstetrics, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Ramona Erber
- Institute of Pathology, University Hospital Erlangen, Erlangen, Germany
- Institute of Pathology, University Regensburg, Regensburg, Germany
- Katharina Breininger
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Center for AI and Data Science (CAIDAS), Universität Würzburg, Würzburg, Germany
6
P MD, A M, Ali Y, V S. Effective BCDNet-based breast cancer classification model using hybrid deep learning with VGG16-based optimal feature extraction. BMC Med Imaging 2025; 25:12. PMID: 39780045; PMCID: PMC11707918; DOI: 10.1186/s12880-024-01538-4.
Abstract
PROBLEM Breast cancer is a leading cause of death among women, and early detection is crucial for improving survival rates. Manual breast cancer diagnosis is time-consuming and subjective. Previous CAD models also depend largely on handcrafted visual features that are difficult to generalize across ultrasound images acquired with different techniques. Earlier works have used other imaging tools, such as mammography and MRI, but these are costlier and less portable than ultrasound imaging, which is a non-invasive method commonly used for breast cancer screening. Hence, this paper presents a novel deep learning model, BCDNet, for classifying breast tumors as benign or malignant using ultrasound images. AIM The primary aim of the study is to design an effective breast cancer diagnosis model that can accurately classify tumors in their early stages, thus reducing mortality rates. The model optimizes its weights and parameters using the RPAOSM-ESO algorithm to enhance accuracy and minimize false negative rates. METHODS The BCDNet model uses transfer learning from a pre-trained VGG16 network for feature extraction and employs an AHDNAM classification approach comprising ASPP, DTCN, 1DCNN, and an attention mechanism. The RPAOSM-ESO algorithm fine-tunes the weights and parameters. RESULTS The RPAOSM-ESO-BCDNet-based breast cancer diagnosis model achieved an accuracy of 94.5, higher than previous models such as DTCN (88.2), 1DCNN (89.6), MobileNet (91.3), and ASPP-DTC-1DCNN-AM (93.8), indicating that the designed RPAOSM-ESO-BCDNet classifies more accurately than the previous models. CONCLUSION The BCDNet model, with its feature extraction and classification techniques optimized by the RPAOSM-ESO algorithm, shows promise in accurately classifying breast tumors from ultrasound images. The model could be a valuable tool in the early detection of breast cancer, potentially saving lives and reducing the burden on healthcare systems.
Affiliation(s)
- Meenakshi Devi P
- Department of Information Technology, K.S.R. College of Engineering, Tiruchengode, Tamilnadu, 637215, India
- Muna A
- Centre for Research Impact & Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India
- Yasser Ali
- Chitkara Centre for Research and Development, Chitkara University, Baddi, Himachal Pradesh, 174103, India
- Sumanth V
- Department of Information Technology, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
7
Chyrmang G, Bora K, Das AK, Ahmed GN, Kakoti L. Insights into AI advances in immunohistochemistry for effective breast cancer treatment: a literature review of ER, PR, and HER2 scoring. Curr Med Res Opin 2025; 41:115-134. PMID: 39705612; DOI: 10.1080/03007995.2024.2445142.
Abstract
Breast cancer is a significant health challenge in which accurate and timely diagnosis is critical to effective treatment. Immunohistochemistry (IHC) staining is widely used to evaluate breast cancer markers, but manual scoring is time-consuming and subject to variability. With the rise of artificial intelligence (AI), there is increasing interest in machine learning and deep learning approaches that automate the scoring of ER, PR, and HER2 biomarkers in IHC-stained images. This narrative literature review focuses on AI-based techniques for automated scoring of breast cancer markers in IHC-stained images, specifically Allred, histochemical (H-score), and HER2 scoring. We aim to identify current state-of-the-art approaches, challenges, and future research prospects for this area of study, and thereby contribute to the ultimate goal of improving the accuracy and efficiency of breast cancer diagnosis and treatment.
Affiliation(s)
- Genevieve Chyrmang
- Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, India
- Kangkana Bora
- Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, India
- Anup Kr Das
- Arya Wellness Centre, Guwahati, Assam, India
- Gazi N Ahmed
- North East Cancer Hospital and Research Institute, Guwahati, Assam, India
8
Raza M, Awan R, Bashir RMS, Qaiser T, Rajpoot NM. Dual attention model with reinforcement learning for classification of histology whole-slide images. Comput Med Imaging Graph 2024; 118:102466. PMID: 39579453; DOI: 10.1016/j.compmedimag.2024.102466.
Abstract
Digital whole slide images (WSIs) are generally captured at microscopic resolution and encompass extensive spatial data (several billion pixels per image). Directly feeding these images to deep learning models is computationally intractable due to memory constraints, while downsampling the WSIs risks information loss. Alternatively, splitting the WSIs into smaller patches (or tiles) may lose important contextual information. In this paper, we propose a novel dual attention approach consisting of two main components, both inspired by the visual examination process of a pathologist. The first, a soft attention model, processes a low-magnification view of the WSI to identify relevant regions of interest (ROIs); a custom sampling method then extracts diverse and spatially distinct image tiles from the selected ROIs. The second, a hard attention classification model, extracts a sequence of multi-resolution glimpses from each tile for classification. Since hard attention is non-differentiable, we train this component with reinforcement learning to predict the location of the glimpses. This allows the model to focus on essential regions instead of processing the entire tile, aligning with a pathologist's way of diagnosis. The two components are trained end-to-end using a joint loss function. The proposed model was evaluated on two WSI-level classification problems: human epidermal growth factor receptor 2 (HER2) scoring on breast cancer histology images, and prediction of Intact/Loss status of two mismatch repair (MMR) biomarkers from colorectal cancer histology images. We show that the proposed model achieves performance better than or comparable to the state-of-the-art methods while processing less than 10% of the WSI at the highest magnification and reducing the time required to infer the WSI-level label by more than 75%.
The code is available at github.
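The multi-resolution glimpses used by the hard attention component can be sketched as same-shaped crops of doubling field of view centered at one location, giving fine detail at the center and coarser context around it (as in recurrent visual attention models). The parameters below (glimpse size, number of scales, nearest-neighbor subsampling) are illustrative, not the paper's:

```python
import numpy as np

def extract_glimpses(image, center, base=8, scales=3):
    """Return `scales` square crops centered at `center`, each with
    twice the previous side length, nearest-neighbor subsampled to
    base x base so every glimpse has the same shape but covers a wider
    field of view. image: 2-D array; center: (row, col).
    Crops are clipped at the image borders."""
    r, c = center
    glimpses = []
    for s in range(scales):
        half = base * (2 ** s) // 2
        r0, r1 = max(0, r - half), min(image.shape[0], r + half)
        c0, c1 = max(0, c - half), min(image.shape[1], c + half)
        crop = image[r0:r1, c0:c1]
        rows = np.linspace(0, crop.shape[0] - 1, base).astype(int)
        cols = np.linspace(0, crop.shape[1] - 1, base).astype(int)
        glimpses.append(crop[np.ix_(rows, cols)])
    return np.stack(glimpses)            # (scales, base, base)

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
g = extract_glimpses(img, center=(32, 32))
```

A policy network would choose `center` for the next glimpse; since that choice is non-differentiable, it is trained with reinforcement learning as described above.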
Affiliation(s)
- Manahil Raza
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom.
- Ruqayya Awan
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Talha Qaiser
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom
- Nasir M Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; The Alan Turing Institute, London, United Kingdom
9
Rehman ZU, Ahmad Fauzi MF, Wan Ahmad WSHM, Abas FS, Cheah PL, Chiew SF, Looi LM. Deep-Learning-Based Approach in Cancer-Region Assessment from HER2-SISH Breast Histopathology Whole Slide Images. Cancers (Basel) 2024; 16:3794. PMID: 39594748; PMCID: PMC11593209; DOI: 10.3390/cancers16223794.
Abstract
Fluorescence in situ hybridization (FISH) is widely regarded as the gold standard for evaluating human epidermal growth factor receptor 2 (HER2) status in breast cancer; however, it poses challenges such as the need for specialized training and signal degradation from dye quenching. Silver-enhanced in situ hybridization (SISH) is an automated alternative that uses permanent staining suitable for bright-field microscopy. Determining HER2 status involves distinguishing between "Amplified" and "Non-Amplified" regions by assessing HER2 and centromere 17 (CEN17) signals in SISH-stained slides. This study is the first to leverage deep learning for classifying Normal, Amplified, and Non-Amplified regions within HER2-SISH whole slide images (WSIs), which are notably more complex to analyze than hematoxylin and eosin (H&E)-stained slides. Our approach consists of a two-stage process: first, we evaluate deep-learning models on annotated image regions; then we apply the most effective model to WSIs for regional identification and localization. Pseudo-color maps representing each class are subsequently overlaid, and the WSIs are reconstructed with these mapped regions. Using a private dataset of HER2-SISH breast cancer slides digitized at 40× magnification, we achieved a patch-level classification accuracy of 99.9% and a generalization accuracy of 78.8% by applying transfer learning with a Vision Transformer (ViT) model. Robustness was further evaluated through k-fold cross-validation, yielding an average accuracy of 98%, with metrics reported alongside 95% confidence intervals for statistical reliability. This method shows significant promise for clinical application in assessing HER2 expression status in HER2-SISH histopathology images: it provides an automated solution that can aid pathologists in efficiently identifying HER2-amplified regions, thereby enhancing diagnostic outcomes for breast cancer treatment.
Affiliation(s)
- Zaka Ur Rehman
- Faculty of Engineering, Multimedia University, Cyberjaya 63100, Malaysia; (Z.U.R.); (W.S.H.M.W.A.)
- Wan Siti Halimatul Munirah Wan Ahmad
- Faculty of Engineering, Multimedia University, Cyberjaya 63100, Malaysia
- Institute for Research, Development and Innovation, IMU University, Bukit Jalil, Kuala Lumpur 57000, Malaysia
- Fazly Salleh Abas
- Faculty of Engineering and Technology, Multimedia University, Bukit Beruang, Melaka 75450, Malaysia
- Phaik-Leng Cheah
- Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
- Seow-Fan Chiew
- Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
- Lai-Meng Looi
- Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
10
Wei X, Sun J, Su P, Wan H, Ning Z. BCL-Former: Localized Transformer Fusion with Balanced Constraint for polyp image segmentation. Comput Biol Med 2024; 182:109182. PMID: 39341109; DOI: 10.1016/j.compbiomed.2024.109182.
Abstract
Polyp segmentation remains challenging for two reasons: (a) colon polyps vary widely in size and shape; (b) the distinction between polyp and mucosa is not obvious. To address these two challenges and enhance the generalization ability of segmentation methods, we propose Localized Transformer Fusion with Balanced Constraint (BCL-Former) for polyp segmentation. In BCL-Former, a Strip Local Enhancement (SLE) module captures enhanced local features, and a Progressive Feature Fusion (PFF) module makes feature aggregation smoother and eliminates the gap between high-level and low-level features. Moreover, a Tversky-based Appropriate Constrained Loss (TacLoss) balances and constrains true positives and false negatives, improving the ability to generalize across datasets. Extensive experiments on four benchmark datasets show that our method achieves state-of-the-art segmentation precision and generalization ability, while training and inference are 5%-8% faster than the benchmark method. The code is available at: https://github.com/sjc-lbj/BCL-Former.
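TacLoss builds on the Tversky index, which generalizes Dice by weighting false positives and false negatives separately. Below is a minimal numpy sketch of the plain Tversky loss; TacLoss's additional balancing constraint is not reproduced here, and the alpha/beta values are illustrative:

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-8):
    """Tversky loss for binary segmentation.
    pred: predicted foreground probabilities in [0, 1].
    target: binary ground-truth mask of the same shape.
    alpha weights false positives, beta false negatives;
    alpha = beta = 0.5 recovers the Dice loss. Setting beta > alpha
    penalizes missed polyp pixels more, trading precision for recall."""
    p, t = pred.ravel(), target.ravel()
    tp = (p * t).sum()                 # soft true positives
    fp = (p * (1 - t)).sum()           # soft false positives
    fn = ((1 - p) * t).sum()           # soft false negatives
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

mask = np.array([[1, 1], [0, 0]], dtype=float)
perfect = tversky_loss(mask, mask)     # exact overlap -> loss 0
```

The alpha/beta trade-off is the lever a constrained variant like TacLoss would tune to keep true positives and false negatives in balance.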
Affiliation(s)
- Xin Wei
- School of Software, Nanchang University, 235 East Nanjing Road, Nanchang, 330047, China
- Jiacheng Sun
- School of Software, Nanchang University, 235 East Nanjing Road, Nanchang, 330047, China
- Pengxiang Su
- School of Software, Nanchang University, 235 East Nanjing Road, Nanchang, 330047, China
- Huan Wan
- School of Computer Information Engineering, Jiangxi Normal University, 99 Ziyang Avenue, Nanchang, 330022, China
- Zhitao Ning
- School of Software, Nanchang University, 235 East Nanjing Road, Nanchang, 330047, China
11
Rehman ZU, Ahmad Fauzi MF, Wan Ahmad WSHM, Abas FS, Cheah PL, Chiew SF, Looi LM. Computational approach for counting of SISH amplification signals for HER2 status assessment. PeerJ Comput Sci 2024; 10:e2373. [PMID: 39650490 PMCID: PMC11623010 DOI: 10.7717/peerj-cs.2373] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2024] [Accepted: 09/09/2024] [Indexed: 12/11/2024]
Abstract
The human epidermal growth factor receptor 2 (HER2) gene is a critical biomarker for determining amplification status and targeting clinical therapies in breast cancer treatment. This study introduces a computer-aided method that automatically measures and scores HER2 gene status from invasive tissue regions of breast cancer in whole slide images (WSI) stained with silver in situ hybridization (SISH). Image processing and deep learning techniques are employed to isolate untruncated and non-overlapping single nuclei from cancer regions. The StarDist deep learning model is fine-tuned on our HER2-SISH data to identify nuclei regions, followed by post-processing based on identified HER2 and CEP17 signals. Conventional thresholding techniques are used to segment HER2 and CEP17 signals. HER2 amplification status is determined by calculating the HER2-to-CEP17 signal ratio, in accordance with ASCO/CAP 2018 standards. The proposed method significantly reduces the effort and time required for quantification. Experimental results demonstrate a correlation coefficient of 0.91 between pathologists' manual enumeration and the proposed automatic SISH quantification approach. A one-sided paired t-test confirmed that the differences between the outcomes of the proposed method and the reference standard are not statistically significant, with p-values exceeding 0.05. This study illustrates how deep learning can effectively automate HER2 status determination, demonstrating improvements over current manual methods and offering a robust, reproducible alternative for clinical practice.
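The scoring step reduces to a ratio of summed per-nucleus signal counts. The sketch below is a simplified reading of the ASCO/CAP 2018 thresholds (the full guideline defines five ISH groups and additional workup rules); the helper is illustrative, not the paper's implementation:

```python
def her2_status(her2_signals, cep17_signals):
    """Simplified HER2 ISH call loosely following ASCO/CAP 2018.

    her2_signals / cep17_signals: per-nucleus signal counts over the
    same scored nuclei. Only the headline thresholds are modeled:
    ratio >= 2.0 with mean HER2 copy number >= 4.0 -> amplified.
    """
    n = len(her2_signals)
    avg_her2 = sum(her2_signals) / n
    ratio = sum(her2_signals) / max(sum(cep17_signals), 1)
    if ratio >= 2.0 and avg_her2 >= 4.0:
        return "amplified", ratio
    if ratio < 2.0 and avg_her2 < 4.0:
        return "not amplified", ratio
    return "equivocal (further workup)", ratio
```

Cases where the ratio and the copy number disagree fall into the guideline's intermediate ISH groups, which this sketch simply flags for further workup.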
Affiliation(s)
- Zaka Ur Rehman
- Faculty of Engineering, Multimedia University, Cyberjaya, Selangor, Malaysia
- Wan Siti Halimatul Munirah Wan Ahmad
- Faculty of Engineering, Multimedia University, Cyberjaya, Selangor, Malaysia
- Institute for Research, Development and Innovation, International Medical University, Bukit Jalil, Kuala Lumpur, Malaysia
- Fazly Salleh Abas
- Faculty of Engineering and Technology, Multimedia University, Ayer Keroh, Malacca, Malaysia
- Phaik Leng Cheah
- Department of Pathology, University Malaya Medical Center, Kuala Lumpur, Malaysia
- Seow Fan Chiew
- Department of Pathology, University Malaya Medical Center, Kuala Lumpur, Malaysia
- Lai-Meng Looi
- Department of Pathology, University Malaya Medical Center, Kuala Lumpur, Malaysia
12
Rehman ZU, Ahmad Fauzi MF, Wan Ahmad WSHM, Abas FS, Cheah PL, Chiew SF, Looi LM. Review of In Situ Hybridization (ISH) Stain Images Using Computational Techniques. Diagnostics (Basel) 2024; 14:2089. [PMID: 39335767 PMCID: PMC11430898 DOI: 10.3390/diagnostics14182089] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2024] [Revised: 09/10/2024] [Accepted: 09/17/2024] [Indexed: 09/30/2024] Open
Abstract
Recent advancements in medical imaging have greatly enhanced the application of computational techniques in digital pathology, particularly for the classification of breast cancer using in situ hybridization (ISH) imaging. HER2 amplification, a key prognostic marker in 20-25% of breast cancers, can be assessed through alterations in gene copy number or protein expression. However, challenges persist due to the heterogeneity of nuclear regions and complexities in cancer biomarker detection. This review examines semi-automated and fully automated computational methods for analyzing ISH images with a focus on HER2 gene amplification. Literature from 1997 to 2023 is analyzed, emphasizing silver-enhanced in situ hybridization (SISH) and its integration with image processing and machine learning techniques. Both conventional machine learning approaches and recent advances in deep learning are compared. The review reveals that automated ISH analysis in combination with bright-field microscopy provides a cost-effective and scalable solution for routine pathology. The integration of deep learning techniques shows promise in improving accuracy over conventional methods, although there are limitations related to data variability and computational demands. Automated ISH analysis can reduce manual labor and increase diagnostic accuracy. Future research should focus on refining these computational methods, particularly in handling the complex nature of HER2 status evaluation, and integrate best practices to further enhance clinical adoption of these techniques.
Affiliation(s)
- Zaka Ur Rehman
- Faculty of Engineering, Multimedia University, Cyberjaya 63100, Malaysia
- Wan Siti Halimatul Munirah Wan Ahmad
- Faculty of Engineering, Multimedia University, Cyberjaya 63100, Malaysia
- Institute for Research, Development and Innovation (IRDI), IMU University, Bukit Jalil, Kuala Lumpur 57000, Malaysia
- Fazly Salleh Abas
- Faculty of Engineering and Technology, Multimedia University, Bukit Beruang, Melaka 75450, Malaysia
- Phaik Leng Cheah
- Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
- Seow Fan Chiew
- Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
- Lai-Meng Looi
- Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
13
Zhao S, Zhao X, Huo Z, Zhang F. BMSeNet: Multiscale Context Pyramid Pooling and Spatial Detail Enhancement Network for Real-Time Semantic Segmentation. SENSORS (BASEL, SWITZERLAND) 2024; 24:5145. [PMID: 39204841 PMCID: PMC11360195 DOI: 10.3390/s24165145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2024] [Revised: 08/05/2024] [Accepted: 08/06/2024] [Indexed: 09/04/2024]
Abstract
Most real-time semantic segmentation networks use shallow architectures to achieve fast inference speeds. This approach, however, limits a network's receptive field. Concurrently, feature information extraction is restricted to a single scale, which reduces the network's ability to generalize and maintain robustness. Furthermore, loss of image spatial details negatively impacts segmentation accuracy. To address these limitations, this paper proposes a Multiscale Context Pyramid Pooling and Spatial Detail Enhancement Network (BMSeNet). First, to address the limitation of singular semantic feature scales, a Multiscale Context Pyramid Pooling Module (MSCPPM) is introduced. By leveraging various pooling operations, this module efficiently enlarges the receptive field and better aggregates multiscale contextual information. Moreover, a Spatial Detail Enhancement Module (SDEM) is designed, to effectively compensate for lost spatial detail information and significantly enhance the perception of spatial details. Finally, a Bilateral Attention Fusion Module (BAFM) is proposed. This module leverages pixel positional correlations to guide the network in assigning appropriate weights to the features extracted from the two branches, effectively merging the feature information of both branches. Extensive experiments were conducted on the Cityscapes and CamVid datasets. Experimental results show that the proposed BMSeNet achieves a good balance between inference speed and segmentation accuracy, outperforming some state-of-the-art real-time semantic segmentation methods.
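The pyramid-pooling idea behind the MSCPPM can be shown in miniature: pool the same feature map at several window sizes and concatenate the pooled summaries, so the output mixes fine and coarse context. The sketch below works on a 1-D feature map with illustrative window sizes; it is not the paper's module:

```python
def multiscale_context(feat, windows=(1, 2, 4)):
    """Average-pool a 1-D feature map at several window sizes and
    concatenate the pooled summaries (pyramid pooling in miniature).

    Larger windows summarize wider context, emulating a larger
    receptive field without deepening the network.
    """
    out = []
    for w in windows:
        pooled = [sum(feat[i:i + w]) / w
                  for i in range(0, len(feat) - w + 1, w)]
        out.extend(pooled)
    return out
```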
Affiliation(s)
- Xin Zhao
- School of Software, Henan Polytechnic University, Jiaozuo 454000, China
14
Dunenova G, Kalmataeva Z, Kaidarova D, Dauletbaev N, Semenova Y, Mansurova M, Grjibovski A, Kassymbekova F, Sarsembayev A, Semenov D, Glushkova N. The Performance and Clinical Applicability of HER2 Digital Image Analysis in Breast Cancer: A Systematic Review. Cancers (Basel) 2024; 16:2761. [PMID: 39123488 PMCID: PMC11311684 DOI: 10.3390/cancers16152761] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2024] [Revised: 07/28/2024] [Accepted: 07/30/2024] [Indexed: 08/12/2024] Open
Abstract
This systematic review addresses the research gap in the performance of computational algorithms for digital image analysis of HER2 images in clinical settings. While numerous studies have explored various aspects of these algorithms, there is a lack of comprehensive evaluation of their effectiveness in real-world clinical applications. We searched the Web of Science and PubMed databases for studies published from 31 December 2013 to 30 June 2024, focusing on performance effectiveness and components such as dataset size, diversity and source, ground truth, annotation, and validation methods. The study was registered with PROSPERO (CRD42024525404). Key questions guiding this review include the following: How effective are current computational algorithms at detecting HER2 status in digital images? What are the common validation methods and dataset characteristics used in these studies? Is algorithm evaluation standardized in ways that could improve the clinical utility and reliability of computational tools for HER2 detection in digital image analysis? We identified 6833 publications, with 25 meeting the inclusion criteria. Accuracy on clinical datasets varied from 84.19% to 97.9%; the highest accuracy, 98.8%, was achieved on synthesized data from the publicly available Warwick dataset. Only 12% of studies used separate datasets for external validation; 64% of studies used a combination of accuracy, precision, recall, and F1 as a set of performance measures. Despite the high accuracy rates reported in these studies, there is a notable absence of direct evidence supporting clinical application. To facilitate the integration of these technologies into clinical practice, there is an urgent need to address real-world challenges and the overreliance on internal validation. Standardizing study designs on real clinical datasets can enhance the reliability and clinical applicability of computational algorithms for HER2 detection.
Affiliation(s)
- Gauhar Dunenova
- Department of Epidemiology, Biostatistics and Evidence-Based Medicine, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Zhanna Kalmataeva
- Rector Office, Asfendiyarov Kazakh National Medical University, Almaty 050000, Kazakhstan
- Dilyara Kaidarova
- Kazakh Research Institute of Oncology and Radiology, Almaty 050022, Kazakhstan
- Nurlan Dauletbaev
- Department of Internal, Respiratory and Critical Care Medicine, Philipps University of Marburg, 35037 Marburg, Germany
- Department of Pediatrics, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC H4A 3J1, Canada
- Faculty of Medicine and Health Care, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Yuliya Semenova
- School of Medicine, Nazarbayev University, Astana 010000, Kazakhstan
- Madina Mansurova
- Department of Artificial Intelligence and Big Data, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Andrej Grjibovski
- Central Scientific Research Laboratory, Northern State Medical University, Arkhangelsk 163000, Russia
- Department of Epidemiology and Modern Vaccination Technologies, I.M. Sechenov First Moscow State Medical University, Moscow 105064, Russia
- Department of Biology, Ecology and Biotechnology, Northern (Arctic) Federal University, Arkhangelsk 163000, Russia
- Department of Health Policy and Management, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Fatima Kassymbekova
- Department of Public Health and Social Sciences, Kazakhstan Medical University “KSPH”, Almaty 050060, Kazakhstan
- Aidos Sarsembayev
- School of Digital Technologies, Almaty Management University, Almaty 050060, Kazakhstan
- Health Research Institute, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Daniil Semenov
- Computer Science and Engineering Program, Astana IT University, Astana 020000, Kazakhstan
- Natalya Glushkova
- Department of Epidemiology, Biostatistics and Evidence-Based Medicine, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Health Research Institute, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
15
Selcuk SY, Yang X, Bai B, Zhang Y, Li Y, Aydin M, Unal AF, Gomatam A, Guo Z, Angus DM, Kolodney G, Atlan K, Haran TK, Pillar N, Ozcan A. Automated HER2 Scoring in Breast Cancer Images Using Deep Learning and Pyramid Sampling. BME FRONTIERS 2024; 5:0048. [PMID: 39045139 PMCID: PMC11265840 DOI: 10.34133/bmef.0048] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2024] [Accepted: 06/14/2024] [Indexed: 07/25/2024] Open
Abstract
Objective and Impact Statement: Human epidermal growth factor receptor 2 (HER2) is a critical protein in cancer cell growth that signifies the aggressiveness of breast cancer (BC) and helps predict its prognosis. Here, we introduce a deep learning-based approach utilizing pyramid sampling for the automated classification of HER2 status in immunohistochemically (IHC) stained BC tissue images. Introduction: Accurate assessment of IHC-stained tissue slides for HER2 expression levels is essential for both treatment guidance and understanding of cancer mechanisms. Nevertheless, the traditional workflow of manual examination by board-certified pathologists encounters challenges, including inter- and intra-observer inconsistency and extended turnaround times. Methods: Our deep learning-based method analyzes morphological features at various spatial scales, efficiently managing the computational load and facilitating a detailed examination of cellular and larger-scale tissue-level details. Results: This approach addresses the tissue heterogeneity of HER2 expression by providing a comprehensive view, leading to a blind testing classification accuracy of 84.70%, on a dataset of 523 core images from tissue microarrays. Conclusion: This automated system, proving reliable as an adjunct pathology tool, has the potential to enhance diagnostic precision and evaluation speed, and might substantially impact cancer treatment planning.
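Pyramid sampling of the kind described pairs a fixed output size with progressively larger fields of view around the same location. A minimal sketch follows; the patch geometry, scale factors, and nearest-neighbour subsampling are assumptions for illustration, not the paper's configuration:

```python
def pyramid_patches(img, center, base=32, scales=(1, 2, 4)):
    """Extract concentric patches at several spatial scales, each
    subsampled back to base x base pixels.

    img: 2-D list of pixel values; center: (row, col). Every scale
    yields the same output size, so a classifier sees cellular detail
    (scale 1) and wider tissue context (scales 2, 4) side by side.
    """
    r, c = center
    patches = []
    for s in scales:
        half = base * s // 2
        crop = [row[c - half:c + half] for row in img[r - half:r + half]]
        # nearest-neighbour subsampling: keep every s-th pixel
        patches.append([row[::s] for row in crop[::s]])
    return patches
```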
Affiliation(s)
- Sahan Yoruc Selcuk
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Xilin Yang
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Musa Aydin
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Aras Firat Unal
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Aditya Gomatam
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Zhen Guo
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Darrow Morgan Angus
- Department of Pathology and Laboratory Medicine, University of California at Davis, Sacramento, CA, USA
- Karine Atlan
- Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, USA
- David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
16
Wu S, Li X, Miao J, Xian D, Yue M, Liu H, Fan S, Wei W, Liu Y. Artificial intelligence for assisted HER2 immunohistochemistry evaluation of breast cancer: A systematic review and meta-analysis. Pathol Res Pract 2024; 260:155472. [PMID: 39053133 DOI: 10.1016/j.prp.2024.155472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/22/2024] [Revised: 07/05/2024] [Accepted: 07/14/2024] [Indexed: 07/27/2024]
Abstract
Accurate assessment of HER2 expression in tumor tissue is crucial for determining HER2-targeted treatment options. Nevertheless, pathologists' assessments of HER2 status are less objective than automated, computer-based evaluations. Artificial intelligence (AI) promises enhanced accuracy and reproducibility in HER2 interpretation. This study aimed to systematically evaluate current AI algorithms for HER2 immunohistochemical diagnosis, offering insights to guide the development of more adaptable algorithms in response to evolving HER2 assessment practices. A comprehensive search of the PubMed, Embase, Cochrane, and Web of Science databases was conducted using a combination of subject terms and free text. A total of 4994 computational pathology articles published from inception to September 2023 identifying HER2 expression in breast cancer were retrieved. After applying predefined inclusion and exclusion criteria, seven studies were selected. These seven studies comprised 6867 HER2 identification tasks: two studies employed the HER2-CONNECT algorithm, two used CNN algorithms, one used a multi-class logistic regression algorithm, and two used the HER2 4B5 algorithm. AI's sensitivity and specificity for distinguishing HER2 0/1+ were 0.98 [0.92-0.99] and 0.92 [0.80-0.97], respectively. For distinguishing HER2 2+, the sensitivity and specificity were 0.78 [0.50-0.92] and 0.98 [0.93-0.99], respectively. For HER2 3+, AI exhibited a sensitivity of 0.99 [0.98-1.00] and a specificity of 0.99 [0.97-1.00]. Furthermore, because HER2-targeted therapies were previously unavailable for HER2-negative patients, pathologists may have neglected to distinguish between HER2 0 and 1+, leaving room for improvement in AI performance on this differentiation. AI excels in automating the assessment of HER2 immunohistochemistry, showing promising results despite slight variations in performance across HER2 statuses.
While incorporating AI algorithms into the pathology workflow for HER2 assessment poses challenges in standardization, application patterns, and ethical considerations, ongoing advancements suggest its potential as a widely effective tool for pathologists in clinical practice in the near future.
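The pooled figures above are one-vs-rest sensitivity and specificity per HER2 category. How such per-class values are computed from paired reference and predicted labels can be sketched as follows (the function name and label strings are illustrative):

```python
def one_vs_rest_metrics(y_true, y_pred, positive):
    """Sensitivity and specificity for one class against the rest.

    y_true / y_pred: parallel lists of categorical labels;
    positive: the class treated as 'positive' (e.g. "3+").
    """
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    tn = sum(t != positive and p != positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    sensitivity = tp / (tp + fn)  # recall on the positive class
    specificity = tn / (tn + fp)  # recall on the pooled negatives
    return sensitivity, specificity
```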
Affiliation(s)
- Si Wu
- Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, Hebei 050011, China
- Xiang Li
- Medical Affairs Department, Betrue AI Lab, Guangzhou 510700, China
- Jiaxian Miao
- Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, Hebei 050011, China
- Dongyi Xian
- Medical Affairs Department, Betrue AI Lab, Guangzhou 510700, China
- Meng Yue
- Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, Hebei 050011, China
- Hongbo Liu
- Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, Hebei 050011, China
- Shishun Fan
- Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, Hebei 050011, China
- Weiwei Wei
- Medical Affairs Department, Betrue AI Lab, Guangzhou 510700, China
- Yueping Liu
- Department of Pathology, The Fourth Hospital of Hebei Medical University, No. 12 Jiankang Road, Shijiazhuang, Hebei 050011, China
17
Alahmari SS, Goldgof D, Hall LO, Mouton PR. A Review of Nuclei Detection and Segmentation on Microscopy Images Using Deep Learning With Applications to Unbiased Stereology Counting. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:7458-7477. [PMID: 36327184 DOI: 10.1109/tnnls.2022.3213407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
The detection and segmentation of stained cells and nuclei are essential prerequisites for subsequent quantitative research for many diseases. Recently, deep learning has shown strong performance in many computer vision problems, including solutions for medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease with an emphasis on deep learning approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation of microscopic images from stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.
18
Wan Z, Li M, Wang Z, Tan H, Li W, Yu L, Samuel DJ. CellT-Net: A Composite Transformer Method for 2-D Cell Instance Segmentation. IEEE J Biomed Health Inform 2024; 28:730-741. [PMID: 37023158 DOI: 10.1109/jbhi.2023.3265006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/08/2023]
Abstract
Cell instance segmentation (CIS) via light microscopy and artificial intelligence (AI) is essential to cell and gene therapy-based health care management, which offers the hope of revolutionary health care. An effective CIS method can help clinicians to diagnose neurological disorders and quantify how well these deadly disorders respond to treatment. To address the CIS task challenged by dataset characteristics such as irregular morphology, variation in sizes, cell adhesion, and obscure contours, we propose a novel deep learning model named CellT-Net to actualize effective cell instance segmentation. In particular, the Swin transformer (Swin-T) is used as the basic model to construct the CellT-Net backbone, as the self-attention mechanism can adaptively focus on useful image regions while suppressing irrelevant background information. Moreover, CellT-Net incorporating Swin-T constructs a hierarchical representation and generates multi-scale feature maps that are suitable for detecting and segmenting cells at different scales. A novel composite style named cross-level composition (CLC) is proposed to build composite connections between identical Swin-T models in the CellT-Net backbone and generate more representational features. The earth mover's distance (EMD) loss and binary cross entropy loss are used to train CellT-Net and actualize the precise segmentation of overlapped cells. The LiveCELL and Sartorius datasets are utilized to validate the model effectiveness, and the results demonstrate that CellT-Net can achieve better model performance for dealing with the challenges arising from the characteristics of cell datasets than state-of-the-art models.
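For two 1-D histograms of equal total mass, the earth mover's distance named in CellT-Net's training objective reduces to the accumulated difference of their cumulative sums. A minimal sketch of that special case follows (the paper's exact loss formulation may differ):

```python
def emd_1d(p, q):
    """Earth mover's distance between two 1-D histograms of equal
    total mass: the running sum of CDF differences.

    Each unit of mass moved one bin over contributes 1 to the cost.
    """
    diff, total = 0.0, 0.0
    for pi, qi in zip(p, q):
        total += pi - qi       # signed mass carried past this bin
        diff += abs(total)     # cost of carrying it one bin further
    return diff
```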
19
Mirimoghaddam MM, Majidpour J, Pashaei F, Arabalibeik H, Samizadeh E, Roshan NM, Rashid TA. HER2GAN: Overcome the Scarcity of HER2 Breast Cancer Dataset Based on Transfer Learning and GAN Model. Clin Breast Cancer 2024; 24:53-64. [PMID: 37926662 DOI: 10.1016/j.clbc.2023.09.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2023] [Revised: 09/06/2023] [Accepted: 09/24/2023] [Indexed: 11/07/2023]
Abstract
INTRODUCTION Immunohistochemistry (IHC) is crucial for breast cancer diagnosis, classification, and individualized treatment. IHC is used to measure the expression levels of hormone receptors (estrogen and progesterone receptors), human epidermal growth factor receptor 2 (HER2), and other biomarkers, which guide treatment decisions and help predict patient outcomes. Scoring breast cancer on IHC slides remains a central challenge, given the structural and morphological features that must be taken into account and the scarcity of relevant data. Several recent studies have applied machine learning and deep learning techniques to these problems. MATERIALS AND METHODS This paper introduces a new approach based on supervised deep learning. A GAN-based model is proposed for generating high-quality HER2 images and for identifying and classifying HER2 levels. The original and generated images were evaluated using transfer learning methodologies. RESULTS AND CONCLUSION All models were trained and evaluated on publicly accessible and private datasets, respectively. The InceptionV3 and InceptionResNetV2 models achieved a high accuracy of 93% when the combined generated and original images were used for training and testing, demonstrating the exceptional quality of the details in the synthesized images.
Affiliation(s)
- Jafar Majidpour
- Department of Computer Science, University of Raparin, Rania, Iraq
- Fakhereh Pashaei
- Radiation Sciences Research Center (RSRC), Aja University of Medical Sciences, Tehran, Iran
- Hossein Arabalibeik
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
- Esmaeil Samizadeh
- Department of Pathology, School of Medicine and Imam Reza Hospital, AJA University of Medical Sciences, Tehran, Iran
- Tarik A Rashid
- Computer Science and Engineering Department, University of Kurdistan Hewlêr, Erbil, Iraq
20
Al Zorgani MM, Ugail H, Pors K, Dauda AM. Deep Transfer Learning-Based Approach for Glucose Transporter-1 (GLUT1) Expression Assessment. J Digit Imaging 2023; 36:2367-2381. [PMID: 37670181 PMCID: PMC10584776 DOI: 10.1007/s10278-023-00859-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2021] [Revised: 05/20/2023] [Accepted: 05/22/2023] [Indexed: 09/07/2023] Open
Abstract
Glucose transporter-1 (GLUT-1) expression level is a biomarker of tumour hypoxia in immunohistochemistry (IHC)-stained images. GLUT-1 scoring is thus a routine procedure currently employed for predicting tumour hypoxia markers in clinical practice. However, visual assessment of GLUT-1 scores is subjective and consequently prone to inter-pathologist variability. This study therefore proposes an automated method for assessing GLUT-1 scores in IHC colorectal carcinoma images. For this purpose, we leverage deep transfer learning to evaluate the performance of six pre-trained convolutional neural network (CNN) architectures: AlexNet, VGG16, GoogLeNet, ResNet50, DenseNet-201 and ShuffleNet. The target CNNs are either fine-tuned as classifiers or adapted as feature extractors with a support vector machine (SVM) to classify GLUT-1 scores in IHC images. Our experimental results show that the winning model is an SVM classifier trained on the fused deep features (Feat-Concat) extracted from DenseNet-201, ResNet50 and GoogLeNet. It yields the highest prediction accuracy of 98.86%, outperforming the other classifiers on our dataset. Comparing the methodologies, we also conclude that off-the-shelf feature extraction beats fine-tuning in terms of the time and resources required for training.
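At its core, the Feat-Concat fusion concatenates the per-backbone feature vectors into one descriptor, on which the SVM is then trained. A minimal sketch of the fusion step (the dictionary layout and sorted-key ordering are assumptions for illustration):

```python
def feat_concat(feature_maps):
    """Fuse per-backbone feature vectors by concatenation.

    feature_maps: dict mapping extractor name to its feature vector,
    e.g. {"densenet201": [...], "resnet50": [...], "googlenet": [...]}.
    Keys are sorted so the fused layout is deterministic across calls,
    which matters when the result feeds a downstream classifier.
    """
    fused = []
    for name in sorted(feature_maps):
        fused.extend(feature_maps[name])
    return fused
```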
Affiliation(s)
- Maisun Mohamed Al Zorgani
- Faculty of Engineering and Informatics, School of Media, Design and Technology, University of Bradford, Richmond Road, Bradford, BD7 1DP, UK
- Hassan Ugail
- Faculty of Engineering and Informatics, School of Media, Design and Technology, University of Bradford, Richmond Road, Bradford, BD7 1DP, UK
- Klaus Pors
- Institute of Cancer Therapeutics, University of Bradford, Richmond Road, Bradford, BD7 1DP, UK
- Abdullahi Magaji Dauda
- Institute of Cancer Therapeutics, University of Bradford, Richmond Road, Bradford, BD7 1DP, UK
21
Liu L, Liu Z, Chang J, Qiao H, Sun T, Shang J. MGGAN: A multi-generator generative adversarial network for breast cancer immunohistochemical image generation. Heliyon 2023; 9:e20614. [PMID: 37860562 PMCID: PMC10582479 DOI: 10.1016/j.heliyon.2023.e20614] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2023] [Revised: 10/02/2023] [Accepted: 10/02/2023] [Indexed: 10/21/2023] Open
Abstract
The immunohistochemical (IHC) technique is widely used for evaluating diagnostic markers, but obtaining IHC-stained sections can be expensive. Translating the cheap and easily available hematoxylin and eosin (HE) images into IHC images provides a solution to this challenge. In this paper, we propose a multi-generator generative adversarial network (MGGAN) that can generate high-quality IHC images from HE images of breast cancer. Our MGGAN approach combines the low-frequency and high-frequency components of the HE image to improve the translation of breast cancer image details. We use multiple generators to extract semantic information, and a U-shaped architecture with a patch-based discriminator to collect and optimize the low-frequency and high-frequency components of an image. We also include a cross-entropy loss as a regularization term in the loss function to ensure consistency between the synthesized image and the real image. Our experimental and visualization results demonstrate that our method outperforms other state-of-the-art image synthesis methods in both quantitative and qualitative analysis. Our approach provides a cost-effective and efficient solution for obtaining high-quality IHC images.
Collapse
Affiliation(s)
- Liangliang Liu
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, PR China
| | - Zhihong Liu
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, PR China
| | - Jing Chang
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, PR China
| | - Hongbo Qiao
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, PR China
| | - Tong Sun
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, PR China
| | - Junping Shang
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, PR China
| |
Collapse
|
22
|
Sivamurugan J, Sureshkumar G. Applying dual models on optimized LSTM with U-net segmentation for breast cancer diagnosis using mammogram images. Artif Intell Med 2023; 143:102626. [PMID: 37673584 DOI: 10.1016/j.artmed.2023.102626] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Revised: 07/08/2023] [Accepted: 07/09/2023] [Indexed: 09/08/2023]
Abstract
BACKGROUND OF THE STUDY Breast cancer is one of the most fatal diseases widely affecting women; it arises when cancerous lumps grow from the cells of the breast. Self-examination and regular medical check-ups help detect the disease earlier and enhance the survival rate. Hence, an automated breast cancer detection system for mammograms can assist clinicians in the patient's treatment. The categorization of breast cancer remains challenging for investigators and researchers, and advances in deep learning have drawn increasing attention for their advantages in medical imaging, especially breast cancer detection. AIM The research work develops a novel hybrid model for breast cancer diagnosis supported by an optimized deep-learning architecture. METHODS The required images are gathered from benchmark datasets. These collected images undergo three pre-processing approaches, "Median Filtering, Histogram Equalization, and morphological operation", which help remove unwanted regions from the images. The pre-processed images are then passed to an optimized U-Net-based tumor segmentation phase to obtain accurate segmented results, with certain U-Net parameters optimized by "Adapted-Black Widow Optimization (A-BWO)". Detection is then performed in two ways, given as model 1 and model 2. In model 1, the segmented tumors are used to extract significant patterns with the "Gray-Level Co-occurrence Matrix (GLCM) and Local Gradient Pattern (LGP)". These extracted patterns are fed to the "Dual Model accessed Optimized Long Short-Term Memory (DM-OLSTM)" to perform breast cancer detection and obtain detection score 1. In model 2, the same segmented tumors are given to different variants of CNN, namely "VGG19, ResNet150, and Inception". The deep features extracted from the three CNN-based approaches are fused to form a single set of deep features. These fused deep features are inserted into the developed DM-OLSTM to obtain detection score 2 for breast cancer diagnosis. In the final phase of the hybrid model, score 1 and score 2 obtained from model 1 and model 2 are averaged to get the final detection output. RESULTS The accuracy and F1-score of the offered DM-OLSTM model reach 96% and 95%. CONCLUSION Experimental analysis proves that the recommended methodology achieves better performance on the benchmark dataset. Hence, the designed model is helpful for detecting breast cancer in real-time applications.
Collapse
Affiliation(s)
- J Sivamurugan
- Department of Computer Science and Engineering, School of Engineering & Technology, Pondicherry University (Karaikal Campus), Karaikal-609605, Puducherry UT, India.
| | - G Sureshkumar
- Department of Computer Science and Engineering, School of Engineering & Technology, Pondicherry University (Karaikal Campus), Karaikal-609605, Puducherry UT, India
| |
Collapse
|
23
|
Pham MD, Balezo G, Tilmant C, Petit S, Salmon I, Hadj SB, Fick RHJ. Interpretable HER2 scoring by evaluating clinical guidelines through a weakly supervised, constrained deep learning approach. Comput Med Imaging Graph 2023; 108:102261. [PMID: 37356357 DOI: 10.1016/j.compmedimag.2023.102261] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 04/30/2023] [Accepted: 06/11/2023] [Indexed: 06/27/2023]
Abstract
The evaluation of Human Epidermal growth factor Receptor-2 (HER2) expression is an important prognostic biomarker for breast cancer treatment selection. However, HER2 scoring has notoriously high interobserver variability due to stain variations between centers and the need to visually estimate the staining intensity in specific percentages of tumor area. In this paper, focusing on the interpretability of HER2 scoring for the pathologist, we propose a semi-automatic, two-stage deep learning approach that directly evaluates the clinical HER2 guidelines defined by the American Society of Clinical Oncology/College of American Pathologists (ASCO/CAP). In the first stage, we segment the invasive tumor over the user-indicated Region of Interest (ROI). In the second stage, we classify the tumor tissue into four HER2 classes. For the classification stage, we use weakly supervised, constrained optimization to find a model that classifies cancerous patches such that the tumor surface percentage meets the guideline specification of each HER2 class. We end the second stage by freezing the model and refining its output logits in a supervised way using all slide labels in the training set. To ensure the quality of our dataset's labels, we conducted a multi-pathologist HER2 scoring consensus. For doubtful cases where no consensus was found, our model can help through its interpretable HER2 class-percentage output. We achieve an F1-score of 0.78 on the test set while keeping our model interpretable for the pathologist, hopefully contributing to interpretable AI models in digital pathology.
Collapse
Affiliation(s)
- Manh-Dan Pham
- Tribun Health, 2 Rue du Capitaine Scott, 75015 Paris, France
| | | | | | | | | | - Saïma Ben Hadj
- Tribun Health, 2 Rue du Capitaine Scott, 75015 Paris, France
| | - Rutger H J Fick
- Tribun Health, 2 Rue du Capitaine Scott, 75015 Paris, France.
| |
Collapse
|
24
|
Li H, Zhong J, Lin L, Chen Y, Shi P. Semi-supervised nuclei segmentation based on multi-edge features fusion attention network. PLoS One 2023; 18:e0286161. [PMID: 37228137 DOI: 10.1371/journal.pone.0286161] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 05/09/2023] [Indexed: 05/27/2023] Open
Abstract
The morphology of nuclei carries most of the clinically relevant pathological information, and nuclei segmentation is a vital step in current automated histopathological image analysis. Supervised machine learning-based segmentation models have already achieved outstanding performance given sufficiently precise human annotations. Nevertheless, outlining such labels on numerous nuclei requires extensive expertise and is time consuming. Automatic nuclei segmentation with minimal manual intervention is highly needed to promote the effectiveness of clinical pathological research. Semi-supervised learning greatly reduces the dependence on labeled samples while ensuring sufficient accuracy. In this paper, we propose a Multi-Edge Feature Fusion Attention Network (MEFFA-Net) with three feature inputs, image, pseudo-mask and edge, which enhances its learning ability by considering multiple features. Only a few labeled nucleus boundaries are used for training, with annotations generated for the remaining, mostly unlabeled data. MEFFA-Net creates more precise boundary masks for nucleus segmentation based on pseudo-masks, which greatly reduces the dependence on manual labeling. The MEFFA-Block focuses on the nucleus outline and selects features conducive to segmentation, making full use of the multiple features. Experimental results on public multi-organ databases including MoNuSeg, CPM-17 and CoNSeP show that the proposed model achieves mean IoU segmentation scores of 0.706, 0.751, and 0.722, respectively. The model also achieves better results than some cutting-edge methods while reducing the labeling work to 1/8 of that in common supervised strategies. Our method provides a more efficient and accurate basis for nuclei segmentation and further quantification in pathological research.
Collapse
Affiliation(s)
- Huachang Li
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
| | - Jing Zhong
- Department of Radiology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
| | - Liyan Lin
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
| | - Yanping Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
| | - Peng Shi
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
| |
Collapse
|
25
|
Wei X, Ye F, Wan H, Xu J, Min W. TANet: Triple Attention Network for medical image segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104608] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
|
26
|
BFMNet: Bilateral feature fusion network with multi-scale context aggregation for real-time semantic segmentation. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2022.11.084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
27
|
Zhang H, He Y, Wu X, Huang P, Qin W, Wang F, Ye J, Huang X, Liao Y, Chen H, Guo L, Shi X, Luo L. PathNarratives: Data annotation for pathological human-AI collaborative diagnosis. Front Med (Lausanne) 2023; 9:1070072. [PMID: 36777158 PMCID: PMC9908590 DOI: 10.3389/fmed.2022.1070072] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Accepted: 12/22/2022] [Indexed: 01/27/2023] Open
Abstract
Pathology is the gold standard of clinical diagnosis. Artificial intelligence (AI) in pathology is a growing trend, but it is still not widely used because it lacks the explanations pathologists need to understand its rationale. Clinic-compliant explanations accompanying the diagnostic decisions on pathological images are essential for training AI models that provide diagnostic suggestions to assist pathologists in practice. In this study, we propose a new annotation form, PathNarratives, that includes a hierarchical decision-to-reason data structure, a narrative annotation process, and a multimodal interactive annotation tool. Following PathNarratives, we recruited 8 pathologist annotators to build a colorectal pathological dataset, CR-PathNarratives, containing 174 whole-slide images (WSIs). We further experimented on the dataset with classification and captioning tasks to explore clinical scenarios of human-AI-collaborative pathological diagnosis. The classification tasks show that fine-grained prediction enhances the overall classification accuracy from 79.56% to 85.26%. In the human-AI collaboration experiments, the trust and confidence scores from the 8 pathologists rose from 3.88 to 4.63 when more details were provided. The results show that the classification and captioning tasks achieve better results with reason labels and provide explainable clues for doctors to understand and make the final decision, and thus can support a better experience of human-AI collaboration in pathological diagnosis. In the future, we plan to optimize the annotation tools and expand the dataset with more WSIs covering more pathological domains.
Collapse
Affiliation(s)
- Heyu Zhang
- College of Engineering, Peking University, Beijing, China
| | - Yan He
- Department of Pathology, Longgang Central Hospital of Shenzhen, Shenzhen, China
| | - Xiaomin Wu
- College of Engineering, Peking University, Beijing, China
| | - Peixiang Huang
- College of Engineering, Peking University, Beijing, China
| | - Wenkang Qin
- College of Engineering, Peking University, Beijing, China
| | - Fan Wang
- College of Engineering, Peking University, Beijing, China
| | - Juxiang Ye
- Department of Pathology, School of Basic Medical Science, Peking University Health Science Center, Peking University Third Hospital, Beijing, China
| | - Xirui Huang
- Department of Pathology, Longgang Central Hospital of Shenzhen, Shenzhen, China
| | - Yanfang Liao
- Department of Pathology, Longgang Central Hospital of Shenzhen, Shenzhen, China
| | - Hang Chen
- College of Engineering, Peking University, Beijing, China
| | - Limei Guo
- Department of Pathology, School of Basic Medical Science, Peking University Health Science Center, Peking University Third Hospital, Beijing, China. *Correspondence: Limei Guo
| | - Xueying Shi
- Department of Pathology, School of Basic Medical Science, Peking University Health Science Center, Peking University Third Hospital, Beijing, China. Correspondence: Xueying Shi
| | - Lin Luo
- College of Engineering, Peking University, Beijing, China. Correspondence: Lin Luo
| |
Collapse
|
28
|
Lightweight Separable Convolution Network for Breast Cancer Histopathological Identification. Diagnostics (Basel) 2023; 13:diagnostics13020299. [PMID: 36673109 PMCID: PMC9858205 DOI: 10.3390/diagnostics13020299] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 01/08/2023] [Accepted: 01/10/2023] [Indexed: 01/15/2023] Open
Abstract
Breast cancer is one of the leading causes of death among women worldwide. Histopathological images have proven to be a reliable way of diagnosing breast cancer, but manual examination is time consuming and resource intensive. To lessen the burden on pathologists and save lives, there is a need for an automated system to effectively analyze the images and predict the diagnosis. In this paper, a lightweight separable convolution network (LWSC) is proposed to automatically learn and classify breast cancer from histopathological images. The proposed architecture addresses the problem of low image quality by extracting the visually trainable features of the histopathological image using a contrast enhancement algorithm. The LWSC model implements separable convolution layers stacked in parallel with multiple filters of different sizes in order to obtain wider receptive fields. Additionally, factorization and bottleneck convolution layers are utilized to reduce model dimension. These methods sufficiently reduce the number of trainable parameters as well as the computational cost, while offering greater non-linear expressive capacity than plain convolutional networks. The evaluation results show that the proposed LWSC model performs well, obtaining 97.23% accuracy, 97.71% sensitivity, and 97.93% specificity on multi-class categories. Compared with other models, the proposed LWSC obtains comparable performance.
Collapse
|
29
|
Che Y, Ren F, Zhang X, Cui L, Wu H, Zhao Z. Immunohistochemical HER2 Recognition and Analysis of Breast Cancer Based on Deep Learning. Diagnostics (Basel) 2023; 13:263. [PMID: 36673073 PMCID: PMC9858188 DOI: 10.3390/diagnostics13020263] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 01/05/2023] [Accepted: 01/06/2023] [Indexed: 01/13/2023] Open
Abstract
Breast cancer is one of the common malignant tumors in women and seriously endangers women's life and health. The human epidermal growth factor receptor 2 (HER2) protein is responsible for the division and growth of healthy breast cells. Overexpression of the HER2 protein is generally evaluated by immunohistochemistry (IHC). The IHC evaluation criteria mainly include three indexes: staining intensity, circumferential membrane staining pattern, and proportion of positive cells. Manually scoring HER2 IHC images is error-prone, variable, and time-consuming work. To solve these problems, this study proposes an automated method for scoring whole-slide images (WSI) of HER2 slides based on a deep learning network. A total of 95 HER2 pathological slides from September 2021 to December 2021 were included. The average patch-level precision and F1 score were 95.77% and 83.09%, respectively. The overall accuracy of automated scoring for slide-level classification was 97.9%. The proposed method showed excellent specificity for all IHC 0 and 3+ slides and most 1+ and 2+ slides. The integrated method evaluates slides better than using the staining result alone.
Collapse
Affiliation(s)
- Yuxuan Che
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Jinfeng Laboratory, Chongqing 401329, China
| | - Fei Ren
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
| | - Xueyuan Zhang
- Beijing Zhijian Life Technology Co., Ltd., Beijing 100036, China
| | - Li Cui
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
| | - Huanwen Wu
- Department of Pathology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
| | - Ze Zhao
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
| |
Collapse
|
30
|
Deep Semantic Segmentation of Angiogenesis Images. Int J Mol Sci 2023; 24:ijms24021102. [PMID: 36674617 PMCID: PMC9866671 DOI: 10.3390/ijms24021102] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Revised: 12/31/2022] [Accepted: 01/02/2023] [Indexed: 01/09/2023] Open
Abstract
Angiogenesis is the development of new blood vessels from pre-existing ones. It is a complex, multifaceted process that is essential for the adequate functioning of the human organism. The investigation of angiogenesis is conducted using various methods, one of the most popular and serviceable of which in vitro is the short-term culture of endothelial cells on Matrigel. However, a significant disadvantage of this method is the manual analysis of a large number of microphotographs. In this regard, it is necessary to develop a technique for automating the annotation of images of capillary-like structures. Despite the increasing use of deep learning in biomedical image analysis, there has, to the best of our knowledge, not yet been a study applying this method to angiogenesis images. This article demonstrates the first tool, based on a convolutional UNet++ encoder-decoder architecture, for the semantic segmentation of in vitro angiogenesis simulation images, followed by postprocessing of the resulting masks for data analysis by experts. The first annotated dataset in this field, AngioCells, is also being made publicly available. To create this dataset, participants were recruited into a markup group, an annotation protocol was developed, and an inter-participant agreement study was carried out.
Collapse
|
31
|
Thalakottor LA, Shirwaikar RD, Pothamsetti PT, Mathews LM. Classification of Histopathological Images from Breast Cancer Patients Using Deep Learning: A Comparative Analysis. Crit Rev Biomed Eng 2023; 51:41-62. [PMID: 37581350 DOI: 10.1615/critrevbiomedeng.2023047793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/16/2023]
Abstract
Cancer, a leading cause of mortality, is distinguished by the multi-stage conversion of healthy cells into cancer cells. Early discovery of the disease can significantly enhance the possibility of survival. Histology is a procedure in which the tissue of interest is first surgically removed from a patient and cut into thin slices. A pathologist then mounts these slices on glass slides, stains them with specialized dyes like hematoxylin and eosin (H&E), and inspects the slides under a microscope. Unfortunately, manual analysis of histopathology images during breast cancer biopsy is time consuming. Literature suggests that automated techniques based on deep learning algorithms can increase the speed and accuracy of detecting abnormalities within the histopathological specimens obtained from breast cancer patients. This paper highlights some recent work on such algorithms and provides a comparative study of various deep learning methods. The present study uses the breast cancer histopathological database (BreakHis). These images are processed to enhance their inherent features, classified, and the accuracy of each algorithm is evaluated. Three convolutional neural network (CNN) models, visual geometry group (VGG19), densely connected convolutional networks (DenseNet201), and residual neural network (ResNet50V2), were employed in analyzing the images. Of these, the DenseNet201 model performed best, attaining an accuracy of 91.3%. The paper includes a review of different classification techniques based on machine learning methods, including CNN-based models, some of which may replace manual breast cancer diagnosis and detection.
Collapse
Affiliation(s)
- Louie Antony Thalakottor
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
| | - Rudresh Deepak Shirwaikar
- Department of Computer Engineering, Agnel Institute of Technology and Design (AITD), Goa University, Assagao, Goa, India, 403507
| | - Pavan Teja Pothamsetti
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
| | - Lincy Meera Mathews
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
| |
Collapse
|
32
|
Using Whole Slide Gray Value Map to Predict HER2 Expression and FISH Status in Breast Cancer. Cancers (Basel) 2022; 14:cancers14246233. [PMID: 36551720 PMCID: PMC9777488 DOI: 10.3390/cancers14246233] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 12/12/2022] [Accepted: 12/14/2022] [Indexed: 12/24/2022] Open
Abstract
Accurate detection of HER2 expression through immunohistochemistry (IHC) is of great clinical significance in the treatment of breast cancer. However, manual interpretation of HER2 is challenging, due to the interobserver variability among pathologists. We sought to explore a deep learning method to predict HER2 expression level and gene status based on a Whole Slide Image (WSI) of the HER2 IHC section. When applied to 228 invasive breast carcinoma of no special type (IBC-NST) DAB-stained slides, our GrayMap+ convolutional neural network (CNN) model accurately classified HER2 IHC level with mean accuracy 0.952 ± 0.029 and predicted HER2 FISH status with mean accuracy 0.921 ± 0.029. Our result also demonstrated strong consistency in HER2 expression score between our system and experienced pathologists (intraclass correlation coefficient (ICC) = 0.903, Cohen's κ = 0.875). The discordant cases were found to be largely caused by high intra-tumor staining heterogeneity in the HER2 IHC group and low copy number in the HER2 FISH group.
Collapse
|
33
|
Subramanian AAV, Venugopal JP. A deep ensemble network model for classifying and predicting breast cancer. Comput Intell 2022. [DOI: 10.1111/coin.12563] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
34
|
The Analysis of Relevant Gene Networks Based on Driver Genes in Breast Cancer. Diagnostics (Basel) 2022; 12:diagnostics12112882. [PMID: 36428940 PMCID: PMC9689550 DOI: 10.3390/diagnostics12112882] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Revised: 11/08/2022] [Accepted: 11/14/2022] [Indexed: 11/22/2022] Open
Abstract
BACKGROUND The occurrence and development of breast cancer are strongly correlated with a person's genetics. It is therefore important to analyze the genetic factors of breast cancer at the genetic level for the future development of potential targeted therapies. METHODS In this study, we analyze the protein-protein interaction network relevant to breast cancer. This involves three steps: selecting breast cancer-relevant genes using the mutual information method, reconstructing the protein-protein interaction network based on the STRING database, and identifying vital genes by node centrality analysis. RESULTS A total of 230 breast cancer-relevant genes were chosen in gene selection to reconstruct the protein-protein interaction network, and vital genes were identified by node centrality analyses. Node centrality analyses conducted with the top 10 and top 20 values of each metric found 19 and 39 statistically vital genes, respectively. To establish the biological significance of these vital genes, we carried out survival analysis and DNA methylation analysis and examined their prognosis in other cancer tissues and their RNA expression level in breast cancer. The results all confirmed the validity of the selected genes. CONCLUSIONS These genes could provide a valuable reference in the clinical treatment of breast cancer patients.
Collapse
|
35
|
Song B, Li S, Sunny S, Gurushanth K, Mendonca P, Mukhia N, Patrick S, Peterson T, Gurudath S, Raghavan S, Tsusennaro I, Leivon ST, Kolur T, Shetty V, Bushan V, Ramesh R, Pillai V, Wilder-Smith P, Suresh A, Kuriakose MA, Birur P, Liang R. Exploring uncertainty measures in convolutional neural network for semantic segmentation of oral cancer images. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:115001. [PMID: 36329004 PMCID: PMC9630461 DOI: 10.1117/1.jbo.27.11.115001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/04/2022] [Accepted: 10/13/2022] [Indexed: 06/16/2023]
Abstract
SIGNIFICANCE Oral cancer is one of the most prevalent cancers, especially in middle- and low-income countries such as India. Automatic segmentation of oral cancer images, a core task in oral cancer image analysis, can improve the diagnostic workflow. Despite the remarkable success of deep-learning networks in medical segmentation, they rarely provide uncertainty quantification for their output. AIM We aim to estimate uncertainty in a deep-learning approach to semantic segmentation of oral cancer images and to improve the accuracy and reliability of predictions. APPROACH This work introduced a UNet-based Bayesian deep-learning (BDL) model to segment potentially malignant and malignant lesion areas in the oral cavity while quantifying uncertainty in its predictions. We also developed an efficient variant that is almost six times smaller and roughly twice as fast at inference as the original UNet. The dataset in this study was collected using our customized screening platform and was annotated by oral oncology specialists. RESULTS The proposed approach achieved good segmentation performance as well as good uncertainty estimation. In the experiments, removing uncertain pixels improved pixel accuracy and mean intersection over union, reflecting that the model was less accurate in uncertain areas, which may need further inspection. The experiments also showed that, with some performance compromises, the efficient model reduced computation time and model size, expanding the potential for deployment on portable devices in resource-limited settings. CONCLUSIONS Our study demonstrates that the UNet-based BDL model not only performs potentially malignant and malignant oral lesion segmentation but also provides informative pixel-level uncertainty estimates. With this extra uncertainty information, the accuracy and reliability of the model's predictions can be improved.
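The pixel-level uncertainty this abstract describes is commonly obtained in BDL models via Monte-Carlo sampling; the sketch below is a generic illustration (not the paper's exact estimator), with hypothetical `confident`/`ambiguous` callables standing in for per-pixel stochastic forward passes of the network:

```python
import math
import random

def predictive_entropy(probs):
    """Entropy of the mean class probabilities; higher means more uncertain."""
    return -sum(p * math.log(p + 1e-12) for p in probs)

def mc_uncertainty(stochastic_forward, n_passes=30):
    """Average per-pixel class probabilities over repeated stochastic passes
    (e.g. dropout kept active at inference), then score predictive entropy."""
    runs = [stochastic_forward() for _ in range(n_passes)]
    n_classes = len(runs[0])
    mean = [sum(r[c] for r in runs) / n_passes for c in range(n_classes)]
    return mean, predictive_entropy(mean)

random.seed(0)

def confident():
    # Hypothetical background pixel: the same answer on every pass.
    return [0.97, 0.03]

def ambiguous():
    # Hypothetical lesion-boundary pixel: the prediction flips between passes.
    p = random.uniform(0.3, 0.7)
    return [p, 1.0 - p]

_, h_conf = mc_uncertainty(confident)
_, h_amb = mc_uncertainty(ambiguous)
assert h_amb > h_conf  # uncertain pixels can be flagged or removed
```

Thresholding on such an entropy map is what allows "removing uncertain pixels" to raise pixel accuracy, as reported in the abstract.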
Collapse
Affiliation(s)
- Bofan Song: The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States
- Shaobai Li: The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States
- Sumsum Sunny: Mazumdar Shaw Medical Centre, Bangalore, Karnataka, India
- Nirza Mukhia: KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India
- Tyler Peterson: The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States
- Shubha Gurudath: KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India
- Imchen Tsusennaro: Christian Institute of Health Sciences and Research, Dimapur, Nagaland, India
- Shirley T. Leivon: Christian Institute of Health Sciences and Research, Dimapur, Nagaland, India
- Trupti Kolur: Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Vivek Shetty: Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Vidya Bushan: Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Rohan Ramesh: Christian Institute of Health Sciences and Research, Dimapur, Nagaland, India
- Vijay Pillai: Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Petra Wilder-Smith: University of California, Beckman Laser Institute & Medical Clinic, Irvine, California, United States
- Amritha Suresh: Mazumdar Shaw Medical Centre, Bangalore, Karnataka, India; Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Praveen Birur: KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India; Biocon Foundation, Bangalore, Karnataka, India
- Rongguang Liang: The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States
Collapse
|
36
|
Shi P, Zhong J, Lin L, Lin L, Li H, Wu C. Nuclei segmentation of HE stained histopathological images based on feature global delivery connection network. PLoS One 2022; 17:e0273682. [PMID: 36107930 PMCID: PMC9477331 DOI: 10.1371/journal.pone.0273682] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Accepted: 08/12/2022] [Indexed: 11/22/2022] Open
Abstract
The analysis of pathological images, such as cell counting and nuclear morphological measurement, is an essential part of clinical histopathology research. Because cell boundaries remain uncertain and diverse after staining, automated nuclei segmentation of Hematoxylin-Eosin (HE) stained pathological images remains challenging. Although machine-learning-based segmentation strategies outperform most classic image-processing methods, the majority still require manual labeling, which restricts further improvements in efficiency and accuracy. Aiming at stable and efficient high-throughput pathological image analysis, an automated Feature Global Delivery Connection Network (FGDC-net) is proposed for nuclei segmentation of HE stained images. Firstly, training sample patches and their corresponding asymmetric labels are automatically generated based on a Full Mixup strategy from RGB to HSV color space. Secondly, to add connections between adjacent layers and achieve feature selection, the FGDC module is designed by removing the skip connections between encoder and decoder commonly used in UNet-based image segmentation networks; it learns the relationships between channels in each layer and passes information selectively. Finally, a dynamic training strategy based on a mixed loss is used to increase the generalization capability of the model through flexible epochs. The proposed improvements were verified by ablation experiments on multiple open databases and our own clinical meningioma dataset. Experimental results on multiple datasets showed that FGDC-net can effectively improve the segmentation of HE stained pathological images without manual intervention, providing valuable references for clinical pathological analysis.
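The paper's "Full Mixup" label-generation strategy is specific to its RGB-to-HSV pipeline, but the underlying mixup idea is standard: convexly blend two samples and their labels with a Beta-distributed coefficient. A minimal generic sketch (the feature vectors and `alpha` value are illustrative assumptions):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.4):
    """Classic mixup: a convex combination of two samples and their labels.
    lam ~ Beta(alpha, alpha); the paper applies the same idea to
    colour-converted patches and their (asymmetric) labels."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

random.seed(1)
x, y, lam = mixup([0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0])
assert 0.0 <= lam <= 1.0
assert abs(sum(y) - 1.0) < 1e-9  # the soft label stays a distribution
```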
Collapse
Affiliation(s)
- Peng Shi: College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China; Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
- Jing Zhong: Department of Radiology, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Liyan Lin: Department of Pathology, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Lin Lin: Department of Radiology, Fujian Medical University Union Hospital, Fuzhou, Fujian, China
- Huachang Li: College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China; Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
- Chongshu Wu: College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China; Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
Collapse
|
37
|
Zheng T, Zheng S, Wang K, Quan H, Bai Q, Li S, Qi R, Zhao Y, Cui X, Gao X. Automatic CD30 scoring method for whole slide images of primary cutaneous CD30 + lymphoproliferative diseases. J Clin Pathol 2022; 76:jclinpath-2022-208344. [PMID: 35863885 DOI: 10.1136/jcp-2022-208344] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Accepted: 07/07/2022] [Indexed: 11/03/2022]
Abstract
AIMS Deep-learning methods for scoring biomarkers are an active research topic, but the superior performance reported in many studies relies on large datasets collected from clinical samples, and there are few studies on immunohistochemical marker assessment for dermatological diseases. Accordingly, we developed a convolutional-neural-network-based method for scoring CD30 in primary cutaneous CD30+ lymphoproliferative disorders from limited samples and used it to evaluate other biomarkers. METHODS A multipatch spatial attention mechanism and a conditional random field algorithm were used to fully fuse tumour tissue characteristics on immunohistochemical slides and to alleviate the feature deficit caused by the small sample size. We trained and tested on 28 CD30+ immunohistochemical whole slide images (WSIs), evaluated the model with performance indices, and compared its output with the diagnoses of senior dermatologists. Finally, the model's performance was further demonstrated on the publicly available Yale HER2 cohort. RESULTS Compared with the diagnoses of senior dermatologists, this method better locates the tumour area and reduces the misdiagnosis rate. Predictions for CD3 and Ki-67 validated the model's ability to identify other biomarkers. CONCLUSIONS Using a few immunohistochemical WSIs, our model can accurately identify CD30, CD3 and Ki-67 markers, and it could be applied to additional tumour identification tasks to aid pathologists in diagnosis and benefit clinical evaluation.
Collapse
Affiliation(s)
- Tingting Zheng: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Song Zheng: Department of Dermatology, The First Hospital of China Medical University, Shenyang, Liaoning, China; National and Local Joint Engineering Research Center of Immunodermatological Theranostics No, Heping District, Liaoning Province, China; NHC Key Laboratory of Immunodermatology, Heping District, Liaoning Province, China
- Ke Wang: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Hao Quan: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Qun Bai: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Shuqin Li: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Ruiqun Qi: Department of Dermatology, The First Hospital of China Medical University, Shenyang, Liaoning, China; National and Local Joint Engineering Research Center of Immunodermatological Theranostics No, Heping District, Liaoning Province, China; NHC Key Laboratory of Immunodermatology, Heping District, Liaoning Province, China
- Yue Zhao: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China; National and Local Joint Engineering Research Center of Immunodermatological Theranostics No, Heping District, Liaoning Province, China
- Xiaoyu Cui: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Xinghua Gao: Department of Dermatology, The First Hospital of China Medical University, Shenyang, Liaoning, China; National and Local Joint Engineering Research Center of Immunodermatological Theranostics No, Heping District, Liaoning Province, China; NHC Key Laboratory of Immunodermatology, Heping District, Liaoning Province, China
Collapse
|
38
|
Mathew T, Niyas S, Johnpaul C, Kini JR, Rajan J. A novel deep classifier framework for automated molecular subtyping of breast carcinoma using immunohistochemistry image analysis. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103657] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
39
|
Han Z, Lan J, Wang T, Hu Z, Huang Y, Deng Y, Zhang H, Wang J, Chen M, Jiang H, Lee RG, Gao Q, Du M, Tong T, Chen G. A Deep Learning Quantification Algorithm for HER2 Scoring of Gastric Cancer. Front Neurosci 2022; 16:877229. [PMID: 35706692 PMCID: PMC9190202 DOI: 10.3389/fnins.2022.877229] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Accepted: 04/19/2022] [Indexed: 11/13/2022] Open
Abstract
Gastric cancer is the third most common cause of cancer-related death in the world. Human epidermal growth factor receptor 2 (HER2)-positive disease is an important subtype of gastric cancer, and HER2 status provides significant diagnostic information for pathologists. However, pathologists usually assign HER2 scores for gastric cancer with a semi-quantitative assessment, repeatedly comparing hematoxylin and eosin (H&E) whole slide images (WSIs) with their HER2 immunohistochemical WSIs one by one under the microscope, a repetitive, tedious, and highly subjective process. Additionally, WSIs contain billions of pixels, which poses computational challenges for Computer-Aided Diagnosis (CAD) systems. This study proposed a deep learning algorithm for quantitative HER2 evaluation in gastric cancer. Unlike studies that use convolutional neural networks only for feature-map extraction or WSI pre-processing, we proposed a novel automatic HER2 scoring framework. To accelerate computation, we used a re-parameterization scheme to separate the training model from the deployment model, which significantly speeds up inference. To the best of our knowledge, this is the first deep learning quantification algorithm for HER2 scoring of gastric cancer to assist the pathologist's diagnosis. Experimental results demonstrate the effectiveness of the proposed method, with an accuracy of 0.94 for HER2 score prediction.
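Re-parameterization schemes of this kind typically fold training-time branches into a single deploy-time kernel, for example merging a convolution with its batch-norm statistics so inference needs only one linear operation. A minimal sketch with made-up statistics, treating a 1x1 convolution on a single 3-channel pixel as a dot product (this illustrates the general trick, not the paper's exact architecture):

```python
import math

def bn(y, mean, var, gamma, beta, eps=1e-5):
    """Batch-norm applied after the conv output y (training-time graph)."""
    return gamma * (y - mean) / math.sqrt(var + eps) + beta

def fold_bn(w, b, mean, var, gamma, beta, eps=1e-5):
    """Fold the BN statistics into the conv weights for deployment."""
    s = gamma / math.sqrt(var + eps)
    return [wi * s for wi in w], (b - mean) * s + beta

# Toy 1x1 "conv" over one 3-channel pixel, with invented BN statistics.
w, b = [0.2, -0.5, 0.8], 0.1
x = [1.0, 2.0, 3.0]
y_train = bn(sum(wi * xi for wi, xi in zip(w, x)) + b,
             mean=0.3, var=1.2, gamma=0.9, beta=-0.2)
wf, bf = fold_bn(w, b, mean=0.3, var=1.2, gamma=0.9, beta=-0.2)
y_deploy = sum(wi * xi for wi, xi in zip(wf, x)) + bf
assert abs(y_train - y_deploy) < 1e-9  # same output, one op instead of two
```

The training and deployment graphs compute identical outputs, but the deployed model skips the normalization step entirely, which is where the inference speedup comes from.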
Collapse
Affiliation(s)
- Zixin Han: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Junlin Lan: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Tao Wang: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Ziwei Hu: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Yuxiu Huang: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Yanglin Deng: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Hejun Zhang: Department of Pathology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, China
- Jianchao Wang: Department of Pathology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, China
- Musheng Chen: Department of Pathology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, China
- Haiyan Jiang: Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China; College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China
- Ren-Guey Lee: Department of Electronic Engineering, National Taipei University of Technology, Taipei, Taiwan
- Qinquan Gao: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China; Imperial Vision Technology, Fuzhou, China
- Ming Du: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Tong Tong: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fuzhou University, Fuzhou, China; Imperial Vision Technology, Fuzhou, China
- Gang Chen: College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China; Fujian Provincial Key Laboratory of Translational Cancer Medicine, Fuzhou, China
Collapse
|
40
|
Graph-Embedded Online Learning for Cell Detection and Tumour Proportion Score Estimation. ELECTRONICS 2022. [DOI: 10.3390/electronics11101642] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Cell detection in microscopy images can provide useful clinical information. Most deep-learning methods for cell detection are fully supervised, and without enough labelled samples their accuracy drops rapidly. To handle limited annotations and massive unlabelled data, semi-supervised learning methods have been developed; however, many are trained off-line and cannot process new incoming data as clinical diagnosis requires. Therefore, we propose a novel graph-embedded online learning network (GeoNet) for cell detection. It can locate and classify cells from dot annotations, saving considerable manpower. Trained on both historical data and reliable new samples, the online network predicts nuclear locations for upcoming images while being optimized. To adapt more easily to open data, it employs dynamic graph regularization and learns the inherent nonlinear structure of cells. Moreover, GeoNet can be applied to downstream tasks such as quantitative estimation of the tumour proportion score (TPS), a useful indicator for lung squamous cell carcinoma treatment and prognostics. Experimental results on five large datasets with great variability in cell type and morphology validate the effectiveness and generalizability of the proposed method. For the lung squamous cell carcinoma (LUSC) dataset, GeoNet's detection F1-scores for negative and positive tumour cells are 0.734 and 0.769, respectively, and its relative error for TPS estimation is 11.1%.
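Once a detector such as GeoNet has classified tumour cells as positive or negative, the downstream TPS estimate itself is a simple ratio; as conventionally defined for PD-L1 assessment, it is the fraction of viable tumour cells that stain positive (a sketch of the definition, not the paper's pipeline):

```python
def tumour_proportion_score(n_positive, n_negative):
    """TPS (%) = positive tumour cells / all viable tumour cells * 100.
    Counts would come from an upstream cell detector's classifications."""
    total = n_positive + n_negative
    return 100.0 * n_positive / total if total else 0.0

# E.g. 30 positive and 70 negative detected tumour cells give TPS = 30%.
assert tumour_proportion_score(30, 70) == 30.0
```

Detection errors propagate directly into this ratio, which is why the paper reports a relative error on TPS alongside the per-class F1-scores.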
Collapse
|
41
|
Premature Ventricular Contraction Recognition Based on a Deep Learning Approach. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:1450723. [PMID: 35378947 PMCID: PMC8976634 DOI: 10.1155/2022/1450723] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 02/25/2022] [Accepted: 02/28/2022] [Indexed: 11/18/2022]
Abstract
The electrocardiogram (ECG) is a significant biological signal for diagnosing heart disease: it records the cyclical contraction and relaxation of the heart muscle and is a primary, noninvasive tool for recognizing life-threatening cardiac conditions. Abnormal ECG heartbeats and arrhythmia are possible symptoms of severe heart disease that can lead to death. Premature ventricular contraction (PVC) is one of the most common arrhythmias; it originates in the lower chamber of the heart and can cause cardiac arrest, palpitation, and other symptoms affecting all of a patient's activities. Nowadays, computer-assisted techniques reduce doctors' burden by assessing heart arrhythmia and heart disease automatically. In this study, we propose PVC recognition based on a deep learning approach using the MIT-BIH arrhythmia database. Firstly, 10 features, including three morphological features (RS amplitude, QR amplitude, and QRS width) and seven statistical features, are computed for each signal. These features are extracted from 20 s of ECG data to create a feature vector. Next, the features are fed into a convolutional neural network (CNN) to find unique patterns and classify them more effectively. The obtained results show that the pipeline improves diagnostic performance.
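A rough illustration of how the named morphological features can be read off a single beat. The toy beat, the assumption that the R peak is the maximum sample, and the Q/S-as-local-minima heuristic are all simplifications for illustration, not the paper's extraction procedure (MIT-BIH records are sampled at 360 Hz, hence the default `fs`):

```python
import statistics

def pvc_features(beat, fs=360.0):
    """Toy morphological + statistical features for one beat, assuming the
    R peak is the maximum sample and Q/S are the dips before/after it."""
    r = max(range(len(beat)), key=lambda i: beat[i])
    q = min(range(r + 1), key=lambda i: beat[i])          # dip before R
    s = min(range(r, len(beat)), key=lambda i: beat[i])   # dip after R
    return {
        "qr_amplitude": beat[r] - beat[q],
        "rs_amplitude": beat[r] - beat[s],
        "qrs_width_s": (s - q) / fs,                      # width in seconds
        "mean": statistics.fmean(beat),                   # statistical features
        "stdev": statistics.pstdev(beat),
    }

f = pvc_features([0.0, -0.1, -0.2, 1.0, -0.3, -0.05, 0.0])
assert round(f["qr_amplitude"], 6) == 1.2
assert round(f["rs_amplitude"], 6) == 1.3
```

A feature vector of this kind, stacked over a 20 s window, is what would be fed to the CNN classifier.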
Collapse
|
42
|
Shi W, Xu T, Yang H, Xi Y, Du Y, Li J, Li J. Attention Gate based dual-pathway Network for Vertebra Segmentation of X-ray Spine images. IEEE J Biomed Health Inform 2022; 26:3976-3987. [PMID: 35290194 DOI: 10.1109/jbhi.2022.3158968] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Automatic spine and vertebra segmentation from X-ray spine images is a critical and challenging problem in many computer-aided spinal image analysis and disease diagnosis applications. In this paper, a two-stage automatic segmentation framework for spine X-ray images is proposed: the coarse stage locates the spine regions (backbone, sacrum and ilium), and the fine stage identifies eighteen vertebrae (cervical vertebra 1, thoracic vertebrae 1-12 and lumbar vertebrae 1-5) with isolated, clear boundaries. A novel Attention Gate based dual-pathway Network (AGNet), composed of context and edge pathways, is designed to extract semantic and boundary information for segmentation of both spine and vertebra regions. A multi-scale supervision mechanism is applied to explore comprehensive features, and an Edge aware Fusion Mechanism (EFM) is proposed to fuse features extracted from the two pathways. Other image processing techniques, such as centralized backbone clipping, patch cropping and convex hull detection, further refine the vertebra segmentation results. Experimental validation on a spine X-ray image dataset and a vertebrae dataset suggests that the proposed AGNet achieves superior performance compared with state-of-the-art segmentation methods, and the coarse-to-fine framework can be implemented in real spinal diagnosis systems.
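The attention-gate idea behind AGNet can be sketched in additive form: a gating signal from the coarser pathway re-weights skip features before fusion. The scalar weights below are illustrative stand-ins for the learned 1x1 convolutions of a real attention gate:

```python
import math

def attention_gate(x, g, w_x=1.0, w_g=1.0, w_psi=1.0):
    """Additive attention gate (toy scalar weights): gate signal g
    re-weights skip features x before they are fused downstream."""
    q = [max(w_x * xi + w_g * gi, 0.0) for xi, gi in zip(x, g)]   # ReLU
    alpha = [1.0 / (1.0 + math.exp(-w_psi * qi)) for qi in q]     # sigmoid
    return [a * xi for a, xi in zip(alpha, x)]

# A strongly gated position keeps more of its feature than a suppressed one.
out = attention_gate([1.0, 1.0], [3.0, -5.0])
assert out[0] > out[1]
```

In AGNet this mechanism would operate on feature maps in both the context and edge pathways, letting the network emphasize vertebra boundaries while suppressing background.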
Collapse
|
43
|
Tewary S, Mukhopadhyay S. AutoIHCNet: CNN architecture and decision fusion for automated HER2 scoring. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108572] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
44
|
Wang X, Shao C, Liu W, Liang H, Li N. HER2-ResNet: A HER2 classification method based on deep residual network. Technol Health Care 2022; 30:215-224. [PMID: 35124598 PMCID: PMC9028740 DOI: 10.3233/thc-228020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND: HER2 gene expression is one of the main reference indicators for breast cancer detection and treatment, and an important target for selecting tumour-targeted therapy drugs. Correct detection and evaluation of HER2 gene expression therefore has important value for the clinical treatment of breast cancer. OBJECTIVE: The study goal is to better classify HER2 images. METHODS: In a general convolutional neural network, overfitting typically worsens as layers are added, which requires setting the value of the random descent ratio, and such parameter adjustment is time-consuming and laborious. This paper therefore uses a residual network, whose accuracy does not degrade as network depth increases. RESULTS: A HER2 image classification algorithm based on an improved residual network is proposed. Experimental results show that the proposed HER2 network has high accuracy in breast cancer assessment. CONCLUSION: Taking HER2 images from the Stanford University database as experimental data, the accuracy of automatic HER2 image classification is improved through experiments. This method will help reduce detection workload and improve the accuracy of HER2 image classification.
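The depth robustness claimed for the residual network rests on the identity shortcut: each block computes y = F(x) + x, so an extra block can at worst learn F(x) = 0 and behave as an identity. A one-function sketch of the idea (not the paper's network):

```python
def residual_block(x, transform):
    """y = F(x) + x: the identity shortcut means added depth can at worst
    learn F(x) = 0, so accuracy need not degrade as layers are stacked."""
    return [fi + xi for fi, xi in zip(transform(x), x)]

x = [0.5, -1.0, 2.0]
# If the residual branch learns nothing (all zeros), the block is an identity.
assert residual_block(x, lambda v: [0.0] * len(v)) == x
```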
Collapse
Affiliation(s)
- Xingang Wang: School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, Shandong, China
- Cuiling Shao: School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, Shandong, China
- Wensheng Liu: School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, Shandong, China
- Hu Liang: School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, Shandong, China
- Na Li: Shandong Computer Science Center (National Supercomputing Center in Jinan), Jinan, Shandong, China
Collapse
|
45
|
Garberis I, Andre F, Lacroix-Triki M. L’intelligence artificielle pourrait-elle intervenir dans l’aide au diagnostic des cancers du sein ? – L’exemple de HER2. Bull Cancer 2022; 108:11S35-11S45. [PMID: 34969514 DOI: 10.1016/s0007-4551(21)00635-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
HER2 is an important prognostic and predictive biomarker in breast cancer. Its detection makes it possible to define which patients will benefit from a targeted treatment. While assessment of HER2 status by immunohistochemistry into positive vs negative categories is well implemented and reproducible, the introduction of a new "HER2-low" category could raise some concerns about its scoring and reproducibility. We herein describe current HER2 testing methods and the application of innovative machine learning techniques to improve these determinations, as well as the main challenges and opportunities related to the implementation of digital pathology in the up-and-coming AI era.
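The categories under discussion map onto the conventional IHC scoring scale; a simplified sketch of that mapping as commonly summarized (0-3+ scale, 2+ resolved by in-situ hybridization, "HER2-low" = 1+ or 2+/ISH-negative). This is an illustration of the grouping, not a clinical algorithm:

```python
def her2_category(ihc_score, ish_amplified=False):
    """Simplified HER2 grouping from an IHC score (0, 1, 2, 3).
    IHC 3+ is positive; 2+ is resolved by ISH; 'HER2-low' covers
    1+ and 2+/ISH-negative; 0 is negative."""
    if ihc_score == 3:
        return "positive"
    if ihc_score == 2:
        return "positive" if ish_amplified else "HER2-low"
    return "HER2-low" if ihc_score == 1 else "negative"

assert her2_category(3) == "positive"
assert her2_category(2, ish_amplified=False) == "HER2-low"
assert her2_category(0) == "negative"
```

The reproducibility concern in the abstract arises precisely at the 0 vs 1+ boundary, where the HER2-low category splits what was previously a single "negative" group.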
Collapse
Affiliation(s)
- Ingrid Garberis: Inserm UMR 981, Gustave Roussy Cancer Campus, Villejuif, France; Université Paris-Saclay, 94270 Le Kremlin-Bicêtre, France
- Fabrice Andre: Inserm UMR 981, Gustave Roussy Cancer Campus, Villejuif, France; Université Paris-Saclay, 94270 Le Kremlin-Bicêtre, France; Département d'oncologie médicale, Gustave-Roussy, Villejuif, France
- Magali Lacroix-Triki: Inserm UMR 981, Gustave Roussy Cancer Campus, Villejuif, France; Département d'anatomie et cytologie pathologiques, Gustave-Roussy, Villejuif, France
Collapse
|
46
|
Zhou K, Li W, Zhao D. Deep learning-based breast region extraction of mammographic images combining pre-processing methods and semantic segmentation supported by Deeplab v3. Technol Health Care 2022; 30:173-190. [PMID: 35124595 PMCID: PMC9028646 DOI: 10.3233/thc-228017] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
BACKGROUND Breast cancer has long been one of the major global life-threatening illnesses among women. Surgery and adjuvant therapy, coupled with early detection, could save many lives. This underscores the importance of mammography, a cost-effective and accurate method for early detection. Because poor contrast, noise and artifacts make diagnosis difficult for radiologists, Computer-Aided Diagnosis (CAD) systems have been developed, and extraction of the breast region is a fundamental and crucial preparation step for them. OBJECTIVE The proposed method aims to extract the breast region accurately from mammographic images, with noise suppressed, contrast enhanced and the pectoral muscle region removed. METHODS This paper presents a new deep learning-based breast region extraction method that combines pre-processing (noise suppression using a median filter, contrast enhancement using CLAHE) with semantic segmentation using the Deeplab v3+ model. RESULTS The method is trained and evaluated on the mini-MIAS dataset and has also been evaluated on the INbreast dataset. The results outperform those of other recent studies and indicate that the model retains its accuracy and runtime advantage across databases with different image resolutions. CONCLUSIONS The proposed method shows state-of-the-art performance at extracting the breast region from mammographic images. Extensive evaluation on two commonly used mammography datasets proves the ability and adaptability of the method.
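The pre-processing half of this pipeline is easy to sketch. Below, a 3x3 median filter suppresses impulse noise, and plain global histogram equalization stands in for CLAHE (which adds tiling and clipping on top of the same idea); the tiny integer images are illustrative, and the Deeplab v3+ segmentation stage is omitted:

```python
def median_filter3(img):
    """3x3 median filter with edge replication; suppresses impulse noise."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = sorted(
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            )
            out[y][x] = window[4]  # median of the 9 neighbours
    return out

def equalize(img, levels=256):
    """Global histogram equalization, a simple stand-in for CLAHE."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for c in hist:
        total += c
        cdf.append(total)
    n, cdf_min = len(flat), next(c for c in cdf if c)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in img]

noisy = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
assert median_filter3(noisy)[1][1] == 10          # salt spike removed
assert equalize([[50, 100], [150, 200]]) == [[0, 85], [170, 255]]
```

In the paper, the denoised and contrast-enhanced image is then fed to the semantic segmentation network, which predicts the breast-region mask.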
Collapse
Affiliation(s)
- Kuochen Zhou: School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Wei Li: School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Dazhe Zhao: School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
Collapse
|
47
|
Joint segmentation and classification task via adversarial network: Application to HEp-2 cell images. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2021.108156] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
48
|
Chattoraj S, Chakraborty A, Gupta A, Vishwakarma Y, Vishwakarma K, Aparajeeta J. Deep Phenotypic Cell Classification using Capsule Neural Network. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:4031-4036. [PMID: 34892115 DOI: 10.1109/embc46164.2021.9629862] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Recent developments in ultra-high-throughput microscopy have created a new generation of cell classification methodologies focused solely on image-based cell phenotypes. These image-based analyses enable morphological profiling and screening of thousands or even millions of single cells at a fraction of the cost, and they have been shown to provide the statistical significance required for understanding the role of cell heterogeneity in diverse biological contexts. However, existing single-cell analysis techniques are slow and require expensive genetic/epigenetic analysis. This work proposes a deep learning system based on the recently introduced capsule network (CapsNet) architecture. The proposed deep CapsNet model employs "capsules" for high-level feature abstraction relevant to the cell category. Experiments demonstrate that the proposed system can accurately classify different types of cells from label-free bright-field images with over 98.06% accuracy, and that deep CapsNet models outperform prior CNN models.
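The defining non-linearity of a capsule network is the squashing function, which preserves a capsule vector's direction while mapping its norm into [0, 1) so the norm can act as an existence probability for the entity the capsule represents:

```python
import math

def squash(v, eps=1e-12):
    """CapsNet squashing non-linearity: keeps the vector's direction and
    maps its norm into [0, 1), usable as an existence probability."""
    norm_sq = sum(x * x for x in v)
    norm = math.sqrt(norm_sq)
    scale = norm_sq / (1.0 + norm_sq) / (norm + eps)
    return [scale * x for x in v]

out = squash([3.0, 4.0])           # input norm 5
out_norm = math.hypot(*out)
assert abs(out_norm - 25.0 / 26.0) < 1e-9  # squashed norm = 25/26
```

In a full CapsNet, squashing is applied after the routing-by-agreement step at each capsule layer; the class whose output capsule has the largest norm is the prediction.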
Collapse
|
49
|
Zhou X, Gu M, Cheng Z. Local Integral Regression Network for Cell Nuclei Detection. ENTROPY 2021; 23:e23101336. [PMID: 34682060 PMCID: PMC8535160 DOI: 10.3390/e23101336] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/15/2021] [Accepted: 10/07/2021] [Indexed: 11/16/2022]
Abstract
Nuclei detection is a fundamental task in histopathology image analysis and remains challenging due to cellular heterogeneity. Recent studies explore convolutional neural networks either to delineate nuclei with precise boundaries (segmentation-based methods) or to locate their centroids (counting-based approaches). Although both approaches have demonstrated success, their fully supervised training demands considerable, laborious pixel-wise annotations manually labeled by pathology experts. To alleviate this tedious effort and reduce the annotation cost, we propose a novel local integral regression network (LIRNet) that supports both fully and weakly supervised learning (FSL/WSL) frameworks for nuclei detection. Furthermore, LIRNet outputs a fine-grained density map of nuclei in which the localization of each nucleus is barely affected by post-processing. Quantitative experiments demonstrate that the FSL version of LIRNet achieves state-of-the-art performance compared with other counterparts, while the WSL version exhibits competitive detection performance with effortless data annotation requiring only 17.5% of the annotation effort.
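Density-map approaches of this kind share one convenient property: integrating the predicted map over any region approximates the nucleus count there, since each nucleus contributes unit mass. A minimal sketch with a hand-made toy map (not LIRNet's learned output):

```python
def region_count(density, y0, y1, x0, x1):
    """Integrate a predicted density map over a local window; with a
    well-trained model this approximates the nucleus count there."""
    return sum(sum(row[x0:x1]) for row in density[y0:y1])

# Toy density map: two unit-mass blobs in the top rows, one in the corner.
d = [[0.0, 0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
assert abs(region_count(d, 0, 2, 0, 4) - 2.0) < 1e-9  # two nuclei up top
assert abs(region_count(d, 0, 3, 0, 4) - 3.0) < 1e-9  # three in total
```

Local peaks of the same map give the nucleus locations, which is why localization is largely insensitive to the post-processing choice.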
Collapse
|
50
|
Wang J, Wang Y, Tao X, Li Q, Sun L, Chen J, Zhou M, Hu M, Zhou X. PCA-U-Net based breast cancer nest segmentation from microarray hyperspectral images. FUNDAMENTAL RESEARCH 2021. [DOI: 10.1016/j.fmre.2021.06.013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
|