1
Kheiri F, Rahnamayan S, Makrehchi M, Asilian Bidgoli A. Investigation on potential bias factors in histopathology datasets. Sci Rep 2025; 15:11349. PMID: 40175463; PMCID: PMC11965531; DOI: 10.1038/s41598-025-89210-x. Received 06/10/2024; Accepted 02/04/2025.
Abstract
Deep neural networks (DNNs) have demonstrated remarkable capabilities in medical applications, including digital pathology, where they excel at analyzing complex patterns in medical images to assist in accurate disease diagnosis and prognosis. However, concerns have arisen about potential biases in The Cancer Genome Atlas (TCGA) dataset, a comprehensive repository of digitized histopathology data that serves as both a training and validation source for deep learning models; over-optimistic model performance may therefore reflect reliance on biased features rather than histological characteristics. Surprisingly, recent studies have confirmed site-specific bias in the features extracted for cancer-type discrimination, which yields high accuracy in classifying the acquisition site. This biased behavior motivated an in-depth analysis of the potential causes behind this unexpected ability to recognize site-specific patterns. The analysis was conducted on two cutting-edge DNN models: KimiaNet, a state-of-the-art DNN trained on TCGA images, and a self-trained EfficientNet. Balanced accuracy is used to evaluate how well a model originally designed to learn cancerous patterns classifies data centers, with the aim of identifying the factors behind its high balanced accuracy in data-center detection.
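The balanced accuracy metric referenced above is the mean of per-class recall, which prevents a dominant acquisition site from inflating the score. A minimal illustrative sketch (not the authors' implementation; the toy site labels are hypothetical):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall; robust to class imbalance."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Imbalanced toy example: 4 samples from site A, 2 from site B
y_true = ["A", "A", "A", "A", "B", "B"]
y_pred = ["A", "A", "A", "A", "B", "A"]
# recall(A) = 1.0, recall(B) = 0.5 -> balanced accuracy 0.75
print(balanced_accuracy(y_true, y_pred))
```

Plain accuracy on the same toy data would be 5/6 ≈ 0.83, which is why the balanced variant is preferred when site classes are unevenly represented.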
Affiliation(s)
- Farnaz Kheiri
- Department of Electrical, Computer and Software Engineering, Ontario Tech University, Oshawa, Canada.
- Masoud Makrehchi
- Department of Electrical, Computer and Software Engineering, Ontario Tech University, Oshawa, Canada.
2
Dunn C, Brettle D, Hodgson C, Hughes R, Treanor D. An international study of stain variability in histopathology using qualitative and quantitative analysis. J Pathol Inform 2025; 17:100423. PMID: 40145070; PMCID: PMC11938143; DOI: 10.1016/j.jpi.2025.100423. Received 10/25/2024; Revised 01/06/2025; Accepted 02/07/2025.
Abstract
Hematoxylin and eosin (H&E) staining accounts for over 80% of slides stained worldwide. Although routinely used, there is high variation between labs due to different staining methods. Staining is a pivotal part of slide preparation, but quality control is largely subjective, with overall clinical assurance provided by external quality assessment (EQA) services underpinned by expert assessment. Digital pathology offers the potential to provide objective quantification of stain, through color analysis, to augment EQA assessment. This large-scale study evaluated H&E staining in 247 international labs participating in the UK NEQAS CPT EQA programme. Tissue sections were circulated to each lab to stain using its routine H&E staining protocol. The slides were reviewed by independent expert UK NEQAS CPT assessors, and quantitative digital analysis was conducted, comprising H&E color deconvolution and color difference determination (ΔE). Most labs (69%) achieved an EQA score indicating good or excellent staining, with high inter-observer concordance to support this (92.5% of scores within one mark of each other). The H&E color difference ΔE showed that 60% of labs were within 2 ΔE of the mean, a difference considered perceptible only through close observation. Little correlation was found between H&E intensity and assessor score; however, the H&E intensity ratio showed a trend with assessor score, suggesting there may be an optimal stain relationship that should be investigated further. The presented hybrid analysis combines expert assessment with objective data. It has the potential to inform optimal tissue staining and allows us to consider quantitative standards for H&E staining in pathology practice.
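The ΔE color difference used above is, in its simplest (CIE76) form, the Euclidean distance between two colors in CIELAB space; the study may use a refined variant, but the CIE76 formula conveys the idea. The Lab values below are hypothetical:

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colors (CIE76 formula).
    A value near 2 corresponds roughly to the close-observation
    perceptibility threshold referenced in the study."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

lab_mean = (60.0, 20.0, -15.0)  # hypothetical cohort-mean H&E color in Lab
lab_site = (61.0, 21.0, -14.0)  # hypothetical single lab's measured color
print(delta_e_cie76(lab_mean, lab_site))  # sqrt(3) ≈ 1.73 -> within 2 ΔE
```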
Affiliation(s)
- Catriona Dunn
- National Pathology Imaging Co-operative, Leeds Teaching Hospitals NHS Trust, Beckett Street, Leeds, UK
- David Brettle
- National Pathology Imaging Co-operative, Leeds Teaching Hospitals NHS Trust, Beckett Street, Leeds, UK
- Chantell Hodgson
- UK NEQAS Cellular Pathology Technique, Haylofts, St Thomas Street, Haymarket, Newcastle, UK
- Robert Hughes
- UK NEQAS Cellular Pathology Technique, Haylofts, St Thomas Street, Haymarket, Newcastle, UK
- Darren Treanor
- National Pathology Imaging Co-operative, Leeds Teaching Hospitals NHS Trust, Beckett Street, Leeds, UK
- Department of Histopathology, Leeds Teaching Hospitals NHS Trust, Beckett Street, Leeds, UK
- Department of Pathology and Data Analytics, University of Leeds, Beckett Street, Leeds, UK
- Department of Clinical Pathology and Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Centre for Medical Image Science and Visualisation, Linköping University, Linköping, Sweden
3
Ke J, Zhou Y, Shen Y, Guo Y, Liu N, Han X, Shen D. Learnable color space conversion and fusion for stain normalization in pathology images. Med Image Anal 2025; 101:103424. PMID: 39740473; DOI: 10.1016/j.media.2024.103424. Received 02/09/2024; Revised 10/30/2024; Accepted 12/03/2024.
Abstract
Variations in hue and contrast are common in H&E-stained pathology images due to differences in slide preparation across institutions. Such stain variations, while rarely hindering pathologists in diagnosing a biopsy, pose significant challenges for computer-assisted diagnostic systems, leading to potential underdiagnosis or misdiagnosis, especially when stain differences introduce substantial heterogeneity across datasets from different sources. Traditional stain normalization methods, aimed at mitigating these issues, often require labor-intensive selection of appropriate templates, limiting their practicality and automation. We propose a learnable stain normalization layer, LStainNorm, designed as an easily integrable component for pathology image analysis. It minimizes the need for manual template selection by autonomously learning the optimal stain characteristics, and the learned optimal stain template provides interpretability, enhancing understanding of the normalization process. We further demonstrate that fusing pathology images normalized in multiple color spaces can improve performance, and therefore extend LStainNorm with a novel self-attention mechanism to fuse features across different attributes and color spaces. Experimentally, LStainNorm outperforms state-of-the-art methods, including conventional approaches and GANs, on two classification datasets and three nuclei segmentation datasets, with average gains of 4.78% in accuracy, 3.53% in Dice coefficient, and 6.59% in IoU. By enabling end-to-end training and inference, LStainNorm also eliminates intermediate steps between normalization and analysis, yielding more efficient use of hardware resources and significantly faster inference, up to hundreds of times faster than traditional methods.
The code is publicly available at https://github.com/yjzscode/Optimal-Normalisation-in-Color-Spaces.
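For contrast with LStainNorm's template-free approach, the traditional template-based normalization it aims to replace can be as simple as Reinhard-style statistics matching: shift and scale each channel of the source image so its mean and standard deviation match a chosen template. A hedged numpy sketch with synthetic arrays (not the paper's method; real pipelines usually apply this in a perceptual space such as Lab):

```python
import numpy as np

def match_stats(source, template):
    """Reinhard-style normalization sketch: per channel, shift/scale
    `source` so its mean and std match `template`. Both are H x W x C
    float images."""
    out = np.empty_like(source, dtype=float)
    for c in range(source.shape[-1]):
        s, t = source[..., c], template[..., c]
        s_std = s.std() or 1.0  # guard against a flat channel
        out[..., c] = (s - s.mean()) / s_std * t.std() + t.mean()
    return out

rng = np.random.default_rng(0)
src = rng.normal(100, 30, (64, 64, 3))  # synthetic "source slide" patch
tpl = rng.normal(120, 10, (64, 64, 3))  # synthetic "template slide" patch
norm = match_stats(src, tpl)
```

The manual step LStainNorm removes is precisely the choice of `tpl`: a poorly chosen template propagates its own stain bias to every normalized image.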
Affiliation(s)
- Jing Ke
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; School of Computer Science and Engineering, University of New South Wales, Australia.
- Yijin Zhou
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China.
- Yiqing Shen
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
- Yi Guo
- School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, Australia.
- Ning Liu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Xiaodan Han
- Department of Anaesthesiology, Zhongshan Hospital, Fudan University, Shanghai, China.
- Dinggang Shen
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China.
4
Tang Y, Zhou Y, Zhang S, Lu Y. A High-Resolution Digital Pathological Image Staining Style Transfer Model Based on Gradient Guidance. Bioengineering (Basel) 2025; 12:187. PMID: 40001706; PMCID: PMC11851416; DOI: 10.3390/bioengineering12020187. Received 01/08/2025; Revised 02/08/2025; Accepted 02/13/2025.
Abstract
Digital pathology images have long been regarded as the gold standard for cancer diagnosis in clinical medicine. A highly generalizable diagnosis system for digital pathology images can provide strong support for cancer diagnosis and help improve doctors' diagnostic efficiency and accuracy, so it has important research value. Whole slide images from different centers can exhibit very large staining differences due to different scanners and dyes, which challenges the generalization performance of models tested on multi-center data. To normalize multi-center data, this paper proposes a style transfer algorithm for high-resolution images based on a generative adversarial network. The proposed gradient-guided stain transfer model introduces a gradient-enhanced regularization term into the loss function of the algorithm. The style transfer algorithm was applied to the source data, and validation on pathology image datasets from two centers showed that it significantly improved the diagnostic performance of a multiple-instance learning model trained on the target-domain data. The proposed method improved the AUC of the best classification model from 0.8856 to 0.9243; in another set of experiments, the AUC improved from 0.8012 to 0.8313.
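The gradient-enhanced regularization term is not specified in detail in this abstract; one common form of such a penalty compares the image gradients of the input and the style-transferred output, so that tissue edges survive stain transfer. A hypothetical finite-difference sketch (our formulation, not the authors'):

```python
import numpy as np

def grad_l1(a, b):
    """L1 difference between finite-difference gradients of two grayscale
    images -- a simple stand-in for a gradient-consistency regularizer."""
    gx = lambda im: np.diff(im, axis=1)
    gy = lambda im: np.diff(im, axis=0)
    return np.abs(gx(a) - gx(b)).mean() + np.abs(gy(a) - gy(b)).mean()

img = np.arange(16.0).reshape(4, 4)
shifted = img + 5.0            # pure intensity shift: gradients unchanged
print(grad_l1(img, shifted))   # 0.0 -- loss ignores global stain intensity
```

The key property is visible in the toy example: a global intensity (stain) change incurs no penalty, while any distortion of edge structure does.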
Affiliation(s)
- Yutao Tang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China
- Yuanpin Zhou
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China
- Siyu Zhang
- Vertex Pharmaceuticals, 50 Northern Avenue, Boston, MA 02210, USA
- Yao Lu
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China
5
Mezei T, Kolcsár M, Joó A, Gurzu S. Image Analysis in Histopathology and Cytopathology: From Early Days to Current Perspectives. J Imaging 2024; 10:252. PMID: 39452415; PMCID: PMC11508754; DOI: 10.3390/jimaging10100252. Received 09/02/2024; Revised 10/03/2024; Accepted 10/12/2024.
Abstract
Both histopathology and cytopathology still rely on recognizing microscopic morphologic features, and image analysis plays a crucial role, enabling the identification, categorization, and characterization of different tissue types, cell populations, and disease states within microscopic images. Historically, manual methods were the primary approach, relying on the expert knowledge and experience of pathologists to interpret microscopic tissue samples. Early image analysis methods were often constrained by computational power and the complexity of biological samples. The advent of computers and digital imaging technologies challenged the exclusivity of human vision and cognition, transforming the diagnostic process in these fields. The increasing digitization of pathological images has led to the application of more objective and efficient computer-aided analysis techniques, with significant advancements brought about by the integration of digital pathology, machine learning, and advanced imaging technologies. Furthermore, artificial intelligence has revolutionized the field, enabling predictive models that assist in diagnostic decision making. The future of pathology and cytopathology will likely be marked by further advances in computer-aided image analysis: the growing availability of digital pathology data should lead to enhanced diagnostic accuracy and improved prognostic predictions that shape personalized treatment strategies, ultimately improving patient outcomes.
Affiliation(s)
- Tibor Mezei
- Department of Pathology, George Emil Palade University of Medicine, Pharmacy, Science, and Technology of Targu Mures, 540139 Targu Mures, Romania
- Melinda Kolcsár
- Department of Pharmacology and Clinical Pharmacy, George Emil Palade University of Medicine, Pharmacy, Science, and Technology of Targu Mures, 540142 Targu Mures, Romania
- András Joó
- Accenture Romania, 540035 Targu Mures, Romania
- Simona Gurzu
- Department of Pathology, George Emil Palade University of Medicine, Pharmacy, Science, and Technology of Targu Mures, 540139 Targu Mures, Romania
6
Berghout T. Joint Image Processing with Learning-Driven Data Representation and Model Behavior for Non-Intrusive Anemia Diagnosis in Pediatric Patients. J Imaging 2024; 10:245. PMID: 39452408; PMCID: PMC11508579; DOI: 10.3390/jimaging10100245. Received 08/31/2024; Revised 09/26/2024; Accepted 10/01/2024.
Abstract
Anemia diagnosis is crucial for pediatric patients due to its impact on growth and development. Traditional methods, like blood tests, are effective but pose challenges, such as discomfort, infection risk, and frequent monitoring difficulties, underscoring the need for non-intrusive diagnostic methods. Accordingly, this study proposes a novel method that combines image processing with learning-driven data representation and model behavior for non-intrusive anemia diagnosis in pediatric patients. The contributions of this study are threefold. First, it uses an image-processing pipeline to extract 181 features from 13 categories, with a feature-selection process identifying the most crucial data for learning. Second, a deep multilayered network based on long short-term memory (LSTM) is used to train a model for classifying images into anemic and non-anemic cases, with hyperparameters optimized using Bayesian approaches. Third, the trained LSTM model is integrated as a layer into a learning model developed based on recurrent expansion rules, forming part of a new deep network called a recurrent expansion network (RexNet). RexNet is designed to learn data representations akin to traditional deep-learning methods while also understanding the interaction between dependent and independent variables. The proposed approach is applied to three public datasets, namely conjunctival eye images, palmar images, and fingernail images of children aged up to 6 years. RexNet achieves an overall evaluation of 99.83 ± 0.02% across all classification metrics, demonstrating significant improvements in diagnostic results and generalization compared to LSTM networks and existing methods. This highlights RexNet's potential as a promising alternative to traditional blood-based methods for non-intrusive anemia diagnosis.
Affiliation(s)
- Tarek Berghout
- Laboratory of Automation and Manufacturing Engineering, Department of Industrial Engineering, Batna 2 University, Batna 05000, Algeria
7
Prezja F, Annala L, Kiiskinen S, Lahtinen S, Ojala T, Ruusuvuori P, Kuopio T. Improving performance in colorectal cancer histology decomposition using deep and ensemble machine learning. Heliyon 2024; 10:e37561. PMID: 39309850; PMCID: PMC11415691; DOI: 10.1016/j.heliyon.2024.e37561. Received 09/02/2024; Accepted 09/05/2024.
Abstract
In routine colorectal cancer management, histologic samples stained with hematoxylin and eosin are commonly used. Nonetheless, their potential for defining objective biomarkers for patient stratification and treatment selection is still being explored. The current gold standard relies on expensive and time-consuming genetic tests. However, recent research highlights the potential of convolutional neural networks (CNNs) to facilitate the extraction of clinically relevant biomarkers from these readily available images. These CNN-based biomarkers can predict patient outcomes comparably to the gold standard, with the added advantages of speed, automation, and minimal cost. The predictive potential of CNN-based biomarkers fundamentally relies on the ability of CNNs to accurately classify diverse tissue types from whole slide microscope images. Consequently, enhancing the accuracy of tissue class decomposition is critical to amplifying the prognostic potential of imaging-based biomarkers. This study introduces a hybrid deep transfer learning and ensemble machine learning model that improves upon previous approaches, including a transformer and a neural architecture search baseline for this task. Our model pairs the EfficientNetV2 architecture with a random-forest classification head. It achieved 96.74% accuracy (95% CI: 96.3%-97.1%) on the external test set and 99.89% on the internal test set. Recognizing the potential of these models in the task, we have made them publicly available.
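The reported 95% confidence interval for accuracy can be approximated with the normal approximation for a binomial proportion; the test-set size below is hypothetical, since n is not restated in this abstract:

```python
import math

def prop_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% CI for a classification accuracy:
    p_hat ± z * sqrt(p_hat * (1 - p_hat) / n)."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Hypothetical external test set of 7000 tiles (n is an assumption here)
lo, hi = prop_ci(0.9674, 7000)
print(round(lo, 4), round(hi, 4))
```

With n in the low thousands, the interval width is on the order of the ±0.4 percentage points the paper reports; a much smaller test set would widen it noticeably.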
Affiliation(s)
- Fabi Prezja
- University of Jyväskylä, Faculty of Information Technology, Jyväskylä, 40014, Finland
- Leevi Annala
- University of Helsinki, Faculty of Science, Department of Computer Science, Helsinki, Finland
- University of Helsinki, Faculty of Agriculture and Forestry, Department of Food and Nutrition, Helsinki, Finland
- Sampsa Kiiskinen
- University of Jyväskylä, Faculty of Information Technology, Jyväskylä, 40014, Finland
- Suvi Lahtinen
- University of Jyväskylä, Faculty of Information Technology, Jyväskylä, 40014, Finland
- University of Jyväskylä, Faculty of Mathematics and Science, Department of Biological and Environmental Science, Jyväskylä, 40014, Finland
- Timo Ojala
- University of Jyväskylä, Faculty of Information Technology, Jyväskylä, 40014, Finland
- Pekka Ruusuvuori
- University of Turku, Institute of Biomedicine, Cancer Research Unit, Turku, 20014, Finland
- Turku University Hospital, FICAN West Cancer Centre, Turku, 20521, Finland
- Teijo Kuopio
- University of Jyväskylä, Department of Biological and Environmental Science, Jyväskylä, 40014, Finland
- Hospital Nova of Central Finland, Department of Pathology, Jyväskylä, 40620, Finland
8
Hetz MJ, Bucher TC, Brinker TJ. Multi-domain stain normalization for digital pathology: A cycle-consistent adversarial network for whole slide images. Med Image Anal 2024; 94:103149. PMID: 38574542; DOI: 10.1016/j.media.2024.103149. Received 11/21/2022; Revised 12/11/2023; Accepted 03/20/2024.
Abstract
The variation in histologic staining between medical centers is one of the most profound challenges in computer-aided diagnosis. The appearance disparity of pathological whole slide images makes algorithms less reliable, which in turn impedes the widespread applicability of downstream tasks like cancer diagnosis. Furthermore, different stains introduce biases during training that, under domain shift, degrade test performance. In this paper we therefore propose MultiStain-CycleGAN, a multi-domain approach to stain normalization based on CycleGAN. Our modifications to CycleGAN allow us to normalize images of different origins without retraining or using different models. We perform an extensive evaluation of our method using various metrics and compare it to commonly used multi-domain-capable methods. First, we evaluate how well our method fools a domain classifier that tries to assign a medical center to an image. Then, we test our normalization on the tumor classification performance of a downstream classifier. Furthermore, we evaluate the image quality of the normalized images using the structural similarity index and the ability to reduce the domain shift using the Fréchet inception distance. We show that our method is multi-domain capable, provides among the highest image quality of the compared methods, and most reliably fools the domain classifier while keeping tumor classifier performance high. By reducing the domain influence, biases in the data can be removed on the one hand and the origin of the whole slide image can be disguised on the other, enhancing patient data privacy.
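The cycle-consistency idea underlying any CycleGAN-based normalizer penalizes ||F(G(x)) - x||_1, so that mapping an image to another stain domain and back preserves its content. A toy sketch with invertible stand-in generators (illustrative only; real G and F are learned networks):

```python
import numpy as np

# Toy generators standing in for CycleGAN's G (A->B) and F (B->A):
# a simple invertible intensity map, so the cycle is exactly closed.
G = lambda x: 0.8 * x + 20.0      # "normalize" toward a target stain
F = lambda y: (y - 20.0) / 0.8    # map back to the source domain

def cycle_loss(x, G, F):
    """L1 cycle-consistency: mean |F(G(x)) - x|. This is the term that
    keeps normalization from inventing or erasing tissue content."""
    return np.abs(F(G(x)) - x).mean()

x = np.random.default_rng(1).uniform(0, 255, (32, 32))
print(cycle_loss(x, G, F))   # ~0 for this exactly invertible toy pair
```

In training, this loss is added to the adversarial losses of both generators; a large cycle loss signals that the normalizer is altering content rather than just stain appearance.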
Affiliation(s)
- Martin J Hetz
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tabea-Clara Bucher
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Titus J Brinker
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
9
Jahanifar M, Shephard A, Zamanitajeddin N, Graham S, Raza SEA, Minhas F, Rajpoot N. Mitosis detection, fast and slow: Robust and efficient detection of mitotic figures. Med Image Anal 2024; 94:103132. PMID: 38442527; DOI: 10.1016/j.media.2024.103132. Received 11/15/2022; Revised 02/28/2024; Accepted 03/01/2024.
Abstract
Counting of mitotic figures is a fundamental step in grading and prognostication of several cancers. However, manual mitosis counting is tedious and time-consuming. In addition, variation in the appearance of mitotic figures causes a high degree of discordance among pathologists. With advances in deep learning models, several automatic mitosis detection algorithms have been proposed but they are sensitive to domain shift often seen in histology images. We propose a robust and efficient two-stage mitosis detection framework, which comprises mitosis candidate segmentation (Detecting Fast) and candidate refinement (Detecting Slow) stages. The proposed candidate segmentation model, termed EUNet, is fast and accurate due to its architectural design. EUNet can precisely segment candidates at a lower resolution to considerably speed up candidate detection. Candidates are then refined using a deeper classifier network, EfficientNet-B7, in the second stage. We make sure both stages are robust against domain shift by incorporating domain generalization methods. We demonstrate state-of-the-art performance and generalizability of the proposed model on the three largest publicly available mitosis datasets, winning the two mitosis domain generalization challenge contests (MIDOG21 and MIDOG22). Finally, we showcase the utility of the proposed algorithm by processing the TCGA breast cancer cohort (1,124 whole-slide images) to generate and release a repository of more than 620K potential mitotic figures (not exhaustively validated).
Affiliation(s)
- Mostafa Jahanifar
- Tissue Image Analytics (TIA) Centre, Department of Computer Science, University of Warwick, UK
- Adam Shephard
- Tissue Image Analytics (TIA) Centre, Department of Computer Science, University of Warwick, UK
- Neda Zamanitajeddin
- Tissue Image Analytics (TIA) Centre, Department of Computer Science, University of Warwick, UK
- Simon Graham
- Tissue Image Analytics (TIA) Centre, Department of Computer Science, University of Warwick, UK; Histofy Ltd, Birmingham, UK
- Shan E Ahmed Raza
- Tissue Image Analytics (TIA) Centre, Department of Computer Science, University of Warwick, UK
- Fayyaz Minhas
- Tissue Image Analytics (TIA) Centre, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot
- Tissue Image Analytics (TIA) Centre, Department of Computer Science, University of Warwick, UK; Histofy Ltd, Birmingham, UK
10
Tan Y, Feng LJ, Huang YH, Xue JW, Feng ZB, Long LL. Development and validation of a Radiopathomics model based on CT scans and whole slide images for discriminating between Stage I-II and Stage III gastric cancer. BMC Cancer 2024; 24:368. PMID: 38519974; PMCID: PMC10960497; DOI: 10.1186/s12885-024-12021-2. Received 12/01/2023; Accepted 02/18/2024.
Abstract
OBJECTIVE This study aimed to develop and validate an artificial intelligence radiopathological model using preoperative CT scans and postoperative hematoxylin and eosin (HE) stained slides to predict the pathological staging of gastric cancer (stage I-II versus stage III). METHODS This study included 202 gastric cancer patients with confirmed pathological staging (training cohort: n = 141; validation cohort: n = 61). Pathological histological features were extracted from HE slides, and pathological models were constructed using logistic regression (LR), support vector machine (SVM), and Naive Bayes. The optimal pathological model was selected through receiver operating characteristic (ROC) curve analysis. Machine learning algorithms were then employed to construct radiomic models and, building on the optimal pathological model, radiopathomic models. Model performance was evaluated using ROC curve analysis, and clinical utility was estimated using decision curve analysis (DCA). RESULTS A total of 311 pathological histological features were extracted from the HE images, including 101 Term Frequency-Inverse Document Frequency (TF-IDF) features and 210 deep learning features. A pathological model was constructed from 19 features selected through dimension reduction, with the SVM model demonstrating superior predictive performance (AUC, training cohort: 0.949; validation cohort: 0.777). A radiomic model was constructed from 6 features selected, via the SVM algorithm, from 1834 radiomic features extracted from CT scans. Simultaneously, a radiopathomics model was built from 17 non-zero-coefficient features obtained through dimension reduction from 2145 combined radiomic and pathomic features. The SVM radiopathomics model showed the best discriminative ability (AUC, training cohort: 0.953; validation cohort: 0.851), and DCA demonstrated excellent clinical utility.
CONCLUSION The radiopathomics model, combining pathological and radiomic features, exhibited superior performance in distinguishing between stage I-II and stage III gastric cancer. By predicting pathological staging from surgical-specimen tissue slides and preoperative CT images, the study demonstrates the feasibility of this line of research.
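The TF-IDF features mentioned in the results borrow a standard text-retrieval weighting, tf x log(N/df). A minimal sketch of that formula, using hypothetical bags of histological pattern tokens as "documents" (how the paper tokenizes its slides is not described in this abstract):

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights with the standard tf * log(N / df)
    weighting, where df is the number of documents containing a term."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: tf[t] / len(doc) * math.log(n / df[t]) for t in tf})
    return out

docs = [["gland", "gland", "stroma"], ["stroma", "necrosis"]]
weights = tfidf(docs)
# "stroma" appears in every document, so its idf (and weight) is 0
print(weights[0])
```

The effect visible in the toy output is the point of the weighting: tokens present in every slide carry no discriminative weight, while rarer patterns are up-weighted.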
Affiliation(s)
- Yang Tan
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China
- Li-Juan Feng
- Department of Radiology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China
- Ying-He Huang
- Department of Pathology, The First Affiliated Hospital of Guangxi University of Chinese Medicine, Nanning, Guangxi, China
- Jia-Wen Xue
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China
- Zhen-Bo Feng
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China
- Li-Ling Long
- Department of Radiology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China
- Key Laboratory of Early Prevention and Treatment for Regional High Frequency Tumor, Guangxi Medical University, Ministry of Education, Nanning, Guangxi, China
- Guangxi Key Laboratory of Immunology and Metabolism for Liver Diseases, Nanning, Guangxi, China
11
Elazab N, Gab-Allah WA, Elmogy M. A multi-class brain tumor grading system based on histopathological images using a hybrid YOLO and RESNET networks. Sci Rep 2024; 14:4584. PMID: 38403597; PMCID: PMC10894864; DOI: 10.1038/s41598-024-54864-6. Received 08/18/2023; Accepted 02/17/2024.
Abstract
Gliomas are primary brain tumors caused by glial cells. These cancers' classification and grading are crucial for prognosis and treatment planning. Deep learning (DL) can potentially improve the digital pathology investigation of brain tumors. In this paper, we developed a technique for visualizing a predictive tumor grading model on histopathology pictures to help guide doctors by emphasizing characteristics and heterogeneity in forecasts. The proposed technique is a hybrid model based on YOLOv5 and ResNet50. The function of YOLOv5 is to localize and classify the tumor in large histopathological whole slide images (WSIs). The suggested technique incorporates ResNet into the feature extraction of the YOLOv5 framework, and the detection results show that our hybrid network is effective for identifying brain tumors from histopathological images. Next, we estimate the glioma grades using the extreme gradient boosting classifier. The high-dimensional characteristics and nonlinear interactions present in histopathology images are well-handled by this classifier. DL techniques have been used in previous computer-aided diagnosis systems for brain tumor diagnosis. However, by combining the YOLOv5 and ResNet50 architectures into a hybrid model specifically designed for accurate tumor localization and predictive grading within histopathological WSIs, our study presents a new approach that advances the field. By utilizing the advantages of both models, this creative integration goes beyond traditional techniques to produce improved tumor localization accuracy and thorough feature extraction. Additionally, our method ensures stable training dynamics and strong model performance by integrating ResNet50 into the YOLOv5 framework, addressing concerns about gradient explosion. The proposed technique is tested using the cancer genome atlas dataset. During the experiments, our model outperforms the other standard ways on the same dataset. 
Our results indicate that the proposed hybrid model substantially improves tumor subtype discrimination between low-grade glioma (LGG) II and LGG III. With an accuracy of 97.2%, a precision of 97.8%, a sensitivity of 98.6%, and a Dice similarity coefficient of 97%, the proposed model performs well in classifying the four grades. These results outperform current approaches for distinguishing LGG from high-grade glioma and provide competitive performance in classifying the four categories of glioma in the literature.
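The Dice similarity coefficient reported above is a standard overlap measure between a predicted segmentation mask and a reference mask. A minimal NumPy sketch on toy binary masks (illustrative only, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 masks: 3 overlapping pixels, 4 predicted, 3 reference
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*3/(4+3) ≈ 0.857
```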
Collapse
Affiliation(s)
- Naira Elazab
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
| | - Wael A Gab-Allah
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
| | - Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt.
| |
Collapse
|
12
|
Durán-Díaz I, Sarmiento A, Fondón I, Bodineau C, Tomé M, Durán RV. A Robust Method for the Unsupervised Scoring of Immunohistochemical Staining. ENTROPY (BASEL, SWITZERLAND) 2024; 26:165. [PMID: 38392420 PMCID: PMC10888407 DOI: 10.3390/e26020165] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/25/2023] [Revised: 02/02/2024] [Accepted: 02/07/2024] [Indexed: 02/24/2024]
Abstract
Immunohistochemistry is a powerful technique that is widely used in biomedical research and clinics; it allows the expression levels of proteins of interest in tissue samples to be determined from the color intensity produced when specific antibodies bind their biomarkers. As such, immunohistochemical images are complex and their features are difficult to quantify. Recently, we proposed a novel method, including a first separation stage based on non-negative matrix factorization (NMF), that achieved good results. However, this method was highly dependent on the parameters that control sparseness and non-negativity, as well as on algorithm initialization. Furthermore, the previously proposed method required a reference image as a starting point for the NMF algorithm. In the present work, we propose a new, simpler and more robust method for the automated, unsupervised scoring of bright-field immunohistochemical images. Our work is focused on images from tumor tissues marked with blue (nuclei) and brown (protein of interest) stains. The new proposed method represents a simpler approach that, on the one hand, avoids the use of NMF in the separation stage and, on the other hand, circumvents the need for a control image. This new approach determines the subspace spanned by the two colors of interest using principal component analysis (PCA) with dimension reduction. This two-dimensional subspace allows the color vectors to be determined from the peaks in point density. A new scoring stage is also developed in our method that, again, avoids reference images, making the procedure more robust and less dependent on parameters. Semi-quantitative image scoring experiments using five categories exhibit promising and consistent results when compared to manual scoring carried out by experts.
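The core separation step described above, projecting pixels onto the two-dimensional subspace spanned by the two stains via PCA, can be sketched as follows. This is an illustrative reconstruction on synthetic two-stain pixels, not the published code; the stain vectors and the optical-density conversion are assumptions:

```python
import numpy as np

def stain_subspace(rgb_pixels: np.ndarray) -> np.ndarray:
    """Project pixels onto the 2D PCA subspace spanned by two stains.

    rgb_pixels: (N, 3) array of RGB values in (0, 255].
    Returns (N, 2) coordinates in the stain plane.
    """
    # Convert to optical density, where stain mixing is approximately linear
    od = -np.log(np.clip(rgb_pixels, 1, 255) / 255.0)
    od_centered = od - od.mean(axis=0)
    # Eigen-decomposition of the covariance; top two components span the plane
    eigvals, eigvecs = np.linalg.eigh(np.cov(od_centered.T))
    basis = eigvecs[:, np.argsort(eigvals)[::-1][:2]]  # (3, 2)
    return od_centered @ basis

# Synthetic mixture of a "blue" (nuclei) and a "brown" (marker) stain
rng = np.random.default_rng(0)
blue, brown = np.array([0.2, 0.3, 0.9]), np.array([0.6, 0.4, 0.2])
weights = rng.uniform(0, 1, size=(500, 2))
od_true = weights @ np.vstack([blue, brown])
pixels = 255.0 * np.exp(-od_true)   # Beer-Lambert: OD back to RGB
coords = stain_subspace(pixels)
print(coords.shape)  # (500, 2)
```

In the published method the density peaks in this plane are then used to pick the actual color vectors; the sketch stops at the subspace projection.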
Collapse
Affiliation(s)
- Iván Durán-Díaz
- Signal Theory and Communications Department, University of Seville, Avda. Descubrimientos S/N, 41092 Seville, Spain
| | - Auxiliadora Sarmiento
- Signal Theory and Communications Department, University of Seville, Avda. Descubrimientos S/N, 41092 Seville, Spain
| | - Irene Fondón
- Signal Theory and Communications Department, University of Seville, Avda. Descubrimientos S/N, 41092 Seville, Spain
| | - Clément Bodineau
- Department of Pathology, Brigham and Women's Hospital, Boston, MA 02115, USA
- Department of Genetics, Harvard Medical School, Boston, MA 02115, USA
| | - Mercedes Tomé
- Centro Andaluz de Biología Molecular y Medicina Regenerativa-CABIMER, Consejo Superior de Investigaciones Científicas, Universidad de Sevilla, Universidad Pablo de Olavide, 41092 Seville, Spain
| | - Raúl V Durán
- Centro Andaluz de Biología Molecular y Medicina Regenerativa-CABIMER, Consejo Superior de Investigaciones Científicas, Universidad de Sevilla, Universidad Pablo de Olavide, 41092 Seville, Spain
| |
Collapse
|
13
|
Gallo M, Krajňanský V, Nenutil R, Holub P, Brázdil T. Shedding light on the black box of a neural network used to detect prostate cancer in whole slide images by occlusion-based explainability. N Biotechnol 2023; 78:52-67. [PMID: 37793603 DOI: 10.1016/j.nbt.2023.09.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Revised: 08/29/2023] [Accepted: 09/30/2023] [Indexed: 10/06/2023]
Abstract
Diagnostic histopathology faces increasing demands due to aging populations and expanding healthcare programs. Semi-automated diagnostic systems employing deep learning methods are one approach to alleviate this pressure. The learning models for histopathology are inherently complex and opaque from the user's perspective. Hence different methods have been developed to interpret their behavior. However, relatively limited attention has been devoted to the connection between interpretation methods and the knowledge of experienced pathologists. The main contribution of this paper is a method for comparing morphological patterns used by expert pathologists to detect cancer with the patterns identified as important for inference of learning models. Given the patch-based nature of processing large-scale histopathological imaging, we have been able to show statistically that the VGG16 model could utilize all the structures that are observable by the pathologist, given the patch size and scan resolution. The results show that the neural network approach to recognizing prostatic cancer is similar to that of a pathologist at medium optical resolution. The saliency maps identified several prevailing histomorphological features characterizing carcinoma, e.g., single-layered epithelium, small lumina, and hyperchromatic nuclei with halo. A convincing finding was the recognition of their mimickers in non-neoplastic tissue. The method can also identify differences, i.e., standard patterns not used by the learning models and new patterns not yet used by pathologists. Saliency maps provide added value for automated digital pathology to analyze and fine-tune deep learning systems and improve trust in computer-based decisions.
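The occlusion-based explainability idea used in this study, masking image regions and measuring the drop in model output, can be sketched generically. The `toy_predict` model below is a stand-in for illustration, not the VGG16 network from the paper:

```python
import numpy as np

def occlusion_map(image, predict, patch=4, baseline=0.0):
    """Saliency via occlusion: slide a masking patch over the image and
    record how much the model's score drops at each position."""
    h, w = image.shape
    saliency = np.zeros((h // patch, w // patch))
    base_score = predict(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            saliency[i // patch, j // patch] = base_score - predict(occluded)
    return saliency

# Toy "model": score is the mean intensity of the top-left quadrant,
# so occluding that region should dominate the saliency map.
def toy_predict(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
sal = occlusion_map(img, toy_predict, patch=8)
print(sal)  # strongest response where the model actually "looks"
```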
Collapse
Affiliation(s)
- Matej Gallo
- Faculty of Informatics, Masaryk University, Botanická 68a, 602 00 Brno, Czech Republic.
| | - Vojtěch Krajňanský
- Faculty of Informatics, Masaryk University, Botanická 68a, 602 00 Brno, Czech Republic
| | - Rudolf Nenutil
- Department of Pathology, Masaryk Memorial Cancer Institute, Žlutý kopec 7, 656 53 Brno, Czech Republic
| | - Petr Holub
- Institute of Computer Science, Masaryk University, Šumavská 416/15, 602 00 Brno, Czech Republic
| | - Tomáš Brázdil
- Faculty of Informatics, Masaryk University, Botanická 68a, 602 00 Brno, Czech Republic
| |
Collapse
|
14
|
Ke J, Liu K, Sun Y, Xue Y, Huang J, Lu Y, Dai J, Chen Y, Han X, Shen Y, Shen D. Artifact Detection and Restoration in Histology Images With Stain-Style and Structural Preservation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3487-3500. [PMID: 37352087 DOI: 10.1109/tmi.2023.3288940] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/25/2023]
Abstract
The artifacts in histology images may encumber the accurate interpretation of medical information and cause misdiagnosis. Moreover, prepending manual quality control of artifacts considerably decreases the degree of automation. To close this gap, we propose a methodical pre-processing framework to detect and restore artifacts, which minimizes their impact on downstream AI diagnostic tasks. First, the artifact recognition network AR-Classifier differentiates common artifacts from normal tissue, e.g., tissue folds, marking dye, tattoo pigment, spots, and out-of-focus regions, and also catalogs artifact patches by their restorability. Then, the succeeding artifact restoration network AR-CycleGAN performs de-artifact processing in which stain styles and tissue structures are maximally retained. We construct a benchmark for performance evaluation, curated from both clinically collected WSIs and public datasets of colorectal and breast cancer. The functional structures are compared with state-of-the-art methods and comprehensively evaluated by multiple metrics across multiple tasks, including artifact classification, artifact restoration, and the downstream diagnostic tasks of tumor classification and nuclei segmentation. The proposed system allows full automation of deep learning based histology image analysis without human intervention. Moreover, its structure-independent characteristic enables processing of various artifact subtypes. The source code and data in this research are available at https://github.com/yunboer/AR-classifier-and-AR-CycleGAN.
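The two-stage triage logic described above (classify each patch, restore the restorable artifacts, keep normal tissue, discard the rest) can be sketched abstractly; the scalar "patches" and the `classify`/`restore` stand-ins below are illustrative placeholders, not the AR-Classifier or AR-CycleGAN:

```python
def triage(patches, classify, restore):
    """Two-stage pipeline: label each patch, restore restorable artifacts,
    pass normal tissue through unchanged, and drop unrestorable patches."""
    kept = []
    for p in patches:
        label, restorable = classify(p)
        if label == "normal":
            kept.append(p)
        elif restorable:
            kept.append(restore(p))
        # unrestorable artifact patches are excluded from downstream analysis
    return kept

# Toy stand-ins on scalar "patches": negatives are artifacts,
# mildly negative ones are restorable (here, restored via abs)
def classify(p):
    if p >= 0:
        return "normal", True
    return ("fold", True) if p > -5 else ("out-of-focus", False)

print(triage([1, -2, -10], classify, restore=abs))  # [1, 2]
```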
Collapse
|
15
|
Voon W, Hum YC, Tee YK, Yap WS, Nisar H, Mokayed H, Gupta N, Lai KW. Evaluating the effectiveness of stain normalization techniques in automated grading of invasive ductal carcinoma histopathological images. Sci Rep 2023; 13:20518. [PMID: 37993544 PMCID: PMC10665422 DOI: 10.1038/s41598-023-46619-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Accepted: 11/02/2023] [Indexed: 11/24/2023] Open
Abstract
Debates persist regarding the impact of Stain Normalization (SN) on recent breast cancer histopathological studies. While some studies propose no influence on classification outcomes, others argue for improvement. This study aims to assess the efficacy of SN in breast cancer histopathological classification, specifically focusing on Invasive Ductal Carcinoma (IDC) grading using Convolutional Neural Networks (CNNs). The null hypothesis asserts that SN has no effect on the accuracy of CNN-based IDC grading, while the alternative hypothesis suggests the contrary. We evaluated six SN techniques, with five templates selected as target images for the conventional SN techniques. We also utilized seven ImageNet pre-trained CNNs for IDC grading. The performance of models trained with and without SN was compared to discern the influence of SN on classification outcomes. The analysis unveiled a p-value of 0.11, indicating no statistically significant difference in Balanced Accuracy Scores between models trained with StainGAN-normalized images, achieving a score of 0.9196 (the best-performing SN technique), and models trained with non-normalized images, which scored 0.9308. As a result, we did not reject the null hypothesis, indicating that we found no evidence to support a significant discrepancy in effectiveness between stain-normalized and non-normalized datasets for IDC grading tasks. This study demonstrates that SN has a limited impact on IDC grading, challenging the assumption of performance enhancement through SN.
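The Balanced Accuracy Score used in this comparison is the mean of the per-class recalls, which keeps a majority class from flattering the result. A minimal sketch (equivalent in spirit to scikit-learn's `balanced_accuracy_score`; the labels here are invented):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls, robust to class imbalance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = []
    for cls in np.unique(y_true):
        mask = y_true == cls
        recalls.append((y_pred[mask] == cls).mean())
    return float(np.mean(recalls))

# Imbalanced toy labels: plain accuracy would be 0.95 here,
# but the minority class only reaches 50% recall
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 90 + [1] * 5 + [0] * 5
print(balanced_accuracy(y_true, y_pred))  # (1.0 + 0.5) / 2 = 0.75
```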
Collapse
Affiliation(s)
- Wingates Voon
- Department of Mechatronics and Biomedical Engineering, Faculty of Engineering and Science, Lee Kong Chian, Universiti Tunku Abdul Rahman, Kampar, Malaysia
| | - Yan Chai Hum
- Department of Mechatronics and Biomedical Engineering, Faculty of Engineering and Science, Lee Kong Chian, Universiti Tunku Abdul Rahman, Kampar, Malaysia.
| | - Yee Kai Tee
- Department of Mechatronics and Biomedical Engineering, Faculty of Engineering and Science, Lee Kong Chian, Universiti Tunku Abdul Rahman, Kampar, Malaysia
| | - Wun-She Yap
- Department of Electrical and Electronic Engineering, Faculty of Engineering and Science, Lee Kong Chian, Universiti Tunku Abdul Rahman, Kampar, Malaysia
| | - Humaira Nisar
- Department of Electronic Engineering, Faculty of Engineering and Green Technology, Universiti Tunku Abdul Rahman, 31900, Kampar, Malaysia
| | - Hamam Mokayed
- Department of Computer Science, Electrical and Space Engineering, Lulea University of Technology, Lulea, Sweden
| | - Neha Gupta
- School of Electronics Engineering, Vellore Institute of Technology, Amaravati, AP, India
| | - Khin Wee Lai
- Department of Biomedical Engineering, Universiti Malaya, 50603, Kuala Lumpur, Malaysia
| |
Collapse
|
16
|
Frei AL, McGuigan A, Sinha RRAK, Glaire MA, Jabbar F, Gneo L, Tomasevic T, Harkin A, Iveson TJ, Saunders M, Oein K, Maka N, Pezella F, Campo L, Hay J, Edwards J, Sansom OJ, Kelly C, Tomlinson I, Kildal W, Kerr RS, Kerr DJ, Danielsen HE, Domingo E, Church DN, Koelzer VH. Accounting for intensity variation in image analysis of large-scale multiplexed clinical trial datasets. J Pathol Clin Res 2023; 9:449-463. [PMID: 37697694 PMCID: PMC10556275 DOI: 10.1002/cjp2.342] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Revised: 08/14/2023] [Accepted: 08/20/2023] [Indexed: 09/13/2023]
Abstract
Multiplex immunofluorescence (mIF) imaging can provide comprehensive quantitative and spatial information for multiple immune markers for tumour immunoprofiling. However, application at scale to clinical trial samples sourced from multiple institutions is challenging due to pre-analytical heterogeneity. This study reports an analytical approach to the largest multi-parameter immunoprofiling study of clinical trial samples to date. We analysed 12,592 tissue microarray (TMA) spots from 3,545 colorectal cancers sourced from more than 240 institutions in two clinical trials (QUASAR 2 and SCOT) stained for CD4, CD8, CD20, CD68, FoxP3, pan-cytokeratin, and DAPI by mIF. TMA slides were multi-spectrally imaged and analysed by cell-based and pixel-based marker analysis. We developed an adaptive thresholding method to account for inter- and intra-slide intensity variation in TMA analysis. Applying this method effectively ameliorated inter- and intra-slide intensity variation improving the image analysis results compared with methods using a single global threshold. Correlation of CD8 data derived by our mIF analysis approach with single-plex chromogenic immunohistochemistry CD8 data derived from subsequent sections indicates the validity of our method (Spearman's rank correlation coefficients ρ between 0.63 and 0.66, p ≪ 0.01) as compared with the current gold standard analysis approach. Evaluation of correlation between cell-based and pixel-based analysis results confirms equivalency (ρ > 0.8, p ≪ 0.01, except for CD20 in the epithelial region) of both analytical approaches. These data suggest that our adaptive thresholding approach can enable analysis of mIF-stained clinical trial TMA datasets by digital pathology at scale for precision immunoprofiling.
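The adaptive thresholding idea, choosing a cutoff per slide instead of one global threshold, can be illustrated with a quantile-based sketch; the per-slide quantile rule below is an assumption for illustration, not the authors' exact procedure:

```python
import numpy as np

def adaptive_positive_mask(intensity_by_slide, q=0.9):
    """Threshold each slide at its own intensity quantile rather than a
    single global cutoff, absorbing inter-slide staining variation."""
    return {slide: vals >= np.quantile(vals, q)
            for slide, vals in intensity_by_slide.items()}

# Two slides with the same underlying biology but shifted intensities
rng = np.random.default_rng(1)
slides = {"bright": rng.normal(10, 1, 1000), "dim": rng.normal(5, 1, 1000)}
masks = adaptive_positive_mask(slides, q=0.9)
# Each slide flags ~10% of pixels positive despite the intensity shift;
# a single global threshold of 8 would flag almost everything on the
# bright slide and almost nothing on the dim one.
print({k: round(v.mean(), 3) for k, v in masks.items()})
```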
Collapse
Affiliation(s)
- Anja L Frei
- Department of Pathology and Molecular PathologyUniversity Hospital Zurich, University of ZurichZurichSwitzerland
- Life Science Zurich Graduate School, PhD Program in BiomedicineUniversity of ZurichZurichSwitzerland
| | | | | | - Mark A Glaire
- Nuffield Department of MedicineUniversity of OxfordOxfordUK
| | - Faiz Jabbar
- Nuffield Department of MedicineUniversity of OxfordOxfordUK
| | - Luciana Gneo
- Nuffield Department of MedicineUniversity of OxfordOxfordUK
| | | | - Andrea Harkin
- Cancer Research UK Glasgow Clinical Trials UnitUniversity of GlasgowGlasgowUK
| | - Tim J Iveson
- Southampton University Hospital NHS Foundation TrustSouthamptonUK
| | | | - Karin Oein
- Glasgow Tissue Research FacilityUniversity of Glasgow, Queen Elizabeth University HospitalGlasgowUK
| | - Noori Maka
- Glasgow Tissue Research FacilityUniversity of Glasgow, Queen Elizabeth University HospitalGlasgowUK
| | - Francesco Pezella
- Nuffield Division of Clinical Laboratory SciencesUniversity of OxfordOxfordUK
| | | | - Jennifer Hay
- Glasgow Tissue Research FacilityUniversity of Glasgow, Queen Elizabeth University HospitalGlasgowUK
| | | | - Owen J Sansom
- School of Cancer SciencesUniversity of GlasgowGlasgowUK
- Cancer Research UK Beatson InstituteGlasgowUK
- Cancer Research UK Scotland CentreEdinburgh and GlasgowUK
| | - Caroline Kelly
- Cancer Research UK Glasgow Clinical Trials UnitUniversity of GlasgowGlasgowUK
| | | | - Wanja Kildal
- Institute for Cancer Genetics and InformaticsOslo University HospitalOsloNorway
| | | | - David J Kerr
- Nuffield Division of Clinical Laboratory SciencesUniversity of OxfordOxfordUK
| | - Håvard E Danielsen
- Nuffield Division of Clinical Laboratory SciencesUniversity of OxfordOxfordUK
- Institute for Cancer Genetics and InformaticsOslo University HospitalOsloNorway
- Department of InformaticsUniversity of OsloOsloNorway
| | - Enric Domingo
- Department of OncologyUniversity of OxfordOxfordUK
- Cancer Research UK Scotland CentreEdinburgh and GlasgowUK
| | | | - David N Church
- Nuffield Department of MedicineUniversity of OxfordOxfordUK
- Oxford NIHR Comprehensive Biomedical Research CentreOxford University Hospitals NHS Foundation TrustOxfordUK
| | - Viktor H Koelzer
- Department of Pathology and Molecular PathologyUniversity Hospital Zurich, University of ZurichZurichSwitzerland
- Nuffield Department of MedicineUniversity of OxfordOxfordUK
- Department of OncologyUniversity of OxfordOxfordUK
| |
Collapse
|
17
|
Yang Y, Sun K, Gao Y, Wang K, Yu G. Preparing Data for Artificial Intelligence in Pathology with Clinical-Grade Performance. Diagnostics (Basel) 2023; 13:3115. [PMID: 37835858 PMCID: PMC10572440 DOI: 10.3390/diagnostics13193115] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Revised: 09/27/2023] [Accepted: 09/28/2023] [Indexed: 10/15/2023] Open
Abstract
Pathology is decisive for disease diagnosis but relies heavily on experienced pathologists. In recent years, there has been growing interest in the use of artificial intelligence in pathology (AIP) to enhance diagnostic accuracy and efficiency. However, the impressive performance of deep learning-based AIP in laboratory settings often proves challenging to replicate in clinical practice. Because data preparation is important for AIP, this paper reviews AIP-related studies in the PubMed database published from January 2017 to February 2022; 118 studies were included. An in-depth analysis of data preparation methods is conducted, encompassing the acquisition of pathological tissue slides, data cleaning, screening, and subsequent digitization. Expert review, image annotation, and dataset division for model training and validation are also discussed. Furthermore, we delve into the reasons behind the challenges in reproducing the high performance of AIP in clinical settings and present effective strategies to enhance AIP's clinical performance. The robustness of AIP depends on a randomized collection of representative disease slides, incorporating rigorous quality control and screening, correction of digital discrepancies, reasonable annotation, and sufficient data volume. Digital pathology is fundamental to clinical-grade AIP, and data standardization together with weakly supervised learning methods based on whole slide images (WSIs) are effective ways to overcome obstacles to performance reproduction. The key to performance reproducibility lies in having representative data, an adequate amount of labeling, and consistency across multiple centers. Digital pathology for clinical diagnosis, data standardization, and WSI-based weakly supervised learning will hopefully build clinical-grade AIP.
Collapse
Affiliation(s)
- Yuanqing Yang
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China; (Y.Y.); (K.S.)
- Department of Biomedical Engineering, School of Medical, Tsinghua University, Beijing 100084, China
| | - Kai Sun
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China; (Y.Y.); (K.S.)
- Furong Laboratory, Changsha 410013, China
| | - Yanhua Gao
- Department of Ultrasound, Shaanxi Provincial People’s Hospital, Xi’an 710068, China;
| | - Kuansong Wang
- Department of Pathology, School of Basic Medical Sciences, Central South University, Changsha 410013, China;
- Department of Pathology, Xiangya Hospital, Central South University, Changsha 410013, China
| | - Gang Yu
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China; (Y.Y.); (K.S.)
| |
Collapse
|
18
|
Jing Y, Li C, Du T, Jiang T, Sun H, Yang J, Shi L, Gao M, Grzegorzek M, Li X. A comprehensive survey of intestine histopathological image analysis using machine vision approaches. Comput Biol Med 2023; 165:107388. [PMID: 37696178 DOI: 10.1016/j.compbiomed.2023.107388] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Revised: 08/06/2023] [Accepted: 08/25/2023] [Indexed: 09/13/2023]
Abstract
Colorectal Cancer (CRC) is currently one of the most common and deadly cancers. CRC is the third most common malignancy and the fourth leading cause of cancer death worldwide, and it ranks as the second most frequent cause of cancer-related deaths in the United States and other developed countries. Because histopathological images contain rich phenotypic information, they play an indispensable role in the diagnosis and treatment of CRC. To improve the objectivity and diagnostic efficiency of intestinal histopathology image analysis, Computer-aided Diagnosis (CAD) methods based on machine learning (ML) are widely applied. In this investigation, we conduct a comprehensive study of recent ML-based methods for image analysis of intestinal histopathology. First, we discuss commonly used datasets from basic research studies with knowledge of intestinal histopathology relevant to medicine. Second, we introduce traditional ML methods commonly used in intestinal histopathology, as well as deep learning (DL) methods. Then, we provide a comprehensive review of the recent developments in ML methods for segmentation, classification, detection, and recognition, among others, for histopathological images of the intestine. Finally, the existing methods are summarized and their application prospects in this field are discussed.
Collapse
Affiliation(s)
- Yujie Jing
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
| | - Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China.
| | - Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
| | - Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
| | - Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
| | - Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
| | - Liyu Shi
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
| | - Minghe Gao
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
| | - Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
| | - Xiaoyan Li
- Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China.
| |
Collapse
|
19
|
Moscalu M, Moscalu R, Dascălu CG, Țarcă V, Cojocaru E, Costin IM, Țarcă E, Șerban IL. Histopathological Images Analysis and Predictive Modeling Implemented in Digital Pathology-Current Affairs and Perspectives. Diagnostics (Basel) 2023; 13:2379. [PMID: 37510122 PMCID: PMC10378281 DOI: 10.3390/diagnostics13142379] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2023] [Revised: 07/11/2023] [Accepted: 07/12/2023] [Indexed: 07/30/2023] Open
Abstract
In modern clinical practice, digital pathology has an essential role, being a technological necessity for the activity in the pathological anatomy laboratories. The development of information technology has majorly facilitated the management of digital images and their sharing for clinical use; the methods to analyze digital histopathological images, based on artificial intelligence techniques and specific models, quantify the required information with significantly higher consistency and precision compared to that provided by optical microscopy. In parallel, the unprecedented advances in machine learning facilitate, through the synergy of artificial intelligence and digital pathology, the possibility of diagnosis based on image analysis, previously limited only to certain specialties. Therefore, the integration of digital images into the study of pathology, combined with advanced algorithms and computer-assisted diagnostic techniques, extends the boundaries of the pathologist's vision beyond the microscopic image and allows the specialist to use and integrate his knowledge and experience adequately. We conducted a search in PubMed on the topic of digital pathology and its applications, to quantify the current state of knowledge. We found that computer-aided image analysis has a superior potential to identify, extract and quantify features in more detail compared to the human pathologist's evaluating possibilities; it performs tasks that exceed its manual capacity, and can produce new diagnostic algorithms and prediction models applicable in translational research that are able to identify new characteristics of diseases based on changes at the cellular and molecular level.
Collapse
Affiliation(s)
- Mihaela Moscalu
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| | - Roxana Moscalu
- Wythenshawe Hospital, Manchester University NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester M139PT, UK
| | - Cristina Gena Dascălu
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| | - Viorel Țarcă
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| | - Elena Cojocaru
- Department of Morphofunctional Sciences I, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| | - Ioana Mădălina Costin
- Faculty of Medicine, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| | - Elena Țarcă
- Department of Surgery II-Pediatric Surgery, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| | - Ionela Lăcrămioara Șerban
- Department of Morpho-Functional Sciences II, Faculty of Medicine, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| |
Collapse
|
20
|
Miranda Ruiz F, Lahrmann B, Bartels L, Krauthoff A, Keil A, Härtel S, Tao AS, Ströbel P, Clarke MA, Wentzensen N, Grabe N. CNN stability training improves robustness to scanner and IHC-based image variability for epithelium segmentation in cervical histology. Front Med (Lausanne) 2023; 10:1173616. [PMID: 37476610 PMCID: PMC10354251 DOI: 10.3389/fmed.2023.1173616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Accepted: 06/06/2023] [Indexed: 07/22/2023] Open
Abstract
Background: In digital pathology, image properties such as color, brightness, contrast and blurriness may vary based on the scanner and sample preparation. Convolutional Neural Networks (CNNs) are sensitive to these variations and may underperform on images from a different domain than the one used for training. Robustness to these image property variations is required to enable the use of deep learning in clinical practice and large scale clinical research. Aims: CNN Stability Training (CST) is proposed and evaluated as a method to increase CNN robustness to scanner and Immunohistochemistry (IHC)-based image variability. Methods: CST was applied to segment epithelium in immunohistological cervical Whole Slide Images (WSIs). CST randomly distorts input tiles and factors the difference between the CNN predictions for the original and distorted inputs into the loss function. CNNs were trained using 114 p16-stained WSIs from the same scanner, and evaluated on 6 WSI test sets, each with 23 to 24 WSIs of the same tissue but different scanner/IHC combinations. Relative robustness (rAUC) was measured as the difference between the AUC on the training domain test set (i.e., the baseline test set) and the remaining test sets. Results: Across all test sets, the AUC of CST models outperformed "No CST" models (AUC: 0.940-0.989 vs. 0.905-0.986, p < 1e-8), and CST models obtained improved robustness (rAUC: [-0.038, -0.003] vs. [-0.081, -0.002]). At the WSI level, CST models showed an increase in performance on 124 of the 142 WSIs. CST models also outperformed models trained with random on-the-fly data augmentation (DA) in all test sets ([0.002, 0.021], p < 1e-6). Conclusion: CST offers a path to improve CNN performance without the need for more data and allows customizing distortions to specific use cases. A python implementation of CST is publicly available at https://github.com/TIGACenter/CST_v1.
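The CST objective described in the Methods, a task loss on the clean tile plus a penalty on the prediction difference between clean and distorted inputs, can be sketched as follows; the toy model, the additive distortion, and the weighting `alpha` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def stability_loss(model, x, y, distort, alpha=0.5):
    """Task loss on the clean input plus a penalty on how far the
    prediction moves under a distortion of the same input."""
    p_clean = model(x)
    p_dist = model(distort(x))
    task = -np.log(p_clean[y] + 1e-12)           # cross-entropy, clean input
    stability = np.sum((p_clean - p_dist) ** 2)  # prediction drift penalty
    return task + alpha * stability

# Toy two-class "model": softmax over logits driven by the input's mean
def model(x):
    logits = np.array([x.mean(), 0.0])
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = np.ones((4, 4))
loss_identity = stability_loss(model, x, y=0, distort=lambda z: z)
loss_shifted = stability_loss(model, x, y=0, distort=lambda z: z + 0.1)
print(loss_identity, loss_shifted)  # drift under distortion raises the loss
```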
Collapse
Affiliation(s)
- Felipe Miranda Ruiz
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
| | - Bernd Lahrmann
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
| | - Liam Bartels
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Medical Oncology Department, National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Alexandra Krauthoff
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Medical Oncology Department, National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Andreas Keil
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
| | - Steffen Härtel
- Medical Faculty, Center of Medical Informatics and Telemedicine (CIMT), University of Chile, Santiago, Chile
| | - Amy S. Tao
- Division of Cancer Epidemiology and Genetics, US National Cancer Institute (NCI), Bethesda, MD, United States
| | - Philipp Ströbel
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
| | - Megan A. Clarke
- Division of Cancer Epidemiology and Genetics, US National Cancer Institute (NCI), Bethesda, MD, United States
| | - Nicolas Wentzensen
- Division of Cancer Epidemiology and Genetics, US National Cancer Institute (NCI), Bethesda, MD, United States
| | - Niels Grabe
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Medical Oncology Department, National Center for Tumor Diseases (NCT), Heidelberg, Germany
| |
Collapse
|
21
|
Sun G, Yan X, Wang H, Li F, Yang R, Xu J, Liu X, Li X, Zou X. Color restoration based on digital pathology image. PLoS One 2023; 18:e0287704. [PMID: 37379301 DOI: 10.1371/journal.pone.0287704] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Accepted: 06/09/2023] [Indexed: 06/30/2023] Open
Abstract
OBJECTIVE To perform protective color restoration of faded digital pathology images based on a color transfer algorithm. METHODS Twenty fresh tissue samples of invasive breast cancer from the pathology department of Qingdao Central Hospital in 2021 were screened. After HE staining, the stained sections were irradiated with sunlight to simulate natural fading; each fading cycle lasted 7 days, for a total of 8 cycles. At the end of each cycle, the sections were digitally scanned to retain clear images, and the color changes of the sections during the fading process were recorded. The color transfer algorithm was applied to restore the color of the faded images; Adobe Lightroom Classic software presented the histogram of the image color distribution; a UNet++ cell recognition segmentation model was used to identify cells in the color-restored images; the Natural Image Quality Evaluator (NIQE), Information Entropy (Entropy), and Average Gradient (AG) were applied to evaluate the quality of the restored images. RESULTS The restored image color met the diagnostic needs of pathologists. Compared with the faded images, the NIQE value decreased (P<0.05), the Entropy value increased (P<0.01), and the AG value increased (P<0.01). The cell recognition rate of the restored images was significantly improved. CONCLUSION The color transfer algorithm can effectively repair faded pathology images, restore the color contrast between nucleus and cytoplasm, improve image quality, meet diagnostic needs, and improve the cell recognition rate of the deep learning model.
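The abstract does not spell out which color transfer algorithm was used; the classic Reinhard-style approach such methods build on matches the per-channel mean and standard deviation of a faded image to those of a well-stained reference. A minimal numpy sketch, assuming plain per-channel statistics matching (real implementations typically work in a decorrelated color space such as lαβ or L*a*b* rather than raw RGB):

```python
import numpy as np

def color_transfer(source, target):
    """Shift each channel of `source` so its mean/std match `target`
    (Reinhard-style statistics matching, sketched per RGB channel)."""
    src = source.astype(float)
    tgt = target.astype(float)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_sd * t_sd + t_mu
    return np.clip(out, 0, 255)

# Hypothetical inputs: a washed-out (faded) tile and a well-stained reference.
faded = np.random.default_rng(0).uniform(100, 160, (8, 8, 3))
fresh = np.random.default_rng(1).uniform(0, 255, (8, 8, 3))
restored = color_transfer(faded, fresh)
```

After the transfer, the restored tile's channel statistics track the reference, which is what restores the nucleus/cytoplasm color contrast the abstract describes.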
Collapse
Affiliation(s)
- Guoxin Sun
- School of Clinical Medicine, Qingdao University, Qingdao, China
| | - Xiong Yan
- Department of Pathology, Qingdao Central Hospital, Qingdao, China
| | - Huizhe Wang
- School of Clinical Medicine, Qingdao University, Qingdao, China
| | - Fei Li
- School of Computer Engineering and Science Shanghai University, Shanghai, China
| | - Rui Yang
- School of Computer Engineering and Science Shanghai University, Shanghai, China
| | - Jing Xu
- Department of Pathology, Qingdao Central Hospital, Qingdao, China
| | - Xin Liu
- School of Clinical Medicine, Qingdao University, Qingdao, China
| | - Xiaomao Li
- School of Computer Engineering and Science Shanghai University, Shanghai, China
| | - Xiao Zou
- Department of Breast Surgery, Xiangdong Hospital Affiliated to Hunan Normal University, Hunan, China
| |
Collapse
|
22
|
Salido J, Vallez N, González-López L, Deniz O, Bueno G. Comparison of deep learning models for digital H&E staining from unpaired label-free multispectral microscopy images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 235:107528. [PMID: 37040684 DOI: 10.1016/j.cmpb.2023.107528] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Revised: 03/27/2023] [Accepted: 04/03/2023] [Indexed: 05/08/2023]
Abstract
BACKGROUND AND OBJECTIVE This paper presents a quantitative comparison of three generative models for digital staining, also known as virtual staining, in the H&E (Hematoxylin and Eosin) modality, applied to 5 types of breast tissue. In addition, a qualitative evaluation of the results achieved with the best model was carried out. The process is based on images of unstained samples captured by a multispectral microscope, with prior dimensionality reduction to three channels in the RGB range. METHODS The models compared are a conditional GAN (pix2pix), which requires aligned image pairs with/without staining, and two models that do not require image alignment: Cycle GAN (cycleGAN) and a contrastive learning-based model (CUT). These models are compared on the structural similarity and chromatic discrepancy between chemically stained samples and their digitally stained counterparts. The correspondence between images is obtained by digitally unstaining the chemically stained images with a model trained to guarantee the cyclic consistency of the generative models. RESULTS The comparison of the three models corroborates the visual evaluation of the results, showing the superiority of cycleGAN both in its larger structural similarity with respect to chemical staining (mean SSIM ∼ 0.95) and its lower chromatic discrepancy (10%). For the chromatic comparison, quantization and the EMD (Earth Mover's Distance) between clusters are used. In addition, subjective psychophysical tests with three experts were carried out to evaluate the quality of the results of the best model (cycleGAN). CONCLUSIONS The results can be satisfactorily evaluated by metrics that use, as reference, a chemically stained sample and the digitally stained image of that reference after prior digital unstaining. These metrics demonstrate that generative staining models that guarantee cyclic consistency provide the results closest to chemical H&E staining, which is also consistent with the qualitative evaluation by the experts.
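The chromatic discrepancy metric above relies on the Earth Mover's Distance between quantized color clusters. For one-dimensional distributions EMD has a closed form, the L1 distance between cumulative distributions, which can be sketched as follows (a simplified 1-D illustration, not the paper's multi-dimensional cluster computation):

```python
import numpy as np

def emd_1d(hist_a, hist_b):
    """Earth Mover's Distance between two 1-D histograms: after
    normalization, it reduces to the L1 distance between the CDFs."""
    a = np.asarray(hist_a, float)
    a /= a.sum()
    b = np.asarray(hist_b, float)
    b /= b.sum()
    return np.abs(np.cumsum(a) - np.cumsum(b)).sum()

# Identical color distributions have zero discrepancy...
assert emd_1d([1, 2, 1], [1, 2, 1]) == 0.0
# ...and moving a unit of mass by one bin costs one bin of "work".
d = emd_1d([1, 0, 0], [0, 1, 0])
```

Unlike a bin-by-bin histogram difference, EMD accounts for how far color mass has to move, which makes it a natural measure of chromatic discrepancy between a chemically stained sample and its digitally stained counterpart.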
Collapse
Affiliation(s)
- Jesus Salido
- IEEAC Dept. (ESI-UCLM), P de la Universidad 4, Ciudad Real, 13071, Spain.
| | - Noelia Vallez
- IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
| | - Lucía González-López
- Hospital Gral. Universitario de C.Real (HGUCR), C. Obispo Rafael Torija s/n, Ciudad Real, 13005, Spain
| | - Oscar Deniz
- IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
| | - Gloria Bueno
- IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
| |
Collapse
|
23
|
Rong R, Wang S, Zhang X, Wen Z, Cheng X, Jia L, Yang DM, Xie Y, Zhan X, Xiao G. Enhanced Pathology Image Quality with Restore-Generative Adversarial Network. THE AMERICAN JOURNAL OF PATHOLOGY 2023; 193:404-416. [PMID: 36669682 PMCID: PMC10123520 DOI: 10.1016/j.ajpath.2022.12.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Revised: 12/12/2022] [Accepted: 12/20/2022] [Indexed: 01/20/2023]
Abstract
Whole slide imaging is becoming a routine procedure in clinical diagnosis. Advanced image analysis techniques have been developed to assist pathologists in disease diagnosis, staging, subtype classification, and risk stratification. Recently, deep learning algorithms have achieved state-of-the-art performance in various image analysis tasks, including tumor region segmentation, nuclei detection, and disease classification. However, widespread clinical use of these algorithms is hampered by performance degradation due to image quality issues commonly seen in real-world pathology imaging data, such as low resolution, blurred regions, and staining variation. The Restore-Generative Adversarial Network (Restore-GAN), a deep learning model, was developed to improve image quality by restoring blurred regions, enhancing low resolution, and normalizing staining colors. The results demonstrate that Restore-GAN can significantly improve image quality, which leads to improved robustness and performance for existing deep learning algorithms in pathology image analysis. Restore-GAN has the potential to facilitate the application of deep learning models in digital pathology analyses.
Collapse
Affiliation(s)
- Ruichen Rong
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Shidan Wang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Xinyi Zhang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Zhuoyu Wen
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Xian Cheng
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Liwei Jia
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Donghan M Yang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Yang Xie
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas; Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, Texas; Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Xiaowei Zhan
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas; Center for the Genetics of Host Defense, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Guanghua Xiao
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas; Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, Texas; Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, Texas.
| |
Collapse
|
24
|
Kim J, Ko S, Kim M, Park NJY, Han H, Cho J, Park JY. Deep Learning Prediction of TERT Promoter Mutation Status in Thyroid Cancer Using Histologic Images. Medicina (B Aires) 2023; 59:medicina59030536. [PMID: 36984536 PMCID: PMC10055833 DOI: 10.3390/medicina59030536] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Revised: 02/02/2023] [Accepted: 03/06/2023] [Indexed: 03/12/2023] Open
Abstract
Background and objectives: Telomerase reverse transcriptase (TERT) promoter mutation, found in a subset of patients with thyroid cancer, is strongly associated with aggressive biologic behavior. Predicting TERT promoter mutation is thus necessary for the prognostic stratification of thyroid cancer patients. Materials and Methods: In this study, we evaluate TERT promoter mutation status in thyroid cancer through a deep learning approach using histologic images. Our analysis included 13 consecutive surgically resected thyroid cancers with TERT promoter mutations (either C228T or C250T) and 12 randomly selected surgically resected thyroid cancers with a wild-type TERT promoter. Our deep learning model was created using a two-step cascade approach. First, tumor areas were identified using convolutional neural networks (CNNs), and then TERT promoter mutations within tumor areas were predicted using a CNN–recurrent neural network (CRNN) model. Results: Using the hue–saturation–value (HSV)-strong color transformation scheme, the overall experimental results show 99.9% sensitivity and 60% specificity (improvements of approximately 25% and 37%, respectively, over image normalization as a baseline model) in predicting TERT mutations. Conclusions: Highly sensitive screening for TERT promoter mutations is possible using histologic image analysis based on deep learning. This approach will help improve the classification of thyroid cancer patients according to the biologic behavior of tumors.
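The "HSV-strong" color transformation named above operates in hue–saturation–value space rather than on raw RGB. A minimal stdlib sketch of the general idea, assuming a simple saturation boost (the gain value and exact scheme are illustrative assumptions, not the paper's specification):

```python
import colorsys

def hsv_strong(rgb_pixels, sat_gain=1.5):
    """Illustrative 'HSV-strong' transform: convert each RGB pixel to
    HSV, amplify saturation, and convert back. `sat_gain` is a
    hypothetical parameter for this sketch."""
    out = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        s = min(1.0, s * sat_gain)  # strengthen color, capped at 1
        r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
        out.append((round(r2 * 255), round(g2 * 255), round(b2 * 255)))
    return out

# A muted stain pink becomes more saturated; a gray pixel is unchanged.
pixels = hsv_strong([(200, 150, 150), (128, 128, 128)])
```

Because hue and value are preserved, such a transform exaggerates stain color differences without altering tissue brightness, which is one plausible reason it outperformed plain image normalization as reported.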
Collapse
Affiliation(s)
- Jinhee Kim
- Department of Pathology, Kyungpook National University School of Medicine, Kyungpook National University Chilgok Hospital, Daegu 41404, Republic of Korea
| | - Seokhwan Ko
- Clinical Omics Institute, Kyungpook National University, Daegu 41405, Republic of Korea
- Department of Biomedical Science, School of Medicine, Kyungpook National University, Daegu 41944, Republic of Korea
| | - Moonsik Kim
- Department of Pathology, Kyungpook National University School of Medicine, Kyungpook National University Chilgok Hospital, Daegu 41404, Republic of Korea
| | - Nora Jee-Young Park
- Department of Pathology, Kyungpook National University School of Medicine, Kyungpook National University Chilgok Hospital, Daegu 41404, Republic of Korea
| | - Hyungsoo Han
- Clinical Omics Institute, Kyungpook National University, Daegu 41405, Republic of Korea
- Department of Physiology, School of Medicine, Kyungpook National University, Daegu 41944, Republic of Korea
| | - Junghwan Cho
- Clinical Omics Institute, Kyungpook National University, Daegu 41405, Republic of Korea
- Correspondence: (J.C.); (J.Y.P.); Tel.: +82-53-950-4214 or +82-01-8315-1896 (J.C.); Tel.: +82-53-200-3408 or +82-10-9941-5245 (J.Y.P.)
| | - Ji Young Park
- Department of Pathology, Kyungpook National University School of Medicine, Kyungpook National University Chilgok Hospital, Daegu 41404, Republic of Korea
- Correspondence: (J.C.); (J.Y.P.); Tel.: +82-53-950-4214 or +82-01-8315-1896 (J.C.); Tel.: +82-53-200-3408 or +82-10-9941-5245 (J.Y.P.)
| |
Collapse
|
25
|
Impact of Stain Normalization on Pathologist Assessment of Prostate Cancer: A Comparative Study. Cancers (Basel) 2023; 15:cancers15051503. [PMID: 36900293 PMCID: PMC10000688 DOI: 10.3390/cancers15051503] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 02/17/2023] [Accepted: 02/23/2023] [Indexed: 03/04/2023] Open
Abstract
In clinical routine, the quality of whole-slide images plays a key role in the pathologist's diagnosis, and suboptimal staining may be a limiting factor. The stain normalization process helps to solve this problem through the standardization of the color appearance of a source image with respect to a target image with optimal chromatic features. The analysis focuses on the evaluation of the following parameters, assessed by two experts on original and normalized slides: (i) perceived color quality, (ii) diagnosis for the patient, (iii) diagnostic confidence and (iv) time required for diagnosis. Results show a statistically significant increase in color quality in the normalized images for both experts (p < 0.0001). Regarding prostate cancer assessment, the average times for diagnosis are significantly lower for normalized images than for the originals (first expert: 69.9 s vs. 77.9 s with p < 0.0001; second expert: 37.4 s vs. 52.7 s with p < 0.0001), and at the same time, a statistically significant increase in diagnostic confidence is demonstrated. The improvement of poor-quality images and the greater clarity of diagnostically important details in normalized slides demonstrate the potential of stain normalization in the routine practice of prostate cancer assessment.
Collapse
|
26
|
Stain color translation of multi-domain OSCC histopathology images using attention gated cGAN. Comput Med Imaging Graph 2023; 106:102202. [PMID: 36857953 DOI: 10.1016/j.compmedimag.2023.102202] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 02/19/2023] [Accepted: 02/19/2023] [Indexed: 02/26/2023]
Abstract
Oral Squamous Cell Carcinoma (OSCC) is the most prevalent type of oral cancer across the globe. Histopathology examination is the gold standard for OSCC assessment, where stained histopathology slides help in studying and analyzing the cell structures under a microscope to determine the stage and grade of OSCC. One popular staining method, H&E staining, is used to produce differential coloration, highlight key tissue features, and improve contrast, which makes cell analysis easier. However, stained H&E histopathology images exhibit inter- and intra-variation due to staining techniques, incubation times, and staining reagents. These variations negatively impact the accuracy and development of computer-aided diagnosis (CAD) and machine learning algorithms. A pre-processing procedure called stain normalization must be employed to reduce the negative impact of stain variance. Numerous state-of-the-art stain normalization methods have been introduced. However, a robust multi-domain stain normalization approach is still required because, in a real-world situation, OSCC histopathology images will include more than two color variations spanning several domains. In this paper, a multi-domain stain translation method is proposed. The proposed method is an attention-gated generator based on a Conditional Generative Adversarial Network (cGAN), with a novel objective function to enforce color distribution and perceptual resemblance between the source and target domains. Instead of using WSI scanner images as in previous techniques, the proposed method is evaluated on OSCC histopathology images obtained with several conventional microscopes coupled with cameras. In inference mode, the proposed method receives the L* channel from the L*a*b* color space and generates the color-adapted G(a*b*) channels.
The proposed technique uses mappings learned during the training phase to translate the source domain to the target domain; the mappings are learned using the whole color distribution of the target domain instead of a single reference image. The proposed technique outperforms four state-of-the-art methods in multi-domain OSCC histopathological translation; this claim is supported by both quantitative and qualitative assessments.
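The inference path described above, keeping the source luminance (L*) and regenerating only the color channels (a*b*), can be sketched schematically. This is a structural sketch only: the stand-in generator below is a hypothetical placeholder for the paper's attention-gated cGAN, and the color conversion itself is omitted:

```python
import numpy as np

def translate_stain(lab_image, generator):
    """Keep the source L* (luminance) channel and replace a*b* (color)
    with the generator's output, so tissue structure is preserved while
    color is adapted to the target stain domain."""
    L = lab_image[..., :1]        # luminance channel, passed through
    ab_generated = generator(L)   # G(a*b*): generated color channels
    return np.concatenate([L, ab_generated], axis=-1)

# Hypothetical stand-in for the trained attention-gated cGAN generator:
dummy_gen = lambda L: np.repeat(L, 2, axis=-1) * 0.5
lab = np.random.default_rng(0).random((4, 4, 3))   # toy L*a*b* tile
out = translate_stain(lab, dummy_gen)
```

Conditioning the generator only on L* is what lets a single model serve multiple source domains: whatever stain colors the input carried are discarded, and only structure-bearing luminance reaches the generator.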
Collapse
|
27
|
Bouyssoux A, Jarnouen K, Lallement L, Fezzani R, Olivo-Marin JC. Automated staining analysis in digital cytopathology and applications. Cytometry A 2022; 101:1068-1083. [PMID: 35614552 DOI: 10.1002/cyto.a.24659] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 02/25/2022] [Accepted: 05/16/2022] [Indexed: 01/27/2023]
Abstract
The progress of digital pathology in recent years has created an opportunity for the development of automated image analysis algorithms for quantitative measurement and computer-aided diagnosis. With these new methods comes the need for high staining quality and reproducibility, as image analysis tools are typically more sensitive to slight stain variations than trained pathologists. This article presents a method for the automated analysis of cytology slide stains specifically adapted to the challenges encountered in digital cytopathology. In particular, the variety of cell types in cytology slides, the 3D distribution of the cellular material, the presence of superposed cells and the need for independent analysis of sub-cellular compartments are addressed. The proposed method is applied to the quantification of staining variations for quality control, resulting from changes in the staining protocol such as reagent immersion time or a reagent change. Another demonstrated application is the selection of staining protocol parameters that maximize the visible details in the nucleus. Finally, the analysis pipeline is also used to compare different stain normalization algorithms on digital cytology slides. Code available at: https://gitlab.com/vitadx/articles/automated_staining_analysis.
Collapse
Affiliation(s)
- Alexandre Bouyssoux
- BioImage Analysis Unit, CNRS UMR 3691, Institut Pasteur, Université de Paris, Paris, France; VitaDX International, Paris, France
| |
Collapse
|
28
|
Kosaraju S, Park J, Lee H, Yang JW, Kang M. Deep learning-based framework for slide-based histopathological image analysis. Sci Rep 2022; 12:19075. [PMID: 36351997 PMCID: PMC9646838 DOI: 10.1038/s41598-022-23166-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Accepted: 10/26/2022] [Indexed: 11/11/2022] Open
Abstract
Digital pathology coupled with advanced machine learning (e.g., deep learning) has been changing the paradigm of whole-slide histopathological image (WSI) analysis. Major applications of machine learning in digital pathology include automatic cancer classification, survival analysis, and subtyping from pathological images. While most pathological image analyses are based on patch-wise processing due to the extremely large size of histopathology images, several applications predict a single clinical outcome or perform pathological diagnosis per slide (e.g., cancer classification, survival analysis). However, current slide-based analyses are task-dependent, and a general framework for slide-based analysis of WSIs has seldom been investigated. We propose a novel slide-based histopathology analysis framework that creates a WSI representation map, called HipoMap, that can be applied to any slide-based problem, coupled with convolutional neural networks. HipoMap converts a WSI of varying shape and size into a structured image-type representation. HipoMap outperformed existing methods in intensive experiments with various settings and datasets. As a general and flexible framework for slide-based analysis, HipoMap achieved an Area Under the Curve (AUC) of 0.96±0.026 (5% improvement) in lung cancer classification experiments, and a c-index of 0.787±0.013 (3.5% improvement) and a coefficient of determination (R²) of 0.978±0.032 (24% improvement) in survival analysis and survival prediction with TCGA lung cancer data, respectively. The results showed significant improvement compared with current state-of-the-art methods on each task. We further discuss the experimental results of HipoMap from a pathological viewpoint and verify the performance using publicly available TCGA datasets. A Python package is available at https://pypi.org/project/hipomap and can be easily installed using pip.
The open-source code in Python is available at: https://github.com/datax-lab/HipoMap .
Collapse
Affiliation(s)
- Sai Kosaraju
- Department of Computer Science, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
| | - Jeongyeon Park
- Department of Computer Science, Sun Moon University, Asan 336708, South Korea
| | - Hyun Lee
- Department of Computer Science, Sun Moon University, Asan 336708, South Korea
| | - Jung Wook Yang
- Department of Pathology, Gyeongsang National University Hospital, Gyeongsang National University College of Medicine, Jinju, South Korea
| | - Mingon Kang
- Department of Computer Science, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
| |
Collapse
|
29
|
Michielli N, Caputo A, Scotto M, Mogetta A, Pennisi OAM, Molinari F, Balmativola D, Bosco M, Gambella A, Metovic J, Tota D, Carpenito L, Gasparri P, Salvi M. Stain normalization in digital pathology: Clinical multi-center evaluation of image quality. J Pathol Inform 2022; 13:100145. [PMID: 36268060 PMCID: PMC9577129 DOI: 10.1016/j.jpi.2022.100145] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Revised: 09/14/2022] [Accepted: 09/22/2022] [Indexed: 11/20/2022] Open
Abstract
In digital pathology, the final appearance of digitized images is affected by several factors, resulting in variation of stain color and intensity. Stain normalization is an innovative solution to overcome stain variability. However, validation of color normalization tools has so far been assessed only from a quantitative perspective, through the computation of similarity metrics between the original and normalized images. To the best of our knowledge, no prior work has investigated the impact of normalization on the pathologist's evaluation. The objective of this paper is to propose a multi-tissue (i.e., breast, colon, liver, lung, and prostate) and multi-center qualitative analysis of a stain normalization tool involving pathologists with different years of experience. Two qualitative studies were carried out for this purpose: (i) a first study focused on the perceived image quality and the absence of significant image artifacts after the normalization process; (ii) a second study focused on the clinical score of the normalized image with respect to the original one. The results of the first study demonstrate the high quality of the normalized images, with minimal artifact generation, while the second study demonstrates the superiority of the normalized images over the originals in clinical practice. The normalization process can both reduce variability due to tissue staining procedures and facilitate the pathologist's histological examination. The experimental results obtained in this work are encouraging and justify the use of a stain normalization tool in clinical routine.
Collapse
Affiliation(s)
- Nicola Michielli
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
| | - Alessandro Caputo
- Department of Medicine and Surgery, University Hospital of Salerno, Salerno, Italy
| | - Manuela Scotto
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
| | - Alessandro Mogetta
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
| | - Orazio Antonino Maria Pennisi
- Technology Transfer and Industrial Liaison Department, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
| | - Filippo Molinari
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
| | - Davide Balmativola
- Pathology Unit, Humanitas Gradenigo Hospital, Corso Regina Margherita 8, 10153 Turin, Italy
| | - Martino Bosco
- Department of Pathology, Michele and Pietro Ferrero Hospital, 12060 Verduno, Italy
| | - Alessandro Gambella
- Pathology Unit, Department of Medical Sciences, University of Turin, Via Santena 7, 10126 Turin, Italy
| | - Jasna Metovic
- Pathology Unit, Department of Medical Sciences, University of Turin, Via Santena 7, 10126 Turin, Italy
| | - Daniele Tota
- Pathology Unit, Department of Medical Sciences, University of Turin, Via Santena 7, 10126 Turin, Italy
| | - Laura Carpenito
- Department of Pathology, Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
- University of Milan, Milan, Italy
| | - Paolo Gasparri
- UOC di Anatomia Patologica, ASP Catania P.O. “Gravina”, Caltagirone, Italy
| | - Massimo Salvi
- Biolab, PolitoMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
| |
Collapse
|
30
|
McKenzie AT, Marx GA, Koenigsberg D, Sawyer M, Iida MA, Walker JM, Richardson TE, Campanella G, Attems J, McKee AC, Stein TD, Fuchs TJ, White CL, Farrell K, Crary JF. Interpretable deep learning of myelin histopathology in age-related cognitive impairment. Acta Neuropathol Commun 2022; 10:131. [PMID: 36127723 PMCID: PMC9490907 DOI: 10.1186/s40478-022-01425-5] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2022] [Accepted: 08/09/2022] [Indexed: 02/08/2023] Open
Abstract
Age-related cognitive impairment is multifactorial, with numerous underlying and frequently co-morbid pathological correlates. Amyloid beta (Aβ) plays a major role in Alzheimer's type age-related cognitive impairment, in addition to other etiopathologies such as Aβ-independent hyperphosphorylated tau, cerebrovascular disease, and myelin damage, which also warrant further investigation. Classical methods, even in the setting of the gold standard of postmortem brain assessment, involve semi-quantitative ordinal staging systems that often correlate poorly with clinical outcomes, due to imperfect cognitive measurements and preconceived notions regarding the neuropathologic features that should be chosen for study. Improved approaches are needed to identify histopathological changes correlated with cognition in an unbiased way. We used a weakly supervised multiple instance learning algorithm on whole slide images of human brain autopsy tissue sections from a group of elderly donors to predict the presence or absence of cognitive impairment (n = 367 with cognitive impairment, n = 349 without). Attention analysis allowed us to pinpoint the underlying subregional architecture and cellular features that the models used for the prediction in both brain regions studied, the medial temporal lobe and frontal cortex. Despite noisy labels of cognition, our trained models were able to predict the presence of cognitive impairment with a modest accuracy that was significantly greater than chance. Attention-based interpretation studies of the features most associated with cognitive impairment in the top performing models suggest that they identified myelin pallor in the white matter. Our results demonstrate a scalable platform with interpretable deep learning to identify unexpected aspects of pathology in cognitive impairment that can be translated to the study of other neurobiological disorders.
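The weakly supervised multiple instance learning with attention described above is commonly built on attention-based pooling: each tile embedding receives a learned weight, and the slide-level representation is the weighted sum, so the weights themselves indicate which regions (e.g., pallid white matter) drove the prediction. A minimal numpy sketch under stated assumptions (the two-layer attention parameterization follows the widely used Ilse et al. formulation, not necessarily this paper's exact architecture; the weights here are random stand-ins):

```python
import numpy as np

def attention_pool(instances, V, w):
    """Attention-based MIL pooling: score each tile embedding with a
    small learned network, softmax the scores into attention weights,
    and return the attention-weighted slide embedding plus the weights
    (which serve as the interpretability map)."""
    scores = np.tanh(instances @ V) @ w   # one score per tile
    a = np.exp(scores - scores.max())
    a /= a.sum()                          # softmax attention weights
    bag = a @ instances                   # weighted slide-level embedding
    return bag, a

rng = np.random.default_rng(0)
tiles = rng.standard_normal((5, 8))      # 5 tile embeddings, dim 8
V = rng.standard_normal((8, 4))          # stand-in learned parameters
w = rng.standard_normal(4)
bag, attn = attention_pool(tiles, V, w)
```

Because the attention weights sum to one over the slide's tiles, ranking tiles by weight is exactly the "attention analysis" that let the authors trace the cognitive-impairment signal back to myelin pallor in white matter.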
Affiliation(s)
- Andrew T McKenzie
- Departments of Pathology, Neuroscience, and Artificial Intelligence & Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Neuropathology Brain Bank & Research Core, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer's Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA

- Gabriel A Marx
- Departments of Pathology, Neuroscience, and Artificial Intelligence & Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Neuropathology Brain Bank & Research Core, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer's Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA

- Daniel Koenigsberg
- Departments of Pathology, Neuroscience, and Artificial Intelligence & Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Neuropathology Brain Bank & Research Core, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer's Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA

- Mary Sawyer
- Departments of Pathology, Neuroscience, and Artificial Intelligence & Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Neuropathology Brain Bank & Research Core, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer's Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA

- Megan A Iida
- Departments of Pathology, Neuroscience, and Artificial Intelligence & Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Neuropathology Brain Bank & Research Core, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer's Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA

- Jamie M Walker
- Department of Pathology, University of Texas Health Science Center, San Antonio, TX, USA
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center, San Antonio, TX, USA

- Timothy E Richardson
- Department of Pathology, University of Texas Health Science Center, San Antonio, TX, USA
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center, San Antonio, TX, USA

- Gabriele Campanella
- Departments of Pathology, Neuroscience, and Artificial Intelligence & Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA

- Johannes Attems
- Translation and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, NE4 5PL, UK

- Ann C McKee
- Department of Pathology, VA Medical Center & Boston University School of Medicine, Boston, MA, USA

- Thor D Stein
- Department of Pathology, VA Medical Center & Boston University School of Medicine, Boston, MA, USA

- Thomas J Fuchs
- Departments of Pathology, Neuroscience, and Artificial Intelligence & Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA

- Charles L White
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA

- Kurt Farrell
- Departments of Pathology, Neuroscience, and Artificial Intelligence & Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Neuropathology Brain Bank & Research Core, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Ronald M. Loeb Center for Alzheimer's Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Department of Pathology, Icahn School of Medicine at Mount Sinai, Icahn Building 9th Floor, L9-02C, 1425 Madison Avenue, New York, NY, USA.

- John F Crary
- Departments of Pathology, Neuroscience, and Artificial Intelligence & Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Neuropathology Brain Bank & Research Core, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Ronald M. Loeb Center for Alzheimer's Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Department of Pathology, Icahn School of Medicine at Mount Sinai, Icahn Building 9th Floor, Room 20A, 1425 Madison Avenue, New York, NY, 10029, USA.
31
Hameed Z, Garcia-Zapirain B, Aguirre JJ, Isaza-Ruget MA. Multiclass classification of breast cancer histopathology images using multilevel features of deep convolutional neural network. Sci Rep 2022; 12:15600. [PMID: 36114214 PMCID: PMC9649689 DOI: 10.1038/s41598-022-19278-2] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Accepted: 08/26/2022] [Indexed: 12/03/2022] Open
Abstract
Breast cancer is a common malignancy and a leading cause of cancer-related deaths in women worldwide. Its early diagnosis can significantly reduce the morbidity and mortality rates in women. To this end, histopathological diagnosis is usually followed as the gold standard approach. However, this process is tedious, labor-intensive, and may be subject to inter-reader variability. Accordingly, an automatic diagnostic system can assist in improving the quality of diagnosis. This paper presents a deep learning approach to automatically classify hematoxylin-eosin-stained breast cancer microscopy images into normal tissue, benign lesion, in situ carcinoma, and invasive carcinoma using our collected dataset. Our proposed model exploited six intermediate layers of the Xception (Extreme Inception) network to retrieve robust and abstract features from input images. First, we optimized the proposed model on the original (unnormalized) dataset using 5-fold cross-validation. Then, we investigated its performance on four normalized datasets resulting from Reinhard, Ruifrok, Macenko, and Vahadane stain normalization. For original images, our proposed framework yielded an accuracy of 98% along with a kappa score of 0.969. Also, it achieved an average AUC-ROC score of 0.998 as well as a mean AUC-PR value of 0.995. Specifically, for in situ carcinoma and invasive carcinoma, it offered sensitivity of 96% and 99%, respectively. For normalized images, the proposed architecture performed better with Macenko normalization than with the other three techniques. In this case, the proposed model achieved an accuracy of 97.79% together with a kappa score of 0.965. Also, it attained an average AUC-ROC score of 0.997 and a mean AUC-PR value of 0.991. Again, for in situ carcinoma and invasive carcinoma, it offered sensitivity of 96% and 99%, respectively. These results demonstrate that our proposed model outperformed the baseline AlexNet as well as the state-of-the-art VGG16, VGG19, Inception-v3, and Xception models with their default settings. Furthermore, although the stain normalization techniques offered competitive performance, they could not surpass the results on the original dataset.
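The multilevel-feature idea above, tapping several intermediate layers of a backbone rather than only the last one, reduces in its simplest form to pooling each intermediate feature map and concatenating the results. A hedged numpy sketch of that fusion step (the paper uses six Xception layers; the shapes and pooling choice here are illustrative, not the authors' exact architecture):

```python
import numpy as np

def fuse_multilevel_features(feature_maps):
    """Global-average-pool each intermediate feature map of shape
    (height, width, channels) and concatenate the pooled vectors into a
    single multilevel descriptor for the downstream classifier."""
    pooled = [fm.mean(axis=(0, 1)) for fm in feature_maps]
    return np.concatenate(pooled)
```

Deeper layers contribute abstract features and shallower layers contribute texture detail, which is the motivation for fusing them.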
32
Abu Haeyeh Y, Ghazal M, El-Baz A, Talaat IM. Development and Evaluation of a Novel Deep-Learning-Based Framework for the Classification of Renal Histopathology Images. Bioengineering (Basel) 2022; 9:423. [PMID: 36134972 PMCID: PMC9495730 DOI: 10.3390/bioengineering9090423] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 08/11/2022] [Accepted: 08/23/2022] [Indexed: 12/24/2022] Open
Abstract
Kidney cancer has several types, with renal cell carcinoma (RCC) being the most prevalent and severe type, accounting for more than 85% of adult patients. The manual analysis of whole slide images (WSI) of renal tissues is the primary tool for RCC diagnosis and prognosis. However, the manual identification of RCC is time-consuming and prone to inter-subject variability. In this paper, we aim to distinguish between benign tissue and malignant RCC tumors and identify the tumor subtypes to support medical therapy management. We propose a novel multiscale weakly-supervised deep learning approach for RCC subtyping. Our system starts by applying the RGB-histogram specification stain normalization on the whole slide images to eliminate the effect of the color variations on the system performance. Then, we follow the multiple instance learning approach by dividing the input data into multiple overlapping patches to maintain the tissue connectivity. Finally, we train three multiscale convolutional neural networks (CNNs) and apply decision fusion to their predicted results to obtain the final classification decision. Our dataset comprises four classes of renal tissues: non-RCC renal parenchyma, non-RCC fat tissues, clear cell RCC (ccRCC), and clear cell papillary RCC (ccpRCC). The developed system demonstrates a high classification accuracy and sensitivity on the RCC biopsy samples at the slide level. Following a leave-one-subject-out cross-validation approach, the developed RCC subtype classification system achieves an overall classification accuracy of 93.0% ± 4.9%, a sensitivity of 91.3% ± 10.7%, and a high classification specificity of 95.6% ± 5.2%, in distinguishing ccRCC from ccpRCC or non-RCC tissues. Furthermore, our method outperformed the state-of-the-art Resnet-50 model.
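The RGB-histogram-specification normalization named above maps each channel of an input image onto a reference channel's intensity distribution by matching empirical CDFs. A minimal per-channel numpy sketch of histogram specification, a standard construction rather than the paper's exact implementation:

```python
import numpy as np

def match_histogram_channel(source, reference):
    """Histogram specification for one image channel: map the source
    intensities onto the reference channel's distribution by aligning
    the two empirical CDFs."""
    src_vals, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source intensity, find the reference intensity with the
    # same cumulative probability, then remap every pixel.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return np.interp(source.ravel(), src_vals, mapped).reshape(source.shape)
```

Applying this to R, G, and B separately against a reference slide removes much of the inter-scanner color variation before training.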
Affiliation(s)
- Yasmine Abu Haeyeh
- College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates

- Mohammed Ghazal
- College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates

- Ayman El-Baz
- BioImaging Lab, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA

- Iman M. Talaat
- Clinical Sciences Department, College of Medicine, University of Sharjah, Sharjah 27272, United Arab Emirates
33
H&E Multi-Laboratory Staining Variance Exploration with Machine Learning. Appl Sci (Basel) 2022. [DOI: 10.3390/app12157511] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
In diagnostic histopathology, hematoxylin and eosin (H&E) staining is a critical process that highlights salient histological features. Staining results vary between laboratories regardless of the histopathological task, even though the method itself does not change. This variance can impair the accuracy of algorithms and lengthen histopathologists' time-to-insight. Characterizing the variance can help calibrate stain-normalization methods to counteract these effects. Using machine learning, this study evaluated the staining variance between different laboratories on three tissue types. We received H&E-stained slides from 66 different laboratories. Each slide contained kidney, skin, and colon tissue samples stained by the method routinely used in each laboratory. The samples were digitized and summarized as red, green, and blue channel histograms. Dimensionality was reduced using principal component analysis. The projected data were fed into the k-means clustering algorithm and the k-nearest neighbors classifier with the laboratories as the target. The k-means silhouette index indicated that K = 2 clusters had the best separability in all tissue types. The supervised classification results showed laboratory effects and tissue-type bias. Both supervised and unsupervised approaches suggested that tissue type also affected inter-laboratory variance. We therefore suggest that tissue type also be considered when choosing the staining and color-normalization approach.
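The feature pipeline described, per-channel RGB histograms reduced by principal component analysis, can be sketched compactly in numpy; the bin count below is an illustrative assumption (the paper does not fix one here), and PCA is done via SVD of the centered feature matrix:

```python
import numpy as np

def rgb_histogram_features(images, bins=16):
    """Summarize each 8-bit RGB image as concatenated, density-normalized
    per-channel histograms (one feature row per image)."""
    feats = []
    for img in images:
        h = [np.histogram(img[..., c], bins=bins, range=(0, 256),
                          density=True)[0] for c in range(3)]
        feats.append(np.concatenate(h))
    return np.array(feats)

def pca_project(X, n_components=2):
    """Project feature rows onto their leading principal components,
    computed from the SVD of the mean-centered matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

The projected rows are what would then be handed to k-means or a k-NN classifier with the laboratory as the target label.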
34
Tan XJ, Mustafa N, Mashor MY, Rahman KSA. Automated knowledge-assisted mitosis cells detection framework in breast histopathology images. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2022; 19:1721-1745. [PMID: 35135226 DOI: 10.3934/mbe.2022081] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Based on the Nottingham Histopathology Grading (NHG) system, mitosis cell detection is one of the important criteria for determining the grade of breast carcinoma. It is a challenging task due to the heterogeneous microenvironment of breast histopathology images. Recognition of complex and inconsistent objects in medical images can be achieved by incorporating domain knowledge from the field of interest. In this study, histopathologists' strategies and a domain-knowledge approach guided the development of an image processing framework for automated mitosis cell detection in breast histopathology images. The detection framework starts with color normalization and hyperchromatic nucleus segmentation. Then, a knowledge-assisted false-positive reduction method is proposed to eliminate non-mitosis cells; this stage aims to minimize the false-positive rate and thus increase the F1-score. Next, feature extraction was performed, and the mitosis candidates were classified using a Support Vector Machine (SVM) classifier. For evaluation, the knowledge-assisted detection framework was tested on two datasets: a custom dataset and the publicly available MITOS dataset. The proposed false-positive reduction method eliminated at least 87.1% of false positives in both datasets. Experimental results demonstrate that the framework achieves promising F1-scores (custom dataset: 89.1%; MITOS dataset: 88.9%) and outperforms recent works.
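Why false-positive reduction raises the F1-score follows directly from the precision/recall algebra: removing false positives raises precision while leaving recall untouched. A small stdlib sketch with illustrative counts (not the paper's actual confusion matrix):

```python
def f1_score(tp, fp, fn):
    """F1 from detection counts: the harmonic mean of precision
    (tp / (tp + fp)) and recall (tp / (tp + fn))."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With 80 true mitoses found and 20 missed, cutting the false positives from 100 to 13 (roughly the 87% elimination reported) moves F1 from about 0.57 to about 0.83 at unchanged recall.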
Affiliation(s)
- Xiao Jian Tan
- Centre for Multimodal Signal Processing, Department of Electrical and Electronic Engineering, Faculty of Engineering and Technology, Tunku Abdul Rahman University College (TARUC), Jalan Genting Kelang, Setapak 53300, Kuala Lumpur, Malaysia

- Nazahah Mustafa
- Biomedical Electronic Engineering Programme, Faculty of Electronic Engineering Technology, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia

- Mohd Yusoff Mashor
- Biomedical Electronic Engineering Programme, Faculty of Electronic Engineering Technology, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia

- Khairul Shakir Ab Rahman
- Department of Pathology, Hospital Tuanku Fauziah, 01000 Jalan Tun Abdul Razak, Kangar, Perlis, Malaysia
35
Rashmi R, Prasad K, Udupa CBK. Breast histopathological image analysis using image processing techniques for diagnostic purposes: A methodological review. J Med Syst 2021; 46:7. [PMID: 34860316 PMCID: PMC8642363 DOI: 10.1007/s10916-021-01786-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Accepted: 10/21/2021] [Indexed: 12/24/2022]
Abstract
Breast cancer in women is the second most common cancer worldwide. Early detection of breast cancer can reduce the risk to human life. Non-invasive techniques such as mammograms and ultrasound imaging are popularly used to detect the tumour. However, histopathological analysis is necessary to determine the malignancy of the tumour, as it analyses the image at the cellular level. Manual analysis of these slides is time-consuming, tedious, subjective, and susceptible to human error. Moreover, the interpretation of these images is at times inconsistent between laboratories. Hence, a Computer-Aided Diagnostic system that can act as a decision support system is the need of the hour. Recent developments in computational power and memory capacity have also enabled the application of computer tools and medical image processing techniques to process and analyze breast cancer histopathological images (BCHI). This review paper summarizes various traditional and deep learning based methods developed to analyze BCHI. Initially, the characteristics of BCHI are discussed. A detailed discussion on the various potential regions of interest is then presented, which is crucial for the development of Computer-Aided Diagnostic systems. We summarize the recent trends and choices made during the selection of medical image processing techniques. Finally, a detailed discussion of the various challenges involved in the analysis of BCHI is presented along with the future scope.
Affiliation(s)
- R Rashmi
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India

- Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
36
Boschman J, Farahani H, Darbandsari A, Ahmadvand P, Van Spankeren A, Farnell D, Levine AB, Naso JR, Churg A, Jones SJ, Yip S, Köbel M, Huntsman DG, Gilks CB, Bashashati A. The utility of color normalization for AI-based diagnosis of hematoxylin and eosin-stained pathology images. J Pathol 2021; 256:15-24. [PMID: 34543435 DOI: 10.1002/path.5797] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 08/11/2021] [Accepted: 09/16/2021] [Indexed: 12/17/2022]
Abstract
The color variation of hematoxylin and eosin (H&E)-stained tissues has presented a challenge for applications of artificial intelligence (AI) in digital pathology. Many color normalization algorithms have been developed in recent years in order to reduce the color variation between H&E images. However, previous efforts in benchmarking these algorithms have produced conflicting results and none have sufficiently assessed the efficacy of the various color normalization methods for improving diagnostic performance of AI systems. In this study, we systematically investigated eight color normalization algorithms for AI-based classification of H&E-stained histopathology slides, in the context of using images both from one center and from multiple centers. Our results show that color normalization does not consistently improve classification performance when both training and testing data are from a single center. However, using four multi-center datasets of two cancer types (ovarian and pleural) and objective functions, we show that color normalization can significantly improve the classification accuracy of images from external datasets (ovarian cancer: 0.25 AUC increase, p = 1.6e-05; pleural cancer: 0.21 AUC increase, p = 1.4e-10). Furthermore, we introduce a novel augmentation strategy by mixing color-normalized images using three easily accessible algorithms that consistently improves the diagnosis of test images from external centers, even when the individual normalization methods had varied results. We anticipate our study to be a starting point for reliable use of color normalization to improve AI-based, digital pathology-empowered diagnosis of cancers sourced from multiple centers. © 2021 The Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
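The augmentation strategy described, mixing the outputs of several color-normalization algorithms during training, can be sketched as drawing one normalizer at random per image so the training stream interleaves all of them. The placeholder normalizer functions below are assumptions for illustration, not the three specific algorithms used in the paper:

```python
import random

def mixed_normalization_batch(images, normalizers, seed=0):
    """For each image, apply one randomly chosen color-normalization
    function, so a training epoch mixes all normalizers' outputs."""
    rng = random.Random(seed)  # seeded for reproducible augmentation
    return [rng.choice(normalizers)(img) for img in images]
```

Because every minibatch then contains images rendered in several normalization styles, the classifier is discouraged from latching onto any single center's color statistics.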
Affiliation(s)
- Jeffrey Boschman
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada

- Hossein Farahani
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada

- Amirali Darbandsari
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada

- Pouya Ahmadvand
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada

- Ashley Van Spankeren
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada

- David Farnell
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada

- Adrian B Levine
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada

- Julia R Naso
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada

- Andrew Churg
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada

- Steven JM Jones
- British Columbia Cancer Research Center, Vancouver, BC, Canada

- Stephen Yip
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada

- Martin Köbel
- Department of Pathology and Laboratory Medicine, University of Calgary, Calgary, AB, Canada

- David G Huntsman
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- British Columbia Cancer Research Center, Vancouver, BC, Canada

- C Blake Gilks
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada

- Ali Bashashati
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
37
Lee K, Lockhart JH, Xie M, Chaudhary R, Slebos RJC, Flores ER, Chung CH, Tan AC. Deep Learning of Histopathology Images at the Single Cell Level. Front Artif Intell 2021; 4:754641. [PMID: 34568816 PMCID: PMC8461055 DOI: 10.3389/frai.2021.754641] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Accepted: 08/27/2021] [Indexed: 12/12/2022] Open
Abstract
The tumor immune microenvironment (TIME) encompasses many heterogeneous cell types that engage in extensive crosstalk among the cancer, immune, and stromal components. The spatial organization of these different cell types in the TIME could be used as a biomarker for predicting drug response, prognosis and metastasis. Recently, deep learning approaches have been widely applied to digital histopathology images for cancer diagnosis and prognosis, and some recent approaches have attempted to integrate spatial and molecular omics data to better characterize the TIME. In this review we focus on machine learning-based digital histopathology image analysis methods for characterizing the tumor ecosystem, considering the three scales at which machine learning can operate: whole slide image (WSI)-level, region of interest (ROI)-level, and cell-level. We systematically review the various machine learning methods at these three scales with a focus on cell-level analysis. We provide a perspective on a workflow for generating cell-level training data sets using immunohistochemistry markers to "weakly label" the cell types, describe some common steps in preparing the data, and note some limitations of this approach. Finally, we discuss future opportunities for integrating molecular omics data with digital histopathology images to characterize the tumor ecosystem.
Affiliation(s)
- Kyubum Lee
- Department of Biostatistics and Bioinformatics, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States

- John H. Lockhart
- Department of Molecular Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States

- Mengyu Xie
- Department of Biostatistics and Bioinformatics, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States

- Ritu Chaudhary
- Department of Head and Neck-Endocrine Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States

- Robbert J. C. Slebos
- Department of Head and Neck-Endocrine Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States

- Elsa R. Flores
- Department of Molecular Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States
- Cancer Biology and Evolution Program, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States

- Christine H. Chung
- Department of Head and Neck-Endocrine Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States
- Molecular Medicine Program, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States

- Aik Choon Tan
- Department of Biostatistics and Bioinformatics, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States
- Molecular Medicine Program, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States
38
McCombe KD, Craig SG, Viratham Pulsawatdi A, Quezada-Marín JI, Hagan M, Rajendran S, Humphries MP, Bingham V, Salto-Tellez M, Gault R, James JA. HistoClean: Open-source software for histological image pre-processing and augmentation to improve development of robust convolutional neural networks. Comput Struct Biotechnol J 2021; 19:4840-4853. [PMID: 34522291 PMCID: PMC8426467 DOI: 10.1016/j.csbj.2021.08.033] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 08/20/2021] [Accepted: 08/20/2021] [Indexed: 12/23/2022] Open
Abstract
The growth of digital pathology over the past decade has opened new research pathways and insights in cancer prediction and prognosis. In particular, there has been a surge in deep learning and computer vision techniques to analyse digital images. Common practice in this area is to use image pre-processing and augmentation to prevent bias and overfitting, creating a more robust deep learning model. This generally requires consultation of documentation for multiple coding libraries, as well as trial and error to ensure that the techniques used on the images are appropriate. Herein we introduce HistoClean: a user-friendly graphical user interface that brings together multiple image processing modules into one easy-to-use toolkit. HistoClean is an application that aims to help bridge the knowledge gap between pathologists, biomedical scientists and computer scientists by providing transparent image augmentation and pre-processing techniques which can be applied without prior coding knowledge. In this study, we utilise HistoClean to pre-process images for a simple convolutional neural network used to detect stromal maturity, improving the accuracy of the model at the tile, region of interest, and patient levels. This study demonstrates how HistoClean can improve a standard deep learning workflow via classical image augmentation and pre-processing techniques, even with a relatively simple convolutional neural network architecture. HistoClean is free and open-source and can be downloaded from the GitHub repository: https://github.com/HistoCleanQUB/HistoClean.
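Classical augmentation of the kind a toolkit like HistoClean exposes often amounts to the eight dihedral symmetries of a square tile (four rotations, each optionally mirrored), which enlarge the training set without altering tissue content. A numpy sketch of that step, independent of the HistoClean API:

```python
import numpy as np

def augment_tile(tile):
    """Return the eight symmetries of an image tile: rotations by
    0/90/180/270 degrees, each with and without a horizontal flip."""
    variants = []
    for k in range(4):
        rotated = np.rot90(tile, k)      # rotate in the spatial plane
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return variants
```

Because these transforms only permute pixels, every variant carries exactly the same histological information as the original tile.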
Affiliation(s)
- Kris D. McCombe
- Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast, Northern Ireland

- Stephanie G. Craig
- Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast, Northern Ireland

- Javier I. Quezada-Marín
- Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast, Northern Ireland

- Matthew Hagan
- Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast, Northern Ireland

- Simon Rajendran
- Belfast Health and Social Care Trust, Belfast, Northern Ireland

- Matthew P. Humphries
- Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast, Northern Ireland

- Victoria Bingham
- Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast, Northern Ireland

- Manuel Salto-Tellez
- Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast, Northern Ireland
- Belfast Health and Social Care Trust, Belfast, Northern Ireland
- The Institute of Cancer Research, London, United Kingdom

- Richard Gault
- The School of Electronics, Electrical Engineering and Computer Science, Queen’s University Belfast, Belfast, Northern Ireland

- Jacqueline A. James
- Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast, Northern Ireland
- Belfast Health and Social Care Trust, Belfast, Northern Ireland
39
Klein C, Zeng Q, Arbaretaz F, Devêvre E, Calderaro J, Lomenie N, Maiuri MC. Artificial Intelligence for solid tumor diagnosis in digital pathology. Br J Pharmacol 2021; 178:4291-4315. [PMID: 34302297 DOI: 10.1111/bph.15633] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 02/05/2021] [Accepted: 02/07/2021] [Indexed: 11/30/2022] Open
Abstract
Tumor diagnosis relies on the visual examination of histological slides by pathologists through a microscope eyepiece. Digital pathology, the digitization of histological slides at high magnification with slide scanners, has raised the opportunity to extract quantitative information through image analysis. In the last decade, medical image analysis has made exceptional progress due to the development of artificial intelligence (AI) algorithms. AI has been used successfully in the field of medical imaging and, more recently, in digital pathology. The feasibility and usefulness of AI-assisted pathology tasks have been demonstrated in the last few years, and we can expect those developments to be applied to routine histopathology in the future. In this review, we describe and illustrate this technique and present the most recent applications in the field of tumor histopathology.
Affiliation(s)
- Christophe Klein
- Centre de recherche des Cordeliers, Centre d'Imagerie, Histologie et Cytométrie (CHIC), INSERM, Sorbonne Université, Université de Paris, Paris, France

- Qinghe Zeng
- Centre de recherche des Cordeliers, Centre d'Imagerie, Histologie et Cytométrie (CHIC), INSERM, Sorbonne Université, Université de Paris, Paris, France
- Laboratoire d'informatique Paris Descartes (LIPADE), Université de Paris, Paris, France

- Floriane Arbaretaz
- Centre de recherche des Cordeliers, Centre d'Imagerie, Histologie et Cytométrie (CHIC), INSERM, Sorbonne Université, Université de Paris, Paris, France

- Estelle Devêvre
- Centre de recherche des Cordeliers, Centre d'Imagerie, Histologie et Cytométrie (CHIC), INSERM, Sorbonne Université, Université de Paris, Paris, France

- Julien Calderaro
- Département de pathologie, Hôpital Henri Mondor, Créteil, France

- Nicolas Lomenie
- Laboratoire d'informatique Paris Descartes (LIPADE), Université de Paris, Paris, France

- Maria Chiara Maiuri
- Centre de recherche des Cordeliers, Centre d'Imagerie, Histologie et Cytométrie (CHIC), INSERM, Sorbonne Université, Université de Paris, Paris, France
|
40
|
Pantanowitz L, Wu U, Seigh L, LoPresti E, Yeh FC, Salgia P, Michelow P, Hazelhurst S, Chen WY, Hartman D, Yeh CY. Artificial Intelligence-Based Screening for Mycobacteria in Whole-Slide Images of Tissue Samples. Am J Clin Pathol 2021; 156:117-128. [PMID: 33527136 DOI: 10.1093/ajcp/aqaa215] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
OBJECTIVES This study aimed to develop and validate a deep learning algorithm to screen digitized acid-fast stained (AFS) slides for mycobacteria within tissue sections. METHODS A total of 441 whole-slide images (WSIs) of AFS tissue material were used to develop a deep learning algorithm. Regions of interest with possible acid-fast bacilli (AFBs) were displayed in a web-based gallery format alongside corresponding WSIs for pathologist review. Artificial intelligence (AI)-assisted analysis of another 138 AFS slides was compared to manual light microscopy and WSI evaluation without AI support. RESULTS Algorithm performance showed an area under the curve of 0.960 at the image patch level. AI-assisted reviews identified more AFBs than manual microscopy or WSI examination (P < .001). Sensitivity, negative predictive value, and accuracy were highest for AI-assisted reviews. AI-assisted reviews also had the highest rate of matching the original sign-out diagnosis, were less time-consuming, and were much easier for pathologists to perform (P < .001). CONCLUSIONS This study reports the successful development and clinical validation of an AI-based digital pathology system to screen for AFBs in anatomic pathology material. AI assistance proved to be more sensitive and accurate, took pathologists less time to screen cases, and was easier to use than either manual microscopy or viewing WSIs.
Collapse
Affiliation(s)
- Liron Pantanowitz
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Department of Anatomical Pathology, University of the Witwatersrand and National Health Laboratory Services, Johannesburg, South Africa
| | - Uno Wu
- Department of Electrical Engineering, Molecular Biomedical Informatics Lab, National Cheng Kung University, Tainan City, Taiwan
- aetherAI, Taipei, Taiwan
| | - Lindsey Seigh
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | - Edmund LoPresti
- Information Services Division, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | - Fang-Cheng Yeh
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
| | - Payal Salgia
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | - Pamela Michelow
- Department of Anatomical Pathology, University of the Witwatersrand and National Health Laboratory Services, Johannesburg, South Africa
| | - Scott Hazelhurst
- School of Electrical & Information Engineering and Sydney Brenner Institute for Molecular Bioscience, University of the Witwatersrand, Johannesburg, South Africa
| | - Wei-Yu Chen
- Department of Pathology, Wan Fang Hospital
- Department of Pathology, School of Medicine, Taipei Medical University, Taipei, Taiwan
| | - Douglas Hartman
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | | |
Collapse
|
41
|
Liew XY, Hameed N, Clos J. A Review of Computer-Aided Expert Systems for Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:2764. [PMID: 34199444 PMCID: PMC8199592 DOI: 10.3390/cancers13112764] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 05/25/2021] [Accepted: 05/28/2021] [Indexed: 11/18/2022] Open
Abstract
A computer-aided diagnosis (CAD) expert system is a powerful tool to efficiently assist a pathologist in achieving an early diagnosis of breast cancer. This process identifies the presence of cancer in breast tissue samples and the distinct stages of the cancer. In a standard CAD system, the main pipeline involves image pre-processing, segmentation, feature extraction, feature selection, classification, and performance evaluation. In this review paper, we survey the existing state-of-the-art machine learning approaches applied at each stage, covering both conventional and deep learning methods, compare the methods, and provide technical details with their advantages and disadvantages. The aims are to investigate the impact of CAD systems using histopathology images, to identify deep learning methods that outperform conventional ones, and to provide a summary for future researchers to analyse and improve the existing techniques. Lastly, we discuss the research gaps of existing machine learning approaches and propose directions for future research.
Collapse
Affiliation(s)
- Xin Yu Liew
- Jubilee Campus, University of Nottingham, Wollaton Road, Nottingham NG8 1BB, UK; (N.H.); (J.C.)
| | | | | |
Collapse
|
42
|
Salvi M, Molinari F, Iussich S, Muscatello LV, Pazzini L, Benali S, Banco B, Abramo F, De Maria R, Aresu L. Histopathological Classification of Canine Cutaneous Round Cell Tumors Using Deep Learning: A Multi-Center Study. Front Vet Sci 2021; 8:640944. [PMID: 33869320 PMCID: PMC8044886 DOI: 10.3389/fvets.2021.640944] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Accepted: 03/08/2021] [Indexed: 01/12/2023] Open
Abstract
Canine cutaneous round cell tumors (RCT) represent one of the routine diagnostic challenges for veterinary pathologists. Computer-aided approaches have been developed to overcome these challenges and to increase the accuracy and consistency of diagnosis; such systems are also of high benefit in reducing errors when a large number of cases are screened daily. In this study we describe ARCTA (Automated Round Cell Tumors Assessment), a fully automated algorithm for cutaneous RCT classification and mast cell tumor grading in canine histopathological images. ARCTA employs a deep learning strategy and was developed on 416 RCT images and 213 mast cell tumor images. In the test set, our algorithm exhibited excellent performance in both RCT classification (accuracy: 91.66%) and mast cell tumor grading (accuracy: 100%). Misdiagnoses were encountered for histiocytomas in the training set and for melanomas in the test set; for mast cell tumors, a one-grade underestimation was observed in the training set but not in the test set. To the best of our knowledge, the proposed model is the first fully automated algorithm for histological images specifically developed for veterinary medicine. Being very fast (average computational time 2.63 s), this algorithm paves the way for an automated and effective evaluation of canine tumors.
Collapse
Affiliation(s)
- Massimo Salvi
- PoliToBIOMed Lab, Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
| | - Filippo Molinari
- PoliToBIOMed Lab, Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
| | - Selina Iussich
- Department of Veterinary Sciences, University of Turin, Turin, Italy
| | - Luisa Vera Muscatello
- Department of Veterinary Medical Sciences, University of Bologna, Bologna, Italy; MyLav-Laboratorio La Vallonea, Milan, Italy
| | | | | | | | - Francesca Abramo
- Department of Veterinary Sciences, University of Pisa, Pisa, Italy
| | | | - Luca Aresu
- Department of Veterinary Sciences, University of Turin, Turin, Italy
| |
Collapse
|
43
|
Hoque MZ, Keskinarkaus A, Nyberg P, Seppänen T. Retinex model based stain normalization technique for whole slide image analysis. Comput Med Imaging Graph 2021; 90:101901. [PMID: 33862354 DOI: 10.1016/j.compmedimag.2021.101901] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Revised: 02/28/2021] [Accepted: 03/06/2021] [Indexed: 10/21/2022]
Abstract
Medical imaging provides the means for diagnosing many of the phenomena currently studied in clinical medicine and pathology. Variations in color and intensity across stained histological slides affect the quantitative analysis of histopathological images, and stain normalization that relies on color to classify pixels into different stain components is challenging. Staining variability further complicates the automated segmentation of tissue areas under different stains and the analysis of whole slide images. We have developed a Retinex model based stain normalization technique for area segmentation of stained tissue images that quantifies the individual components of the histochemical stains, aiming at the removal of this variability. The performance was experimentally compared to reference methods on an organotypic carcinoma model based on myoma tissue; our method consistently had the smallest standard deviation, skewness, and coefficient of variation in normalized median intensity measurements. It also achieved better quality in terms of the Quaternion Structural Similarity Index Metric (QSSIM), Structural Similarity Index Metric (SSIM), and Pearson Correlation Coefficient (PCC), improving robustness against variability as well as reproducibility. The proposed method could potentially be used in the development of novel research and diagnostic tools, with potential improvements in the accuracy and consistency of computer-aided diagnosis in biobank applications.
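The paper's full Retinex-based normalization pipeline is not reproduced here; as a rough illustration of the underlying idea only, a single-scale Retinex on one image channel (log of the image minus log of a Gaussian-smoothed estimate of the illumination/stain-intensity component) can be sketched as follows. All function names are mine, not the authors':

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # Normalized 1-D Gaussian kernel
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def _blur(img, sigma):
    # Separable Gaussian blur; 'same' mode truncates the kernel at borders
    k = gaussian_kernel1d(sigma, int(3 * sigma))
    tmp = np.apply_along_axis(np.convolve, 1, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode='same')

def single_scale_retinex(channel, sigma=15.0):
    """log(I) - log(I * G): subtracts the smooth illumination/stain-intensity
    component estimated by a wide Gaussian, keeping local structure."""
    img = channel.astype(float) + 1.0  # avoid log(0)
    return np.log(img) - np.log(_blur(img, sigma) + 1e-6)
```

On a region of constant intensity the output is (away from borders) near zero, i.e. the smooth component is removed entirely; local detail such as nuclear texture survives the subtraction.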
Collapse
Affiliation(s)
- Md Ziaul Hoque
- Physiological Signal Analysis Group, Center for Machine Vision and Signal Analysis, University of Oulu, Finland; Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland.
| | - Anja Keskinarkaus
- Physiological Signal Analysis Group, Center for Machine Vision and Signal Analysis, University of Oulu, Finland; Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
| | - Pia Nyberg
- Biobank Borealis of Northern Finland, Oulu University Hospital, Finland; Translational & Cancer Research Unit, Medical Research Center Oulu, Faculty of Medicine, University of Oulu, Finland
| | - Tapio Seppänen
- Physiological Signal Analysis Group, Center for Machine Vision and Signal Analysis, University of Oulu, Finland; Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
| |
Collapse
|
44
|
Dave P, Alahmari S, Goldgof D, Hall LO, Morera H, Mouton PR. An adaptive digital stain separation method for deep learning-based automatic cell profile counts. J Neurosci Methods 2021; 354:109102. [PMID: 33607171 DOI: 10.1016/j.jneumeth.2021.109102] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Revised: 01/21/2021] [Accepted: 02/09/2021] [Indexed: 10/22/2022]
Abstract
BACKGROUND Quantifying cells in a defined region of biological tissue is critical for many clinical and preclinical studies, especially in the fields of pathology, toxicology, cancer and behavior. As part of a program to develop accurate, precise and more efficient automatic approaches for quantifying morphometric changes in biological tissue, we have shown that both deep learning-based and hand-crafted algorithms can estimate the total number of histologically stained cells at their maximal profile of focus in Extended Depth of Field (EDF) images. Deep learning-based approaches show accuracy comparable to manual counts on EDF images but significant enhancement in reproducibility, throughput efficiency and reduced error from human factors. However, a majority of the automated counts are designed for single-immunostained tissue sections. NEW METHOD To expand the automatic counting methods to more complex dual-staining protocols, we developed an adaptive method to separate stain color channels on images from tissue sections stained by a primary immunostain with secondary counterstain. COMPARISON WITH EXISTING METHODS The proposed method overcomes the limitations of the state-of-the-art stain-separation methods, like the requirement of pure stain color basis as a prerequisite or stain color basis learning on each image. RESULTS Experimental results are presented for automatic counts using deep learning-based and hand-crafted algorithms for sections immunostained for neurons (Neu-N) or microglial cells (Iba-1) with cresyl violet counterstain. CONCLUSION Our findings show more accurate counts by deep learning methods compared to the handcrafted method. Thus, stain-separated images can function as input for automatic deep learning-based quantification methods designed for single-stained tissue sections.
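The adaptive stain-separation method itself is not detailed in this abstract; for orientation, the classical fixed-basis alternative it improves upon (Ruifrok-Johnston color deconvolution, which the abstract notes requires a pure stain color basis as a prerequisite) can be sketched as below. The stain matrix holds the standard published H&E/DAB optical-density vectors, used here purely as an illustration:

```python
import numpy as np

# Standard stain OD vectors (Ruifrok & Johnston); rows are unit vectors for
# hematoxylin, eosin, and DAB. A counterstain such as cresyl violet would
# need its own measured vector - this fixed basis is the stated limitation.
STAIN_MATRIX = np.array([
    [0.650, 0.704, 0.286],
    [0.072, 0.990, 0.105],
    [0.268, 0.570, 0.776],
])

def separate_stains(rgb):
    """Beer-Lambert: convert RGB to optical density, then solve
    OD = C @ M for the per-pixel stain concentrations C."""
    od = -np.log(np.clip(rgb.astype(float), 1, 255) / 255.0)
    return od @ np.linalg.inv(STAIN_MATRIX)
```

A pixel stained purely by the first stain maps to concentration (1, 0, 0); the adaptive method in the paper instead estimates the basis from each image rather than assuming these fixed vectors.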
Collapse
Affiliation(s)
- Palak Dave
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA.
| | - Saeed Alahmari
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
| | - Dmitry Goldgof
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
| | - Lawrence O Hall
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
| | - Hunter Morera
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
| | - Peter R Mouton
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA; SRC Biosciences, Tampa, FL 33606, USA
| |
Collapse
|
45
|
Schmitt M, Maron RC, Hekler A, Stenzinger A, Hauschild A, Weichenthal M, Tiemann M, Krahl D, Kutzner H, Utikal JS, Haferkamp S, Kather JN, Klauschen F, Krieghoff-Henning E, Fröhling S, von Kalle C, Brinker TJ. Hidden Variables in Deep Learning Digital Pathology and Their Potential to Cause Batch Effects: Prediction Model Study. J Med Internet Res 2021; 23:e23436. [PMID: 33528370 PMCID: PMC7886613 DOI: 10.2196/23436] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Revised: 10/14/2020] [Accepted: 12/06/2020] [Indexed: 01/21/2023] Open
Abstract
BACKGROUND An increasing number of studies within digital pathology show the potential of artificial intelligence (AI) to diagnose cancer using histological whole slide images, which requires large and diverse data sets. While diversification may result in more generalizable AI-based systems, it can also introduce hidden variables. If neural networks are able to distinguish/learn hidden variables, these variables can introduce batch effects that compromise the accuracy of classification systems. OBJECTIVE The objective of the study was to analyze the learnability of an exemplary selection of hidden variables (patient age, slide preparation date, slide origin, and scanner type) that are commonly found in whole slide image data sets in digital pathology and could create batch effects. METHODS We trained four separate convolutional neural networks (CNNs) to learn four variables using a data set of digitized whole slide melanoma images from five different institutes. For robustness, each CNN training and evaluation run was repeated multiple times, and a variable was only considered learnable if the lower bound of the 95% confidence interval of its mean balanced accuracy was above 50.0%. RESULTS A mean balanced accuracy above 50.0% was achieved for all four tasks, even when considering the lower bound of the 95% confidence interval. Performance between tasks showed wide variation, ranging from 56.1% (slide preparation date) to 100% (slide origin). CONCLUSIONS Because all of the analyzed hidden variables are learnable, they have the potential to create batch effects in dermatopathology data sets, which negatively affect AI-based classification systems. Practitioners should be aware of these and similar pitfalls when developing and evaluating such systems and address these and potentially other batch effect variables in their data sets through sufficient data set stratification.
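The learnability criterion described in the methods (a variable counts as learnable if the lower bound of the 95% confidence interval of its mean balanced accuracy across repeated runs exceeds 50%) can be sketched as follows; the numbers in the usage line are hypothetical, not the study's:

```python
import statistics

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recalls, robust to class imbalance
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

def learnable(balanced_accs, chance=0.5):
    """True if the lower bound of the 95% CI of the mean balanced
    accuracy over repeated runs is above the chance level."""
    n = len(balanced_accs)
    mean = statistics.mean(balanced_accs)
    sem = statistics.stdev(balanced_accs) / n ** 0.5
    return mean - 1.96 * sem > chance  # normal approximation

# Hypothetical repeated-run scores for one hidden variable:
print(learnable([0.58, 0.61, 0.56, 0.60, 0.59]))
```

The chance threshold of 0.5 assumes a binary task; for a variable with k classes the chance level of balanced accuracy would be 1/k.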
Collapse
Affiliation(s)
- Max Schmitt
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Roman Christoph Maron
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Achim Hekler
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Albrecht Stenzinger
- Institute of Pathology, University Hospital Heidelberg, University of Heidelberg, Heidelberg, Germany
| | - Axel Hauschild
- Department of Dermatology, University Hospital Kiel, University of Kiel, Kiel, Germany
| | - Michael Weichenthal
- Department of Dermatology, University Hospital Kiel, University of Kiel, Kiel, Germany
| | | | - Dieter Krahl
- Private Institute of Dermatopathology, Heidelberg, Germany
| | - Heinz Kutzner
- Private Institute of Dermatopathology, Friedrichshafen, Germany
| | - Jochen Sven Utikal
- Skin Cancer Unit, German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Dermatology, University Medical Center Mannheim, University of Heidelberg, Mannheim, Germany
| | - Sebastian Haferkamp
- Department of Dermatology, University Hospital of Regensburg, Regensburg, Germany
| | | | | | - Eva Krieghoff-Henning
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Stefan Fröhling
- National Center for Tumor Diseases, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Christof von Kalle
- Department of Clinical-Translational Sciences, Charité and Berlin Institute of Health, Berlin, Germany
| | - Titus Josef Brinker
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases, German Cancer Research Center (DKFZ), Heidelberg, Germany
| |
Collapse
|
46
|
E Y, Meng J, Cai H, Li C, Liu S, Sun L, Liu Y. Effect of Biochar on the Production of L-Histidine From Glucose Through Escherichia coli Metabolism. Front Bioeng Biotechnol 2021; 8:605096. [PMID: 33490052 PMCID: PMC7818517 DOI: 10.3389/fbioe.2020.605096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2020] [Accepted: 12/02/2020] [Indexed: 12/01/2022] Open
Abstract
The organic compounds in biochar act as hormone analogs, stimulating the expression of metabolites by regulating related genes and proteins. In this experiment, we report that L-histidine biosynthesis in E. coli was promoted by biochar treatment, in contrast to the traditional genetic-engineering approach. The results indicated that the optimal concentration was 3%, while 7% was a lethal dose; E. coli growth was inhibited at high concentrations. Docking technology, commonly used in drug screening and based on the lock-and-key model of proteins, was applied to better understand the mechanism: the organic compounds of biochar identified by GC-MS analysis were docked as ligands to the HisG protein, which controls L-histidine biosynthesis in E. coli. The results showed that three of these organic molecules interact with the HisG protein via hydrogen bonds. We therefore consider that these three compounds play regulatory roles in L-histidine biosynthesis, a conclusion fully supported by the hisG gene expression data.
Collapse
Affiliation(s)
- Yang E
- Liaoning Biochar Engineering & Technology Research Center, Shenyang Agricultural University, Shenyang, China
| | - Jun Meng
- Liaoning Biochar Engineering & Technology Research Center, Shenyang Agricultural University, Shenyang, China
| | - Heqing Cai
- Guizhou Tobacco Company in Bijie Company, Bijie, China
| | - Caibin Li
- Guizhou Tobacco Company in Bijie Company, Bijie, China
| | - Sainan Liu
- Liaoning Biochar Engineering & Technology Research Center, Shenyang Agricultural University, Shenyang, China
| | - Luming Sun
- Liaoning Biochar Engineering & Technology Research Center, Shenyang Agricultural University, Shenyang, China
| | - Yanxiang Liu
- Guizhou Tobacco Company in Bijie Company, Bijie, China
| |
Collapse
|
47
|
Shin SJ, You SC, Jeon H, Jung JW, An MH, Park RW, Roh J. Style transfer strategy for developing a generalizable deep learning application in digital pathology. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 198:105815. [PMID: 33160111 DOI: 10.1016/j.cmpb.2020.105815] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/28/2020] [Accepted: 10/20/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVES Despite recent advances in artificial intelligence for medical images, the development of a robust deep learning model for identifying malignancy on pathology slides has been limited by problems related to substantial inter- and intra-institutional heterogeneity attributable to tissue preparation. The paucity of available data aggravates this limitation for relatively rare cancers. Here, using ovarian cancer pathology images, we explored the effect of image-to-image style transfer approaches on diagnostic performance. METHODS We leveraged a relatively large public image set for 142 patients with ovarian cancer from The Cancer Image Archive (TCIA) to fine-tune the renowned deep learning model Inception V3 for identifying malignancy on tissue slides. As an external validation, the performance of the developed classifier was tested using a relatively small institutional pathology image set for 32 patients. To reduce deterioration of the performance associated with the inter-institutional heterogeneity of pathology slides, we translated the style of the small image set of the local institution into the large image set style of the TCIA using cycle-consistent generative adversarial networks. RESULTS Without style transfer, the performance of the classifier was as follows: area under the receiver operating characteristic curve (AUROC) = 0.737 and area under the precision recall curve (AUPRC) = 0.710. After style transfer, AUROC and AUPRC improved to 0.916 and 0.898, respectively. CONCLUSIONS This study provides a case of the successful application of style transfer technology to generalize a deep learning model into small image sets in the field of digital pathology. Researchers at local institutions can select this collaborative system to make their small image sets acceptable to the deep learning model.
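The AUROC values reported above summarize ranking quality of the classifier's scores. As a reminder of what that metric computes (not the study's code), the Mann-Whitney formulation of ROC AUC, the probability that a random positive is scored above a random negative with ties counting half, can be written directly:

```python
def auroc(y_true, scores):
    """Probability that a randomly chosen positive example receives a
    higher score than a randomly chosen negative one (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75: of the four positive-negative pairs, three are ranked correctly.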
Collapse
Affiliation(s)
- Seo Jeong Shin
- Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Republic of Korea
| | - Seng Chan You
- Department of Biomedical Informatics, Ajou University School of Medicine, Suwon, Republic of Korea
| | - Hokyun Jeon
- Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Republic of Korea
| | - Ji Won Jung
- Department of Pathology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea; Asan Institute for Life Science, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
| | - Min Ho An
- So Ahn Public Health Center, Wando-gun, Jeollanam-do, Republic of Korea
| | - Rae Woong Park
- Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Republic of Korea; Department of Biomedical Informatics, Ajou University School of Medicine, Suwon, Republic of Korea.
| | - Jin Roh
- Department of Pathology, Ajou University Hospital, Suwon, Republic of Korea.
| |
Collapse
|
48
|
Salvi M, Acharya UR, Molinari F, Meiburger KM. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Comput Biol Med 2021; 128:104129. [DOI: 10.1016/j.compbiomed.2020.104129] [Citation(s) in RCA: 105] [Impact Index Per Article: 26.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2020] [Accepted: 11/13/2020] [Indexed: 12/12/2022]
|
49
|
Shrivastava A, Adorno W, Sharma Y, Ehsan L, Ali SA, Moore SR, Amadi B, Kelly P, Syed S, Brown DE. Self-Attentive Adversarial Stain Normalization. Pattern Recognition. ICPR International Workshops and Challenges, Virtual Event, January 10-15, 2021, Proceedings, Part I 2021; 12661:120-140. [PMID: 34693406 PMCID: PMC8528268 DOI: 10.1007/978-3-030-68763-2_10] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/30/2023]
Abstract
Hematoxylin and Eosin (H&E) stained Whole Slide Images (WSIs) are utilized for biopsy visualization-based diagnostic and prognostic assessment of diseases. Variation in the H&E staining process across different lab sites can lead to significant variations in biopsy image appearance, and these variations introduce an undesirable bias when the slides are examined by pathologists or used for training deep learning models. Traditionally proposed stain normalization and color augmentation strategies can handle the human-level bias, but deep learning models can easily disentangle the linear transformations used in these approaches, resulting in undesirable bias and a lack of generalization. To handle these limitations, we propose a Self-Attentive Adversarial Stain Normalization (SAASN) approach for the normalization of multiple stain appearances to a common domain. This unsupervised generative adversarial approach includes a self-attention mechanism for synthesizing images with finer detail while preserving the structural consistency of the biopsy features during translation. SAASN demonstrates consistent and superior performance compared to other popular stain normalization techniques on H&E stained duodenal biopsy image data.
Collapse
Affiliation(s)
| | | | - Yash Sharma
- University of Virginia, Charlottesville, Virginia, USA
| | - Lubaina Ehsan
- University of Virginia, Charlottesville, Virginia, USA
| | | | - Sean R Moore
- University of Virginia, Charlottesville, Virginia, USA
| | | | - Paul Kelly
- University of Zambia School of Medicine, Lusaka, Zambia
- Queen Mary University of London, London, England
| | - Sana Syed
- University of Virginia, Charlottesville, Virginia, USA
| | | |
Collapse
|
50
|
Bianconi F, Kather JN, Reyes-Aldasoro CC. Experimental Assessment of Color Deconvolution and Color Normalization for Automated Classification of Histology Images Stained with Hematoxylin and Eosin. Cancers (Basel) 2020; 12:3337. [PMID: 33187299 PMCID: PMC7697346 DOI: 10.3390/cancers12113337] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Accepted: 11/04/2020] [Indexed: 02/06/2023] Open
Abstract
Histological evaluation plays a major role in cancer diagnosis and treatment. The appearance of H&E-stained images can vary significantly as a consequence of differences in several factors, such as reagents, staining conditions, preparation procedure and image acquisition system. Such potential sources of noise can all have negative effects on computer-assisted classification. To minimize such artefacts and their potentially negative effects, several color pre-processing methods have been proposed in the literature, for instance color augmentation, color constancy, color deconvolution and color transfer. Still, little work has been done to investigate the efficacy of these methods on a quantitative basis. In this paper, we evaluated the effects of color constancy, deconvolution and transfer on the automated classification of H&E-stained images representing different types of cancer: breast, prostate and colorectal cancer and malignant lymphoma. Our results indicate that in most cases color pre-processing does not improve the classification accuracy, especially when coupled with color-based image descriptors. Some pre-processing methods, however, can be beneficial when used with texture-based methods like Gabor filters and Local Binary Patterns.
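Among the color transfer methods this kind of study evaluates, a common baseline is Reinhard-style statistics matching. A minimal sketch is shown below; for simplicity it matches per-channel mean and standard deviation directly in RGB, whereas the original Reinhard method operates in a decorrelated Lab-like color space, so this is an approximation, not the paper's exact protocol:

```python
import numpy as np

def reinhard_normalize(source, target):
    """Shift and scale each channel of `source` so its mean and standard
    deviation match those of `target` (Reinhard-style color transfer,
    applied here in RGB for brevity)."""
    src = source.astype(float)
    tgt = target.astype(float)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / (s_sd + 1e-8) * t_sd + t_mu
    return np.clip(out, 0, 255).astype(np.uint8)
```

After normalization, the source image's channel statistics approximate the target's, which is exactly the property such evaluations test against classification accuracy.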
Collapse
Affiliation(s)
- Francesco Bianconi
- Department of Engineering, Università degli Studi di Perugia, Via Goffredo Duranti 93, 06125 Perugia, Italy
- giCentre, School of Mathematics, Computer Science & Engineering, City, University of London, Northampton Square, London EC1V 0HB, UK;
- Correspondence: ; Tel.: +39-075-585-3706
| | - Jakob N. Kather
- Department of Medical Oncology and Internal Medicine VI, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany;
| | - Constantino Carlos Reyes-Aldasoro
- giCentre, School of Mathematics, Computer Science & Engineering, City, University of London, Northampton Square, London EC1V 0HB, UK;
| |
Collapse
|