1. Hetz MJ, Bucher TC, Brinker TJ. Multi-domain stain normalization for digital pathology: A cycle-consistent adversarial network for whole slide images. Med Image Anal 2024;94:103149. PMID: 38574542. DOI: 10.1016/j.media.2024.103149.
Abstract
The variation in histologic staining between medical centers is one of the most profound challenges in computer-aided diagnosis. The appearance disparity of pathological whole slide images makes algorithms less reliable, which in turn impedes the widespread applicability of downstream tasks such as cancer diagnosis. Moreover, staining differences introduce biases during training that degrade test performance under domain shift. We therefore propose MultiStain-CycleGAN, a multi-domain approach to stain normalization based on CycleGAN. Our modifications to CycleGAN allow us to normalize images of different origins without retraining or using different models. We perform an extensive evaluation of our method using various metrics and compare it to commonly used multi-domain-capable methods. First, we evaluate how well our method fools a domain classifier that tries to assign a medical center to an image. We then test the effect of our normalization on the tumor classification performance of a downstream classifier. Furthermore, we evaluate the image quality of the normalized images using the structural similarity index and their ability to reduce the domain shift using the Fréchet inception distance. We show that our method is multi-domain capable, provides very high image quality among the compared methods, and most reliably fools the domain classifier while keeping tumor classifier performance high. By reducing the domain influence, biases in the data can be removed on the one hand, and the origin of a whole slide image can be disguised on the other, thus enhancing patient data privacy.
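As a point of reference for what stain normalization does, a minimal statistics-matching baseline (a Reinhard-style mean/std transfer, not the paper's learned CycleGAN mapping) can be sketched in a few lines; the toy images and value ranges below are illustrative assumptions:

```python
import numpy as np

def match_stain_statistics(source, target):
    """Map the per-channel mean/std of `source` onto those of `target`.

    A classical baseline for stain normalization (Reinhard-style
    statistics matching); CycleGAN-based methods learn this mapping
    instead. Inputs are float images of shape (H, W, 3) in [0, 1].
    """
    src_mean = source.mean(axis=(0, 1))
    src_std = source.std(axis=(0, 1)) + 1e-8
    tgt_mean = target.mean(axis=(0, 1))
    tgt_std = target.std(axis=(0, 1))
    normalized = (source - src_mean) / src_std * tgt_std + tgt_mean
    return np.clip(normalized, 0.0, 1.0)

rng = np.random.default_rng(0)
source = rng.uniform(0.2, 0.4, size=(8, 8, 3))   # "dark" staining
target = rng.uniform(0.5, 0.9, size=(8, 8, 3))   # "light" staining
out = match_stain_statistics(source, target)
```

After the transfer, the output's per-channel statistics match the target's, which is exactly the property a downstream classifier benefits from.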
Affiliation(s)
- Martin J Hetz
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tabea-Clara Bucher
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Titus J Brinker
- Division of Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany.
2. Nissar I, Alam S, Masood S, Kashif M. MOB-CBAM: A dual-channel attention-based deep learning generalizable model for breast cancer molecular subtypes prediction using mammograms. Comput Methods Programs Biomed 2024;248:108121. PMID: 38531147. DOI: 10.1016/j.cmpb.2024.108121.
Abstract
BACKGROUND AND OBJECTIVE Deep learning models have emerged as a significant tool in generating efficient solutions for complex problems, including cancer detection, as they can analyze large amounts of data with high efficiency and performance. Recent medical studies highlight the significance of molecular subtype detection in breast cancer, which aids the development of personalized treatment plans because different subtypes respond better to different therapies. METHODS In this work, we propose MOB-CBAM, a novel lightweight dual-channel attention-based deep learning model that combines the backbone of the MobileNet-V3 architecture with a Convolutional Block Attention Module (CBAM) to make highly accurate and precise predictions about breast cancer. We evaluated the proposed model on the CMMD mammogram dataset. Nine distinct data subsets were created from the original dataset to perform coarse- and fine-grained predictions, enabling the model to identify masses, calcifications, benign and malignant tumors, and molecular subtypes of cancer, including Luminal A, Luminal B, HER2-positive, and triple-negative. The pipeline incorporates several image pre-processing techniques, including filtering, enhancement, and normalization, to enhance the model's generalization ability. RESULTS In coarse-grained classification (benign versus malignant tumors), the MOB-CBAM model produced exceptional results, with 99% accuracy, precision, recall, and F1-score of 0.99, and an MCC of 0.98. In fine-grained classification, the model proved highly efficient at the mass (benign/malignant) and calcification (benign/malignant) classification tasks, with an impressive accuracy of 98%. We also cross-validated the proposed architecture on two further datasets, MIAS and CBIS-DDSM. On the MIAS dataset, an accuracy of 97% was achieved for classifying benign, malignant, and normal images, while on the CBIS-DDSM dataset, an accuracy of 98% was achieved for classifying benign versus malignant masses and calcifications. CONCLUSION This study presents MOB-CBAM, a novel lightweight deep learning framework for breast cancer diagnosis and subtype prediction. The model's incorporation of the CBAM enhances its predictions. The extensive evaluation on the CMMD dataset and cross-validation on other datasets affirm the model's efficacy.
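To make the CBAM idea concrete, the channel-attention half of the module can be sketched in numpy: a shared two-layer MLP is applied to both the average-pooled and max-pooled channel descriptors, and the sigmoid of their sum gates each feature channel. The weights and dimensions below are illustrative stand-ins for learned parameters, not values from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention on a (C, H, W) feature map.

    A shared MLP (w1: reduce, w2: expand, ReLU in between) scores the
    average-pooled and max-pooled channel descriptors; the summed
    scores are squashed into per-channel gates in (0, 1).
    """
    avg = feat.mean(axis=(1, 2))                      # (C,)
    mx = feat.max(axis=(1, 2))                        # (C,)
    relu = lambda z: np.maximum(0.0, z)
    gate = sigmoid(w2 @ relu(w1 @ avg) + w2 @ relu(w1 @ mx))  # (C,)
    return feat * gate[:, None, None]

rng = np.random.default_rng(1)
feat = rng.standard_normal((4, 5, 5))     # toy feature map, C=4
w1 = 0.1 * rng.standard_normal((2, 4))    # reduction ratio r=2
w2 = 0.1 * rng.standard_normal((4, 2))
out = channel_attention(feat, w1, w2)
```

Because each gate lies in (0, 1), attention can only re-weight channels, never amplify them, which is what makes the module a cheap add-on to a MobileNet-style backbone.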
Affiliation(s)
- Iqra Nissar
- Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India.
- Shahzad Alam
- Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
- Sarfaraz Masood
- Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
- Mohammad Kashif
- Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
3. Wang HS, Liang WY. Combining Artificial Intelligence and Simplified Image Processing for the Automatic Detection of Mycobacterium tuberculosis in Acid-fast Stain: A Cross-institute Training and Validation Study. Am J Surg Pathol 2024:00000478-990000000-00325. PMID: 38595262. DOI: 10.1097/pas.0000000000002223.
Abstract
Tuberculosis (TB) poses a significant health threat in Taiwan, necessitating efficient detection methods. Traditional screening for acid-fast-positive bacilli in acid-fast stains is time-consuming and prone to human error due to staining artifacts. To address this, we present an automated TB detection platform leveraging deep learning and image processing. Whole slide images from two hospitals were collected and processed on a high-performance system. The system uses an image processing technique to highlight red, rod-like regions and a modified EfficientNet model for binary classification of TB-positive regions. Our approach achieves 97% accuracy in tile-based TB image classification, with minimal loss during the image processing step. Setting a 0.99 threshold significantly reduces false positives, resulting in a 94% detection rate when assisting pathologists, compared with 68% without artificial intelligence assistance. Notably, our system efficiently identifies artifacts and contaminants, addressing challenges in digital slide interpretation. Cross-hospital validation demonstrates the system's adaptability. The proposed artificial intelligence-assisted pipeline improves both detection rate and time efficiency, making it a promising tool for routine TB detection in pathology.
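The precision/recall trade-off behind the 0.99 cut-off can be illustrated with a few toy tile scores (the scores and labels below are invented, not the study's data): raising the threshold removes false positives at the cost of recall, which is acceptable when a pathologist reviews the flagged tiles.

```python
def precision_recall(scores, labels, threshold):
    """Precision and recall of positive calls at a probability threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical per-tile TB probabilities and ground-truth labels.
scores = [0.995, 0.97, 0.999, 0.6, 0.92, 0.05, 0.55]
labels = [1,     1,    1,     0,   0,    0,    1]

loose = precision_recall(scores, labels, 0.5)    # permissive cut-off
strict = precision_recall(scores, labels, 0.99)  # the paper's strict cut-off
```

On this toy data the strict threshold yields perfect precision but halves recall, mirroring why a strict cut-off suits an assistive (rather than fully automatic) workflow.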
Affiliation(s)
- Hsiang Sheng Wang
- Department of Pathology, Chang Gung Memorial Hospital, Linkou, Taoyuan, Taiwan
- Wen-Yih Liang
- Department of Pathology, Taipei Veteran General Hospital
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan, Republic of China
4. Fernandez-Martín C, Silva-Rodriguez J, Kiraz U, Morales S, Janssen EAM, Naranjo V. Uninformed Teacher-Student for hard-samples distillation in weakly supervised mitosis localization. Comput Med Imaging Graph 2024;112:102328. PMID: 38244279. DOI: 10.1016/j.compmedimag.2024.102328.
Abstract
BACKGROUND AND OBJECTIVE Mitotic activity is a crucial biomarker for diagnosing and predicting outcomes for different types of cancers, particularly breast cancer. However, manual mitosis counting is challenging and time-consuming for pathologists, with only moderate reproducibility due to biopsy slide size, low mitotic cell density, and pattern heterogeneity. In recent years, deep learning methods based on convolutional neural networks (CNNs) have been proposed to address these limitations. Nonetheless, these methods have been hampered by the available data labels, which usually consist only of mitosis centroids, and by the noise introduced by annotated hard negatives. As a result, complex multi-stage algorithms are often required to refine the labels at the pixel level and reduce the number of false positives. METHODS This article presents a novel weakly supervised approach for mitosis detection that uses only image-level labels on hematoxylin and eosin (H&E) stained histological images, avoiding the need for complex labeling scenarios. In addition, an Uninformed Teacher-Student (UTS) pipeline is introduced to detect and distill hard samples by comparing weakly supervised localizations with the annotated centroids, using strong augmentations to enhance uncertainty. An automatic proliferation score is also proposed that mimics the pathologist-annotated mitotic activity index (MAI). The proposed approach is evaluated on three publicly available datasets for mitosis detection on breast histology samples and two datasets for mitotic activity counting in whole-slide images. RESULTS The proposed framework achieves performance competitive with the relevant prior literature on all evaluation datasets without explicitly using mitosis location information during training. This challenges previous methods that rely on strong mitosis location information and multiple stages to refine false positives. Furthermore, the proposed pipeline for hard-sample distillation demonstrates promising dataset-specific improvements: when the annotation has not been thoroughly refined by multiple pathologists, the UTS model improves mitosis localization by up to ∼4%, thanks to the detection and distillation of uncertain cases. On the mitosis counting task, the proposed automatic proliferation score shows a moderate positive correlation with the pathologist-annotated MAI at the biopsy level on two external datasets. CONCLUSIONS The proposed Uninformed Teacher-Student pipeline leverages strong augmentations to distill uncertain samples and measures dissimilarities between predicted and annotated mitoses. The results demonstrate the feasibility of the weakly supervised approach and highlight its potential as an objective evaluation tool for tumor proliferation.
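The core of hard-sample mining, comparing weakly supervised localizations against annotated centroids, reduces to a distance test. The sketch below is a simplified reading of that idea, not the paper's pipeline; the 30-pixel matching radius and the coordinates are assumptions for illustration:

```python
import math

def flag_hard_samples(predicted, annotated, radius=30.0):
    """Flag annotated mitosis centroids that have no weakly supervised
    localization within `radius` pixels, i.e. the uncertain cases a
    teacher-student pipeline would single out for distillation."""
    hard = []
    for cx, cy in annotated:
        nearest = min((math.dist((cx, cy), p) for p in predicted),
                      default=float("inf"))
        if nearest > radius:
            hard.append((cx, cy))
    return hard

predicted = [(100, 100), (240, 60)]             # weak localizations
annotated = [(105, 98), (400, 400), (238, 65)]  # pathologist centroids
hard = flag_hard_samples(predicted, annotated)
```

Here only the centroid at (400, 400) has no nearby localization and is flagged, which is the kind of uncertain case the UTS pipeline singles out.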
Affiliation(s)
- Claudio Fernandez-Martín
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-tech, Universitat Politècnica de València, Valencia, Spain.
- Umay Kiraz
- Department of Chemistry, Bioscience and Environmental Engineering, University of Stavanger, Stavanger, Norway; Department of Pathology, Stavanger University Hospital, Stavanger, Norway
- Sandra Morales
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-tech, Universitat Politècnica de València, Valencia, Spain
- Emiel A M Janssen
- Department of Chemistry, Bioscience and Environmental Engineering, University of Stavanger, Stavanger, Norway; Department of Pathology, Stavanger University Hospital, Stavanger, Norway
- Valery Naranjo
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-tech, Universitat Politècnica de València, Valencia, Spain
5. Shihabuddin AR, Beevi KS. Efficient mitosis detection: leveraging pre-trained faster R-CNN and cell-level classification. Biomed Phys Eng Express 2024;10:025031. PMID: 38357907. DOI: 10.1088/2057-1976/ad262f.
Abstract
The assessment of mitotic activity is an integral part of the comprehensive evaluation of breast cancer pathology. Understanding the level of tumor dissemination is essential for assessing the severity of the malignancy and guiding appropriate treatment strategies. A pathologist must manually perform the intricate and time-consuming task of counting mitoses by examining biopsy slices stained with hematoxylin and eosin (H&E) under a microscope. Mitotic cells can be challenging to distinguish in H&E-stained sections due to the limited available datasets and the similarity between mitotic and non-mitotic cells. Computer-assisted mitosis detection approaches simplify the whole procedure by selecting, detecting, and labeling mitotic cells. Traditional detection strategies rely on image processing techniques that apply custom criteria to distinguish between different aspects of an image. Additionally, the possibility of automatically extracting features from histopathological images using deep neural networks was investigated. This study examines mitosis detection as an object detection problem using multiple neural networks. From a medical standpoint, mitosis at the tissue level was also investigated using a pre-trained Faster R-CNN and raw image data. Experiments were conducted on the MITOS-ATYPIA-14 and TUPAC16 datasets, and the results were compared with those of other methods described in the literature.
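When mitosis detection is framed as object detection, predicted boxes are scored against ground truth by intersection-over-union (IoU), the standard matching criterion for Faster R-CNN style evaluation. A minimal implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2); a detection typically counts as a true positive
    when IoU with a ground-truth box exceeds a fixed threshold."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

For example, two 10x10 boxes overlapping in half their width share 50 of 150 total pixels, giving an IoU of 1/3.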
Affiliation(s)
- Abdul R Shihabuddin
- Centre For Artificial Intelligence, TKM College of Engineering, Karicode, Kollam, 691005, Kerala, India
- Sabeena Beevi K
- Department of Electrical and Electronics Engineering, TKM College of Engineering, Karicode, Kollam, 691005, Kerala, India
6. Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024;14:1281922. PMID: 38410114. PMCID: PMC10894909. DOI: 10.3389/fonc.2024.1281922.
Abstract
X-ray mammography is currently considered the gold standard for breast cancer screening; however, it has limitations in sensitivity and specificity. With rapid advancements in deep learning techniques, it is becoming possible to tailor mammography to each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper reviews recent achievements of deep learning-based mammography for breast cancer detection and classification, and highlights the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, the challenges associated with implementing this technology in clinical settings must be addressed. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability in order to integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that these findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis with high sensitivity and specificity for breast cancer.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
7. Alhatem A, Wong T, Clark Lambert W. Revolutionizing diagnostic pathology: The emergence and impact of artificial intelligence-what doesn't kill you makes you stronger? Clin Dermatol 2024:S0738-081X(23)00272-9. PMID: 38181890. DOI: 10.1016/j.clindermatol.2023.12.020.
Abstract
This study explored the integration and impact of artificial intelligence (AI) in diagnostic pathology, particularly dermatopathology, assessing its challenges and potential solutions for global health care enhancement. A comprehensive literature search in PubMed and Google Scholar, conducted on March 30, 2023, and using terms related to AI, pathology, and machine learning, yielded 44 relevant publications. These were analyzed under themes including the evolution of deep learning in pathology, AI's role in replacing pathologists, development challenges of diagnostic algorithms, clinical implementation hurdles, strategies for practical application in dermatopathology, and future prospects of AI in this field. The findings highlight AI's transformative potential in pathology, underscore the need for ongoing research, collaboration, and regulatory dialogue, and emphasize the importance of addressing the ethical and practical challenges in AI implementation for improved global health care outcomes.
Affiliation(s)
- Albert Alhatem
- Department of Pathology, Immunology and Laboratory Medicine and Department of Dermatology, Rutgers-New Jersey Medical School, Newark, New Jersey, USA
- Trish Wong
- Department of Pathology, Immunology and Laboratory Medicine and Department of Dermatology, Rutgers-New Jersey Medical School, Newark, New Jersey, USA
- W Clark Lambert
- Department of Pathology, Immunology and Laboratory Medicine and Department of Dermatology, Rutgers-New Jersey Medical School, Newark, New Jersey, USA.
8. Zhang J, Wu J, Zhou XS, Shi F, Shen D. Recent advancements in artificial intelligence for breast cancer: Image augmentation, segmentation, diagnosis, and prognosis approaches. Semin Cancer Biol 2023;96:11-25. PMID: 37704183. DOI: 10.1016/j.semcancer.2023.09.001.
Abstract
Breast cancer is a significant global health burden, with increasing morbidity and mortality worldwide. Early screening and accurate diagnosis are crucial for improving prognosis. Radiographic imaging modalities such as digital mammography (DM), digital breast tomosynthesis (DBT), magnetic resonance imaging (MRI), ultrasound (US), and nuclear medicine techniques are commonly used for breast cancer assessment, while histopathology (HP) serves as the gold standard for confirming malignancy. Artificial intelligence (AI) technologies show great potential for quantitative representation of medical images to effectively assist in the segmentation, diagnosis, and prognosis of breast cancer. In this review, we survey recent advancements of AI technologies for breast cancer, including 1) improving image quality by data augmentation, 2) fast detection and segmentation of breast lesions and diagnosis of malignancy, 3) biological characterization of the cancer, such as staging and subtyping, by AI-based classification technologies, and 4) prediction of clinical outcomes, such as metastasis, treatment response, and survival, by integrating multi-omics data. We then summarize large-scale databases available to help train robust, generalizable, and reproducible deep learning models. Furthermore, we discuss the challenges faced by AI in real-world applications, including data curation, model interpretability, and practice regulations. Finally, we expect that the clinical implementation of AI will provide important guidance for patient-tailored management.
Affiliation(s)
- Jiadong Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China.
9. Li Y, Liu S. Adversarial Attack and Defense in Breast Cancer Deep Learning Systems. Bioengineering (Basel) 2023;10:973. PMID: 37627858. PMCID: PMC10451783. DOI: 10.3390/bioengineering10080973.
Abstract
Deep-learning-assisted medical diagnosis has brought revolutionary innovations to medicine. Breast cancer is a major threat to women's health, and deep-learning-assisted diagnosis of breast cancer pathology images can save manpower and improve diagnostic accuracy. However, researchers have found that deep learning systems based on natural images are vulnerable to attacks that cause errors in recognition and classification, raising security concerns about deep learning systems based on medical images. We used the fast gradient sign method (FGSM) adversarial attack to reveal that breast cancer deep learning systems are vulnerable to attack and consequently misclassify breast cancer pathology images. To address this problem, we built a deep learning system for breast cancer pathology image recognition with better defense performance. Because the accurate diagnosis of medical images directly affects patient health, it is important to improve the security and reliability of medical deep learning systems before they are actually deployed.
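FGSM perturbs an input by a small step in the direction of the sign of the loss gradient. The sketch below applies it to a logistic-regression "classifier" whose gradient has a closed form; this is a stand-in for the paper's CNN, where the same gradient would come from backpropagation. The weights and input are toy values:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step against a logistic model p = sigmoid(w.x + b):
    move x by eps * sign(d loss / d x), where the cross-entropy
    gradient w.r.t. x is (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted P(y = 1)
    grad_x = (p - y) * w                    # analytic loss gradient
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.3, -0.2, 0.1])              # correctly scored positive
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.1)
```

Each coordinate moves by exactly eps, yet the model's score for the true class drops, which is the essence of the attack's effectiveness at tiny perturbation budgets.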
Affiliation(s)
- Yang Li
- Graduate School of Advanced Science and Engineering, Hiroshima University, Higashihiroshima 739-8511, Japan
10. Bhausaheb DP, Kashyap KL. Shuffled Shepherd Deer Hunting Optimization based Deep Neural Network for Breast Cancer Classification using Breast Histopathology Images. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104570.
11. Shihabuddin AR, Beevi KS. Multi CNN based automatic detection of mitotic nuclei in breast histopathological images. Comput Biol Med 2023;158:106815. PMID: 37003066. DOI: 10.1016/j.compbiomed.2023.106815.
Abstract
In breast cancer diagnosis, the number of mitotic cells in a specific area is an important measure. It indicates how far the tumour has spread, which has consequences for forecasting the aggressiveness of the cancer. Mitosis counting is a time-consuming and challenging task that a pathologist performs manually by examining hematoxylin and eosin (H&E) stained biopsy slices under a microscope. Due to limited datasets and the resemblance between mitotic and non-mitotic cells, detecting mitosis in H&E-stained slices is difficult. By assisting in the screening, identification, and labelling of mitotic cells, computer-aided mitosis detection technologies make the entire procedure much easier. Pre-trained convolutional neural networks are extensively employed in computer-aided detection approaches for smaller datasets. This research investigates the usefulness of a multi-CNN framework with three pre-trained CNNs for mitosis detection. Features were extracted from histopathology data using the pre-trained VGG16, ResNet50, and DenseNet201 networks. The proposed framework utilises all training folders of the MITOS dataset provided for the MITOS-ATYPIA 2014 contest and all 73 folders of the TUPAC16 dataset. Individually, the pre-trained VGG16, ResNet50, and DenseNet201 models provide accuracies of 83.22%, 73.67%, and 81.75%, respectively. Different combinations of these pre-trained CNNs constitute the multi-CNN framework. The multi-CNN combining all three pre-trained CNNs with a linear SVM gives 93.81% precision and a 92.41% F1-score, outperforming multi-CNN combinations with other classifiers such as AdaBoost and random forest.
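The multi-CNN idea, concatenating per-image embeddings from several backbones and classifying the fused vector, can be sketched as follows. The "features" are random stand-ins for VGG16/ResNet50/DenseNet201 outputs, and a nearest-centroid rule stands in for the linear SVM; dimensions and the class separation are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy per-backbone features for 6 patches (stand-ins for VGG16,
# ResNet50 and DenseNet201 embeddings; dimensions are illustrative).
f_vgg = rng.standard_normal((6, 4))
f_res = rng.standard_normal((6, 5))
f_dense = rng.standard_normal((6, 3))

# Shift the last three patches to mimic a separable "mitotic" class.
for f in (f_vgg, f_res, f_dense):
    f[3:] += 5.0

fused = np.concatenate([f_vgg, f_res, f_dense], axis=1)  # (6, 12)
labels = [0, 0, 0, 1, 1, 1]

def nearest_centroid_predict(train_x, train_y, test_x):
    """Minimal stand-in for the linear-SVM stage: assign each fused
    vector to the class with the nearest mean feature vector."""
    classes = sorted(set(train_y))
    centroids = np.stack([train_x[np.asarray(train_y) == c].mean(axis=0)
                          for c in classes])
    dist = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
    return [classes[i] for i in dist.argmin(axis=1)]

pred = nearest_centroid_predict(fused, labels, fused)
```

The point of the fusion step is that complementary backbones contribute different feature dimensions, so the concatenated descriptor can be more separable than any single backbone's output.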
12. A generalizable and robust deep learning algorithm for mitosis detection in multicenter breast histopathological images. Med Image Anal 2023;84:102703. PMID: 36481608. DOI: 10.1016/j.media.2022.102703.
Abstract
Mitosis counting in biopsies is an important biomarker for breast cancer patients, supporting disease prognostication and treatment planning. Developing a robust mitotic cell detection model is highly challenging due to the complex growth patterns of mitotic cells and their high similarity to non-mitotic cells. Most mitosis detection algorithms generalize poorly across image domains and lack reproducibility and validation in multicenter settings. To overcome these issues, we propose a generalizable and robust mitosis detection algorithm (called FMDet), which is independently tested on multicenter breast histopathological images. To capture more refined morphological features of cells, we cast the object detection task as a semantic segmentation problem. Pixel-level annotations for mitotic nuclei are obtained by taking the intersection of the masks generated by a well-trained nuclear segmentation model and the bounding boxes provided by the MIDOG 2021 challenge. In our segmentation framework, a robust feature extractor is developed to capture the appearance variations of mitotic cells, constructed by integrating a channel-wise multi-scale attention mechanism into a fully convolutional network structure. Benefiting from the fact that changes in the low-level spectrum do not affect high-level semantic perception, we employ a Fourier-based data augmentation method that reduces domain discrepancies by exchanging the low-frequency spectrum between two domains. Our FMDet algorithm was tested in the MIDOG 2021 challenge and ranked first. It was also externally validated on four independent mitosis detection datasets, exhibiting state-of-the-art performance in comparison with previously published results. These results demonstrate that our algorithm has the potential to be deployed as an assistant decision support tool in clinical practice. Our code has been released at https://github.com/Xiyue-Wang/1st-in-MICCAI-MIDOG-2021-challenge.
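The Fourier-based augmentation can be sketched in numpy for a single-channel image: the centered low-frequency band of one image's spectrum is replaced by another's, transferring coarse appearance (e.g. overall staining intensity) while preserving high-frequency structure. The band ratio of 0.25 and the toy images are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def swap_low_frequency(src, ref, ratio=0.25):
    """Replace the centered low-frequency band of `src`'s 2D spectrum
    with `ref`'s; `ratio` sets the band half-width as a fraction of
    the image size."""
    fs = np.fft.fftshift(np.fft.fft2(src))
    fr = np.fft.fftshift(np.fft.fft2(ref))
    h, w = src.shape
    ch, cw = h // 2, w // 2
    bh, bw = int(h * ratio / 2), int(w * ratio / 2)
    fs[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1] = \
        fr[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1]
    return np.fft.ifft2(np.fft.ifftshift(fs)).real

rng = np.random.default_rng(2)
src = rng.standard_normal((16, 16))
ref = rng.standard_normal((16, 16)) + 3.0  # brighter "stain domain"
aug = swap_low_frequency(src, ref)
```

Because the DC component is part of the swapped band, the augmented image inherits the reference domain's mean intensity while keeping the source's fine detail.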
13. Tubule-U-Net: a novel dataset and deep learning-based tubule segmentation framework in whole slide images of breast cancer. Sci Rep 2023;13:128. PMID: 36599960. PMCID: PMC9812986. DOI: 10.1038/s41598-022-27331-3.
Abstract
The tubule index is a vital prognostic measure in breast cancer tumor grading and is visually evaluated by pathologists. In this paper, a computer-aided patch-based deep learning tubule segmentation framework, named Tubule-U-Net, is developed and proposed to segment tubules in whole slide images (WSI) of breast cancer. Moreover, this paper presents a new tubule segmentation dataset consisting of 30,820 polygon-annotated tubules in 8225 patches. The Tubule-U-Net framework first applies a patch enhancement technique such as reflection or mirror padding and then employs an asymmetric encoder-decoder semantic segmentation model. The encoder is built from deep learning architectures such as EfficientNetB3, ResNet34, and DenseNet161, whereas the decoder is similar to that of U-Net, yielding three models: EfficientNetB3-U-Net, ResNet34-U-Net, and DenseNet161-U-Net. The proposed framework with these three models, together with the U-Net, U-Net++, and Trans-U-Net segmentation methods, is trained on the created dataset and tested on five different WSIs. The experimental results demonstrate that the proposed framework with the EfficientNetB3 model, trained on patches obtained using reflection padding and tested on overlapping patches, provides the best segmentation results on the test data, achieving dice, recall, and specificity scores of 95.33%, 93.74%, and 90.02%, respectively.
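The headline metric here, the dice coefficient, compares predicted and ground-truth binary masks; a minimal implementation (with toy 4x4 masks for illustration):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1        # predicted tubule mask, 4 px
target = np.zeros((4, 4), dtype=int)
target[1:3, 1:4] = 1      # ground-truth mask, 6 px
d = dice_score(pred, target)  # overlap of 4 px -> 2*4 / (4+6) = 0.8
```

Unlike plain pixel accuracy, dice ignores the (typically dominant) background, which is why it is the preferred headline metric for sparse structures like tubules.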
14. Zhao Y, Zhang J, Hu D, Qu H, Tian Y, Cui X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. Micromachines (Basel) 2022;13:2197. PMID: 36557496. PMCID: PMC9781697. DOI: 10.3390/mi13122197.
Abstract
With the development of artificial intelligence technology and computer hardware, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study used statistical methods to analyze studies related to the detection, segmentation, and classification of breast cancer in pathological images. After an analysis of 107 articles on the application of deep learning to pathological images of breast cancer, the reviewed works are divided into three directions based on the types of results they report: detection, segmentation, and classification. We introduce and analyze models that performed well in these three directions and summarize related work from recent years. Based on the results obtained, the significant ability of deep learning in the analysis of breast cancer pathological images can be recognized. Furthermore, in the classification and detection of pathological images of breast cancer, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of breast cancer pathological imaging research and provides reliable recommendations for the structure of deep learning network models in different application scenarios.
Affiliation(s)
- Yue Zhao
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
  - Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
- Jie Zhang
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Dayu Hu
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Hui Qu
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Ye Tian
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Xiaoyu Cui
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
  - Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
15
Ganoza-Quintana JL, Arce-Diego JL, Fanjul-Vélez F. Digital Histopathological Discrimination of Label-Free Tumoral Tissues by Artificial Intelligence Phase-Imaging Microscopy. SENSORS (BASEL, SWITZERLAND) 2022; 22:9295. [PMID: 36501995 PMCID: PMC9738430 DOI: 10.3390/s22239295] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 11/18/2022] [Accepted: 11/26/2022] [Indexed: 06/17/2023]
Abstract
Histopathology is the gold standard for disease diagnosis. The use of digital histology on fresh samples can reduce processing time and potential image artifacts, as label-free samples do not need to be fixed or stained. This allows for a faster diagnosis, increasing the speed of the process and the impact on patient prognosis. This work proposes, implements, and validates a novel digital diagnosis procedure for fresh label-free histological samples. The procedure is based on advanced phase-imaging microscopy parameters and artificial intelligence. Fresh human histological samples of healthy and tumoral liver, kidney, ganglion, testicle and brain were collected and imaged with phase-imaging microscopy. Advanced phase parameters were calculated from the images. The statistical significance of each parameter for each tissue type was evaluated at magnifications of 10×, 20× and 40×. Several classification algorithms based on artificial intelligence were applied and evaluated. Artificial Neural Network and Decision Tree approaches provided the best overall sensitivity and specificity, with values over 90% for the majority of biological tissues at some magnifications. These results show the potential to provide a label-free automatic diagnosis of fresh histological samples from advanced phase-imaging microscopy parameters. This approach can complement present clinical procedures.
16
Oyelade ON, Ezugwu AE, Venter HS, Mirjalili S, Gandomi AH. Abnormality classification and localization using dual-branch whole-region-based CNN model with histopathological images. Comput Biol Med 2022; 149:105943. [PMID: 35986967 DOI: 10.1016/j.compbiomed.2022.105943] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Revised: 07/17/2022] [Accepted: 08/06/2022] [Indexed: 12/29/2022]
Abstract
The task of classifying and localizing abnormalities in medical images is considered very challenging. Computer-aided systems have been widely employed to address this issue, and the proliferation of deep learning network architectures is proof of the outstanding performance reported in the literature. However, localizing abnormalities in image regions that can support the confidence of a classification continues to attract research interest. The difficulty of using digital histopathology images for this task is a further challenge, requiring high-capacity deep learning models. Successful automation of pathology localization will support automatic acquisition planning and post-imaging analysis. In this paper, we address the combination of classification with image localization and detection through a dual-branch deep learning framework that uses two different configurations of convolutional neural network (CNN) architectures. Whole-image-based CNN (WCNN) and region-based CNN (RCNN) architectures are systematically combined to classify and localize abnormalities in samples. Multi-class classification and localization of abnormalities are achieved without annotation-dependent images. In addition, a seamless confidence and explanation mechanism is provided so that outcomes from the WCNN and RCNN are mapped together for further analysis. Using images from both the BACH and BreakHis databases, an exhaustive set of experiments was carried out to validate the method's ability to perform classification and localization simultaneously. The system achieved a classification accuracy of 97.08%, a localization accuracy of 94%, and an area under the curve (AUC) of 0.10 for classification. These findings suggest that a multi-neural-network approach can address the combined problem of classifying and localizing anomalies in digital medical images. Lastly, the study's outcome offers a means of automating the annotation of histopathology images and of supporting human pathologists in locating abnormalities.
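For readers unfamiliar with the reported AUC metric, the area under the ROC curve can be computed from classifier scores via the Mann-Whitney statistic: the probability that a randomly chosen positive outscores a randomly chosen negative. The sketch below is a generic illustration with invented scores, not the paper's implementation.

```python
def auc_score(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: fraction of
    positive/negative pairs where the positive scores higher
    (ties count as half a win)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for three positives and three negatives.
print(auc_score([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))
```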
Affiliation(s)
- Olaide N Oyelade
  - School of Mathematics, Statistics, and Computer Science, University of KwaZulu-Natal, King Edward Avenue, Pietermaritzburg Campus, Pietermaritzburg, 3201, KwaZulu-Natal, South Africa
- Absalom E Ezugwu
  - School of Mathematics, Statistics, and Computer Science, University of KwaZulu-Natal, King Edward Avenue, Pietermaritzburg Campus, Pietermaritzburg, 3201, KwaZulu-Natal, South Africa
- Hein S Venter
  - Department of Computer Science, University of Pretoria, Pretoria, 0028, South Africa
- Seyedali Mirjalili
  - Centre for Artificial Intelligence Research and Optimization, Torrens University, Australia
- Amir H Gandomi
  - Faculty of Engineering and Information Technology, University of Technology Sydney, Australia
17
Weiss R, Karimijafarbigloo S, Roggenbuck D, Rödiger S. Applications of Neural Networks in Biomedical Data Analysis. Biomedicines 2022; 10:biomedicines10071469. [PMID: 35884772 PMCID: PMC9313085 DOI: 10.3390/biomedicines10071469] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 06/16/2022] [Accepted: 06/17/2022] [Indexed: 12/04/2022] Open
Abstract
Neural networks for deep-learning applications, also called artificial neural networks, are important tools in science and industry. While their widespread use was limited because of inadequate hardware in the past, their popularity increased dramatically starting in the early 2000s when it became possible to train increasingly large and complex networks. Today, deep learning is widely used in biomedicine from image analysis to diagnostics. This also includes special topics, such as forensics. In this review, we discuss the latest networks and how they work, with a focus on the analysis of biomedical data, particularly biomarkers in bioimage data. We provide a summary on numerous technical aspects, such as activation functions and frameworks. We also present a data analysis of publications about neural networks to provide a quantitative insight into the use of network types and the number of journals per year to determine the usage in different scientific fields.
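As a concrete illustration of one technical aspect this review covers, the classic activation functions can be written in a few lines of plain Python. This is a didactic sketch, not tied to any particular framework discussed in the paper.

```python
import math

def relu(x):
    """Rectified linear unit: max(0, x)."""
    return max(0.0, x)

def sigmoid(x):
    """Logistic sigmoid, squashing any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Hyperbolic tangent, squashing any real input into (-1, 1)."""
    return math.tanh(x)

for f in (relu, sigmoid, tanh):
    print(f.__name__, round(f(1.0), 4))
```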
Affiliation(s)
- Romano Weiss
  - Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany
- Sanaz Karimijafarbigloo
  - Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany
- Dirk Roggenbuck
  - Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany
  - Faculty of Health Sciences Brandenburg, Brandenburg University of Technology Cottbus-Senftenberg, D-01968 Senftenberg, Germany
- Stefan Rödiger (corresponding author)
  - Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany
  - Faculty of Health Sciences Brandenburg, Brandenburg University of Technology Cottbus-Senftenberg, D-01968 Senftenberg, Germany
18
Multi-Classification of Breast Cancer Lesions in Histopathological Images Using DEEP_Pachi: Multiple Self-Attention Head. Diagnostics (Basel) 2022; 12:diagnostics12051152. [PMID: 35626307 PMCID: PMC9139754 DOI: 10.3390/diagnostics12051152] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 04/23/2022] [Accepted: 04/28/2022] [Indexed: 11/16/2022] Open
Abstract
Introduction and Background: Despite fast developments in the medical field, histological diagnosis is still regarded as the benchmark in cancer diagnosis. However, the image feature extraction used to determine the severity of cancer at various magnifications is arduous, since manual procedures are biased, time consuming, labor intensive, and error-prone. Current state-of-the-art deep learning approaches for breast histopathology image classification take features from entire images (generic features). Thus, they are likely to overlook essential image features in favor of unnecessary ones, resulting in incorrect diagnoses of breast histopathology images. Methods: This discrepancy prompted us to develop DEEP_Pachi for classifying breast histopathology images at various magnifications. DEEP_Pachi collects the global and regional features that are essential for effective breast histopathology image classification. The proposed model's backbone is an ensemble of the DenseNet201 and VGG16 architectures. The ensemble model extracts global features (generic image information), whereas DEEP_Pachi extracts spatial information (regions of interest). The proposed model was evaluated on publicly available datasets: the BreakHis and ICIAR 2018 Challenge datasets. Results: A detailed evaluation of the proposed model's accuracy, sensitivity, precision, specificity, and F1-score revealed the usefulness of the backbone model and the DEEP_Pachi model for image classification. The suggested technique outperformed state-of-the-art classifiers, achieving an accuracy of 1.0 for the benign class and 0.99 for the malignant class at all magnifications of the BreakHis dataset and an accuracy of 1.0 on the ICIAR 2018 Challenge dataset. Conclusion: The findings were resilient and suggest that the system can assist experts at large medical institutions, contributing to early breast cancer diagnosis and a reduction in the death rate.
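The multiple self-attention heads at the core of DEEP_Pachi follow the standard scaled dot-product formulation. The NumPy sketch below shows a single head with randomly initialized weights; the token count and dimensions are arbitrary assumptions for illustration, not the model's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single scaled dot-product self-attention head: each position
    re-weights all positions by query-key similarity."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # 4 tokens (e.g. patch regions), dim 8
wq = rng.normal(size=(8, 8))
wk = rng.normal(size=(8, 8))
wv = rng.normal(size=(8, 8))
out = self_attention(x, wq, wk, wv)
print(out.shape)                     # same shape as the input tokens
```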
19
Hu H, Qiao S, Hao Y, Bai Y, Cheng R, Zhang W, Zhang G. Breast cancer histopathological images recognition based on two-stage nuclei segmentation strategy. PLoS One 2022; 17:e0266973. [PMID: 35482728 PMCID: PMC9049370 DOI: 10.1371/journal.pone.0266973] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2021] [Accepted: 03/30/2022] [Indexed: 11/19/2022] Open
Abstract
Pathological examination is the gold standard for breast cancer diagnosis. The recognition of histopathological images of breast cancer has attracted a lot of attention in the field of medical image processing. In this paper, on the basis of the Bioimaging 2015 dataset, a two-stage nuclei segmentation strategy, that is, a method of watershed segmentation applied to histopathological images after stain separation, is proposed to enable carcinoma versus non-carcinoma recognition. Firstly, stain separation is performed on the breast cancer histopathological images. Then the marker-based watershed segmentation method is applied to the stain-separated images to achieve nuclei segmentation. Next, the completed local binary pattern is used to extract texture features from the nuclei regions (images after nuclei segmentation), and color features are extracted by applying the color auto-correlation method to the stain-separated images. Finally, the two kinds of features are fused and a support vector machine is used for carcinoma and non-carcinoma recognition. The experimental results show that the proposed two-stage nuclei segmentation strategy has significant advantages in the recognition of carcinoma and non-carcinoma in breast cancer histopathological images, with a recognition accuracy of 91.67%. The proposed method is also applied to the ICIAR 2018 dataset, where the automatic recognition of carcinoma and non-carcinoma reaches an accuracy of 92.50%.
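The stain separation step can be sketched with the widely used optical-density deconvolution of Ruifrok and Johnston: convert RGB to optical density via the Beer-Lambert law, then unmix against reference stain vectors. This is a generic illustration with textbook H&E stain vectors, not the specific method or parameters used in the paper.

```python
import numpy as np

# Reference H&E stain vectors in optical-density space (Ruifrok-Johnston
# values); a real pipeline would typically estimate them per slide.
STAINS = np.array([[0.650, 0.704, 0.286],   # hematoxylin
                   [0.072, 0.990, 0.105]])  # eosin
STAINS = STAINS / np.linalg.norm(STAINS, axis=1, keepdims=True)

def separate_stains(rgb):
    """Project RGB pixels onto per-stain concentrations:
    OD = -log10(I / I0), then least-squares unmixing."""
    od = -np.log10(np.clip(rgb, 1, 255) / 255.0)     # optical density
    flat = od.reshape(-1, 3)
    conc, *_ = np.linalg.lstsq(STAINS.T, flat.T, rcond=None)
    return conc.T.reshape(rgb.shape[:-1] + (2,))

pixels = np.array([[[30, 20, 120], [200, 150, 180]]], dtype=float)
conc = separate_stains(pixels)
print(conc.shape)   # two stain-concentration channels per pixel
```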
Affiliation(s)
- Hongping Hu
  - School of Science, North University of China, Taiyuan, China
- Shichang Qiao
  - School of Science, North University of China, Taiyuan, China
- Yan Hao
  - School of Information and Communication Engineering, North University of China, Taiyuan, China
- Yanping Bai
  - School of Science, North University of China, Taiyuan, China
- Rong Cheng
  - School of Science, North University of China, Taiyuan, China
- Wendong Zhang
  - School of Instrument and Electronics, State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
- Guojun Zhang
  - School of Instrument and Electronics, State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
20
Hwang M, Wu C, Jiang WC, Hung WC. A sequential attention interface with a dense reward function for mitosis detection. INT J MACH LEARN CYB 2022. [DOI: 10.1007/s13042-022-01549-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
21
Heinemann F, Lempp C, Colbatzky F, Deschl U, Nolte T. Quantification of Hepatocellular Mitoses in a Toxicological Study in Rats Using a Convolutional Neural Network. Toxicol Pathol 2022; 50:344-352. [PMID: 35321595 DOI: 10.1177/01926233221083500] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Convolutional neural networks (CNNs) have been recognized as valuable tools for rapid quantitative analysis of morphological changes in toxicologic histopathology. We have assessed the performance of CNN-based (Halo-AI) mitotic figure detection in hepatocytes in comparison with detection by pathologists. In addition, we compared with Ki-67 and 5-bromodesoxyuridin (BrdU) immunohistochemistry labeling indices (LIs) obtained by image analysis. Tissues were from an exploratory toxicity study with a glycogen synthase kinase-3 (GSK-3) inhibitor. Our investigations revealed that (1) the CNN achieved similarly accurate but faster results than pathologists, (2) results of mitotic figure detection were comparable to Ki-67 and BrdU LIs, and (3) data from different methods were only moderately correlated. The latter is likely related to differences in the cell cycle component captured by each method. This highlights the importance of considering the differences of the available methods upon selection. Also, the pharmacology of our test item acting as a GSK-3 inhibitor potentially reduced the correlation. We conclude that hepatocyte cell proliferation assessment by CNNs can have several advantages when compared with the current gold standard: it relieves the pathologist of tedious routine tasks and contributes to standardization of results; the CNN algorithm can be shared and iteratively improved; it can be performed on routine histological slides; it does not require an additional animal experiment and in this way can contribute to animal welfare according to the 3R principles.
Affiliation(s)
- Fabian Heinemann
  - Boehringer Ingelheim Pharma GmbH & Co. KG, Biberach an der Riß, Germany
- Charlotte Lempp
  - Boehringer Ingelheim Pharma GmbH & Co. KG, Biberach an der Riß, Germany
- Florian Colbatzky
  - Boehringer Ingelheim Pharma GmbH & Co. KG, Biberach an der Riß, Germany
- Ulrich Deschl
  - Boehringer Ingelheim Pharma GmbH & Co. KG, Biberach an der Riß, Germany
- Thomas Nolte
  - Boehringer Ingelheim Pharma GmbH & Co. KG, Biberach an der Riß, Germany
22
Sturm B, Creytens D, Smits J, Ooms AHAG, Eijken E, Kurpershoek E, Küsters-Vandevelde HVN, Wauters C, Blokx WAM, van der Laak JAWM. Computer-Aided Assessment of Melanocytic Lesions by Means of a Mitosis Algorithm. Diagnostics (Basel) 2022; 12:diagnostics12020436. [PMID: 35204526 PMCID: PMC8871065 DOI: 10.3390/diagnostics12020436] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Revised: 12/31/2021] [Accepted: 01/14/2022] [Indexed: 11/16/2022] Open
Abstract
An increasing number of pathology laboratories are now fully digitised, using whole slide imaging (WSI) for routine diagnostics. WSI paves the way for artificial intelligence (AI), which will play an increasing role in computer-aided diagnosis (CAD). In melanocytic skin lesions, the presence of a dermal mitosis may be an important clue for an intermediate or a malignant lesion and may indicate worse prognosis. In this study a mitosis algorithm primarily developed for breast carcinoma is applied to melanocytic skin lesions. The study aimed to assess whether the algorithm could be used in diagnosing melanocytic lesions, and to study its added value in a practical setting. WSIs of a set of hematoxylin and eosin (H&E) stained slides of 99 melanocytic lesions (35 nevi, 4 intermediate melanocytic lesions, and 60 malignant melanomas, including 10 nevoid melanomas), for which a consensus diagnosis was reached by three academic pathologists, were subjected to an AI-based mitosis algorithm. Two academic and six general pathologists specialized in dermatopathology examined the WSI cases twice, first without mitosis annotations and, after a washout period of at least 2 months, with mitosis annotations based on the algorithm. The algorithm indicated true mitoses in lesional cells, i.e., melanocytes, and in non-lesional cells, i.e., mainly keratinocytes and inflammatory cells. A high number of false positive mitoses was indicated as well, comprising melanin pigment, sebaceous gland nuclei, and spindle cell nuclei such as stromal cells and neuroid differentiated melanocytes. All but one pathologist reported dermal mitoses more often with the mitosis algorithm, which on a regular basis were incorrectly attributed to mitoses from mainly inflammatory cells. The overall concordance of the pathologists with the consensus diagnosis for all cases excluding nevoid melanoma (n = 89) appeared to be comparable with and without the use of AI (89% vs. 90%). However, the concordance increased with AI in nevoid melanoma cases (n = 10) (75% vs. 68%). This study showed that in general cases, pathologists perform similarly with the aid of a mitosis algorithm developed primarily for breast cancer; in nevoid melanoma cases, pathologists perform better with the algorithm. Pathologists need to be aware of potential pitfalls when using CAD on H&E slides, e.g., misinterpreting dermal mitoses in non-melanocytic cells.
Affiliation(s)
- Bart Sturm
  - Department of Pathology, Radboud University Medical Center, 6500 HB Nijmegen, The Netherlands
  - Pathan B.V., 3045 PM Rotterdam, The Netherlands
- David Creytens
  - Department of Pathology, Ghent University Hospital, 9000 Ghent, Belgium
- Jan Smits
  - Pathan B.V., 3045 PM Rotterdam, The Netherlands
- Erik Eijken
  - Laboratory for Pathology Oost Nederland (LabPON), 7550 AM Hengelo, The Netherlands
- Eline Kurpershoek
  - Pathan B.V., 3045 PM Rotterdam, The Netherlands
- Carla Wauters
  - Department of Pathology, Canisius Wilhelmina Hospital, 6500 GS Nijmegen, The Netherlands
- Willeke A. M. Blokx
  - Division Laboratories, Pharmacy and Biomedical Genetics, University Medical Center Utrecht, 3508 GA Utrecht, The Netherlands
- Jeroen A. W. M. van der Laak (corresponding author; Tel.: +31-638-814-869)
  - Department of Pathology, Radboud University Medical Center, 6500 HB Nijmegen, The Netherlands
  - Center for Medical Image Science and Visualization, Linköping University, 581 83 Linköping, Sweden
23
Shah SM, Khan RA, Arif S, Sajid U. Artificial intelligence for breast cancer analysis: Trends & directions. Comput Biol Med 2022; 142:105221. [PMID: 35016100 DOI: 10.1016/j.compbiomed.2022.105221] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 12/18/2022]
Abstract
Breast cancer is one of the leading causes of death among women. Early detection of breast cancer can significantly improve the lives of millions of women across the globe. Given the importance of finding a solution/framework for early detection and diagnosis, many AI researchers have recently focused on automating this task. Other reasons for the surge in research activity in this direction are the advent of robust AI algorithms (deep learning), the availability of hardware that can run and train those robust and complex AI algorithms, and the accessibility of datasets large enough for training them. The imaging modalities that researchers have exploited to automate breast cancer detection are mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. This article analyzes these imaging modalities, presents their strengths and limitations, and lists the resources from which their datasets can be accessed for research purposes. It then summarizes AI and computer vision based state-of-the-art methods proposed in the last decade to detect breast cancer using various imaging modalities. We have primarily focused on reviewing frameworks that report results on mammograms, as mammography is the most widely used breast imaging modality and serves as the first test that medical practitioners usually prescribe for the detection of breast cancer. Another reason for this focus is the availability of labelled mammogram datasets. Dataset availability is one of the most important factors in the development of AI based frameworks, as such algorithms are data hungry and the quality of the dataset generally affects their performance. In a nutshell, this article will act as a primary resource for the research community working in the field of automated breast imaging analysis.
Affiliation(s)
- Shahid Munir Shah
  - Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Rizwan Ahmed Khan
  - Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Sheeraz Arif
  - Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Unaiza Sajid
  - Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
24
Chen SB, Novoa RA. Artificial intelligence for dermatopathology: Current trends and the road ahead. Semin Diagn Pathol 2022; 39:298-304. [DOI: 10.1053/j.semdp.2022.01.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/04/2021] [Revised: 01/06/2022] [Accepted: 01/12/2022] [Indexed: 02/07/2023]
25
Tan XJ, Mustafa N, Mashor MY, Rahman KSA. Automated knowledge-assisted mitosis cells detection framework in breast histopathology images. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2022; 19:1721-1745. [PMID: 35135226 DOI: 10.3934/mbe.2022081] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Based on the Nottingham Histopathology Grading (NHG) system, mitosis cell detection is one of the important criteria for determining the grade of breast carcinoma. It is a challenging task due to the heterogeneous microenvironment of breast histopathology images. Recognition of complex and inconsistent objects in medical images can be improved by incorporating domain knowledge from the field of interest. In this study, the strategies of the histopathologist and a domain knowledge approach were used to guide the development of an image processing framework for automated mitosis cell detection in breast histopathology images. The detection framework starts with color normalization and hyperchromatic nucleus segmentation. Then, a knowledge-assisted false positive reduction method is proposed to eliminate false positives (i.e., non-mitosis cells). This stage aims to minimize the percentage of false positives and thus increase the F1-score. Next, feature extraction is performed, and the mitosis candidates are classified using a Support Vector Machine (SVM) classifier. For evaluation purposes, the knowledge-assisted detection framework was tested on two datasets: a custom dataset and a publicly available dataset (the MITOS dataset). The proposed knowledge-assisted false positive reduction method eliminated at least 87.1% of false positives in both datasets, producing promising F1-scores. Experimental results demonstrate that the knowledge-assisted detection framework achieves promising F1-scores (custom dataset: 89.1%; MITOS dataset: 88.9%) and outperforms recent works.
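The link between false positive reduction and the F1-score can be made concrete with a small calculation. The counts below are hypothetical, chosen only to illustrate why removing roughly 87% of false positives lifts F1; they are not the paper's data.

```python
def f1_from_counts(tp, fp, fn):
    """F1 is the harmonic mean of precision and recall,
    equivalently 2*TP / (2*TP + FP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical detector counts before and after eliminating 87% of FPs.
before = f1_from_counts(tp=90, fp=100, fn=10)
after = f1_from_counts(tp=90, fp=13, fn=10)
print(round(before, 3), round(after, 3))
```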
Affiliation(s)
- Xiao Jian Tan
  - Centre for Multimodal Signal Processing, Department of Electrical and Electronic Engineering, Faculty of Engineering and Technology, Tunku Abdul Rahman University College (TARUC), Jalan Genting Kelang, Setapak 53300, Kuala Lumpur, Malaysia
- Nazahah Mustafa
  - Biomedical Electronic Engineering Programme, Faculty of Electronic Engineering Technology, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia
- Mohd Yusoff Mashor
  - Biomedical Electronic Engineering Programme, Faculty of Electronic Engineering Technology, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia
- Khairul Shakir Ab Rahman
  - Department of Pathology, Hospital Tuanku Fauziah, 01000 Jalan Tun Abdul Razak, Kangar, Perlis, Malaysia
26
Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. [PMID: 34885225 PMCID: PMC8656730 DOI: 10.3390/cancers13236116] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 11/25/2021] [Accepted: 12/01/2021] [Indexed: 12/11/2022] Open
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its incidence is gradually increasing. Optimistically, there is a good chance of recovery from breast cancer if it is identified and treated at an early stage. Therefore, several researchers have established deep-learning-based automated methods for their efficiency and accuracy in predicting the growth of cancer cells from medical imaging modalities. As of yet, few review studies on breast cancer diagnosis are available that summarize existing work, and those studies do not address emerging architectures and modalities. This review focuses on the evolving architectures of deep learning for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of the existing studies, examines the datasets used, and reviews image pre-processing techniques. Furthermore, a concrete review of diverse imaging modalities, performance metrics and results, challenges, and research directions for future researchers is presented.
Affiliation(s)
- Muhammad Firoz Mridha
  - Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid
  - Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Muhammad Mostafa Monowar
  - Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Ashfia Jannat Keya
  - Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Abu Quwsar Ohi
  - Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
  - Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Jong-Myon Kim
  - Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea
27
Rashmi R, Prasad K, Udupa CBK. Breast histopathological image analysis using image processing techniques for diagnostic purposes: A methodological review. J Med Syst 2021; 46:7. [PMID: 34860316 PMCID: PMC8642363 DOI: 10.1007/s10916-021-01786-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Accepted: 10/21/2021] [Indexed: 12/24/2022]
Abstract
Breast cancer in women is the second most common cancer worldwide. Early detection of breast cancer can reduce the risk of human life. Non-invasive techniques such as mammograms and ultrasound imaging are popularly used to detect the tumour. However, histopathological analysis is necessary to determine the malignancy of the tumour as it analyses the image at the cellular level. Manual analysis of these slides is time consuming, tedious, subjective and are susceptible to human errors. Also, at times the interpretation of these images are inconsistent between laboratories. Hence, a Computer-Aided Diagnostic system that can act as a decision support system is need of the hour. Moreover, recent developments in computational power and memory capacity led to the application of computer tools and medical image processing techniques to process and analyze breast cancer histopathological images. This review paper summarizes various traditional and deep learning based methods developed to analyze breast cancer histopathological images. Initially, the characteristics of breast cancer histopathological images are discussed. A detailed discussion on the various potential regions of interest is presented which is crucial for the development of Computer-Aided Diagnostic systems. We summarize the recent trends and choices made during the selection of medical image processing techniques. Finally, a detailed discussion on the various challenges involved in the analysis of BCHI is presented along with the future scope.
Affiliation(s)
- R Rashmi
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
- Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
28
29
Kim H, Yoon H, Thakur N, Hwang G, Lee EJ, Kim C, Chong Y. Deep learning-based histopathological segmentation for whole slide images of colorectal cancer in a compressed domain. Sci Rep 2021; 11:22520. [PMID: 34795365 PMCID: PMC8602325 DOI: 10.1038/s41598-021-01905-z] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Accepted: 10/28/2021] [Indexed: 02/06/2023] Open
Abstract
Automatic pattern recognition using deep learning techniques has become increasingly important. Unfortunately, due to limited system memory, general preprocessing methods for high-resolution images in the spatial domain can lose important data information such as high-frequency information and the region of interest. To overcome these limitations, we propose an image segmentation approach in the compressed domain based on principal component analysis (PCA) and discrete wavelet transform (DWT). After inference for each tile using neural networks, a whole prediction image was reconstructed by wavelet weighted ensemble (WWE) based on inverse discrete wavelet transform (IDWT). The training and validation were performed using 351 colorectal biopsy specimens, which were pathologically confirmed by two pathologists. For 39 test datasets, the average Dice score, the pixel accuracy, and the Jaccard score were 0.804 ± 0.125, 0.957 ± 0.025, and 0.690 ± 0.174, respectively. We can train the networks for the high-resolution image with the large region of interest compared to the result in the low-resolution and the small region of interest in the spatial domain. The average Dice score, pixel accuracy, and Jaccard score are significantly increased by 2.7%, 0.9%, and 2.7%, respectively. We believe that our approach has great potential for accurate diagnosis.
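The compressed-domain approach above hinges on the discrete wavelet transform being exactly invertible, so tiles can be processed in subband space and the whole prediction image reconstructed by IDWT. As a minimal sketch (not the authors' code, which combines the DWT with PCA, neural-network inference, and wavelet weighted ensembling), a single-level 2D Haar DWT and its inverse can be written as:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: returns LL, LH, HL, HH subbands."""
    a = img[0::2, 0::2].astype(float)  # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-pass approximation
    lh = (a - b + c - d) / 2.0  # detail subband
    hl = (a + b - c - d) / 2.0  # detail subband
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse single-level 2D Haar DWT (exact reconstruction)."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out
```

Because each subband is a quarter of the original resolution, a network can see four times the tissue area per tile at the same memory budget, which is the trade-off the abstract exploits.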
Affiliation(s)
- Hyeongsub Kim
- Departments of Electrical Engineering, Creative IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology (POSTECH), Pohang, 37674, South Korea
- Deepnoid Inc., Seoul, 08376, South Korea
- Nishant Thakur
- Department of Hospital Pathology, The Catholic University of Korea, College of Medicine, Uijeongbu St. Mary's Hospital, Seoul, South Korea
- Gyoyeon Hwang
- Department of Hospital Pathology, The Catholic University of Korea, College of Medicine, Yeouido St. Mary's Hospital, Seoul, South Korea
- Eun Jung Lee
- Department of Hospital Pathology, The Catholic University of Korea, College of Medicine, Yeouido St. Mary's Hospital, Seoul, South Korea
- Department of Pathology, Shinwon Medical Foundation, Gwangmyeong-si, Gyeonggi-do, South Korea
- Chulhong Kim
- Departments of Electrical Engineering, Creative IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology (POSTECH), Pohang, 37674, South Korea
- Yosep Chong
- Department of Hospital Pathology, The Catholic University of Korea, College of Medicine, Uijeongbu St. Mary's Hospital, Seoul, South Korea
30
R R, Prasad K, Udupa CBK. BCHisto-Net: Breast histopathological image classification by global and local feature aggregation. Artif Intell Med 2021; 121:102191. [PMID: 34763806 DOI: 10.1016/j.artmed.2021.102191] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 09/15/2021] [Accepted: 10/05/2021] [Indexed: 02/06/2023]
Abstract
Breast cancer among women is the second most common cancer worldwide. Non-invasive techniques such as mammograms and ultrasound imaging are used to detect the tumor. However, breast histopathological image analysis is inevitable for the detection of malignancy of the tumor. Manual analysis of breast histopathological images is subjective, tedious, laborious and is prone to human errors. Recent developments in computational power and memory have made automation a popular choice for the analysis of these images. One of the key challenges of breast histopathological image classification at 100× magnification is to extract the features of the potential regions of interest to decide on the malignancy of the tumor. The current state-of-the-art CNN based methods for breast histopathological image classification extract features from the entire image (global features) and thus may overlook the features of the potential regions of interest. This can lead to inaccurate diagnosis of breast histopathological images. This research gap has motivated us to propose BCHisto-Net to classify breast histopathological images at 100× magnification. The proposed BCHisto-Net extracts both global and local features required for the accurate classification of breast histopathological images. The global features extract abstract image features while local features focus on potential regions of interest. Furthermore, a feature aggregation branch is proposed to combine these features for the classification of 100× images. The proposed method is quantitatively evaluated on a private dataset (KMC) and the publicly available BreakHis dataset. An extensive evaluation of the proposed model showed the effectiveness of the local and global features for the classification of these images. The proposed method achieved an accuracy of 95% and 89% on the KMC and BreakHis datasets respectively, outperforming state-of-the-art classifiers.
Affiliation(s)
- Rashmi R
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
- Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
- Chethana Babu K Udupa
- Department of Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, India
31
Oyelade ON, Ezugwu AE. A bioinspired neural architecture search based convolutional neural network for breast cancer detection using histopathology images. Sci Rep 2021; 11:19940. [PMID: 34620891 PMCID: PMC8497552 DOI: 10.1038/s41598-021-98978-7] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2021] [Accepted: 09/16/2021] [Indexed: 12/12/2022] Open
Abstract
The design of neural architecture to address the challenge of detecting abnormalities in histopathology images can leverage the gains made in the field of neural architecture search (NAS). The NAS model consists of a search space, search strategy and evaluation strategy. The approach supports the automation of deep learning (DL) based networks such as convolutional neural networks (CNN). Automating the process of CNN architecture engineering using this approach allows for finding the best performing network for learning classification problems in specific domains and datasets. However, the engineering process of NAS is often limited by the potential solutions in search space and the search strategy. This problem often narrows the possibility of obtaining best performing networks for challenging tasks such as the classification of breast cancer in digital histopathological samples. This study proposes a NAS model with a novel search space initialization algorithm and a new search strategy. We designed a block-based stochastic categorical-to-binary (BSCB) algorithm for generating potential CNN solutions into the search space. Also, we applied and investigated the performance of a new bioinspired optimization algorithm, namely the Ebola optimization search algorithm (EOSA), for the search strategy. The evaluation strategy was achieved through computation of loss function, architectural latency and accuracy. The results obtained using images from the BACH and BreakHis databases showed that our approach obtained best performing architectures with the top-5 of the architectures yielding a significant detection rate. The top-1 CNN architecture demonstrated state-of-the-art classification accuracy. The NAS strategy applied in this study and the resulting candidate architecture provide researchers with the most appropriate or suitable network configuration for use in digital histopathology.
Affiliation(s)
- Olaide N Oyelade
- School of Mathematics, Statistics, and Computer Science, University of KwaZulu-Natal, King Edward Avenue, Pietermaritzburg Campus, Pietermaritzburg, KwaZulu-Natal, 3201, South Africa
- Absalom E Ezugwu
- School of Mathematics, Statistics, and Computer Science, University of KwaZulu-Natal, King Edward Avenue, Pietermaritzburg Campus, Pietermaritzburg, KwaZulu-Natal, 3201, South Africa
32
Transfer Learning Approach for Classification of Histopathology Whole Slide Images. SENSORS 2021; 21:s21165361. [PMID: 34450802 PMCID: PMC8401188 DOI: 10.3390/s21165361] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/25/2021] [Revised: 08/06/2021] [Accepted: 08/07/2021] [Indexed: 02/07/2023]
Abstract
The classification of whole slide images (WSIs) provides physicians with an accurate analysis of diseases and also helps them to treat patients effectively. The classification can be linked to further detailed analysis and diagnosis. Deep learning (DL) has made significant advances in the medical industry, including the use of magnetic resonance imaging (MRI) scans, computerized tomography (CT) scans, and electrocardiograms (ECGs) to detect life-threatening diseases, including heart disease, cancer, and brain tumors. However, more advancement in the field of pathology is needed, but the main hurdle causing the slow progress is the shortage of large-labeled datasets of histopathology images to train the models. The Kimia Path24 dataset was particularly created for the classification and retrieval of histopathology images. It contains 23,916 histopathology patches with 24 tissue texture classes. A transfer learning-based framework is proposed and evaluated on two famous DL models, Inception-V3 and VGG-16. To improve the productivity of Inception-V3 and VGG-16, we used their pre-trained weights and concatenated these with an image vector, which is used as input for the training of the same architecture. Experiments show that the proposed innovation improves the accuracy of both famous models. The patch-to-scan accuracy of VGG-16 is improved from 0.65 to 0.77, and for the Inception-V3, it is improved from 0.74 to 0.79.
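The fusion step described in this abstract, concatenating a pre-trained network's feature vector with a flattened copy of the image itself, can be sketched in a few lines. The function name `fused_input` and the nearest-neighbour downsampling scheme are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fused_input(pretrained_feat, patch, vec_size=64):
    """Concatenate a pre-trained CNN feature vector with a flattened,
    downsampled copy of the raw patch to form the classifier input
    (hypothetical fusion scheme)."""
    side = int(np.sqrt(vec_size))
    h, w = patch.shape
    # crude nearest-neighbour downsampling to a side x side grid
    rows = np.arange(side) * h // side
    cols = np.arange(side) * w // side
    img_vec = patch[np.ix_(rows, cols)].ravel().astype(float)
    return np.concatenate([pretrained_feat.ravel(), img_vec])
```

The appended raw-pixel vector gives the downstream layers direct access to information the frozen backbone may have discarded, which is one plausible reading of the reported accuracy gains.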
33
Sarker MMK, Makhlouf Y, Craig SG, Humphries MP, Loughrey M, James JA, Salto-Tellez M, O’Reilly P, Maxwell P. A Means of Assessing Deep Learning-Based Detection of ICOS Protein Expression in Colon Cancer. Cancers (Basel) 2021; 13:3825. [PMID: 34359723 PMCID: PMC8345140 DOI: 10.3390/cancers13153825] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Revised: 07/20/2021] [Accepted: 07/23/2021] [Indexed: 02/07/2023] Open
Abstract
Biomarkers identify patient response to therapy. The potential immune-checkpoint biomarker, Inducible T-cell COStimulator (ICOS), expressed on regulating T-cell activation and involved in adaptive immune responses, is of great interest. We have previously shown that open-source software for digital pathology image analysis can be used to detect and quantify ICOS using cell detection algorithms based on traditional image processing techniques. Currently, artificial intelligence (AI) based on deep learning methods is significantly impacting the domain of digital pathology, including the quantification of biomarkers. In this study, we propose a general AI-based workflow for applying deep learning to the problem of cell segmentation/detection in IHC slides as a basis for quantifying nuclear staining biomarkers, such as ICOS. It consists of two main parts: a simplified but robust annotation process, and cell segmentation/detection models. This results in an optimised annotation process with a new user-friendly tool that can interact with other open-source software and assists pathologists and scientists in creating and exporting data for deep learning. We present a set of architectures for cell-based segmentation/detection to quantify and analyse the trade-offs between them, proving to be more accurate and less time consuming than traditional methods. This approach can identify the best tool to deliver the prognostic significance of ICOS protein expression.
Affiliation(s)
- Md Mostafa Kamal Sarker
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Yasmine Makhlouf
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Stephanie G. Craig
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Matthew P. Humphries
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Maurice Loughrey
- Cellular Pathology, Belfast Health and Social Care Trust, Belfast City Hospital, Lisburn Road, Belfast BT9 7AB, UK
- Jacqueline A. James
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Cellular Pathology, Belfast Health and Social Care Trust, Belfast City Hospital, Lisburn Road, Belfast BT9 7AB, UK
- Northern Ireland Biobank, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Manuel Salto-Tellez
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Cellular Pathology, Belfast Health and Social Care Trust, Belfast City Hospital, Lisburn Road, Belfast BT9 7AB, UK
- Division of Molecular Pathology, The Institute of Cancer Research, Sutton SM2 5NG, UK
- Paul O’Reilly
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
- Sonrai Analytics LTD, Lisburn Road, Belfast BT9 7BL, UK
- Perry Maxwell
- Precision Medicine Centre of Excellence, The Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast BT9 7AE, UK
34
Howard FM, Dolezal J, Kochanny S, Schulte J, Chen H, Heij L, Huo D, Nanda R, Olopade OI, Kather JN, Cipriani N, Grossman RL, Pearson AT. The impact of site-specific digital histology signatures on deep learning model accuracy and bias. Nat Commun 2021; 12:4423. [PMID: 34285218 PMCID: PMC8292530 DOI: 10.1038/s41467-021-24698-1] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2020] [Accepted: 07/01/2021] [Indexed: 12/20/2022] Open
Abstract
The Cancer Genome Atlas (TCGA) is one of the largest biorepositories of digital histology. Deep learning (DL) models have been trained on TCGA to predict numerous features directly from histology, including survival, gene expression patterns, and driver mutations. However, we demonstrate that these features vary substantially across tissue submitting sites in TCGA for over 3,000 patients with six cancer subtypes. Additionally, we show that histologic image differences between submitting sites can easily be identified with DL. Site detection remains possible despite commonly used color normalization and augmentation methods, and we quantify the image characteristics constituting this site-specific digital histology signature. We demonstrate that these site-specific signatures lead to biased accuracy for prediction of features including survival, genomic mutations, and tumor stage. Furthermore, ethnicity can also be inferred from site-specific signatures, which must be accounted for to ensure equitable application of DL. These site-specific signatures can lead to overoptimistic estimates of model performance, and we propose a quadratic programming method that abrogates this bias by ensuring models are not trained and validated on samples from the same site.
Affiliation(s)
- Frederick M Howard
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, IL, USA
- James Dolezal
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, IL, USA
- Sara Kochanny
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, IL, USA
- Jefree Schulte
- Department of Pathology, University of Chicago, Chicago, IL, USA
- Heather Chen
- Department of Pathology, University of Chicago, Chicago, IL, USA
- Lara Heij
- Department of Surgery and Transplantation, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Pathology, University Hospital RWTH Aachen, Aachen, Germany
- Dezheng Huo
- Department of Public Health Sciences, University of Chicago, Chicago, IL, USA
- University of Chicago Comprehensive Cancer Center, Chicago, IL, USA
- Rita Nanda
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, IL, USA
- University of Chicago Comprehensive Cancer Center, Chicago, IL, USA
- Olufunmilayo I Olopade
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, IL, USA
- University of Chicago Comprehensive Cancer Center, Chicago, IL, USA
- Jakob N Kather
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany
- Nicole Cipriani
- Department of Pathology, University of Chicago, Chicago, IL, USA
- University of Chicago Comprehensive Cancer Center, Chicago, IL, USA
- Robert L Grossman
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, IL, USA
- University of Chicago Comprehensive Cancer Center, Chicago, IL, USA
- Alexander T Pearson
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, IL, USA
- University of Chicago Comprehensive Cancer Center, Chicago, IL, USA
35
Wang P, Li P, Li Y, Wang J, Xu J. Histopathological image classification based on cross-domain deep transferred feature fusion. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102705] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
36
Saha M, Guo X, Sharma A. TilGAN: GAN for Facilitating Tumor-Infiltrating Lymphocyte Pathology Image Synthesis With Improved Image Classification. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:79829-79840. [PMID: 34178560 PMCID: PMC8224465 DOI: 10.1109/access.2021.3084597] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Tumor-infiltrating lymphocytes (TILs) act as immune cells against cancer tissues. The manual assessment of TILs is usually erroneous, tedious, costly and subject to inter- and intraobserver variability. Machine learning approaches can solve these issues, but they require a large amount of labeled data for model training, which is expensive and not readily available. In this study, we present an efficient generative adversarial network, TilGAN, to generate high-quality synthetic pathology images followed by classification of TIL and non-TIL regions. Our proposed architecture is constructed with a generator network and a discriminator network. The novelty exists in the TilGAN architecture, loss functions, and evaluation techniques. Our TilGAN-generated images achieved a higher Inception score than the real images (2.90 vs. 2.32, respectively). They also achieved a lower kernel Inception distance (1.44) and a lower Fréchet Inception distance (0.312). It also passed the Turing test performed by experienced pathologists and clinicians. We further extended our evaluation studies and used almost one million synthetic data, generated by TilGAN, to train a classification model. Our proposed classification model achieved a 97.83% accuracy, a 97.37% F1-score, and a 97% area under the curve. Our extensive experiments and superior outcomes show the efficiency and effectiveness of our proposed TilGAN architecture. This architecture can also be used for other types of images for image synthesis.
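The Fréchet Inception distance quoted above compares Gaussians fitted to activations of real and generated images: FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}). A small numpy sketch of that formula, assuming the activation statistics are already computed (this is the standard metric, not TilGAN's evaluation code):

```python
import numpy as np

def _sqrtm_psd(m):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.T

def frechet_distance(mu1, cov1, mu2, cov2):
    """||mu1-mu2||^2 + Tr(cov1 + cov2 - 2*(cov1 @ cov2)^{1/2})."""
    s1 = _sqrtm_psd(cov1)
    # Tr((cov1 cov2)^{1/2}) equals Tr((s1 cov2 s1)^{1/2}) for PSD matrices,
    # and s1 @ cov2 @ s1 is symmetric, so eigh-based sqrtm applies.
    covmean_tr = np.trace(_sqrtm_psd(s1 @ cov2 @ s1))
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * covmean_tr)
```

Identical distributions give a distance of zero, and shifting one mean by a vector of length d with identity covariances gives exactly d², which is a quick sanity check on any FID implementation.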
Affiliation(s)
- Monjoy Saha
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
- Xiaoyuan Guo
- Department of Computer Science, Emory University, Atlanta, GA 30332, USA
- Ashish Sharma
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
37
Wang X, Chen P, Ding G, Xing Y, Tang R, Peng C, Ye Y, Fu Q. Dual-scale categorization based deep learning to evaluate programmed cell death ligand 1 expression in non-small cell lung cancer. Medicine (Baltimore) 2021; 100:e25994. [PMID: 34011092 PMCID: PMC8137090 DOI: 10.1097/md.0000000000025994] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Revised: 04/13/2021] [Accepted: 04/29/2021] [Indexed: 01/05/2023] Open
Abstract
In precision oncology, immune checkpoint blockade therapy has quickly emerged as a novel strategy owing to its efficacy, where programmed death ligand 1 (PD-L1) expression is used as a clinically validated predictive biomarker of response for the therapy. Automating pathological image analysis and accelerating pathology evaluation is becoming an unmet need. Artificial intelligence and deep learning tools in digital pathology have been studied in order to evaluate PD-L1 expression in PD-L1 immunohistochemistry images. We proposed a Dual-Scale Categorization (DSC)-based deep learning method that employed 2 VGG16 neural networks, 1 network for 1 scale, to critically evaluate PD-L1 expression. The DSC-based deep learning method was tested in a cohort of 110 patients diagnosed with non-small cell lung cancer. This method showed a concordance of 88% with pathologists, which was higher than the concordance of 83% of the 1-scale categorization-based method. Our results show that the DSC-based method can empower the deep learning application in digital pathology and facilitate computer-aided diagnosis.
Affiliation(s)
- Xiangyun Wang
- Department of Respiratory and Critical Care Medicine, Changzheng Hospital, Naval Military Medical University
- Peilin Chen
- Department of Data Systems, 3D Medicines Inc
- Guangtai Ding
- School of Computer Engineering and Science, Shanghai University
- Yishi Xing
- Department of Data Systems, 3D Medicines Inc
- Yizhou Ye
- Department of Data Systems, 3D Medicines Inc
- Qiang Fu
- Oncology Department, Changhai Hospital of Shanghai, Shanghai, China
38
Mittal S. Ensemble of transfer learnt classifiers for recognition of cardiovascular tissues from histological images. Phys Eng Sci Med 2021; 44:655-665. [PMID: 34014495 DOI: 10.1007/s13246-021-01013-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Accepted: 05/10/2021] [Indexed: 12/16/2022]
Abstract
Recognition of tissues and organs is a recurrent step performed by experts during analyses of histological images. With advancement in the field of machine learning, such steps can be automated using computer vision methods. This paper presents an ensemble-based approach for improved classification of non-pathological tissues and organs in histological images using convolutional neural networks (CNNs). With limited dataset size, we relied upon transfer learning, where pre-trained CNNs are re-used for new classification problems. The transfer learning was done using eleven CNN architectures upon 6000 image patches constituting training and validation subsets of a public dataset containing six cardiovascular categories. The CNN models were fine-tuned upon a much larger dataset obtained by augmenting the training subset to obtain agreeable performance on the validation subset. Lastly, we created various ensembles of trained classifiers and evaluated them on a testing subset of 7500 patches. The best ensemble classifier gives precision, recall, and accuracy of 0.876, 0.869 and 0.869, respectively, upon test images. With an overall F1-score of 0.870, our ensemble-based approach outperforms previous approaches with a single fine-tuned CNN, a CNN trained from scratch, and traditional machine learning by 0.019, 0.064 and 0.183, respectively. An ensemble approach can perform better than individual classifiers, provided the constituent classifiers are chosen wisely. The empirical choice of classifiers reinforces the intuition that models that are newer and performed better in their native domain are more likely to perform better in the transferred domain, since the best ensemble dominantly consists of more recently proposed and better architectures.
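The combination rule behind such an ensemble can be sketched compactly. This is only one possible scheme (soft voting over the fine-tuned classifiers' predicted class probabilities), offered as an illustration rather than the paper's exact method:

```python
import numpy as np

def soft_vote(prob_stack):
    """Soft-voting ensemble: average per-class probabilities across
    classifiers, then take the argmax per sample.

    prob_stack: array of shape (n_classifiers, n_samples, n_classes)."""
    mean_probs = prob_stack.mean(axis=0)   # (n_samples, n_classes)
    return mean_probs.argmax(axis=1)       # predicted class per sample
```

A confident classifier can outvote a hesitant one under soft voting, which is why it often edges out hard (majority-label) voting when the constituent models are well calibrated.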
Affiliation(s)
- Shubham Mittal
- Department of Electronics and Communication Engineering, Ambedkar Institute of Advanced Communication Technologies and Research, Delhi, India
39
Cherian Kurian N, Sethi A, Reddy Konduru A, Mahajan A, Rane SU. A 2021 update on cancer image analytics with deep learning. WIRES DATA MINING AND KNOWLEDGE DISCOVERY 2021. [DOI: 10.1002/widm.1410] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Affiliation(s)
- Nikhil Cherian Kurian
- Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Amit Sethi
- Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Anil Reddy Konduru
- Department of Pathology, Tata Memorial Center‐ACTREC, HBNI, Navi Mumbai, India
- Abhishek Mahajan
- Department of Radiology, Tata Memorial Hospital, HBNI, Mumbai, India
- Swapnil Ulhas Rane
- Department of Pathology, Tata Memorial Center‐ACTREC, HBNI, Navi Mumbai, India
40
Mathialagan P, Chidambaranathan M. Computer vision techniques for Upper Aero-Digestive Tract tumor grading classification – Addressing pathological challenges. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2021.01.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
41
Nateghi R, Danyali H, Helfroush MS. A deep learning approach for mitosis detection: Application in tumor proliferation prediction from whole slide images. Artif Intell Med 2021; 114:102048. [PMID: 33875159 DOI: 10.1016/j.artmed.2021.102048] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 02/25/2021] [Accepted: 02/28/2021] [Indexed: 02/07/2023]
Abstract
Tumor proliferation, which is correlated with tumor grade, is a crucial biomarker indicative of breast cancer patients' prognosis. The most commonly used method for predicting tumor proliferation speed is the counting of mitotic figures in Hematoxylin and Eosin (H&E) histological slides. Manual mitosis counting is known to suffer from reproducibility problems. This paper presents a fully automated system for tumor proliferation prediction from whole slide images via mitosis counting. First, by considering the epithelial tissue as the mitosis activity region, we build a deep-learning-based region-of-interest detection method to select high mitosis activity regions from whole slide images. Second, we learned a set of deep neural networks for mitosis detection in the selected areas. The proposed mitosis detection system is designed to effectively overcome the mitosis detection challenges through two novel deep preprocessing and two-step hard negative mining approaches. Third, we trained a Support Vector Machine (SVM) classifier to predict the final tumor proliferation score. The proposed method was evaluated on the dataset of the Tumor Proliferation Assessment Challenge (TUPAC16) and achieved a 73.81% F-measure and a 0.612 weighted kappa score, outperforming all previous approaches significantly. Experimental results demonstrate that the proposed system considerably improves tumor proliferation prediction accuracy and provides a reliable automated tool to support healthcare decision-making.
Affiliation(s)
- Ramin Nateghi
- Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, Iran.
- Habibollah Danyali
- Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, Iran.
- Mohammad Sadegh Helfroush
- Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, Iran.
42
Li J, Li W, Sisk A, Ye H, Wallace WD, Speier W, Arnold CW. A multi-resolution model for histopathology image classification and localization with multiple instance learning. Comput Biol Med 2021; 131:104253. [PMID: 33601084 DOI: 10.1016/j.compbiomed.2021.104253] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Revised: 01/31/2021] [Accepted: 02/03/2021] [Indexed: 12/17/2022]
Abstract
Large numbers of histopathological images have been digitized into high-resolution whole slide images, opening opportunities for developing computational image analysis tools to reduce pathologists' workload and potentially improve inter- and intra-observer agreement. Most previous work on whole slide image analysis has focused on classification or segmentation of small pre-selected regions of interest, which requires fine-grained annotation and is non-trivial to extend to large-scale whole slide analysis. In this paper, we propose a multi-resolution multiple instance learning model that leverages saliency maps to detect suspicious regions for fine-grained grade prediction. Instead of relying on expensive region- or pixel-level annotations, our model can be trained end-to-end with only slide-level labels. The model was developed on a large-scale prostate biopsy dataset containing 20,229 slides from 830 patients. It achieved 92.7% accuracy and 81.8% Cohen's kappa for benign, low-grade (i.e., Grade Group 1), and high-grade (i.e., Grade Group ≥ 2) prediction, an area under the receiver operating characteristic curve (AUROC) of 98.2%, and an average precision (AP) of 97.4% for differentiating malignant and benign slides. The model obtained an AUROC of 99.4% and an AP of 99.8% for cancer detection on an external dataset.
Affiliation(s)
- Jiayun Li
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA.
- Wenyuan Li
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
- Anthony Sisk
- Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
- Huihui Ye
- Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
- W Dean Wallace
- Department of Pathology, USC, 2011 Zonal Avenue, Los Angeles, CA, 90033, USA
- William Speier
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
- Corey W Arnold
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA.
43
Lei H, Liu S, Elazab A, Gong X, Lei B. Attention-Guided Multi-Branch Convolutional Neural Network for Mitosis Detection From Histopathological Images. IEEE J Biomed Health Inform 2021; 25:358-370. [PMID: 32991296 DOI: 10.1109/jbhi.2020.3027566] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Mitotic count is an important indicator for assessing the invasiveness of breast cancers. Currently, mitoses are counted manually by pathologists, which is both tedious and time-consuming. To address this situation, we propose a fast and accurate method to automatically detect mitoses in histopathological images. The proposed method automatically identifies mitotic candidates from histological sections for mitosis screening. Specifically, our method exploits deep convolutional neural networks to extract high-level features of mitosis and detect mitotic candidates. Then, we use spatial attention modules to re-encode mitotic features, which allows the model to learn more efficient features. Finally, we use multi-branch classification subnets to screen the mitotic candidates. Compared to existing methods in the literature, our method obtains the best detection results on the dataset of the International Pattern Recognition Conference (ICPR) 2012 Mitosis Detection Competition. Code is available at: https://github.com/liushaomin/MitosisDetection.
44
Salvi M, Acharya UR, Molinari F, Meiburger KM. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Comput Biol Med 2021; 128:104129. [DOI: 10.1016/j.compbiomed.2020.104129] [Citation(s) in RCA: 77] [Impact Index Per Article: 25.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2020] [Accepted: 11/13/2020] [Indexed: 12/12/2022]
45
Nofallah S, Mehta S, Mercan E, Knezevich S, May CJ, Weaver D, Witten D, Elmore JG, Shapiro L. Machine learning techniques for mitoses classification. Comput Med Imaging Graph 2021; 87:101832. [PMID: 33302246 PMCID: PMC7855641 DOI: 10.1016/j.compmedimag.2020.101832] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2020] [Revised: 10/09/2020] [Accepted: 11/17/2020] [Indexed: 12/17/2022]
Abstract
BACKGROUND Pathologists analyze biopsy material at both the cellular and structural level to determine diagnosis and cancer stage. Mitotic figures are surrogate biomarkers of cellular proliferation that can provide prognostic information; thus, their precise detection is an important factor for clinical care. Convolutional Neural Networks (CNNs) have shown remarkable performance on several recognition tasks. Utilizing CNNs for mitosis classification may aid pathologists to improve the detection accuracy. METHODS We studied two state-of-the-art CNN-based models, ESPNet and DenseNet, for mitosis classification on six whole slide images of skin biopsies and compared their quantitative performance in terms of sensitivity, specificity, and F-score. We used raw RGB images of mitosis and non-mitosis samples with their corresponding labels as training input. In order to compare with other work, we studied the performance of these classifiers and two other architectures, ResNet and ShuffleNet, on the publicly available MITOS breast biopsy dataset and compared the performance of all four in terms of precision, recall, and F-score (which are standard for this dataset), architecture, training time and inference time. RESULTS The ESPNet and DenseNet results on our primary melanoma dataset had a sensitivity of 0.976 and 0.968, and a specificity of 0.987 and 0.995, respectively, with F-scores of 0.968 and 0.976, respectively. On the MITOS dataset, ESPNet and DenseNet showed a sensitivity of 0.866 and 0.916, and a specificity of 0.973 and 0.980, respectively. The MITOS results using DenseNet had a precision of 0.939, recall of 0.916, and F-score of 0.927. The best published result on MITOS (Saha et al. 2018) reported precision of 0.92, recall of 0.88, and F-score of 0.90.
In our architecture comparisons on MITOS, we found that DenseNet beats the others in terms of F-Score (DenseNet 0.927, ESPNet 0.890, ResNet 0.865, ShuffleNet 0.847) and especially Recall (DenseNet 0.916, ESPNet 0.866, ResNet 0.807, ShuffleNet 0.753), while ResNet and ESPNet have much faster inference times (ResNet 6 s, ESPNet 8 s, DenseNet 31 s). ResNet is faster than ESPNet, but ESPNet has a higher F-Score and Recall than ResNet, making it a good compromise solution. CONCLUSION We studied several state-of-the-art CNNs for detecting mitotic figures in whole slide biopsy images. We evaluated two CNNs on a melanoma cancer dataset and then compared four CNNs on a public breast cancer data set, using the same methodology on both. Our methodology and architecture for mitosis finding in both melanoma and breast cancer whole slide images has been thoroughly tested and is likely to be useful for finding mitoses in any whole slide biopsy images.
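As a quick sanity check (ours, not part of the cited study), the DenseNet F-score reported for MITOS follows from its precision and recall as their harmonic mean:

```python
def f_score(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported DenseNet MITOS numbers: precision 0.939, recall 0.916.
print(round(f_score(0.939, 0.916), 3))  # → 0.927
```

The result matches the 0.927 F-score quoted in the abstract.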
Affiliation(s)
- Sachin Mehta
- University of Washington, Seattle WA 98195, USA.
- Ezgi Mercan
- University of Washington, Seattle WA 98195, USA.
- Joann G Elmore
- David Geffen School of Medicine, UCLA, Los Angeles CA 90024, USA.
46
Mathew T, Kini JR, Rajan J. Computational methods for automated mitosis detection in histopathology images: A review. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2020.11.005] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
47
Yamaguchi R, Kawazoe Y, Shimamoto K, Shinohara E, Tsukamoto T, Shintani-Domoto Y, Nagasu H, Uozaki H, Ushiku T, Nangaku M, Kashihara N, Shimizu A, Nagata M, Ohe K. Glomerular Classification Using Convolutional Neural Networks Based on Defined Annotation Criteria and Concordance Evaluation Among Clinicians. Kidney Int Rep 2020; 6:716-726. [PMID: 33732986 PMCID: PMC7938073 DOI: 10.1016/j.ekir.2020.11.037] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2020] [Revised: 11/02/2020] [Accepted: 11/30/2020] [Indexed: 11/24/2022] Open
Abstract
Introduction Accurate diagnosis of renal pathology is important for guiding treatment. However, classifying every glomerulus is difficult for clinicians; thus, a computer-based support system is required. This paper describes the automatic classification of glomerular images using a convolutional neural network (CNN). Methods To generate appropriately labeled data, annotation criteria including 12 features (e.g., “fibrous crescent”) were defined. The concordance among 5 clinicians was evaluated for 100 images using the kappa (κ) coefficient for each feature. Using the annotation criteria, 1 clinician annotated 10,102 images. We trained the CNNs to classify the features with an average κ ≥0.4 and evaluated their performance using the receiver operating characteristic–area under the curve (ROC–AUC). An error analysis was conducted, and gradient-weighted class activation mapping (Grad-CAM) was applied, which highlights with a heat map the regions the CNN focuses on when classifying a glomerular image for a feature. Results The average κ coefficient of the features ranged from 0.28 to 0.50. The ROC–AUC of the CNNs on test data varied from 0.65 to 0.98. Among the features, “capillary collapse” and “fibrous crescent” had high ROC–AUC values of 0.98 and 0.91, respectively. The error analysis and Grad-CAM visualizations showed that the CNN could not distinguish between 2 different features that had similar visual structures or that occurred simultaneously. Conclusion Differences in texture or in the frequency of co-occurrence between features affected CNN performance; thus, methods such as segmentation are required to improve classification accuracy.
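The inter-rater concordance reported here is the kappa coefficient. A minimal sketch (ours, with made-up labels for illustration) of unweighted Cohen's kappa between two raters:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa between two raters' categorical labels."""
    n = len(rater_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's label distribution.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters marking a feature present (1) / absent (0) on 8 images.
a = [1, 1, 0, 0, 1, 0, 0, 0]
b = [1, 0, 0, 0, 1, 0, 0, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.47
```

For the multi-rater setting used in the paper (5 clinicians), a generalization such as Fleiss' kappa or averaged pairwise kappa would be applied instead.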
Affiliation(s)
- Ryohei Yamaguchi
- Artificial Intelligence in Healthcare, Graduate School of Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Yoshimasa Kawazoe
- Artificial Intelligence in Healthcare, Graduate School of Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Kiminori Shimamoto
- Artificial Intelligence in Healthcare, Graduate School of Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Emiko Shinohara
- Artificial Intelligence in Healthcare, Graduate School of Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan
- Tatsuo Tsukamoto
- Department of Nephrology and Dialysis, Tazuke Kofukai Medical Research Institute, Kitano Hospital, Osaka, Japan
- Yukako Shintani-Domoto
- Department of Pathology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Hajime Nagasu
- Department of Nephrology and Hypertension, Kawasaki Medical School, Okayama, Japan
- Hiroshi Uozaki
- Department of Pathology, Teikyo University School of Medicine, Tokyo, Japan
- Tetsuo Ushiku
- Department of Pathology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Masaomi Nangaku
- Division of Nephrology and Endocrinology, The University of Tokyo Graduate School of Medicine, Tokyo, Japan
- Naoki Kashihara
- Department of Nephrology and Hypertension, Kawasaki Medical School, Okayama, Japan
- Akira Shimizu
- Department of Analytic Human Pathology, Nippon Medical School, Tokyo, Japan
- Michio Nagata
- Kidney and Vascular Pathology, Faculty of Medicine, University of Tsukuba, Ibaraki, Japan
- Kazuhiko Ohe
- Department of Biomedical Informatics, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
48
Cheng JY, Abel JT, Balis UGJ, McClintock DS, Pantanowitz L. Challenges in the Development, Deployment, and Regulation of Artificial Intelligence in Anatomic Pathology. Am J Pathol 2020; 191:1684-1692. [PMID: 33245914 DOI: 10.1016/j.ajpath.2020.10.018] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 10/08/2020] [Accepted: 10/23/2020] [Indexed: 02/07/2023]
Abstract
Significant advances in artificial intelligence (AI), deep learning, and other machine-learning approaches have been made in recent years, with applications found in almost every industry, including health care. AI has proved capable of completing a spectrum of mundane to complex medically oriented tasks previously performed only by board-certified physicians, most recently assisting with the detection of cancers difficult to find on histopathology slides. Although computers will not replace pathologists any time soon, properly designed AI-based tools hold great potential for increasing workflow efficiency and diagnostic accuracy in the practice of pathology. Recent trends, such as data augmentation, crowdsourcing for generating annotated data sets, and unsupervised learning with molecular and/or clinical outcomes versus human diagnoses as a source of ground truth, are eliminating the direct role of pathologists in algorithm development. Proper integration of AI-based systems into anatomic-pathology practice will necessarily require fully digital imaging platforms, an overhaul of legacy information-technology infrastructures, modification of laboratory/pathologist workflows, appropriate reimbursement/cost-offsetting models, and ultimately, the active participation of pathologists to encourage buy-in and oversight. Regulations tailored to the nature and limitations of AI are currently in development and, when instituted, are expected to promote safe and effective use. This review addresses the challenges in AI development, deployment, and regulation to be overcome prior to its widespread adoption in anatomic pathology.
Affiliation(s)
- Jerome Y Cheng
- Department of Pathology, University of Michigan, Ann Arbor, Michigan.
- Jacob T Abel
- Department of Pathology, University of Michigan, Ann Arbor, Michigan
- Ulysses G J Balis
- Department of Pathology, University of Michigan, Ann Arbor, Michigan
- Liron Pantanowitz
- Department of Pathology, University of Michigan, Ann Arbor, Michigan
49
Levy-Jurgenson A, Tekpli X, Kristensen VN, Yakhini Z. Spatial transcriptomics inferred from pathology whole-slide images links tumor heterogeneity to survival in breast and lung cancer. Sci Rep 2020; 10:18802. [PMID: 33139755 PMCID: PMC7606448 DOI: 10.1038/s41598-020-75708-z] [Citation(s) in RCA: 59] [Impact Index Per Article: 14.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Accepted: 10/14/2020] [Indexed: 12/12/2022] Open
Abstract
Digital analysis of pathology whole-slide images is fast becoming a game changer in cancer diagnosis and treatment. Specifically, deep learning methods have shown great potential to support pathology analysis, with recent studies identifying molecular traits that were not previously recognized in pathology H&E whole-slide images. Simultaneously with these developments, it is becoming increasingly evident that tumor heterogeneity is an important determinant of cancer prognosis and susceptibility to treatment, and should therefore play a role in the evolving practices of matching treatment protocols to patients. State-of-the-art diagnostic procedures, however, do not provide automated methods for characterizing and/or quantifying tumor heterogeneity, certainly not in a spatial context. Further, existing methods for analyzing pathology whole-slide images from bulk measurements require many training samples and complex pipelines. Our work addresses these two challenges. First, we train deep learning models to spatially resolve bulk mRNA and miRNA expression levels on pathology whole-slide images (WSIs). Our models reach up to 0.95 AUC on held-out test sets from two cancer cohorts using a simple training pipeline and a small number of training samples. Using the inferred gene expression levels, we further develop a method to spatially characterize tumor heterogeneity. Specifically, we produce tumor molecular cartographies and heterogeneity maps of WSIs and formulate a heterogeneity index (HTI) that quantifies the level of heterogeneity within these maps. Applying our methods to breast and lung cancer slides, we show a significant statistical link between heterogeneity and survival. Our methods potentially open a new and accessible approach to investigating tumor heterogeneity and other spatial molecular properties and their link to clinical characteristics, including treatment susceptibility and survival.
Affiliation(s)
- Alona Levy-Jurgenson
- Department of Computer Science, Technion - Israel Institute of Technology, Haifa, 32000, Israel.
- Xavier Tekpli
- Department of Medical Genetics, Institute of Clinical Medicine, University of Oslo and Oslo University Hospital, Oslo, Norway
- Department of Cancer Genetics, Institute for Cancer Research, Oslo University Hospital, 0310, Oslo, Norway
- Vessela N Kristensen
- Department of Medical Genetics, Institute of Clinical Medicine, University of Oslo and Oslo University Hospital, Oslo, Norway
- Department of Cancer Genetics, Institute for Cancer Research, Oslo University Hospital, 0310, Oslo, Norway
- Division of Medicine, Department of Clinical Molecular Biology and Laboratory Science (EpiGen), Akershus University Hospital, Lørenskog, Norway
- Zohar Yakhini
- Department of Computer Science, Technion - Israel Institute of Technology, Haifa, 32000, Israel.
- Interdisciplinary Center, Arazi School of Computer Science, Herzliya, 4610101, Israel.
50
Kumar D, Batra U. An ensemble algorithm for breast cancer histopathology image classification. J Stat Manag Syst 2020. [DOI: 10.1080/09720510.2020.1818451] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- Deepika Kumar
- School of Engineering, G. D. Goenka University, Gurugram 122103 Haryana, India
- Usha Batra
- School of Engineering, G. D. Goenka University, Gurugram 122103 Haryana, India