1. Zhang J, Ding F, Guo Y, Wei X, Jing J, Xu F, Chen H, Guo Z, You Z, Liang B, Chen M, Jiang D, Niu X, Wang X, Xue Y. AI-based prediction of androgen receptor expression and its prognostic significance in prostate cancer. Sci Rep 2025; 15:3985. PMID: 39893198; PMCID: PMC11787347; DOI: 10.1038/s41598-025-88199-7.
Abstract
Biochemical recurrence (BCR) of prostate cancer (PCa) negatively impacts patients' post-surgery quality of life, and traditional predictive models have shown limited accuracy. This study developed an AI-based prognostic model using deep learning that incorporates androgen receptor (AR) regional features from whole-slide images (WSIs). Data from 545 patients across two centres were used for training and validation. The model showed strong performance, with high accuracy in identifying regions of high AR expression and in predicting BCR. This AI model may help identify high-risk patients, aiding better treatment strategies, particularly in underdeveloped areas.
Affiliation(s)
- Jiawei Zhang
  - Department of Urology, Zhongda Hospital, Southeast University, Nanjing, China
  - Department of Medical College, Southeast University, Nanjing, China
- Feng Ding
  - Nanjing University of Information Science and Technology, Nanjing, China
- Yitian Guo
  - Department of Urology, Zhongda Hospital, Southeast University, Nanjing, China
  - Department of Medical College, Southeast University, Nanjing, China
- Xiaoying Wei
  - Department of Medical College, Southeast University, Nanjing, China
  - Department of Pathology, Zhongda Hospital, Southeast University, Nanjing, China
- Jibo Jing
  - Department of Urology, Peking Union Medical College Hospital, Beijing, China
- Feng Xu
  - Jinhu County People's Hospital, Huai'an, China
- Huixing Chen
  - Shanghai General Hospital, Urologic Medical Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhongying Guo
  - Department of Pathology, Huaian First People's Hospital, Huai'an, China
- Zonghao You
  - Department of Urology, Zhongda Hospital, Southeast University, Nanjing, China
  - Department of Medical College, Southeast University, Nanjing, China
- Baotai Liang
  - Department of Urology, Zhongda Hospital, Southeast University, Nanjing, China
  - Department of Medical College, Southeast University, Nanjing, China
- Ming Chen
  - Department of Urology, Zhongda Hospital, Southeast University, Nanjing, China
  - Department of Medical College, Southeast University, Nanjing, China
- Dongfang Jiang
  - Department of Urology, The People's Hospital of Danyang, Danyang, China
- Xiaobing Niu
  - Department of Urology, Huaian First People's Hospital, Huai'an, China
- Xiangxue Wang
  - Nanjing University of Information Science and Technology, Nanjing, China
- Yifeng Xue
  - The Affiliated Jintan Hospital of Jiangsu University, Changzhou, China
  - Changzhou Jintan First People's Hospital, Changzhou, China
2. Tian H, Tian Y, Li D, Zhao M, Luo Q, Kong L, Qin T. Artificial intelligence model predicts M2 macrophage levels and HCC prognosis with only globally labeled pathological images. Front Oncol 2024; 14:1474155. PMID: 39759153; PMCID: PMC11695232; DOI: 10.3389/fonc.2024.1474155.
Abstract
Background and aims The levels of M2 macrophages are significantly associated with the prognosis of hepatocellular carcinoma (HCC); however, their detection in routine clinical settings remains challenging. Our study aims to develop a weakly supervised artificial intelligence model, using globally labeled histological images, to predict M2 macrophage levels and forecast the prognosis of HCC patients by integrating clinical features. Methods CIBERSORTx was used to calculate M2 macrophage abundance. We developed a slide-level, weakly supervised clustering method for whole-slide images (WSIs) by integrating Masked Autoencoders (MAE) with ResNet-32t to predict M2 macrophage abundance. Results We developed an MAE-ResNet model to predict M2 macrophage levels using WSIs. In the testing dataset, the area under the curve (AUC) (95% CI) was 0.73 (0.59-0.87). We constructed a Cox regression model showing that the predicted probability of M2 macrophage abundance was negatively associated with the prognosis of HCC (HR=1.89, p=0.031). Furthermore, we incorporated clinical data, screened variables using Lasso regression, and built a comprehensive prediction model that better predicted prognosis (HR=2.359, p=0.001). Conclusion Our models effectively predicted M2 macrophage levels and HCC prognosis. The findings suggest that our models offer a novel method for determining biomarker levels and forecasting prognosis without additional clinical tests, thereby delivering substantial clinical benefits.
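The prognostic step described above couples an image-derived M2 probability with clinical covariates in a Cox proportional-hazards model. A minimal sketch of that kind of analysis is shown below, assuming the lifelines package and a table with hypothetical column names (pred_m2_prob, os_months, event); the data are simulated placeholders, not the study's cohort.

```python
# Sketch of a Cox model linking a model-predicted M2-macrophage probability to
# survival, in the spirit of the study above. Columns and data are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "pred_m2_prob": rng.uniform(0, 1, n),   # slide-level M2 probability from the image model
    "age": rng.normal(58, 10, n),           # example clinical covariate
    "os_months": rng.exponential(30, n),    # follow-up time
    "event": rng.integers(0, 2, n),         # 1 = death observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])  # exp(coef) is the hazard ratio per covariate
```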
Affiliation(s)
- Huiyuan Tian
  - Department of Scientific Research and Foreign Affairs, Henan Provincial People's Hospital, Zhengzhou University People's Hospital, Zhengzhou, Henan, China
- Yongshao Tian
  - School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, China
- Dujuan Li
  - Department of Pathology, Henan Provincial People's Hospital, Zhengzhou University People's Hospital, Zhengzhou, Henan, China
- Minfan Zhao
  - School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, China
- Qiankun Luo
  - Department of Hepatobiliary and Pancreatic Surgery, Henan Provincial People's Hospital, Zhengzhou University People's Hospital, Zhengzhou, Henan, China
- Lingfei Kong
  - Department of Pathology, Henan Provincial People's Hospital, Zhengzhou University People's Hospital, Zhengzhou, Henan, China
- Tao Qin
  - Department of Hepatobiliary and Pancreatic Surgery, Henan Provincial People's Hospital, Zhengzhou University People's Hospital, Zhengzhou, Henan, China
3. Tafavvoghi M, Bongo LA, Shvetsov N, Busund LTR, Møllersen K. Publicly available datasets of breast histopathology H&E whole-slide images: A scoping review. J Pathol Inform 2024; 15:100363. PMID: 38405160; PMCID: PMC10884505; DOI: 10.1016/j.jpi.2024.100363.
Abstract
Advancements in digital pathology and computing resources have made a significant impact in the field of computational pathology for breast cancer diagnosis and treatment. However, access to high-quality labeled histopathological images of breast cancer is a major challenge that limits the development of accurate and robust deep learning models. In this scoping review, we identified the publicly available datasets of breast H&E-stained whole-slide images (WSIs) that can be used to develop deep learning algorithms. We systematically searched 9 scientific literature databases and 9 research data repositories and found 17 publicly available datasets containing 10,385 H&E WSIs of breast cancer. Moreover, we reported image metadata and characteristics for each dataset to assist researchers in selecting proper datasets for specific tasks in breast cancer computational pathology. In addition, we compiled 2 lists of breast H&E patch and private datasets as supplementary resources for researchers. Notably, only 28% of the included articles utilized multiple datasets, and only 14% used an external validation set, suggesting that the performance of other developed models may be susceptible to overestimation. The TCGA-BRCA dataset was used in 52% of the selected studies. This dataset has a considerable selection bias that can impact the robustness and generalizability of the trained algorithms. There is also a lack of consistent metadata reporting for breast WSI datasets, which can hinder the development of accurate deep learning models, indicating the necessity of establishing explicit guidelines for documenting breast WSI dataset characteristics and metadata.
Affiliation(s)
- Masoud Tafavvoghi
  - Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
- Lars Ailo Bongo
  - Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Nikita Shvetsov
  - Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Kajsa Møllersen
  - Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
4. Budginaite E, Magee DR, Kloft M, Woodruff HC, Grabsch HI. Computational methods for metastasis detection in lymph nodes and characterization of the metastasis-free lymph node microarchitecture: A systematic-narrative hybrid review. J Pathol Inform 2024; 15:100367. PMID: 38455864; PMCID: PMC10918266; DOI: 10.1016/j.jpi.2024.100367.
Abstract
Background Histological examination of tumor-draining lymph nodes (LNs) plays a vital role in cancer staging and prognostication. However, as soon as an LN is classed as metastasis-free, no further investigation is performed; thus, potentially clinically relevant information detectable in tumor-free LNs is currently not captured. Objective To systematically study and critically assess methods for the analysis of digitized histological LN images described in published research. Methods A systematic search was conducted in several public databases up to December 2023 using relevant search terms. Studies using brightfield light microscopy images of hematoxylin and eosin or immunohistochemically stained LN tissue sections, aiming to detect and/or segment LNs, their compartments, or metastatic tumor using artificial intelligence (AI), were included. Dataset, AI methodology, cancer type, and study objective were compared between articles. Results A total of 7201 articles were collected and 73 articles remained for detailed analyses after screening. Of the remaining articles, 86% aimed at LN metastasis identification, 8% at LN compartment segmentation, and the remainder at LN contouring. Furthermore, 78% of articles used patch classification and 22% used pixel segmentation models for analyses. Five out of six studies (83%) of metastasis-free LNs were performed on publicly unavailable datasets, making quantitative comparison between articles impossible. Conclusions Multi-scale models mimicking multiple microscopy zooms show promise for computational LN analysis. Large-scale datasets are needed to establish the clinical relevance of analyzing metastasis-free LNs in detail. Further research is needed to identify clinically interpretable metrics for LN compartment characterization.
Affiliation(s)
- Elzbieta Budginaite
  - Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
  - Department of Precision Medicine, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Maximilian Kloft
  - Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
  - Department of Internal Medicine, Justus-Liebig-University, Giessen, Germany
- Henry C. Woodruff
  - Department of Precision Medicine, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Heike I. Grabsch
  - Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
  - Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
5. Wang H, Luo L, Wang F, Tong R, Chen YW, Hu H, Lin L, Chen H. Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Bag-Level Classifier is a Good Instance-Level Teacher. IEEE Trans Med Imaging 2024; 43:3964-3976. PMID: 38781068; DOI: 10.1109/tmi.2024.3404549.
Abstract
Multiple Instance Learning (MIL) has demonstrated promise in Whole Slide Image (WSI) classification. However, a major challenge persists due to the high computational cost associated with processing these gigapixel images. Existing methods generally adopt a two-stage approach, comprising a non-learnable feature embedding stage and a classifier training stage. Though it can greatly reduce memory consumption by using a fixed feature embedder pre-trained on other domains, such a scheme also results in a disparity between the two stages, leading to suboptimal classification accuracy. To address this issue, we propose that a bag-level classifier can be a good instance-level teacher. Based on this idea, we design Iteratively Coupled Multiple Instance Learning (ICMIL) to couple the embedder and the bag classifier at a low cost. ICMIL initially fixes the patch embedder to train the bag classifier, followed by fixing the bag classifier to fine-tune the patch embedder. The refined embedder can then generate better representations in return, leading to a more accurate classifier for the next iteration. To realize more flexible and more effective embedder fine-tuning, we also introduce a teacher-student framework to efficiently distill the category knowledge in the bag classifier to help the instance-level embedder fine-tuning. Extensive experiments were conducted on four distinct datasets to validate the effectiveness of ICMIL. The experimental results consistently demonstrated that our method significantly improves the performance of existing MIL backbones, achieving state-of-the-art results. The code and the organized datasets can be accessed at: https://github.com/Dootmaan/ICMIL/tree/confidence-based.
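To illustrate the iterative coupling idea, here is a heavily reduced PyTorch sketch: a bag-level attention classifier is trained on frozen patch features, and the trained bag head is then reused as an instance-level teacher to fine-tune the embedder. All shapes, hyperparameters, and the pseudo-labelling rule are illustrative assumptions, not the ICMIL implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
feat_dim, n_bags, insts_per_bag = 256, 16, 50
bags = torch.randn(n_bags, insts_per_bag, feat_dim)          # pre-extracted patch features per bag (slide)
labels = torch.randint(0, 2, (n_bags,)).float()              # slide-level labels only

embedder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())          # stands in for the patch embedder
attn = nn.Sequential(nn.Linear(128, 64), nn.Tanh(), nn.Linear(64, 1))  # attention pooling
bag_head = nn.Linear(128, 1)                                           # bag-level classifier
bce = nn.BCEWithLogitsLoss()

def set_grad(module, flag):
    for p in module.parameters():
        p.requires_grad_(flag)

def bag_logits(x):
    h = embedder(x)                                # (bags, instances, 128)
    a = torch.softmax(attn(h), dim=1)              # attention weights over instances
    return bag_head((a * h).sum(dim=1)).squeeze(-1)

for cycle in range(2):                             # iterative coupling rounds
    # Phase 1: freeze the embedder, train the bag-level classifier.
    set_grad(embedder, False); set_grad(attn, True); set_grad(bag_head, True)
    opt = torch.optim.Adam(list(attn.parameters()) + list(bag_head.parameters()), lr=1e-3)
    for _ in range(25):
        loss = bce(bag_logits(bags), labels)
        opt.zero_grad(); loss.backward(); opt.step()

    # Phase 2: freeze the bag classifier and use it as an instance-level teacher,
    # fine-tuning the embedder on bag labels broadcast to their instances.
    set_grad(embedder, True); set_grad(attn, False); set_grad(bag_head, False)
    opt = torch.optim.Adam(embedder.parameters(), lr=1e-4)
    inst_targets = labels[:, None].expand(-1, insts_per_bag)
    for _ in range(25):
        inst_logits = bag_head(embedder(bags)).squeeze(-1)   # reuse the bag head per instance
        loss = bce(inst_logits, inst_targets)
        opt.zero_grad(); loss.backward(); opt.step()

print("final bag loss:", bce(bag_logits(bags), labels).item())
```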
6. Zhou H, Zhao Q, Huang W, Liang Z, Cui C, Ma H, Luo C, Li S, Ruan G, Chen H, Zhu Y, Zhang G, Liu S, Liu L, Li H, Yang H, Xie H. A novel fully automatic segmentation and counting system for metastatic lymph nodes on multimodal magnetic resonance imaging: Evaluation and prognostic implications in nasopharyngeal carcinoma. Radiother Oncol 2024; 197:110367. PMID: 38834152; DOI: 10.1016/j.radonc.2024.110367.
Abstract
BACKGROUND The number of metastatic lymph nodes (MLNs) is crucial for the survival of nasopharyngeal carcinoma (NPC) patients, but manual counting is laborious. This study aims to explore the feasibility and prognostic value of automatic MLN segmentation and counting. METHODS We retrospectively enrolled 980 newly diagnosed patients in the primary cohort and 224 patients from two external cohorts. We utilized the nnUnet model for automatic MLN segmentation on multimodal magnetic resonance imaging. MLN counting methods, including manual delineation-assisted counting (MDAC) and a fully automatic lymph node counting system (AMLNC), were compared with manual evaluation (gold standard). RESULTS In the internal validation group, the MLN segmentation results showed acceptable agreement with manual delineation, with a mean Dice coefficient of 0.771. The consistency among the three counting methods was as follows: 0.778 (gold standard vs. AMLNC), 0.638 (gold standard vs. MDAC), and 0.739 (AMLNC vs. MDAC). MLN numbers were categorized into a three-category variable (1-4, 5-9, >9) and a two-category variable (<4, ≥4) based on the gold standard and AMLNC. These categorical variables demonstrated acceptable discriminating ability for 5-year overall survival (OS), progression-free survival, and distant metastasis-free survival. Compared with the base prediction model, the model incorporating the two-category AMLNC count showed an improved C-index for 5-year OS prediction (0.658 vs. 0.675, P = 0.045). All results were successfully validated in the external cohorts. CONCLUSIONS The AMLNC system offers a time- and labor-saving approach for fully automatic MLN segmentation and counting in NPC. MLN counting using AMLNC demonstrated non-inferior performance in survival discrimination compared with manual detection.
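The survival comparison reported above hinges on the concordance index (C-index) of models with and without the automated node count. The sketch below reproduces that style of evaluation on simulated data with lifelines.utils.concordance_index; the covariates, thresholds, and effect sizes are invented for illustration.

```python
# Compare C-index of a base risk score versus one that adds an automated
# lymph-node count. All numbers here are simulated, not from the study.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n = 300
stage = rng.integers(1, 5, n)                     # hypothetical baseline factor
auto_ln_count = rng.poisson(5, n)                 # automated metastatic LN count
# Simulated survival: higher stage and >=4 nodes shorten survival time.
time = rng.exponential(60 / (1 + 0.3 * stage + 0.5 * (auto_ln_count >= 4)), n)
event = rng.integers(0, 2, n)

risk_base = 0.3 * stage                           # risk score of the base model
risk_plus = 0.3 * stage + 0.5 * (auto_ln_count >= 4)

# Higher risk should mean shorter survival, so negate the score for lifelines.
print("C-index, base model:      %.3f" % concordance_index(time, -risk_base, event))
print("C-index, base + LN count: %.3f" % concordance_index(time, -risk_plus, event))
```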
Affiliation(s)
- Haoyang Zhou
  - School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, PR China
- Qin Zhao
  - Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Wenjie Huang
  - Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Zhiying Liang
  - Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Chunyan Cui
  - Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Huali Ma
  - Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Chao Luo
  - Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Shuqi Li
  - Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Guangying Ruan
  - Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Hongbo Chen
  - School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, PR China
- Yuliang Zhu
  - Department of Nasopharyngeal Head and Neck Tumor Radiotherapy, Zhongshan City People's Hospital, ZhongShan, PR China
- Guoyi Zhang
  - Department of Radiation Oncology, Foshan Academy of Medical Sciences, Sun Yat-Sen University Foshan Hospital and The First People's Hospital of Foshan, Foshan, PR China
- Shanshan Liu
  - School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, PR China
- Lizhi Liu
  - Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Haojiang Li
  - Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Hui Yang
  - Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
- Hui Xie
  - Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China
7. McGenity C, Clarke EL, Jennings C, Matthews G, Cartlidge C, Freduah-Agyemang H, Stocken DD, Treanor D. Artificial intelligence in digital pathology: a systematic review and meta-analysis of diagnostic test accuracy. NPJ Digit Med 2024; 7:114. PMID: 38704465; PMCID: PMC11069583; DOI: 10.1038/s41746-024-01106-8.
Abstract
Ensuring the diagnostic performance of artificial intelligence (AI) before its introduction into clinical practice is essential. Growing numbers of studies using AI for digital pathology have been reported over recent years. The aim of this work is to examine the diagnostic accuracy of AI on digital pathology images for any disease. This systematic review and meta-analysis included diagnostic accuracy studies using any type of AI applied to whole slide images (WSIs) for any disease. The reference standard was diagnosis by histopathological assessment and/or immunohistochemistry. Searches were conducted in PubMed, EMBASE and CENTRAL in June 2022. Risk of bias and concerns of applicability were assessed using the QUADAS-2 tool. Data extraction was conducted by two investigators, and meta-analysis was performed using a bivariate random effects model, with additional subgroup analyses also performed. Of 2976 identified studies, 100 were included in the review and 48 in the meta-analysis. Studies came from a range of countries and included over 152,000 WSIs, representing many diseases. These studies reported a mean sensitivity of 96.3% (CI 94.1-97.7) and mean specificity of 93.3% (CI 90.5-95.4). There was heterogeneity in study design, and 99% of studies identified for inclusion had at least one area at high or unclear risk of bias or applicability concerns. Details on selection of cases, division of model development and validation data, and raw performance data were frequently ambiguous or missing. AI is reported as having high diagnostic accuracy in the reported areas but requires more rigorous evaluation of its performance.
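For readers unfamiliar with the pooling step, the following sketch shows a simplified univariate random-effects (DerSimonian-Laird) pooling of study sensitivities on the logit scale. The review itself uses a bivariate random-effects model that jointly handles sensitivity and specificity, so this is only a didactic approximation with made-up study counts.

```python
# Simplified random-effects pooling of sensitivities on the logit scale.
import numpy as np

# (true positives, false negatives) per study -- hypothetical numbers
tp = np.array([90, 45, 200, 33])
fn = np.array([5, 8, 12, 4])

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                        # approximate variance of each logit

w = 1 / var                                  # fixed-effect (inverse-variance) weights
fixed = np.sum(w * logit) / np.sum(w)
Q = np.sum(w * (logit - fixed) ** 2)         # heterogeneity statistic
k = len(tp)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (var + tau2)                      # random-effects weights
pooled_logit = np.sum(w_re * logit) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
lo, hi = pooled_logit - 1.96 * se, pooled_logit + 1.96 * se
inv = lambda x: 1 / (1 + np.exp(-x))         # back-transform to the probability scale
print(f"pooled sensitivity: {inv(pooled_logit):.3f} (95% CI {inv(lo):.3f}-{inv(hi):.3f})")
```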
Affiliation(s)
- Clare McGenity
  - University of Leeds, Leeds, UK
  - Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Emily L Clarke
  - University of Leeds, Leeds, UK
  - Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Charlotte Jennings
  - University of Leeds, Leeds, UK
  - Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Darren Treanor
  - University of Leeds, Leeds, UK
  - Leeds Teaching Hospitals NHS Trust, Leeds, UK
  - Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
  - Centre for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
8. Shafique A, Gonzalez R, Pantanowitz L, Tan PH, Machado A, Cree IA, Tizhoosh HR. A Preliminary Investigation into Search and Matching for Tumor Discrimination in World Health Organization Breast Taxonomy Using Deep Networks. Mod Pathol 2024; 37:100381. PMID: 37939901; PMCID: PMC10891482; DOI: 10.1016/j.modpat.2023.100381.
Abstract
Breast cancer is one of the most common cancers affecting women worldwide. It includes a group of malignant neoplasms with a variety of biological, clinical, and histopathologic characteristics. There are more than 35 different histologic forms of breast lesions that can be classified and diagnosed histologically according to cell morphology, growth, and architecture patterns. Recently, deep learning, in the field of artificial intelligence, has drawn a lot of attention for the computerized representation of medical images. Searchable digital atlases can provide pathologists with patch-matching tools, allowing them to search among evidently diagnosed and treated archival cases, a technology that may be regarded as a computational second opinion. In this study, we indexed and analyzed the World Health Organization breast taxonomy (Classification of Tumours, 5th edition) spanning 35 tumor types. We visualized all tumor types using deep features extracted from a state-of-the-art deep-learning model, pretrained on millions of diagnostic histopathology images from the Cancer Genome Atlas repository. Furthermore, we tested the concept of a digital "atlas" as a reference for search and matching with rare test cases. The patch similarity search within the World Health Organization breast taxonomy data reached >88% accuracy when validating through "majority vote" and >91% accuracy when validating using top-n tumor types. These results show for the first time that complex relationships among common and rare breast lesions can be investigated using an indexed digital archive.
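The search-and-match validation described above amounts to nearest-neighbour retrieval in a deep feature space followed by a majority vote over the retrieved patches. A small sketch of that procedure is given below; the feature vectors, the number of tumour types, and the value of k are placeholder assumptions standing in for deep features of archived, diagnosed patches.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n_archive, dim = 1000, 512
archive_feats = rng.normal(size=(n_archive, dim))               # indexed atlas patch features
archive_labels = rng.integers(0, 35, n_archive)                 # 35 tumour types

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def majority_vote_search(query_feat, k=5):
    sims = normalize(archive_feats) @ normalize(query_feat)     # cosine similarity to all archive patches
    top_k = np.argsort(-sims)[:k]                               # k most similar patches
    votes = Counter(archive_labels[top_k].tolist())             # majority vote over their labels
    return votes.most_common(1)[0][0], [int(i) for i in top_k]

query = rng.normal(size=dim)                                    # deep feature of a test patch
predicted_type, neighbours = majority_vote_search(query, k=5)
print("predicted tumour type:", predicted_type, "nearest patches:", neighbours)
```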
Affiliation(s)
- Abubakr Shafique
  - Rhazes Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota
  - Kimia Lab, University of Waterloo, Waterloo, Ontario, Canada
- Ricardo Gonzalez
  - Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- Liron Pantanowitz
  - Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
- Puay Hoon Tan
  - Women's Imaging Centre, Luma Medical Centre, Singapore
- Alberto Machado
  - WHO Classification of Tumours Group, International Agency for Research on Cancer, Lyon, France
- Ian A Cree
  - WHO Classification of Tumours Group, International Agency for Research on Cancer, Lyon, France
- Hamid R Tizhoosh
  - Rhazes Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota
  - Kimia Lab, University of Waterloo, Waterloo, Ontario, Canada
9. Bai Y, Li W, An J, Xia L, Chen H, Zhao G, Gao Z. Masked autoencoders with handcrafted feature predictions: Transformer for weakly supervised esophageal cancer classification. Comput Methods Programs Biomed 2024; 244:107936. PMID: 38016392; DOI: 10.1016/j.cmpb.2023.107936.
Abstract
BACKGROUND AND OBJECTIVE Esophageal cancer is a serious disease with a high prevalence in Eastern Asia. Histopathology tissue analysis stands as the gold standard in diagnosing esophageal cancer. In recent years, there has been a shift towards digitizing histopathological images into whole slide images (WSIs), progressively integrating them into cancer diagnostics. However, the gigapixel sizes of WSIs present significant storage and processing challenges, and they often lack localized annotations. To address this issue, multi-instance learning (MIL) has been introduced for WSI classification, utilizing weakly supervised learning for diagnosis analysis. By applying the principles of MIL to WSI analysis, it is possible to reduce the workload of pathologists by facilitating the generation of localized annotations. Nevertheless, the approach's effectiveness is hindered by the traditional simple aggregation operation and the domain shift resulting from the prevalent use of convolutional feature extractors pretrained on ImageNet. METHODS We propose a MIL-based framework for WSI analysis and cancer classification. Concurrently, we employ self-supervised learning, which obviates the need for manual annotation and demonstrates versatility across tasks, to pretrain the feature extractor. This method enhances the extraction of representative features from esophageal WSIs for MIL, ensuring more robust and accurate performance. RESULTS We build a comprehensive dataset of whole esophageal slide images and conduct extensive experiments utilizing this dataset. The performance on our dataset demonstrates the efficiency of our proposed MIL framework and the pretraining process, with our framework outperforming existing methods, achieving an accuracy of 93.07% and an AUC (area under the curve) of 95.31%. CONCLUSION This work proposes an effective MIL method to classify WSIs of esophageal cancer. The promising results indicate that our cancer classification framework holds great potential in promoting automatic whole esophageal slide image analysis.
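A compact illustration of the self-supervised pretraining idea is sketched below as a masked-autoencoder-style step over patch tokens in PyTorch: most tokens are hidden, only the visible ones are encoded, and a light decoder reconstructs the full set. The dimensions, masking ratio, and reconstruction target are simplifications and do not reproduce the paper's model.

```python
import torch
import torch.nn as nn

n_tokens, dim, mask_ratio = 64, 128, 0.75
tokens = torch.randn(8, n_tokens, dim)                    # a batch of patch embeddings

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True), num_layers=2)
decoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
mask_token = nn.Parameter(torch.zeros(1, 1, dim))         # learnable placeholder for hidden tokens
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()) + [mask_token], lr=1e-4)

n_keep = int(n_tokens * (1 - mask_ratio))
for step in range(5):
    perm = torch.rand(tokens.size(0), n_tokens).argsort(dim=1)   # random shuffle per sample
    keep_idx = perm[:, :n_keep]
    visible = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, dim))

    encoded = encoder(visible)                            # encode only the visible tokens
    # Scatter encoded tokens back and fill masked positions with the mask token.
    full = mask_token.expand(tokens.size(0), n_tokens, dim).clone()
    full = full.scatter(1, keep_idx.unsqueeze(-1).expand(-1, -1, dim), encoded)
    recon = decoder(full)

    # Reconstruct all patch embeddings (MAE proper computes the loss on masked tokens only).
    loss = ((recon - tokens) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: reconstruction loss {loss.item():.4f}")
```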
Affiliation(s)
- Yunhao Bai
  - School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Wenqi Li
  - Department of Pathology, Key Laboratory of Cancer Prevention and Therapy, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Jianpeng An
  - School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Lili Xia
  - School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Huazhen Chen
  - School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Gang Zhao
  - Department of Pathology, Key Laboratory of Cancer Prevention and Therapy, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Zhongke Gao
  - School of Electrical and Information Engineering, Tianjin University, Tianjin, China
10. Yong MP, Hum YC, Lai KW, Lee YL, Goh CH, Yap WS, Tee YK. Histopathological Cancer Detection Using Intra-Domain Transfer Learning and Ensemble Learning. IEEE Access 2024; 12:1434-1457. DOI: 10.1109/access.2023.3343465.
Affiliation(s)
- Ming Ping Yong
  - Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang, Malaysia
- Yan Chai Hum
  - Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang, Malaysia
- Khin Wee Lai
  - Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
- Ying Loong Lee
  - Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang, Malaysia
- Choon-Hian Goh
  - Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang, Malaysia
- Wun-She Yap
  - Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang, Malaysia
- Yee Kai Tee
  - Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kajang, Malaysia
11. Mukashyaka P, Sheridan TB, Foroughi Pour A, Chuang JH. SAMPLER: unsupervised representations for rapid analysis of whole slide tissue images. EBioMedicine 2024; 99:104908. PMID: 38101298; PMCID: PMC10733087; DOI: 10.1016/j.ebiom.2023.104908.
Abstract
BACKGROUND Deep learning has revolutionized digital pathology, allowing automatic analysis of hematoxylin and eosin (H&E) stained whole slide images (WSIs) for diverse tasks. WSIs are broken into smaller images called tiles, and a neural network encodes each tile. Many recent works use supervised attention-based models to aggregate tile-level features into a slide-level representation, which is then used for downstream analysis. Training supervised attention-based models is computationally intensive, architecture optimization of the attention module is non-trivial, and labeled data are not always available. Therefore, we developed an unsupervised and fast approach called SAMPLER to generate slide-level representations. METHODS Slide-level representations of SAMPLER are generated by encoding the cumulative distribution functions of multiscale tile-level features. To assess the effectiveness of SAMPLER, slide-level representations of breast carcinoma (BRCA), non-small cell lung carcinoma (NSCLC), and renal cell carcinoma (RCC) WSIs of The Cancer Genome Atlas (TCGA) were used to train separate classifiers distinguishing tumor subtypes in FFPE and frozen WSIs. In addition, BRCA and NSCLC classifiers were externally validated on frozen WSIs. Moreover, SAMPLER's attention maps identify regions of interest, which were evaluated by a pathologist. To determine the time efficiency of SAMPLER, we compared the runtime of SAMPLER with that of two attention-based models. SAMPLER concepts were used to improve the design of a context-aware multi-head attention model (context-MHA). FINDINGS SAMPLER-based classifiers were comparable to state-of-the-art attention deep learning models to distinguish subtypes of BRCA (AUC = 0.911 ± 0.029), NSCLC (AUC = 0.940 ± 0.018), and RCC (AUC = 0.987 ± 0.006) on FFPE WSIs (internal test sets). However, training SAMPLER-based classifiers was >100 times faster. SAMPLER models successfully distinguished tumor subtypes on both internal and external test sets of frozen WSIs. Histopathological review confirmed that SAMPLER-identified high attention tiles contained subtype-specific morphological features. The improved context-MHA distinguished subtypes of BRCA and RCC (BRCA-AUC = 0.921 ± 0.027, RCC-AUC = 0.988 ± 0.010) with increased accuracy on internal test FFPE WSIs. INTERPRETATION Our unsupervised statistical approach is fast and effective for analyzing WSIs, with greatly improved scalability over attention-based deep learning methods. The high accuracy of SAMPLER-based classifiers and interpretable attention maps suggest that SAMPLER successfully encodes the distinct morphologies within WSIs and will be applicable to general histology image analysis problems. FUNDING This study was supported by the National Cancer Institute (Grant No. R01CA230031 and P30CA034196).
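The core of SAMPLER, as described above, is a fixed-length slide vector built from the distribution of tile-level features. The sketch below summarizes each feature dimension by quantiles of its empirical distribution (an inverse-CDF view of the same information); the grid size and feature dimensions are arbitrary choices for illustration, not the published configuration.

```python
import numpy as np

def cdf_representation(tile_features, n_points=16):
    """tile_features: (n_tiles, n_features) array of tile embeddings for one slide."""
    # Evaluate each feature's empirical distribution at evenly spaced quantile levels
    # and concatenate, giving a fixed-length slide vector independent of tile count.
    qs = np.linspace(0.0, 1.0, n_points)
    return np.quantile(tile_features, qs, axis=0).T.reshape(-1)   # (n_features * n_points,)

rng = np.random.default_rng(0)
slide_a = cdf_representation(rng.normal(size=(4000, 256)))        # slide with 4000 tiles
slide_b = cdf_representation(rng.normal(size=(900, 256)))         # slide with 900 tiles
print(slide_a.shape, slide_b.shape)   # both (4096,), ready for any standard classifier
```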
Affiliation(s)
- Patience Mukashyaka
  - The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA
  - Department of Genetics and Genome Sciences, University of Connecticut Health Center, Farmington, CT, USA
- Todd B Sheridan
  - The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA
  - Department of Pathology, Hartford Hospital, Hartford, CT, USA
- Jeffrey H Chuang
  - The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA
  - Department of Genetics and Genome Sciences, University of Connecticut Health Center, Farmington, CT, USA
12. Zhang C, Xu J, Tang R, Yang J, Wang W, Yu X, Shi S. Novel research and future prospects of artificial intelligence in cancer diagnosis and treatment. J Hematol Oncol 2023; 16:114. PMID: 38012673; PMCID: PMC10680201; DOI: 10.1186/s13045-023-01514-5.
Abstract
Research into the potential benefits of artificial intelligence for comprehending the intricate biology of cancer has grown as a result of the widespread use of deep learning and machine learning in the healthcare sector and the availability of highly specialized cancer datasets. Here, we review new artificial intelligence approaches and how they are being used in oncology. We describe how artificial intelligence might be used in cancer detection, prognosis, and treatment administration, and introduce the use of the latest large language models such as ChatGPT in oncology clinics. We highlight artificial intelligence applications for omics data types, and we offer perspectives on how the various data types might be combined to create decision-support tools. We also evaluate the present constraints and challenges to applying artificial intelligence in precision oncology. Finally, we discuss how current challenges may be surmounted to make artificial intelligence useful in clinical settings in the future.
Affiliation(s)
- Chaoyi Zhang
  - Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
  - Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
  - Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Jin Xu
  - Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
  - Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
  - Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Rong Tang
  - Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
  - Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
  - Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Jianhui Yang
  - Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
  - Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
  - Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Wei Wang
  - Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
  - Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
  - Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Xianjun Yu
  - Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
  - Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
  - Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Si Shi
  - Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
  - Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
  - Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
13. Zheng T, Chen W, Li S, Quan H, Zou M, Zheng S, Zhao Y, Gao X, Cui X. Learning how to detect: A deep reinforcement learning method for whole-slide melanoma histopathology images. Comput Med Imaging Graph 2023; 108:102275. PMID: 37567046; DOI: 10.1016/j.compmedimag.2023.102275.
Abstract
Cutaneous melanoma represents one of the most life-threatening malignancies. Histopathological image analysis serves as a vital tool for early melanoma detection. Deep neural network (DNN) models are frequently employed to aid pathologists in enhancing the efficiency and accuracy of diagnoses. However, due to the paucity of well-annotated, high-resolution, whole-slide histopathology image (WSI) datasets, WSIs are typically fragmented into numerous patches during the model training and testing stages. This process disregards the inherent interconnectedness among patches, potentially impeding the models' performance. Additionally, the presence of excess, non-contributing patches extends processing times and introduces substantial computational burdens. To mitigate these issues, we draw inspiration from the clinical decision-making processes of dermatopathologists to propose an innovative, weakly supervised deep reinforcement learning framework, titled Fast medical decision-making in melanoma histopathology images (FastMDP-RL). This framework expedites model inference by reducing the number of irrelevant patches identified within WSIs. FastMDP-RL integrates two DNN-based agents: the search agent (SeAgent) and the decision agent (DeAgent). The SeAgent initiates actions, steered by the image features observed in the current viewing field at various magnifications. Simultaneously, the DeAgent provides labeling probabilities for each patch. We utilize multi-instance learning (MIL) to construct a teacher-guided model (MILTG), serving a dual purpose: rewarding the SeAgent and guiding the DeAgent. Our evaluations were conducted using two melanoma datasets: the publicly accessible TCIA-CM dataset and the proprietary MELSC dataset. Our experimental findings affirm FastMDP-RL's ability to expedite inference and accurately predict WSIs, even in the absence of pixel-level annotations. Moreover, our research investigates the WSI-based interactive environment, encompassing the design of agents, state and reward functions, and feature extractors suitable for melanoma tissue images. This investigation offers valuable insights and references for researchers engaged in related studies. The code is available at: https://github.com/titizheng/FastMDP-RL.
Affiliation(s)
- Tingting Zheng
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Weixing Chen
  - Shenzhen College of Advanced Technology, University of the Chinese Academy of Sciences, Beijing, China
- Shuqin Li
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Hao Quan
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Mingchen Zou
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Song Zheng
  - National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Yue Zhao
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
  - National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xinghua Gao
  - National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xiaoyu Cui
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
14. Rauf Z, Khan AR, Sohail A, Alquhayz H, Gwak J, Khan A. Lymphocyte detection for cancer analysis using a novel fusion block based channel boosted CNN. Sci Rep 2023; 13:14047. PMID: 37640739; PMCID: PMC10462751; DOI: 10.1038/s41598-023-40581-z.
Abstract
Tumor-infiltrating lymphocytes, specialized immune cells, are considered an important biomarker in cancer analysis. Automated lymphocyte detection is challenging due to its heterogeneous morphology, variable distribution, and presence of artifacts. In this work, we propose a novel Boosted Channels Fusion-based CNN "BCF-Lym-Detector" for lymphocyte detection in multiple cancer histology images. The proposed network initially selects candidate lymphocytic regions at the tissue level and then detects lymphocytes at the cellular level. The proposed "BCF-Lym-Detector" generates diverse boosted channels by utilizing the feature learning capability of different CNN architectures. In this connection, a new adaptive fusion block is developed to combine and select the most relevant lymphocyte-specific features from the generated enriched feature space. Multi-level feature learning is used to retain lymphocytic spatial information and detect lymphocytes with variable appearances. The assessment of the proposed "BCF-Lym-Detector" shows substantial improvement in terms of F-score (0.93 and 0.84 on LYSTO and NuClick, respectively), which suggests that the diverse feature extraction and dynamic feature selection enhanced the feature learning capacity of the proposed network. Moreover, the proposed technique's generalization on unseen test sets with a good recall (0.75) and F-score (0.73) shows its potential use for pathologists' assistance.
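As a rough illustration of channel boosting and fusion, the sketch below concatenates feature maps from two different toy backbones and re-weights the combined channels with a squeeze-and-excitation-style gate before a per-location scoring head. It is an interpretation of the general idea only, not the authors' BCF-Lym-Detector; every layer size here is a made-up placeholder.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    def __init__(self, c1, c2, reduction=4):
        super().__init__()
        c = c1 + c2
        self.gate = nn.Sequential(                      # channel-attention gate
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(c, c // reduction, 1), nn.ReLU(),
            nn.Conv2d(c // reduction, c, 1), nn.Sigmoid())
        self.mix = nn.Conv2d(c, 128, 1)                 # project the boosted channel space

    def forward(self, f1, f2):
        boosted = torch.cat([f1, f2], dim=1)            # "boosted" channels from two backbones
        return self.mix(boosted * self.gate(boosted))   # select the most relevant channels

backbone_a = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(), nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
backbone_b = nn.Sequential(nn.Conv2d(3, 48, 5, 2, 2), nn.ReLU(), nn.Conv2d(48, 96, 3, 2, 1), nn.ReLU())
fusion = FusionBlock(64, 96)
head = nn.Conv2d(128, 1, 1)                             # per-location lymphocyte score map

x = torch.randn(2, 3, 256, 256)                         # a batch of histology patches
score_map = torch.sigmoid(head(fusion(backbone_a(x), backbone_b(x))))
print(score_map.shape)                                  # (2, 1, 64, 64)
```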
Affiliation(s)
- Zunaira Rauf
  - Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
  - PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
- Abdul Rehman Khan
  - Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
- Anabia Sohail
  - Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
  - Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi, UAE
- Hani Alquhayz
  - Department of Computer Science and Information, College of Science in Zulfi, Majmaah University, 11952, Al-Majmaah, Saudi Arabia
- Jeonghwan Gwak
  - Department of Software, Korea National University of Transportation, Chungju, 27469, Republic of Korea
- Asifullah Khan
  - Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
  - PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
  - Center for Mathematical Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
15. Mukashyaka P, Sheridan TB, Foroughi Pour A, Chuang JH. SAMPLER: Empirical distribution representations for rapid analysis of whole slide tissue images. bioRxiv [Preprint] 2023:2023.08.01.551468. PMID: 37577691; PMCID: PMC10418159; DOI: 10.1101/2023.08.01.551468.
Abstract
Deep learning has revolutionized digital pathology, allowing for automatic analysis of hematoxylin and eosin (H&E) stained whole slide images (WSIs) for diverse tasks. In such analyses, WSIs are typically broken into smaller images called tiles, and a neural network backbone encodes each tile in a feature space. Many recent works have applied attention based deep learning models to aggregate tile-level features into a slide-level representation, which is then used for slide-level prediction tasks. However, training attention models is computationally intensive, necessitating hyperparameter optimization and specialized training procedures. Here, we propose SAMPLER, a fully statistical approach to generate efficient and informative WSI representations by encoding the empirical cumulative distribution functions (CDFs) of multiscale tile features. We demonstrate that SAMPLER-based classifiers are as accurate or better than state-of-the-art fully deep learning attention models for classification tasks including distinction of: subtypes of breast carcinoma (BRCA: AUC=0.911 ± 0.029); subtypes of non-small cell lung carcinoma (NSCLC: AUC=0.940±0.018); and subtypes of renal cell carcinoma (RCC: AUC=0.987±0.006). A major advantage of the SAMPLER representation is that predictive models are >100X faster compared to attention models. Histopathological review confirms that SAMPLER-identified high attention tiles contain tumor morphological features specific to the tumor type, while low attention tiles contain fibrous stroma, blood, or tissue folding artifacts. We further apply SAMPLER concepts to improve the design of attention-based neural networks, yielding a context aware multi-head attention model with increased accuracy for subtype classification within BRCA and RCC (BRCA: AUC=0.921±0.027, and RCC: AUC=0.988±0.010). Finally, we provide theoretical results identifying sufficient conditions for which SAMPLER is optimal. SAMPLER is a fast and effective approach for analyzing WSIs, with greatly improved scalability over attention methods to benefit digital pathology analysis.
Affiliation(s)
- Patience Mukashyaka
  - The Jackson Laboratory for Genomic Medicine, Farmington, CT
  - University of Connecticut Health Center, Department of Genetics and Genome Sciences, Farmington, CT
- Todd B Sheridan
  - The Jackson Laboratory for Genomic Medicine, Farmington, CT
  - Department of Pathology, Hartford Hospital, Hartford, CT
- Jeffrey H Chuang
  - The Jackson Laboratory for Genomic Medicine, Farmington, CT
  - University of Connecticut Health Center, Department of Genetics and Genome Sciences, Farmington, CT
16. Ram S, Tang W, Bell AJ, Pal R, Spencer C, Buschhaus A, Hatt CR, diMagliano MP, Rehemtulla A, Rodríguez JJ, Galban S, Galban CJ. Lung cancer lesion detection in histopathology images using graph-based sparse PCA network. Neoplasia 2023; 42:100911. PMID: 37269818; DOI: 10.1016/j.neo.2023.100911.
Abstract
Early detection of lung cancer is critical for improvement of patient survival. To address the clinical need for efficacious treatments, genetically engineered mouse models (GEMM) have become integral in identifying and evaluating the molecular underpinnings of this complex disease that may be exploited as therapeutic targets. Assessment of GEMM tumor burden on histopathological sections performed by manual inspection is both time consuming and prone to subjective bias. Therefore, an interplay of needs and challenges exists for computer-aided diagnostic tools, for accurate and efficient analysis of these histopathology images. In this paper, we propose a simple machine learning approach called the graph-based sparse principal component analysis (GS-PCA) network, for automated detection of cancerous lesions on histological lung slides stained by hematoxylin and eosin (H&E). Our method comprises four steps: 1) cascaded graph-based sparse PCA, 2) PCA binary hashing, 3) block-wise histograms, and 4) support vector machine (SVM) classification. In our proposed architecture, graph-based sparse PCA is employed to learn the filter banks of the multiple stages of a convolutional network. This is followed by PCA hashing and block histograms for indexing and pooling. The meaningful features extracted from this GS-PCA are then fed to an SVM classifier. We evaluate the performance of the proposed algorithm on H&E slides obtained from an inducible K-rasG12D lung cancer mouse model using precision/recall rates, Fβ-score, Tanimoto coefficient, and area under the curve (AUC) of the receiver operator characteristic (ROC) and show that our algorithm is efficient and provides improved detection accuracy compared to existing algorithms.
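The four-stage pipeline above follows the PCANet family of designs. The sketch below walks through a stripped-down version on synthetic images: PCA filters learned from image patches, convolution and binary hashing, an image-level histogram (the paper uses block-wise histograms and graph-based sparse PCA, which are omitted here), and an SVM classifier. Filter size, image size, and the toy "lesion" signal are all assumptions for illustration.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
k, n_filters = 7, 4                                     # filter size and number of PCA filters

def pca_filters(images, k, n_filters):
    # Sample k x k patches, remove their mean, and take the top principal
    # components as convolution filters (the "PCA filter bank" stage).
    patches = []
    for img in images:
        for _ in range(200):
            r = rng.integers(0, img.shape[0] - k)
            c = rng.integers(0, img.shape[1] - k)
            p = img[r:r + k, c:c + k].ravel()
            patches.append(p - p.mean())
    comps = PCA(n_components=n_filters).fit(np.array(patches)).components_
    return comps.reshape(n_filters, k, k)

def hash_histogram(img, filters):
    # Convolve with each filter, binarize the responses, pack the bits into an
    # integer code per pixel, and summarize the image as a histogram of codes.
    code = np.zeros(img.shape, dtype=int)
    for i, f in enumerate(filters):
        resp = convolve2d(img, f, mode="same")
        code += (resp > 0).astype(int) << i
    hist, _ = np.histogram(code, bins=2 ** len(filters), range=(0, 2 ** len(filters)))
    return hist / hist.sum()

# Synthetic data: "lesion" images contain a brighter blob than "normal" ones.
images, labels = [], []
for i in range(60):
    img = rng.normal(size=(64, 64))
    if i % 2 == 0:
        img[20:40, 20:40] += 2.0
    images.append(img)
    labels.append(i % 2)

filters = pca_filters(images[:30], k, n_filters)
X = np.array([hash_histogram(img, filters) for img in images])
clf = LinearSVC().fit(X[:40], labels[:40])              # final SVM classification stage
print("held-out accuracy:", clf.score(X[40:], labels[40:]))
```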
Affiliation(s)
- Sundaresh Ram
  - Departments of Radiology, and Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Wenfei Tang
  - Department of Computer Science and Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Alexander J Bell
  - Departments of Radiology, and Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Ravi Pal
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Cara Spencer
  - Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI 48109, USA
- Charles R Hatt
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
  - Imbio LLC, Minneapolis, MN 55405, USA
- Marina Pasca diMagliano
  - Departments of Surgery, and Cell and Developmental Biology, University of Michigan, Ann Arbor, MI 48109, USA
- Alnawaz Rehemtulla
  - Departments of Radiology, and Radiation Oncology, University of Michigan, Ann Arbor, MI 48109, USA
- Jeffrey J Rodríguez
  - Departments of Electrical and Computer Engineering, and Biomedical Engineering, The University of Arizona, Tucson, AZ 85721, USA
- Stefanie Galban
  - Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Craig J Galban
  - Departments of Radiology, and Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
17. Hu W, Li X, Li C, Li R, Jiang T, Sun H, Huang X, Grzegorzek M, Li X. A state-of-the-art survey of artificial neural networks for Whole-slide Image analysis: From popular Convolutional Neural Networks to potential visual transformers. Comput Biol Med 2023; 161:107034. PMID: 37230019; DOI: 10.1016/j.compbiomed.2023.107034.
Abstract
In recent years, with the advancement of computer-aided diagnosis (CAD) technology and whole-slide imaging (WSI), histopathological WSIs have gradually come to play a crucial role in the diagnosis and analysis of diseases. To increase the objectivity and accuracy of pathologists' work, artificial neural network (ANN) methods have become widely used for the segmentation, classification, and detection of histopathological WSIs. However, existing review papers focus only on equipment hardware, development status, and trends, and do not summarize in detail the state-of-the-art neural networks used for whole-slide image analysis. In this paper, WSI analysis methods based on ANNs are reviewed. Firstly, the development status of WSI and ANN methods is introduced. Secondly, we summarize the common ANN methods. Next, we discuss publicly available WSI datasets and evaluation metrics. The ANN architectures for WSI processing are divided into classical neural networks and deep neural networks (DNNs) and then analyzed. Finally, the application prospects of these analytical methods in the field are discussed, with visual transformers standing out as a particularly promising direction.
Collapse
Affiliation(s)
- Weiming Hu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Xintong Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
| | - Rui Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
| | - Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
| | - Xinyu Huang
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany
| | - Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
| | - Xiaoyan Li
- Cancer Hospital of China Medical University, Shenyang, China.
| |
Collapse
|
18
|
Winkelmaier G, Koch B, Bogardus S, Borowsky AD, Parvin B. Biomarkers of Tumor Heterogeneity in Glioblastoma Multiforme Cohort of TCGA. Cancers (Basel) 2023; 15:cancers15082387. [PMID: 37190318 DOI: 10.3390/cancers15082387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2023] [Revised: 04/06/2023] [Accepted: 04/14/2023] [Indexed: 05/17/2023] Open
Abstract
Tumor Whole Slide Images (WSI) are often heterogeneous, which hinders the discovery of biomarkers in the presence of confounding clinical factors. In this study, we present a pipeline for identifying biomarkers from the Glioblastoma Multiforme (GBM) cohort of WSIs from the TCGA archive. The GBM cohort contains many technical artifacts, and the discovery of GBM biomarkers is further challenged because "age" is the single most confounding factor for predicting outcomes. The proposed approach relies on interpretable features (e.g., nuclear morphometric indices), effective similarity metrics for heterogeneity analysis, and robust statistics for identifying biomarkers. The pipeline first removes artifacts (e.g., pen marks) and partitions each WSI into patches for nuclear segmentation via an extended U-Net for subsequent quantitative representation. Given the variations in fixation and staining that can artificially modulate hematoxylin optical density (HOD), we extended Navab's Lab method to normalize images and reduce the impact of batch effects. The heterogeneity of each WSI is then represented either as probability density functions (PDF) per patient or as the composition of a dictionary predicted from the entire cohort of WSIs. For PDF- or dictionary-based methods, morphometric subtypes are constructed based on distances computed from optimal transport and linkage analysis or consensus clustering with Euclidean distances, respectively. For each inferred subtype, Kaplan-Meier analysis and/or the Cox regression model are used to regress the survival time. Since age is the single most important confounder for predicting survival in GBM and there is an observed violation of the proportionality assumption in the Cox model, we use both age and age-squared coupled with the likelihood ratio test and forest plots for evaluating competing statistics. Next, the PDF- and dictionary-based methods are combined to identify biomarkers that are predictive of survival. The combined model has the advantage of integrating global (e.g., cohort scale) and local (e.g., patient scale) attributes of morphometric heterogeneity, coupled with robust statistics, to reveal stable biomarkers. The results indicate that, after normalization of the GBM cohort, mean HOD, eccentricity, and cellularity are predictive of survival. Finally, we also stratified the GBM cohort as a function of EGFR expression and published genomic subtypes to reveal genomic-dependent morphometric biomarkers.
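A minimal sketch of the PDF-based subtyping step follows, assuming each patient is summarized by a one-dimensional distribution of a single nuclear morphometric index; the full pipeline uses several indices, a dictionary-based variant, and survival modelling with age and age-squared, none of which is reproduced here.

```python
# Sketch: optimal-transport distances between per-patient PDFs plus linkage
# analysis to form morphometric subtypes; all parameters are illustrative.
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def morphometric_subtypes(per_patient_values, n_subtypes=3):
    """per_patient_values: list of 1-D arrays of nuclear measurements,
    one array per patient. Returns a subtype label per patient."""
    n = len(per_patient_values)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # earth mover's (optimal transport) distance between patient PDFs
            D[i, j] = D[j, i] = wasserstein_distance(per_patient_values[i],
                                                     per_patient_values[j])
    Z = linkage(squareform(D), method="average")        # linkage analysis
    return fcluster(Z, t=n_subtypes, criterion="maxclust")

# Each inferred subtype would then be assessed with Kaplan-Meier curves or a
# Cox model including both age and age**2 as covariates, as described above.
```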
Collapse
Affiliation(s)
- Garrett Winkelmaier
- Department of Electrical and Biomedical Engineering, College of Engineering, University of Nevada Reno, 1664 N. Virginia St., Reno, NV 89509, USA
| | - Brandon Koch
- Department of Biostatistics, College of Public Health, Ohio State University, 281 W. Lane Ave., Columbus, OH 43210, USA
| | - Skylar Bogardus
- Department of Electrical and Biomedical Engineering, College of Engineering, University of Nevada Reno, 1664 N. Virginia St., Reno, NV 89509, USA
| | - Alexander D Borowsky
- Department of Pathology, UC Davis Comprehensive Cancer Center, University of California Davis, 1 Shields Ave, Davis, CA 95616, USA
| | - Bahram Parvin
- Department of Electrical and Biomedical Engineering, College of Engineering, University of Nevada Reno, 1664 N. Virginia St., Reno, NV 89509, USA
- Pennington Cancer Institute, Renown Health, Reno, NV 89502, USA
| |
Collapse
|
19
|
Wang R, Gu Y, Zhang T, Yang J. Fast cancer metastasis location based on dual magnification hard example mining network in whole-slide images. Comput Biol Med 2023; 158:106880. [PMID: 37044050 DOI: 10.1016/j.compbiomed.2023.106880] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 02/28/2023] [Accepted: 03/30/2023] [Indexed: 04/03/2023]
Abstract
Breast cancer has become the most common form of cancer among women. In recent years, deep learning has shown great potential in aiding the diagnosis of pathological images, particularly through the use of convolutional neural networks for locating lymph node metastases in gigapixel whole slide images (WSIs). However, the massive size of these images at the highest magnification introduces redundant computation during the inference process. Additionally, the diversity of biological textures and structures within WSIs can cause confusion for classifiers, particularly in identifying hard examples. As a result, the trade-off between accuracy and efficiency remains a critical issue for whole-slide image metastasis localization. In this paper, we propose a novel two-stream network that takes a pair of low- and high-magnification image patches as input for identifying hard examples during the training phase. Specifically, our framework focuses on samples where the outputs of the two magnification networks are dissimilar. We adopt a dual magnification hard mining loss to re-weight the ambiguous samples. To more efficiently locate tumor metastasis cells in whole slide images, the two-stream network is decomposed into a cascaded network during the inference phase. The low-magnification WSIs are first scanned by the low-magnification network to generate a coarse probability map, and the suspicious areas in the map are then refined by the high-magnification network. Finally, we evaluate our fast-localization dual-magnification hard-example-mining network on the Camelyon16 breast cancer whole-slide image dataset. Experiments demonstrate that our proposed method achieves a 0.871 FROC score with a faster inference time, and our high-magnification network also achieves a 0.88 FROC score.
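A hedged sketch of the kind of dual-magnification hard-mining loss described above is shown below: samples on which the low- and high-magnification streams disagree are up-weighted. The weighting function and the hyperparameter `alpha` are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of a dual-magnification hard-example-mining loss; samples where the
# two magnification streams disagree receive larger weights.
import torch
import torch.nn.functional as F

def dual_mag_hard_mining_loss(p_low, p_high, target, alpha=2.0):
    """p_low, p_high: (N,) tumor probabilities from the low- and
    high-magnification streams; target: (N,) float binary labels."""
    disagreement = (p_low - p_high).abs().detach()   # ambiguity measure
    weights = 1.0 + alpha * disagreement             # up-weight hard samples
    bce_low = F.binary_cross_entropy(p_low, target, reduction="none")
    bce_high = F.binary_cross_entropy(p_high, target, reduction="none")
    return (weights * (bce_low + bce_high)).mean()

# Usage: loss = dual_mag_hard_mining_loss(low_net(x_low), high_net(x_high), y)
```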
Collapse
Affiliation(s)
- Rui Wang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China.
| | - Yun Gu
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China.
| | - Tianyi Zhang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China
| | - Jie Yang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China.
| |
Collapse
|
20
|
Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. [PMID: 36455333 DOI: 10.1016/j.media.2022.102691] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Revised: 10/22/2022] [Accepted: 11/09/2022] [Indexed: 11/16/2022]
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing concerned with analyzing digitized cytology images by computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, leading to a surge of publications on cytological studies. In this article, we survey more than 120 publications on DL-based cytology image analysis to investigate the advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nucleus or cell detection and segmentation. Finally, we discuss current challenges and potential research directions of computational cytology.
Collapse
|
21
|
Pan J, Hong G, Zeng H, Liao C, Li H, Yao Y, Gan Q, Wang Y, Wu S, Lin T. An artificial intelligence model for the pathological diagnosis of invasion depth and histologic grade in bladder cancer. J Transl Med 2023; 21:42. [PMID: 36691055 PMCID: PMC9869632 DOI: 10.1186/s12967-023-03888-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2022] [Accepted: 01/12/2023] [Indexed: 01/25/2023] Open
Abstract
BACKGROUND Accurate pathological diagnosis of invasion depth and histologic grade is key for clinical management in patients with bladder cancer (BCa), but it is labour-intensive, experience-dependent and subject to interobserver variability. Here, we aimed to develop a pathological artificial intelligence diagnostic model (PAIDM) for BCa diagnosis. METHODS A total of 854 whole slide images (WSIs) from 692 patients were included and divided into training and validation sets. The PAIDM was developed using the training set based on the deep learning algorithm ScanNet, and the performance was verified at the patch level in validation set 1 and at the WSI level in validation set 2. An independent validation cohort (validation set 3) was employed to compare the PAIDM and pathologists. Model performance was evaluated using the area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value and negative predictive value. RESULTS The AUCs of the PAIDM were 0.878 (95% CI 0.875-0.881) at the patch level in validation set 1 and 0.870 (95% CI 0.805-0.923) at the WSI level in validation set 2. In comparing the PAIDM and pathologists, the PAIDM achieved an AUC of 0.847 (95% CI 0.779-0.905), which was non-inferior to the average diagnostic level of pathologists. There was high consistency between the model-predicted and manually annotated areas, improving the PAIDM's interpretability. CONCLUSIONS We reported an artificial intelligence-based diagnostic model for BCa that performed well in identifying invasion depth and histologic grade. Importantly, the PAIDM performed admirably in patch-level recognition, with a promising application for transurethral resection specimens.
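The PAIDM is evaluated at both the patch and WSI levels; a minimal sketch of one common way to aggregate patch probabilities into a slide-level score and compute a WSI-level AUC is shown below. The top-k averaging rule is an illustrative assumption, not necessarily how the ScanNet-based PAIDM aggregates its predictions.

```python
# Sketch: aggregate patch-level probabilities into a WSI-level score and
# evaluate with AUC; the aggregation rule here is illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score

def slide_score(patch_probs, top_k=10):
    """Average the top-k most suspicious patch probabilities of one WSI."""
    patch_probs = np.sort(np.asarray(patch_probs))[::-1]
    return float(patch_probs[:top_k].mean())

def evaluate(slide_patch_probs, slide_labels):
    """slide_patch_probs: list of per-WSI arrays; slide_labels: 0/1 per WSI."""
    scores = [slide_score(p) for p in slide_patch_probs]
    return roc_auc_score(slide_labels, scores)       # WSI-level AUC
```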
Collapse
Affiliation(s)
- Jiexin Pan
- Department of Urology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107th Yanjiangxi Road, Guangzhou, China
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
| | - Guibin Hong
- Department of Urology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107th Yanjiangxi Road, Guangzhou, China
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
| | - Hong Zeng
- Department of Pathology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
| | - Chengxiao Liao
- Department of Urology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107th Yanjiangxi Road, Guangzhou, China
| | - Huarun Li
- Department of Urology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107th Yanjiangxi Road, Guangzhou, China
| | - Yuhui Yao
- Department of Urology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107th Yanjiangxi Road, Guangzhou, China
| | - Qinghua Gan
- Department of Urology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107th Yanjiangxi Road, Guangzhou, China
| | - Yun Wang
- Department of Urology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107th Yanjiangxi Road, Guangzhou, China
| | - Shaoxu Wu
- Department of Urology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107th Yanjiangxi Road, Guangzhou, China.
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China.
- Guangdong Provincial Clinical Research Center for Urological Diseases, Guangzhou, Guangdong, China.
| | - Tianxin Lin
- Department of Urology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107th Yanjiangxi Road, Guangzhou, China.
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China.
- Guangdong Provincial Clinical Research Center for Urological Diseases, Guangzhou, Guangdong, China.
| |
Collapse
|
22
|
Wang Z, Lu H, Wu Y, Ren S, Diaty DM, Fu Y, Zou Y, Zhang L, Wang Z, Wang F, Li S, Huo X, Yu W, Xu J, Ye Z. Predicting recurrence in osteosarcoma via a quantitative histological image classifier derived from tumour nuclear morphological features. CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY 2023. [DOI: 10.1049/cit2.12175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023] Open
Affiliation(s)
- Zhan Wang
- Department of Orthopedic Surgery The Second Affiliated Hospital Zhejiang University School of Medicine Hangzhou China
- Orthopedics Research Institute of Zhejiang University Hangzhou China
- Key Laboratory of Motor System Disease Research and Precision Therapy of Zhejiang Province Hangzhou China
- Clinical Research Center of Motor System Disease of Zhejiang Province Hangzhou China
| | - Haoda Lu
- Institute for AI in Medicine School of Artificial Intelligence, Nanjing University of Information Science & Technology Nanjing China
| | - Yan Wu
- Department of Orthopedic Surgery The Second Affiliated Hospital Zhejiang University School of Medicine Hangzhou China
- Orthopedics Research Institute of Zhejiang University Hangzhou China
- Key Laboratory of Motor System Disease Research and Precision Therapy of Zhejiang Province Hangzhou China
- Clinical Research Center of Motor System Disease of Zhejiang Province Hangzhou China
| | - Shihong Ren
- Department of Orthopedic Surgery The Second Affiliated Hospital Zhejiang University School of Medicine Hangzhou China
| | - Diarra mohamed Diaty
- Department of Orthopedic Surgery The Second Affiliated Hospital Zhejiang University School of Medicine Hangzhou China
- Orthopedics Research Institute of Zhejiang University Hangzhou China
- Key Laboratory of Motor System Disease Research and Precision Therapy of Zhejiang Province Hangzhou China
- Clinical Research Center of Motor System Disease of Zhejiang Province Hangzhou China
| | - Yanbiao Fu
- Department of Pathology The Second Affiliated Hospital Zhejiang University School of Medicine Hangzhou China
| | - Yi Zou
- Department of Pathology The Second Affiliated Hospital Zhejiang University School of Medicine Hangzhou China
| | - Lingling Zhang
- Department of Orthopedic Surgery The Second Affiliated Hospital Zhejiang University School of Medicine Hangzhou China
- Orthopedics Research Institute of Zhejiang University Hangzhou China
- Key Laboratory of Motor System Disease Research and Precision Therapy of Zhejiang Province Hangzhou China
- Clinical Research Center of Motor System Disease of Zhejiang Province Hangzhou China
| | - Zenan Wang
- Department of Orthopedic Surgery The Second Affiliated Hospital Zhejiang University School of Medicine Hangzhou China
| | - Fangqian Wang
- Department of Orthopedic Surgery The Second Affiliated Hospital Zhejiang University School of Medicine Hangzhou China
| | - Shu Li
- Department of Hematology Shanghai General Hospital Shanghai Jiao Tong University School of Medicine Shanghai China
| | - Xinmi Huo
- Bioinformatics Institute (BII) Agency for Science, Technology and Research (A*STAR) Singapore Singapore
| | - Weimiao Yu
- Bioinformatics Institute (BII) Agency for Science, Technology and Research (A*STAR) Singapore Singapore
| | - Jun Xu
- Institute for AI in Medicine School of Artificial Intelligence, Nanjing University of Information Science & Technology Nanjing China
| | - Zhaoming Ye
- Department of Orthopedic Surgery The Second Affiliated Hospital Zhejiang University School of Medicine Hangzhou China
- Orthopedics Research Institute of Zhejiang University Hangzhou China
- Key Laboratory of Motor System Disease Research and Precision Therapy of Zhejiang Province Hangzhou China
- Clinical Research Center of Motor System Disease of Zhejiang Province Hangzhou China
| |
Collapse
|
23
|
Zhao Y, Zhang J, Hu D, Qu H, Tian Y, Cui X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. MICROMACHINES 2022; 13:2197. [PMID: 36557496 PMCID: PMC9781697 DOI: 10.3390/mi13122197] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 12/04/2022] [Accepted: 12/09/2022] [Indexed: 06/17/2023]
Abstract
With the development of artificial intelligence technology and computer hardware, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study was an attempt to use statistical methods to analyze studies related to the detection, segmentation, and classification of breast cancer in pathological images. After an analysis of 107 articles on the application of deep learning to pathological images of breast cancer, the reviewed studies are divided into three directions based on the types of results they report: detection, segmentation, and classification. We introduced and analyzed models that performed well in these three directions and summarized the related work from recent years. Based on the results obtained, the substantial capability of deep learning for analyzing breast cancer pathological images is evident. Furthermore, in the classification and detection of pathological images of breast cancer, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of the development of breast cancer pathological imaging-related research and provides reliable recommendations for the structure of deep learning network models in different application scenarios.
Collapse
Affiliation(s)
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
| | - Jie Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
| | - Dayu Hu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
| | - Hui Qu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
| | - Ye Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
| | - Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
| |
Collapse
|
24
|
Alzoubi I, Bao G, Zheng Y, Wang X, Graeber MB. Artificial intelligence techniques for neuropathological diagnostics and research. Neuropathology 2022. [PMID: 36443935 DOI: 10.1111/neup.12880] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Revised: 10/17/2022] [Accepted: 10/23/2022] [Indexed: 12/03/2022]
Abstract
Artificial intelligence (AI) research began in theoretical neurophysiology, and the resulting classical paper on the McCulloch-Pitts mathematical neuron was written in a psychiatry department almost 80 years ago. However, the application of AI in digital neuropathology is still in its infancy. Rapid progress is now being made, which prompted this article. Human brain diseases represent distinct system states that fall outside the normal spectrum. Many differ not only in functional but also in structural terms, and the morphology of abnormal nervous tissue forms the traditional basis of neuropathological disease classifications. However, only a few countries have the medical specialty of neuropathology, and, given the sheer number of newly developed histological tools that can be applied to the study of brain diseases, a tremendous shortage of qualified hands and eyes at the microscope is obvious. Similarly, in neuroanatomy, human observers no longer have the capacity to process the vast amounts of connectomics data. Therefore, it is reasonable to assume that advances in AI technology and, especially, whole-slide image (WSI) analysis will greatly aid neuropathological practice. In this paper, we discuss machine learning (ML) techniques that are important for understanding WSI analysis, such as traditional ML and deep learning, introduce a recently developed neuropathological AI termed PathoFusion, and present thoughts on some of the challenges that must be overcome before the full potential of AI in digital neuropathology can be realized.
Collapse
Affiliation(s)
- Islam Alzoubi
- School of Computer Science The University of Sydney Sydney New South Wales Australia
| | - Guoqing Bao
- School of Computer Science The University of Sydney Sydney New South Wales Australia
| | - Yuqi Zheng
- Ken Parker Brain Tumour Research Laboratories Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney Camperdown New South Wales Australia
| | - Xiuying Wang
- School of Computer Science The University of Sydney Sydney New South Wales Australia
| | - Manuel B. Graeber
- Ken Parker Brain Tumour Research Laboratories Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney Camperdown New South Wales Australia
| |
Collapse
|
25
|
Kim I, Kang K, Song Y, Kim TJ. Application of Artificial Intelligence in Pathology: Trends and Challenges. Diagnostics (Basel) 2022; 12:2794. [PMID: 36428854 PMCID: PMC9688959 DOI: 10.3390/diagnostics12112794] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Revised: 11/03/2022] [Accepted: 11/11/2022] [Indexed: 11/16/2022] Open
Abstract
Given the recent success of artificial intelligence (AI) in computer vision applications, many pathologists anticipate that AI will be able to assist them in a variety of digital pathology tasks. Simultaneously, tremendous advancements in deep learning have enabled image-based diagnosis against the backdrop of digital pathology. Efforts are underway to develop AI-based tools that save pathologists time and reduce errors. Here, we describe the elements in the development of computational pathology (CPATH), its applicability to AI development, and the challenges it faces, such as algorithm validation and interpretability, computing systems, reimbursement, ethics, and regulations. Furthermore, we present an overview of novel AI-based approaches that could be integrated into pathology laboratory workflows.
Collapse
Affiliation(s)
- Inho Kim
- College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
| | - Kyungmin Kang
- College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
| | - Youngjae Song
- College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
| | - Tae-Jung Kim
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Republic of Korea
| |
Collapse
|
26
|
Wang Z, Yu L, Ding X, Liao X, Wang L. Lymph Node Metastasis Prediction From Whole Slide Images With Transformer-Guided Multiinstance Learning and Knowledge Transfer. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2777-2787. [PMID: 35486559 DOI: 10.1109/tmi.2022.3171418] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The gold standard for diagnosing lymph node metastasis of papillary thyroid carcinoma is to analyze the whole slide histopathological images (WSIs). Due to the large size of WSIs, recent computer-aided diagnosis approaches adopt the multi-instance learning (MIL) strategy and the key part is how to effectively aggregate the information of different instances (patches). In this paper, a novel transformer-guided framework is proposed to predict lymph node metastasis from WSIs, where we incorporate the transformer mechanism to improve the accuracy from three different aspects. First, we propose an effective transformer-based module for discriminative patch feature extraction, including a lightweight feature extractor with a pruned transformer (Tiny-ViT) and a clustering-based instance selection scheme. Next, we propose a new Transformer-MIL module to capture the relationship of different discriminative patches with sparse distribution on WSIs and better nonlinearly aggregate patch-level features into the slide-level prediction. Considering that the slide-level annotation is relatively limited to training a robust Transformer-MIL, we utilize the pathological relationship between the primary tumor and its lymph node metastasis and develop an effective attention-based mutual knowledge distillation (AMKD) paradigm. Experimental results on our collected WSI dataset demonstrate the efficiency of the proposed Transformer-MIL and attention-based knowledge distillation. Our method outperforms the state-of-the-art methods by over 2.72% in AUC (area under the curve).
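A minimal sketch of a transformer-based MIL aggregator in the spirit described above is given below: patch embeddings are related to one another by a transformer encoder and pooled through a learnable class token into a slide-level prediction. The Tiny-ViT feature extractor, instance selection, and attention-based mutual knowledge distillation are not shown, and all module sizes are illustrative.

```python
# Sketch of a Transformer-MIL aggregator: slide-level prediction from a bag of
# patch embeddings; sizes and pooling choice are illustrative.
import torch
import torch.nn as nn

class TransformerMIL(nn.Module):
    def __init__(self, dim=256, heads=4, layers=2, n_classes=2):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, patch_feats):            # (B, N_patches, dim)
        cls = self.cls_token.expand(patch_feats.size(0), -1, -1)
        x = torch.cat([cls, patch_feats], dim=1)
        x = self.encoder(x)                    # model patch-to-patch relations
        return self.head(x[:, 0])              # slide-level logits from the token

# Usage: logits = TransformerMIL()(torch.randn(1, 500, 256))
```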
Collapse
|
27
|
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675 PMCID: PMC9677480 DOI: 10.1093/bib/bbac367] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 07/27/2022] [Accepted: 08/05/2022] [Indexed: 12/14/2022] Open
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology research allows using computational technologies to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates using the most popular image data, hematoxylin-eosin stained tissue slide images, to find a strategic solution for the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology has in assisting doctors' work and discusses the opportunities and challenges of AI.
Collapse
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China,University of Chinese Academy of Sciences, Beijing 100049, China
| | - Lianhe Zhao
- Corresponding authors: Yi Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail: ; Lianhe Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
| | - Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China,University of Chinese Academy of Sciences, Beijing 100049, China
| | - Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China,University of Chinese Academy of Sciences, Beijing 100049, China
| | - Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
| | - Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
| | - Yi Zhao
- Corresponding authors: Yi Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail: ; Lianhe Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
| |
Collapse
|
28
|
Deep Learning Using Endobronchial-Ultrasound-Guided Transbronchial Needle Aspiration Image to Improve the Overall Diagnostic Yield of Sampling Mediastinal Lymphadenopathy. Diagnostics (Basel) 2022; 12:diagnostics12092234. [PMID: 36140635 PMCID: PMC9497910 DOI: 10.3390/diagnostics12092234] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 08/23/2022] [Accepted: 09/13/2022] [Indexed: 11/17/2022] Open
Abstract
Lung cancer is the biggest cause of cancer-related death worldwide. Accurate nodal staging is critical for determining the treatment strategy for lung cancer patients. Endobronchial-ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) has revolutionized the field of pulmonology and is considered to be extremely sensitive, specific, and safe for lung cancer staging through rapid on-site evaluation (ROSE), but manual visual inspection of entire EBUS smear slides is challenging, time consuming, and, worse, subject to substantial interobserver variability. To satisfy ROSE’s needs, a rapid, automated, and accurate diagnosis system using EBUS-TBNA whole-slide images (WSIs) is highly desired to improve diagnosis accuracy and speed, minimize workload and labor costs, and ensure reproducibility. We present a fast, efficient, and fully automatic deep-convolutional-neural-network-based system for advanced lung cancer staging on gigapixel EBUS-TBNA cytological WSIs. Each WSI was converted into a patch-based hierarchical structure and examined by the proposed deep convolutional neural network, generating the segmentation of metastatic lesions in EBUS-TBNA WSIs. To the best of the authors’ knowledge, this is the first research on fully automated enlarged mediastinal lymph node analysis using EBUS-TBNA cytological WSIs. We evaluated the robustness of the proposed framework on a dataset of 122 WSIs, and the proposed method achieved a high precision of 93.4%, sensitivity of 89.8%, DSC of 82.2%, and IoU of 83.2% for the first experiment (37.7% training and 62.3% testing) and a high precision of 91.8 ± 1.2, sensitivity of 96.3 ± 0.8, DSC of 94.0 ± 1.0, and IoU of 88.7 ± 1.8 for the second experiment using a three-fold cross-validation, respectively. Furthermore, the proposed method significantly outperformed three state-of-the-art baseline models, U-Net, SegNet, and FCN, in terms of precision, sensitivity, DSC, and Jaccard index, based on Fisher’s least significant difference (LSD) test (p<0.001). For a computational time comparison on a WSI, the proposed method was 2.5 times faster than U-Net, 2.3 times faster than SegNet, and 3.4 times faster than FCN, using a single GeForce GTX 1080 Ti. With its high precision and sensitivity, the proposed method demonstrates the potential to reduce the workload of pathologists in their routine clinical practice.
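For reference, the reported metrics (precision, sensitivity, DSC, IoU) can be computed from binary masks as in the short sketch below; values are returned in [0, 1] rather than percent, and the helper name is illustrative.

```python
# Standard overlap metrics between a predicted and a reference binary mask.
import numpy as np

def segmentation_metrics(pred, ref, eps=1e-8):
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    precision = tp / (tp + fp + eps)
    sensitivity = tp / (tp + fn + eps)            # recall
    dsc = 2 * tp / (2 * tp + fp + fn + eps)       # Dice similarity coefficient
    iou = tp / (tp + fp + fn + eps)               # Jaccard index
    return precision, sensitivity, dsc, iou
```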
Collapse
|
29
|
Zhou P, Cao Y, Li M, Ma Y, Chen C, Gan X, Wu J, Lv X, Chen C. HCCANet: histopathological image grading of colorectal cancer using CNN based on multichannel fusion attention mechanism. Sci Rep 2022; 12:15103. [PMID: 36068309 PMCID: PMC9448811 DOI: 10.1038/s41598-022-18879-1] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 08/22/2022] [Indexed: 12/17/2022] Open
Abstract
Histopathological image analysis is the gold standard for pathologists to grade colorectal cancers of different differentiation types. However, the diagnosis by pathologists is highly subjective and prone to misdiagnosis. In this study, we constructed a new attention mechanism named MCCBAM based on channel and spatial attention mechanisms, and developed a computer-aided diagnosis (CAD) method based on CNN and MCCBAM, called HCCANet. In this study, 630 histopathology images processed with Gaussian filtering denoising were included, and gradient-weighted class activation mapping (Grad-CAM) was used to visualize regions of interest in HCCANet to improve its interpretability. The experimental results show that the proposed HCCANet model outperforms four advanced deep learning models (ResNet50, MobileNetV2, Xception, and DenseNet121) and four classical machine learning techniques (KNN, NB, RF, and SVM), achieving 90.2%, 85%, and 86.7% classification accuracy for colorectal cancers with high, medium, and low differentiation levels, respectively, with an overall accuracy of 87.3% and an average AUC value of 0.9. In addition, the MCCBAM constructed in this study outperforms several commonly used attention mechanisms (SAM, SENet, SKNet, Non_Local, CBAM, and BAM) on the same backbone network. In conclusion, the HCCANet model proposed in this study is feasible for postoperative adjuvant diagnosis and grading of colorectal cancer.
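A hedged sketch of a combined channel-and-spatial attention block in the spirit of MCCBAM is shown below; it follows the familiar CBAM pattern, and the exact multichannel fusion design used in HCCANet is not reproduced.

```python
# Sketch of a channel + spatial attention block (CBAM-style); sizes are
# illustrative, not the MCCBAM configuration.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction),
                                 nn.ReLU(),
                                 nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                           # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))          # channel attention (avg pool)
        mx = self.mlp(x.amax(dim=(2, 3)))           # channel attention (max pool)
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))   # spatial attention
```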
Collapse
Affiliation(s)
- Panyun Zhou
- College of Software, Xinjiang University, Urumqi, 830046, China
| | - Yanzhen Cao
- The Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, China
| | - Min Li
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China.,Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, 830046, China
| | - Yuhua Ma
- Department of Oncology, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China.,Karamay Central Hospital of Xinjiang Karamay, Karamay, Xinjiang Uygur Autonomous Region, Department of Pathology, Karamay, 834000, China
| | - Chen Chen
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China.,Xinjiang Cloud Computing Application Laboratory, Karamay, 834099, China
| | - Xiaojing Gan
- The Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, China
| | - Jianying Wu
- College of Physics and Electronic Engineering, Xinjiang Normal University, Urumqi, 830054, China
| | - Xiaoyi Lv
- College of Software, Xinjiang University, Urumqi, 830046, China. .,College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China. .,Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, 830046, China. .,Xinjiang Cloud Computing Application Laboratory, Karamay, 834099, China. .,Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, 830046, China.
| | - Cheng Chen
- College of Software, Xinjiang University, Urumqi, 830046, China.
| |
Collapse
|
30
|
Spatiality Sensitive Learning for Cancer Metastasis Detection in Whole-Slide Images. MATHEMATICS 2022. [DOI: 10.3390/math10152657] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Metastasis detection in lymph nodes via microscopic examination of histopathological images is one of the most crucial diagnostic procedures for breast cancer staging. The manual analysis is extremely labor-intensive and time-consuming because of complexities and diversities of histopathology images. Deep learning has been utilized in automatic cancer metastasis detection in recent years. Due to the huge size of whole-slide images, most existing approaches split each image into smaller patches and simply treat these patches independently, which ignores the spatial correlations among them. To solve this problem, this paper proposes an effective spatially sensitive learning framework for cancer metastasis detection in whole-slide images. Moreover, a novel spatial loss function is designed to ensure the consistency of prediction over neighboring patches. Specifically, through incorporating long short-term memory and spatial loss constraint on top of a convolutional neural network feature extractor, the proposed method can effectively learn both the appearance of each patch and spatial relationships between adjacent image patches. With the standard back-propagation algorithm, the whole framework can be trained in an end-to-end way. Finally, the regions with high tumor probability in the resulting probability map are the metastasis locations. Extensive experiments on the benchmark Camelyon 2016 Grand Challenge dataset show the effectiveness of the proposed approach with respect to state-of-the-art competitors. The obtained precision, recall, and balanced accuracy are 0.9565, 0.9167, and 0.9458, respectively. It is also demonstrated that the proposed approach can provide more accurate detection results and is helpful for early diagnosis of cancer metastasis.
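One plausible form of the spatial consistency term described above is sketched below: predictions of neighboring patches on a grid are penalized for differing. This is an illustrative assumption, not necessarily the authors' exact loss.

```python
# Sketch of a spatial consistency loss over a grid of patch predictions.
import torch

def spatial_consistency_loss(prob_grid):
    """prob_grid: (B, H, W) tumor probabilities for a grid of adjacent patches."""
    dh = (prob_grid[:, 1:, :] - prob_grid[:, :-1, :]).pow(2).mean()   # vertical
    dw = (prob_grid[:, :, 1:] - prob_grid[:, :, :-1]).pow(2).mean()   # horizontal
    return dh + dw

# The total objective would combine the usual patch-wise cross-entropy with
# this term, e.g. loss = ce_loss + lambda_sp * spatial_consistency_loss(probs)
```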
Collapse
|
31
|
Abstract
Metastasis detection in lymph nodes via microscopic examination of H&E-stained histopathological images is one of the most crucial diagnostic procedures for breast cancer staging. Manual analysis is extremely labor-intensive and time-consuming because of the complexity and diversity of histopathological images. Deep learning has been utilized in automatic cancer metastasis detection in recent years. The success of supervised deep learning is credited to large labeled datasets, which are hard to obtain in medical image analysis. Contrastive learning, a branch of self-supervised learning, can help in this respect by introducing an advanced strategy to learn discriminative feature representations from unlabeled images. In this paper, we propose to improve breast cancer metastasis detection through self-supervised contrastive learning, which is used as an auxiliary task in the detection pipeline, allowing a feature extractor to learn more valuable representations, even when fewer annotated images are available. Furthermore, we extend the proposed approach to exploit unlabeled images in a semi-supervised manner, as self-supervision does not need labeled data at all. Extensive experiments on the benchmark Camelyon2016 Grand Challenge dataset demonstrate that self-supervision can improve cancer metastasis detection performance, leading to state-of-the-art results.
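A hedged sketch of a SimCLR-style NT-Xent contrastive objective, which is one standard way to realize the self-supervised auxiliary task described above, is given below; the authors' exact augmentations and loss variant may differ.

```python
# Sketch of the NT-Xent contrastive loss: two augmented views of each
# unlabeled patch are pulled together and pushed away from other patches.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) projections of two augmentations of the same N patches."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, D)
    sim = z @ z.t() / temperature                              # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                      # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```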
Collapse
|
32
|
Chen X, Wang X, Zhang K, Fung KM, Thai TC, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal 2022; 79:102444. [PMID: 35472844 PMCID: PMC9156578 DOI: 10.1016/j.media.2022.102444] [Citation(s) in RCA: 275] [Impact Index Per Article: 91.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 03/09/2022] [Accepted: 04/01/2022] [Indexed: 02/07/2023]
Abstract
Deep learning has received extensive research interest in developing new medical image processing algorithms, and deep learning-based models have been remarkably successful in a variety of medical imaging tasks to support disease detection and diagnosis. Despite this success, the further improvement of deep learning models in medical image analysis is largely bottlenecked by the lack of large, well-annotated datasets. In the past five years, many studies have focused on addressing this challenge. In this paper, we reviewed and summarized these recent studies to provide a comprehensive overview of applying deep learning methods in various medical image analysis tasks. In particular, we emphasize the latest progress and contributions of state-of-the-art unsupervised and semi-supervised deep learning in medical image analysis, which are summarized based on different application scenarios, including classification, segmentation, detection, and image registration. We also discuss major technical challenges and suggest possible solutions for future research efforts.
Collapse
Affiliation(s)
- Xuxin Chen
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Ximin Wang
- School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
| | - Ke Zhang
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Kar-Ming Fung
- Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
| | - Theresa C Thai
- Department of Radiology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
| | - Kathleen Moore
- Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
| | - Robert S Mannel
- Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
| | - Hong Liu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA.
| |
Collapse
|
33
|
Shen Y, Ke J. Sampling Based Tumor Recognition in Whole-Slide Histology Image With Deep Learning Approaches. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2022; 19:2431-2441. [PMID: 33630739 DOI: 10.1109/tcbb.2021.3062230] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Histopathological identification of tumor tissue is one of the routine pathological diagnoses for pathologists. Recently, computational pathology has been successfully addressed by a variety of deep learning-based applications. Nevertheless, highly efficient and spatially correlated processing of individual patches has always attracted attention in whole-slide image (WSI) analysis. In this paper, we propose a high-throughput system to detect tumor regions in colorectal cancer histology slides precisely. We train a deep convolutional neural network (CNN) model and design a Monte Carlo (MC) adaptive sampling method to estimate the most representative patches in a WSI. Two conditional random field (CRF) models, namely the correction CRF and the prediction CRF, are designed and integrated to model the spatial dependencies of patches. We use three datasets of colorectal cancer from The Cancer Genome Atlas (TCGA) to evaluate the performance of the system. The overall diagnostic time can be reduced by 56.7 to 71.7 percent on slides with varying tumor distributions, with an increase in classification accuracy.
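A rough sketch of the Monte Carlo adaptive sampling idea is shown below: patches are drawn at random with probabilities that shift toward regions the CNN currently finds uncertain. The update rule and all parameters are illustrative assumptions, and the two CRF refinement models are omitted.

```python
# Sketch: Monte Carlo adaptive patch sampling driven by current uncertainty.
import numpy as np

def mc_adaptive_sampling(patches, predict_prob, n_rounds=5, n_per_round=64):
    """patches: list of patch images; predict_prob: callable returning the
    tumor probability for one patch. Returns estimated probabilities."""
    n = len(patches)
    est = np.full(n, 0.5)                    # initial tumor-probability estimate
    weights = np.ones(n) / n                 # sampling distribution over patches
    for _ in range(n_rounds):
        idx = np.random.choice(n, size=min(n_per_round, n),
                               replace=False, p=weights)
        for i in idx:
            est[i] = predict_prob(patches[i])
        uncertainty = 1.0 - np.abs(est - 0.5) * 2     # highest near est == 0.5
        weights = uncertainty + 1e-3
        weights /= weights.sum()
    return est
```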
Collapse
|
34
|
Li Y, Dan T, Li H, Chen J, Peng H, Liu L, Cai H. NPCNet: Jointly Segment Primary Nasopharyngeal Carcinoma Tumors and Metastatic Lymph Nodes in MR Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1639-1650. [PMID: 35041597 DOI: 10.1109/tmi.2022.3144274] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Nasopharyngeal carcinoma (NPC) is a malignant tumor whose survivability is greatly improved if early diagnosis and timely treatment are provided. Accurate segmentation of both the primary NPC tumors and metastatic lymph nodes (MLNs) is crucial for patient staging and radiotherapy scheduling. However, existing studies mainly focus on the segmentation of primary tumors, eliding the recognition of MLNs, and thus fail to comprehensively provide a landscape for tumor identification. There are three main challenges in segmenting primary NPC tumors and MLNs: variable location, variable size, and irregular boundary. To address these challenges, we propose an automatic segmentation network, named by NPCNet, to achieve segmentation of primary NPC tumors and MLNs simultaneously. Specifically, we design three modules, including position enhancement module (PEM), scale enhancement module (SEM), and boundary enhancement module (BEM), to address the above challenges. First, the PEM enhances the feature representations of the most suspicious regions. Subsequently, the SEM captures multiscale context information and target context information. Finally, the BEM rectifies the unreliable predictions in the segmentation mask. To that end, extensive experiments are conducted on our dataset of 9124 samples collected from 754 patients. Empirical results demonstrate that each module realizes its designed functionalities and is complementary to the others. By incorporating the three proposed modules together, our model achieves state-of-the-art performance compared with nine popular models.
Collapse
|
35
|
A Novel Method Based on GAN Using a Segmentation Module for Oligodendroglioma Pathological Image Generation. SENSORS 2022; 22:s22103960. [PMID: 35632368 PMCID: PMC9144585 DOI: 10.3390/s22103960] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Revised: 04/22/2022] [Accepted: 05/20/2022] [Indexed: 02/05/2023]
Abstract
Digital pathology analysis using deep learning has been the subject of several studies. As with other medical data, pathological data are not easily obtained. Because deep learning-based image analysis requires large amounts of data, augmentation techniques are used to increase the size of pathological datasets. This study proposes a novel method for synthesizing brain tumor pathology data using a generative model. For image synthesis, we used embedding features extracted from a segmentation module in a general generative model. We also introduce a simple solution for training a segmentation model in a setting where mask labels for the training dataset are not supplied. In our experiments, the proposed method did not yield large gains in quantitative metrics, but it showed improved results in a confusion test with more than 70 subjects and in the quality of the visual output.
Collapse
|
36
|
Fast Segmentation of Metastatic Foci in H&E Whole-Slide Images for Breast Cancer Diagnosis. Diagnostics (Basel) 2022; 12:diagnostics12040990. [PMID: 35454038 PMCID: PMC9030573 DOI: 10.3390/diagnostics12040990] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Revised: 04/13/2022] [Accepted: 04/13/2022] [Indexed: 12/12/2022] Open
Abstract
Breast cancer is the leading cause of death for women globally. In clinical practice, pathologists visually scan over enormous amounts of gigapixel microscopic tissue slide images, which is a tedious and challenging task. In breast cancer diagnosis, micro-metastases and especially isolated tumor cells are extremely difficult to detect and are easily neglected because tiny metastatic foci might be missed in visual examinations by medical doctors. However, the literature poorly explores the detection of isolated tumor cells, which could be recognized as a viable marker to determine the prognosis for T1N0M0 breast cancer patients. To address these issues, we present a deep learning-based framework for efficient and robust lymph node metastasis segmentation in routinely used histopathological hematoxylin and eosin (H&E)-stained whole-slide images (WSI) in minutes, and a quantitative evaluation is conducted using 188 WSIs, containing 94 pairs of H&E-stained WSIs and immunohistochemical CK(AE1/AE3)-stained WSIs, which are used to produce a reliable and objective reference standard. The quantitative results demonstrate that the proposed method achieves 89.6% precision, 83.8% recall, 84.4% F1-score, and 74.9% mIoU, and that it performs significantly better than eight deep learning approaches, including two recently published models (v3_DCNN and Xception-65), three variants of Deeplabv3+ with three different backbones, and U-Net, SegNet, and FCN, in precision, recall, F1-score, and mIoU (p<0.001). Importantly, the proposed system is shown to be capable of identifying tiny metastatic foci in challenging cases, for which there are high probabilities of misdiagnosis in visual inspection, while the baseline approaches tend to fail in detecting tiny metastatic foci. For computational time comparison, the proposed method takes 2.4 min for processing a WSI utilizing four NVIDIA Geforce GTX 1080Ti GPU cards and 9.6 min using a single NVIDIA Geforce GTX 1080Ti GPU card, and is notably faster than the baseline methods (4-times faster than U-Net and SegNet, 5-times faster than FCN, 2-times faster than the 3 different variants of Deeplabv3+, 1.4-times faster than v3_DCNN, and 41-times faster than Xception-65).
Collapse
|
37
|
Thiagarajan P, Khairnar P, Ghosh S. Explanation and Use of Uncertainty Quantified by Bayesian Neural Network Classifiers for Breast Histopathology Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:815-825. [PMID: 34699354 DOI: 10.1109/tmi.2021.3123300] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Despite the promise of convolutional neural network (CNN)-based classification models for histopathological images, it is infeasible to quantify their uncertainties. Moreover, CNNs may suffer from overfitting when the data is biased. We show that a Bayesian-CNN can overcome these limitations by regularizing automatically and by quantifying the uncertainty. We have developed a novel technique to utilize the uncertainties provided by the Bayesian-CNN that significantly improves the performance on a large fraction of the test data (about 6% improvement in accuracy on 77% of test data). Further, we provide a novel explanation for the uncertainty by projecting the data into a low dimensional space through a nonlinear dimensionality reduction technique. This dimensionality reduction enables interpretation of the test data through visualization and reveals the structure of the data in a low dimensional feature space. We show that the Bayesian-CNN can perform much better than the state-of-the-art transfer learning CNN (TL-CNN) by reducing the false negatives and false positives by 11% and 7.7%, respectively, for the present data set. It achieves this performance with only 1.86 million parameters as compared to 134.33 million for TL-CNN. In addition, we modify the Bayesian-CNN by introducing a stochastic adaptive activation function. The modified Bayesian-CNN performs slightly better than the Bayesian-CNN on all performance metrics and significantly reduces the number of false negatives and false positives (3% reduction for both). We also show that these results are statistically significant by performing McNemar's statistical significance test. This work shows the advantages of the Bayesian-CNN over the state of the art and explains and utilizes the uncertainties for histopathological images. It should find applications in various medical image classification tasks.
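The sketch below illustrates how predictive uncertainty can be obtained by averaging stochastic forward passes; Monte Carlo dropout is used here only as a readily available stand-in for sampling from an approximate posterior over weights, whereas the paper trains a genuinely Bayesian CNN.

```python
# Sketch: predictive mean and entropy from repeated stochastic forward passes
# (requires a model containing dropout layers).
import torch

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=30):
    """x: (B, C, H, W) batch. Returns mean class probabilities and a simple
    uncertainty score (predictive entropy) per image."""
    model.train()                    # keep dropout active at inference time
    probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)                             # (B, n_classes)
    entropy = -(mean * torch.log(mean + 1e-12)).sum(dim=1)
    return mean, entropy

# Images with high entropy can be flagged for pathologist review, which is how
# quantified uncertainty can improve performance on the remaining cases.
```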
Collapse
|
38
|
Automated vs. human evaluation of corneal staining. Graefes Arch Clin Exp Ophthalmol 2022; 260:2605-2612. [PMID: 35357547 PMCID: PMC9325848 DOI: 10.1007/s00417-022-05574-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2021] [Revised: 12/26/2021] [Accepted: 01/21/2022] [Indexed: 11/04/2022] Open
Abstract
BACKGROUND AND PURPOSE Corneal fluorescein staining is one of the most important diagnostic tests in dry eye disease (DED). Nevertheless, the result of this examination depends on the grader. So far, no method for automated quantification of corneal staining is commercially available. The aim of this study was to develop a software-assisted grading algorithm and to compare it with a group of human graders with variable clinical experience in patients with DED. METHODS Fifty images of eyes stained with 2 µl of 2% fluorescein, presenting different severities of superficial punctate keratopathy in patients with DED, were taken under standardized conditions. An algorithm for detecting and counting superficial punctate keratitis was developed using ImageJ with a training dataset of 20 randomly picked images. The test dataset of 30 images was then analyzed (1) by the ImageJ algorithm and (2) by 22 graders, all ophthalmologists with different levels of experience. All graders evaluated the images using the Oxford grading scheme for corneal staining at baseline and after 6-8 weeks. Intrarater agreement was also evaluated by adding a mirrored version of all original images into the set of images during the second grading. RESULTS The count of particles detected by the algorithm correlated significantly (n = 30; p < 0.01) with the estimated true Oxford grade (Sr = 0.91). Overall, human graders showed only moderate intrarater agreement (K = 0.426), whereas software-assisted grading was always identical (K = 1.0). Little difference was found between specialists and non-specialists in terms of intrarater agreement (K = 0.436 for specialists; K = 0.417 for non-specialists). The highest interrater agreement (75.6%) was seen in the most experienced grader, a cornea specialist with 29 years of experience, and the lowest (25.6%) in a resident with only 2 years of experience. CONCLUSION The variance in human grading of corneal staining, although small, is likely to have little impact on clinical management and thus seems acceptable. While human graders give results sufficient for clinical application, software-assisted grading of corneal staining ensures higher consistency and is therefore preferable for re-evaluating patients, e.g., in clinical trials.
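A rough analogue of the particle-counting idea (the study used an ImageJ macro, not the code below): threshold the fluorescein channel, label connected components, and filter by spot size. The channel choice, size limits, and synthetic test image are assumptions made purely for illustration.

    # Hedged sketch: counting punctate staining spots with scikit-image.
    import numpy as np
    from skimage import filters, measure, morphology

    def count_punctate_spots(rgb_image: np.ndarray, min_area=3, max_area=200):
        green = rgb_image[..., 1].astype(float)          # fluorescein appears green
        mask = green > filters.threshold_otsu(green)
        mask = morphology.remove_small_objects(mask, min_size=min_area)
        labels = measure.label(mask)
        regions = [r for r in measure.regionprops(labels) if r.area <= max_area]
        return len(regions)

    # synthetic test image with a few bright green dots
    img = np.zeros((200, 200, 3), dtype=np.uint8)
    rng = np.random.default_rng(0)
    for y, x in rng.integers(10, 190, size=(25, 2)):
        img[y - 1:y + 2, x - 1:x + 2, 1] = 255
    print(count_punctate_spots(img))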
|
39
|
A promising deep learning-assistive algorithm for histopathological screening of colorectal cancer. Sci Rep 2022; 12:2222. [PMID: 35140318 PMCID: PMC8828883 DOI: 10.1038/s41598-022-06264-x] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Accepted: 01/24/2022] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer is one of the most common cancers worldwide, accounting for an estimated 1.8 million incident cases annually. With the increasing number of colonoscopies being performed, colorectal biopsies make up a large proportion of any histopathology laboratory's workload. We trained and validated a unique artificial intelligence (AI) deep learning model as an assistive tool to screen for colonic malignancies in colorectal specimens, in order to improve cancer detection and classification and enable busy pathologists to focus on higher-order decision-making tasks. The study cohort consists of whole-slide images (WSIs) obtained from 294 colorectal specimens. Qritive's composite algorithm comprises both a deep learning model, based on a Faster Region-Based Convolutional Neural Network (Faster R-CNN) architecture for instance segmentation with a ResNet-101 feature-extraction backbone providing glandular segmentation, and a classical machine learning classifier. The initial training used pathologists' annotations on a cohort of 66,191 image tiles extracted from 39 WSIs. A subsequent classical machine learning-based slide classifier sorted the WSIs into 'low-risk' (benign, inflammation) and 'high-risk' (dysplasia, malignancy) categories. We further trained the composite AI model on a larger cohort of 105 resection WSIs and then validated our findings on a cohort of 150 biopsy WSIs against the classifications of two independently blinded pathologists. We evaluated the area under the receiver operating characteristic curve (AUC) and other performance metrics. The AI model achieved an AUC of 0.917 in the validation cohort, with excellent sensitivity (97.4%) in detecting high-risk features of dysplasia and malignancy. We demonstrate a unique composite AI model incorporating both a glandular segmentation deep learning model and a classical machine learning classifier, with excellent sensitivity in picking up high-risk colorectal features. As such, AI has a role as a potential screening tool that assists busy pathologists by outlining dysplastic and malignant glands.
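A hedged sketch of the second, classical machine learning stage described above: per-tile scores from a detection model are aggregated into slide-level features, and a simple classifier sorts slides into low-risk versus high-risk. The features, simulated scores, and logistic regression choice are assumptions for illustration, not the composite algorithm itself.

    # Sketch only: slide-level risk sorting from simulated tile-level scores.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)

    def slide_features(tile_scores: np.ndarray) -> np.ndarray:
        """Summarise per-tile dysplasia/malignancy scores for one WSI."""
        return np.array([
            tile_scores.mean(),
            tile_scores.max(),
            np.quantile(tile_scores, 0.9),
            (tile_scores > 0.5).mean(),     # fraction of suspicious tiles
        ])

    X, y = [], []
    for _ in range(200):                    # simulate 200 slides
        label = rng.integers(0, 2)          # 0 = low risk, 1 = high risk
        n_tiles = rng.integers(50, 500)
        scores = rng.beta(2 + 6 * label, 8, size=n_tiles)  # high-risk slides score higher
        X.append(slide_features(scores))
        y.append(label)
    X, y = np.vstack(X), np.array(y)

    clf = LogisticRegression().fit(X[:150], y[:150])
    print("AUC:", roc_auc_score(y[150:], clf.predict_proba(X[150:])[:, 1]))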
|
40
|
A comprehensive review of computer-aided whole-slide image analysis: from datasets to feature extraction, segmentation, classification and detection approaches. Artif Intell Rev 2022. [DOI: 10.1007/s10462-021-10121-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
|
41
|
Di D, Zhang J, Lei F, Tian Q, Gao Y. Big-Hypergraph Factorization Neural Network for Survival Prediction From Whole Slide Image. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:1149-1160. [PMID: 34982683 DOI: 10.1109/tip.2021.3139229] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Survival prediction for patients based on histopathological whole-slide images (WSIs) has attracted increasing attention in recent years. Owing to the massive pixel data in a single WSI, fully exploiting cell-level structural information (e.g., the stromal/tumor microenvironment) from a gigapixel WSI is challenging. Most current studies address the problem by sampling a limited number of image patches to construct a graph-based model (e.g., a hypergraph). However, the sampling scale is a critical bottleneck, since it is a fundamental obstacle to broadening samples for transductive learning. To overcome this limitation when constructing a big hypergraph model, we propose a factorization neural network that embeds the correlation among large-scale vertices and hyperedges into two separate low-dimensional latent semantic spaces, enabling dense sampling. Thanks to the compressed low-dimensional correlation embedding, the hypergraph convolutional layers generate a high-order global representation for each WSI. To minimize the effect of uncertain data and to achieve metric-driven learning, we also propose a multi-level ranking supervision that enables the network to learn from a queue of patients on a global horizon. Extensive experiments are conducted on three public carcinoma datasets (i.e., LUSC, GBM, and NLST), and the quantitative results demonstrate that the proposed method outperforms state-of-the-art methods across the board.
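To make the factorization idea concrete, the toy sketch below approximates a sparse hypergraph incidence matrix (vertices x hyperedges) by the product of low-dimensional vertex and hyperedge embeddings. The paper learns such embeddings end-to-end with a neural network; a truncated SVD is used here only as an illustrative stand-in, and all sizes are made up.

    # Toy sketch: low-rank factorization of a hypergraph incidence matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    n_vertices, n_hyperedges, k = 2000, 500, 32   # k = assumed latent dimension

    H = (rng.random((n_vertices, n_hyperedges)) < 0.01).astype(float)  # sparse incidence

    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    vertex_emb = U[:, :k] * np.sqrt(s[:k])        # (n_vertices, k)
    edge_emb = Vt[:k, :].T * np.sqrt(s[:k])       # (n_hyperedges, k)

    H_approx = vertex_emb @ edge_emb.T
    print("storage ratio:", (vertex_emb.size + edge_emb.size) / H.size)
    print("relative reconstruction error:", np.linalg.norm(H - H_approx) / np.linalg.norm(H))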
|
42
|
Yu G, Chen Z, Wu J, Tan Y. A diagnostic prediction framework on auxiliary medical system for breast cancer in developing countries. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107459] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
43
|
Shaban M, Raza SEA, Hassan M, Jamshed A, Mushtaq S, Loya A, Batis N, Brooks J, Nankivell P, Sharma N, Robinson M, Mehanna H, Khurram SA, Rajpoot N. A digital score of tumour-associated stroma infiltrating lymphocytes predicts survival in head and neck squamous cell carcinoma. J Pathol 2021; 256:174-185. [PMID: 34698394 DOI: 10.1002/path.5819] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Revised: 10/01/2021] [Accepted: 10/23/2021] [Indexed: 12/20/2022]
Abstract
The infiltration of T-lymphocytes in the stroma and tumour is an indication of an effective immune response against the tumour, resulting in better survival. In this study, our aim was to explore the prognostic significance of tumour-associated stroma infiltrating lymphocytes (TASILs) in head and neck squamous cell carcinoma (HNSCC) through an AI-based automated method. A deep learning-based automated method was employed to segment tumour, tumour-associated stroma, and lymphocytes in digitally scanned whole slide images of HNSCC tissue slides. The spatial patterns of lymphocytes and tumour-associated stroma were digitally quantified to compute the tumour-associated stroma infiltrating lymphocytes score (TASIL-score). Finally, the prognostic significance of the TASIL-score for disease-specific and disease-free survival was investigated using Cox proportional hazards analysis. Three different cohorts of haematoxylin and eosin (H&E)-stained tissue slides of HNSCC cases (n = 537 in total) were studied, including publicly available TCGA head and neck cancer cases. The TASIL-score carries prognostic significance (p = 0.002) for disease-specific survival of HNSCC patients. The TASIL-score also shows a better separation between low- and high-risk patients than manual tumour-infiltrating lymphocyte (TIL) scoring by pathologists for both disease-specific and disease-free survival. A positive correlation of the TASIL-score with molecular estimates of CD8+ T cells was also found, in line with existing findings. To the best of our knowledge, this is the first study to automate the quantification of TASILs from routine H&E slides of head and neck cancer. Our TASIL-score-based findings are aligned with clinical knowledge, with the added advantages of objectivity, reproducibility, and strong prognostic value. Although we validated our method on three different cohorts (n = 537 cases in total), a comprehensive evaluation on large multicentric cohorts is required before the proposed digital score can be adopted in clinical practice. © 2021 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
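An illustrative sketch (simulated data, not the study cohorts) of the final analysis step: testing whether a slide-level digital score such as the TASIL-score is associated with survival using a Cox proportional hazards model from the lifelines package. The variable names and the simulated effect direction are assumptions.

    # Sketch: Cox proportional hazards analysis of a digital score on simulated data.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 300
    score = rng.uniform(0, 1, n)                          # stand-in digital score per patient
    # higher score -> longer survival, mimicking a protective infiltrate signal
    times = rng.exponential(scale=24 * (1 + 2 * score))   # follow-up time in months
    events = (rng.random(n) < 0.6).astype(int)            # 1 = disease-specific death observed

    df = pd.DataFrame({"score": score, "time": times, "event": events})
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    cph.print_summary()                                   # hazard ratio and p-value for 'score'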
Affiliation(s)
- Muhammad Shaban
- Department of Computer Science, University of Warwick, Coventry, UK
| | | | - Mariam Hassan
- Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
| | - Arif Jamshed
- Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
| | - Sajid Mushtaq
- Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
| | - Asif Loya
- Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
| | - Nikolaos Batis
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
| | - Jill Brooks
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
| | - Paul Nankivell
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
| | - Neil Sharma
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
| | - Max Robinson
- School of Dental Sciences, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, UK
| | - Hisham Mehanna
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
| | - Syed Ali Khurram
- School of Clinical Dentistry, University of Sheffield, Sheffield, UK
| | - Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, UK.,The Alan Turing Institute, London, UK.,Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Coventry, UK
| |
|
44
|
Sun R, Hou X, Li X, Xie Y, Nie S. Transfer Learning Strategy Based on Unsupervised Learning and Ensemble Learning for Breast Cancer Molecular Subtype Prediction Using Dynamic Contrast-Enhanced MRI. J Magn Reson Imaging 2021; 55:1518-1534. [PMID: 34668601 DOI: 10.1002/jmri.27955] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Revised: 09/27/2021] [Accepted: 10/01/2021] [Indexed: 01/10/2023] Open
Abstract
BACKGROUND Imaging-driven deep learning strategies focus on training from scratch and transfer learning. However, the performance of training from scratch is often impeded by the lack of large-scale labeled training data. Additionally, owing to differences between source and target domains, it is difficult to analyze medical imaging tasks satisfactorily via transfer learning based on ImageNet. PURPOSE To investigate two transfer learning algorithms for breast cancer molecular subtype prediction (luminal vs. non-luminal) based on unsupervised pre-training and ensemble learning, M_EL and B_EL, using malignant and benign datasets as the source domain, respectively. STUDY TYPE Retrospective. POPULATION Eight hundred and thirty-three female patients with histologically confirmed breast lesions (567 benign and 266 malignant cases) were selected. In the 5-fold cross-validation, the malignant cohort was randomly divided into 5 subsets to form a training set (80%) and a validation set (20%). FIELD STRENGTH/SEQUENCE 3.0 T, dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) using a T1-weighted high-resolution isotropic volume examination. ASSESSMENT First, three datasets acquired at different times post-contrast were preprocessed as unlabeled source domains. Second, three baseline networks corresponding to the different post-contrast phases were built and optimized by a combination of mutual information maximization between high- and low-level representations and prior distribution constraints. Next, the pre-trained networks were fine-tuned on the labeled target domain. Finally, prediction results were integrated using weighted voting-based ensemble learning. STATISTICAL TESTS Mean accuracy, precision, specificity, and area under the receiver operating characteristic curve (AUC) were obtained with 5-fold cross-validation. P < 0.05 was considered statistically significant. RESULTS Compared with a convolutional long short-term memory network and VGG-16, VGG-19, and DenseNet-121 pre-trained on ImageNet, M_EL and B_EL exhibited significantly better prediction performance (specificity: 90.5% and 89.9%; accuracy: 82.6% and 81.1%; precision: 91.2% and 90.9%; AUC: 0.836 and 0.823, respectively). DATA CONCLUSION Transfer learning based on unsupervised pre-training may facilitate automatic prediction of breast cancer molecular subtypes. LEVEL OF EVIDENCE 3 TECHNICAL EFFICACY: Stage 2.
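A minimal sketch of the final weighted-voting step described above: three phase-specific classifiers each output a class probability, and the case-level decision is a weighted average. The probabilities and weights below are invented for illustration; the study derives its weights from validation performance.

    # Sketch: weighted voting over phase-specific predictions.
    import numpy as np

    def weighted_vote(phase_probs: np.ndarray, weights: np.ndarray) -> np.ndarray:
        """phase_probs: (n_phases, n_cases) probabilities of 'luminal';
        weights: (n_phases,) non-negative weights summing to 1."""
        return weights @ phase_probs

    probs = np.array([
        [0.72, 0.41, 0.55],    # post-contrast phase-1 model (illustrative)
        [0.65, 0.38, 0.61],    # phase-2 model
        [0.80, 0.47, 0.50],    # phase-3 model
    ])
    weights = np.array([0.3, 0.3, 0.4])
    fused = weighted_vote(probs, weights)
    print(fused, (fused >= 0.5).astype(int))    # 1 = luminal, 0 = non-luminal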
Affiliation(s)
- Rong Sun
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, China
| | - Xuewen Hou
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, China
| | - Xiujuan Li
- Medical Imaging Center, Tai'an Central Hospital, Tai'an, China
| | - Yuanzhong Xie
- Medical Imaging Center, Tai'an Central Hospital, Tai'an, China
| | - Shengdong Nie
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, China
| |
|
45
|
Wang CW, Huang SC, Lee YC, Shen YJ, Meng SI, Gaol JL. Deep learning for bone marrow cell detection and classification on whole-slide images. Med Image Anal 2021; 75:102270. [PMID: 34710655 DOI: 10.1016/j.media.2021.102270] [Citation(s) in RCA: 53] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2020] [Revised: 10/06/2021] [Accepted: 10/13/2021] [Indexed: 12/19/2022]
Abstract
Bone marrow (BM) examination is an essential step in both diagnosing and managing numerous hematologic disorders. BM nucleated differential count (NDC) analysis, as part of BM examination, holds the most fundamental and crucial information. However, there are many challenges in performing automated BM NDC analysis on whole-slide images (WSIs), including the large dimensions of the data to process and complicated cell types with subtle differences. To the authors' best knowledge, this is the first study of fully automatic BM NDC using WSIs with 40x objective magnification, which can replace traditional manual counting relying on light microscopy with an oil-immersion 100x objective lens at a total magnification of 1000x. In this study, we develop an efficient and fully automatic hierarchical deep learning framework for BM NDC WSI analysis in seconds. The proposed hierarchical framework consists of (1) a deep learning model for rapid localization of BM particles and cellular trails, generating regions of interest (ROIs) for further analysis, (2) a patch-based deep learning model for identification of 16 cell types, including megakaryocytes, mitotic cells, and four stages of erythroblasts, which have not been demonstrated in previous studies, and (3) a fast stitching model for integrating patch-based results and producing the final outputs. In evaluation, the proposed method is first tested on a dataset with a total of 12,426 annotated cells using cross-validation, achieving high recall and accuracy of 0.905 ± 0.078 and 0.989 ± 0.006, respectively, and taking only 44 seconds to perform BM NDC analysis for a WSI. To further examine the generalizability of our model, we conduct an evaluation on a second independent dataset with a total of 3005 cells; the results show that the proposed method also obtains high recall and accuracy of 0.842 and 0.988, respectively. In comparison with existing small-image-based benchmark methods, the proposed method demonstrates superior performance in recall, accuracy, and computational time.
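A schematic sketch of the patch-then-stitch flow described above: an ROI is tiled into fixed-size patches, a placeholder classifier assigns cell types within each patch, and the per-type counts are merged into a differential count. The cell-type list, patch size, and dummy classifier are assumptions for illustration only.

    # Sketch: tiling an ROI, classifying patches with a dummy model, and merging counts.
    from collections import Counter
    import numpy as np

    CELL_TYPES = ["blast", "promyelocyte", "myelocyte", "erythroblast", "lymphocyte"]

    def classify_patch(patch: np.ndarray) -> list:
        """Placeholder for the patch-level deep model: returns cell labels found in the patch."""
        rng = np.random.default_rng(int(patch.sum()) % (2**32))
        return list(rng.choice(CELL_TYPES, size=rng.integers(0, 6)))

    def ndc_for_roi(roi: np.ndarray, patch=256):
        counts = Counter()
        h, w = roi.shape[:2]
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                counts.update(classify_patch(roi[y:y + patch, x:x + patch]))
        total = sum(counts.values()) or 1
        return {t: counts[t] / total for t in CELL_TYPES}   # differential as fractions

    roi = np.random.randint(0, 255, size=(1024, 1024, 3), dtype=np.uint8)
    print(ndc_for_roi(roi))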
Affiliation(s)
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, 106, Taiwan; Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei, 106, Taiwan.
| | - Sheng-Chuan Huang
- Department of Laboratory Medicine, National Taiwan University Hospital, Taipei, 100, Taiwan; Department of Hematology and Oncology, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien, Taiwan; Department of Clinical Pathology, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien, Taiwan
| | - Yu-Ching Lee
- Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei, 106, Taiwan
| | - Yu-Jie Shen
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, 106, Taiwan
| | - Shwu-Ing Meng
- Department of Laboratory Medicine, National Taiwan University Hospital, Taipei, 100, Taiwan
| | - Jeff L Gaol
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, 106, Taiwan
| |
|
46
|
Korzynska A, Roszkowiak L, Zak J, Siemion K. A review of current systems for annotation of cell and tissue images in digital pathology. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.04.012] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
47
|
Chen J, Jiao J, He S, Han G, Qin J. Few-Shot Breast Cancer Metastases Classification via Unsupervised Cell Ranking. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:1914-1923. [PMID: 31841420 DOI: 10.1109/tcbb.2019.2960019] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Tumor metastasis detection is of great importance for the treatment of breast cancer patients. Various CNN (convolutional neural network)-based methods achieve excellent performance in object detection/segmentation. However, the detection of metastases in hematoxylin and eosin (H&E)-stained whole-slide images (WSIs) is still challenging, mainly for two reasons: (1) the resolution of the images is very large, and (2) labeled training data are lacking. Whole-slide images are generally stored in a multi-resolution structure with multiple downsampled tiles, and it is difficult to feed a whole image into memory without compression. Moreover, labeling images is time-consuming and expensive for pathologists. In this paper, we study the problem of detecting breast cancer metastases in pathological images at the patch level. To address the above challenges, we propose a few-shot learning method to classify whether an image patch contains tumor cells. Specifically, we propose a patch-level unsupervised cell-ranking approach, which relies only on images with limited labels. The main idea is that when a patch A is cropped from the WSI and a sub-patch B is further cropped from A, the cell count of A is always at least that of B. Based on this observation, we make use of the unlabeled images to learn the ranking information of cell counting and thereby extract abstract features. Experimental results show that our method is effective in improving patch-level classification accuracy compared with the traditional supervised method. The source code is publicly available at https://github.com/fewshot-camelyon.
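A hedged sketch of the ranking idea: for a patch A and a sub-patch B cropped from A, a small network predicts a scalar cell-density score, and a margin ranking loss pushes score(A) to be at least score(B). The backbone, crop sizes, and margin are placeholders, not the authors' exact setup (their code is at the repository above).

    # Sketch: unsupervised ranking supervision with a margin ranking loss in PyTorch.
    import torch
    import torch.nn as nn

    class CountScorer(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
            )

        def forward(self, x):
            return self.net(x).squeeze(1)

    scorer = CountScorer()
    rank_loss = nn.MarginRankingLoss(margin=0.1)
    opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)

    patch_a = torch.rand(8, 3, 128, 128)           # unlabeled patches A
    patch_b = patch_a[:, :, 32:96, 32:96]          # sub-patches B cropped from A
    target = torch.ones(8)                         # "A should score at least as high as B"

    loss = rank_loss(scorer(patch_a), scorer(patch_b), target)
    opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))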
|
48
|
Aatresh AA, Yatgiri RP, Chanchal AK, Kumar A, Ravi A, Das D, Bs R, Lal S, Kini J. Efficient deep learning architecture with dimension-wise pyramid pooling for nuclei segmentation of histopathology images. Comput Med Imaging Graph 2021; 93:101975. [PMID: 34461375 DOI: 10.1016/j.compmedimag.2021.101975] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Revised: 08/05/2021] [Accepted: 08/19/2021] [Indexed: 11/30/2022]
Abstract
Image segmentation remains one of the most vital tasks in computer vision, and even more so in medical image processing. Segmentation quality is often the main metric considered, while memory and computational efficiency are overlooked, which limits the practical use of power-hungry models. In this paper, we propose a novel framework (Kidney-SegNet) that combines the effectiveness of an attention-based encoder-decoder architecture with atrous spatial pyramid pooling and highly efficient dimension-wise convolutions. The proposed Kidney-SegNet architecture outperforms existing state-of-the-art deep learning methods when evaluated on two publicly available kidney and TNBC breast H&E-stained histopathology image datasets. Further, our simulation experiments reveal that the proposed architecture is highly efficient in computational complexity and memory requirements compared with existing state-of-the-art deep learning methods for nuclei segmentation of H&E-stained histopathology images. The source code of our implementation will be available at https://github.com/Aaatresh/Kidney-SegNet.
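The efficiency argument rests on replacing standard convolutions with factorized ones. The sketch below compares a standard 3x3 convolution with a depthwise-separable block, a well-known factorization used here only as a stand-in for the paper's dimension-wise convolutions; channel counts are arbitrary.

    # Sketch: parameter-count comparison of a standard vs. a factorized convolution.
    import torch.nn as nn

    def n_params(m: nn.Module) -> int:
        return sum(p.numel() for p in m.parameters())

    c_in, c_out = 128, 128
    standard = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
    separable = nn.Sequential(
        nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in),  # spatial, per-channel
        nn.Conv2d(c_in, c_out, kernel_size=1),                         # pointwise channel mixing
    )
    print("standard 3x3:", n_params(standard))
    print("depthwise-separable:", n_params(separable))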
Affiliation(s)
- Anirudh Ashok Aatresh
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Rohit Prashant Yatgiri
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Amit Kumar Chanchal
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Aman Kumar
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Akansh Ravi
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Devikalyan Das
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Raghavendra Bs
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Shyam Lal
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India.
| | - Jyoti Kini
- Department of Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, India.
| |
Collapse
|
49
|
Possolo M, Bajcsy P. Exact Tile-Based Segmentation Inference for Images Larger than GPU Memory. JOURNAL OF RESEARCH OF THE NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY 2021; 126:126009. [PMID: 39015626 PMCID: PMC10914126 DOI: 10.6028/jres.126.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 05/12/2021] [Indexed: 07/18/2024]
Abstract
We address the problem of performing exact (tiling-error-free) out-of-core semantic segmentation inference of arbitrarily large images using fully convolutional neural networks (FCNs). FCN models have the property that, once trained, they can be applied to arbitrarily sized images, although inference is still constrained by the available GPU memory. This work is motivated by overcoming the GPU memory size constraint without numerically affecting the final result. Our approach is to select a tile size that will fit into GPU memory together with a halo border of half the network receptive field, and then to stride across the image by that tile size without the halo. The input tile halos overlap, while the output tiles join exactly at the seams. Such an approach enables inference on whole-slide microscopy images, such as those generated by a slide scanner. The novelty of this work lies in documenting the formulas for determining tile size and stride and validating them on U-Net and FC-DenseNet architectures. In addition, we quantify the errors caused by tiling configurations that do not satisfy the constraints, and we explore the use of architecture effective receptive fields to estimate the tiling parameters.
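A hedged sketch of the halo-tiling recipe summarised above: add a halo of half the receptive field to each input tile, run the model on the padded tile, and crop the halo from the output so tiles join exactly. A uniform filter with a known 9x9 receptive field stands in for the FCN, and the tile size and border mode are assumptions.

    # Sketch: exact tiled inference with overlapping input halos and seam-free outputs.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def model(x):
        # stand-in "FCN" with a 9x9 receptive field and zero padding at borders
        return uniform_filter(x, size=9, mode="constant", cval=0.0)

    def tiled_inference(img, tile=64, receptive_field=9):
        halo = receptive_field // 2
        padded = np.pad(img, halo, mode="constant")   # same border handling as the model
        out = np.zeros_like(img, dtype=float)
        H, W = img.shape
        for y in range(0, H, tile):                   # stride by the tile size without the halo
            for x in range(0, W, tile):
                th, tw = min(tile, H - y), min(tile, W - x)
                window = padded[y:y + th + 2 * halo, x:x + tw + 2 * halo]
                out[y:y + th, x:x + tw] = model(window)[halo:halo + th, halo:halo + tw]
        return out

    img = np.random.rand(200, 300)
    assert np.allclose(model(img), tiled_inference(img))   # tiling introduces no seam error
    print("exact tiled inference verified")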
Affiliation(s)
- Michael Possolo
- National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
| | - Peter Bajcsy
- National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
| |
|
50
|
Deep Learning for the Classification of Non-Hodgkin Lymphoma on Histopathological Images. Cancers (Basel) 2021; 13:cancers13102419. [PMID: 34067726 PMCID: PMC8156071 DOI: 10.3390/cancers13102419] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Revised: 05/12/2021] [Accepted: 05/13/2021] [Indexed: 01/16/2023] Open
Abstract
Simple Summary Histopathological examination of lymph node (LN) specimens allows the detection of hematological diseases. The identification and classification of lymphoma, a blood cancer with a manifestation in LNs, are difficult, require many years of training, and often require additional expensive investigations. Today, artificial intelligence (AI) can be used to support the pathologist in identifying abnormalities in LN specimens. In this article, we trained and optimized an AI algorithm to automatically detect two common lymphoma subtypes that require different therapies, using normal LN parenchyma as a control. The balanced accuracy in an independent test cohort was above 95%, which means that the vast majority of cases were classified correctly and only a few cases were misclassified. We applied specific methods to explain which parts of the image were important for the AI algorithm and to ensure a reliable result. Our study shows that classification of lymphoma subtypes is possible with high accuracy. We think that routine histopathological applications for AI should be pursued.
Abstract The diagnosis and subtyping of non-Hodgkin lymphoma (NHL) are challenging and require expert knowledge, great experience, thorough morphological analysis, and often additional expensive immunohistological and molecular methods. As these requirements are not always available, supplemental methods supporting morphology-based decision making and potentially entity subtyping are required. Deep learning methods have been shown to classify histopathological images with high accuracy, but data on NHL subtyping are limited. After annotation of histopathological whole-slide images and image-patch extraction, we trained and optimized an EfficientNet convolutional neural network algorithm on 84,139 image patches from 629 patients and evaluated its potential to classify tumor-free reference lymph nodes, nodal small lymphocytic lymphoma/chronic lymphocytic leukemia, and nodal diffuse large B-cell lymphoma. The optimized algorithm achieved an accuracy of 95.56% on an independent test set including 16,960 image patches from 125 patients after the application of quality controls. Automatic classification of NHL is possible with high accuracy using deep learning on histopathological images, and routine diagnostic applications should be pursued.
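A sketch, not the authors' pipeline, of adapting an EfficientNet to a three-class patch problem (tumor-free lymph node, SLL/CLL, DLBCL) as described above. The torchvision EfficientNet-B0 variant, the random weights (to avoid a download), and the single training step are assumptions for illustration.

    # Sketch: fine-tuning an EfficientNet head for three-class lymphoma patch classification.
    import torch
    import torch.nn as nn
    from torchvision.models import efficientnet_b0

    NUM_CLASSES = 3   # tumor-free, SLL/CLL, DLBCL
    model = efficientnet_b0(weights=None)   # in practice, pretrained weights would be loaded
    model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

    # one illustrative optimisation step on a fake batch of image patches
    patches = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, NUM_CLASSES, (8,))
    optimizer.zero_grad()
    loss = criterion(model(patches), labels)
    loss.backward()
    optimizer.step()
    print(float(loss))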
|