51. van Tulder G, de Bruijne M. Unpaired, unsupervised domain adaptation assumes your domains are already similar. Med Image Anal 2023; 87:102825. PMID: 37116296. DOI: 10.1016/j.media.2023.102825.
Abstract
Unsupervised domain adaptation is a popular method in medical image analysis, but it can be tricky to make it work: without labels to link the domains, they must be matched by their feature distributions. Without additional information, this often leaves a choice between multiple mappings of the data that may be equally likely but not equally correct. In this paper we explore the fundamental problems that may arise in unsupervised domain adaptation, and discuss conditions that might still make it work. Focusing on medical image analysis, we argue that images from different domains may have similar class balance, similar intensities, similar spatial structure, or similar textures. We demonstrate how these implicit conditions can affect domain adaptation performance in experiments with synthetic data, MNIST digits, and medical images. We observe that the practical success of unsupervised domain adaptation relies on existing similarities in the data and is anything but guaranteed in the general case. Understanding these implicit assumptions is a key step in identifying potential problems in domain adaptation and improving the reliability of the results.
Affiliation(s)
- Gijs van Tulder
- Data Science group, Faculty of Science, Radboud University, Postbus 9010, 6500 GL Nijmegen, The Netherlands; Biomedical Imaging Group, Erasmus MC, Postbus 2040, 3000 CA Rotterdam, The Netherlands.
- Marleen de Bruijne
- Biomedical Imaging Group, Erasmus MC, Postbus 2040, 3000 CA Rotterdam, The Netherlands; Department of Computer Science, University of Copenhagen, Universitetsparken 1, 2100 Copenhagen, Denmark.
52. Li Y, Lao Q, Kang Q, Jiang Z, Du S, Zhang S, Li K. Self-supervised anomaly detection, staging and segmentation for retinal images. Med Image Anal 2023; 87:102805. PMID: 37104995. DOI: 10.1016/j.media.2023.102805.
Abstract
Unsupervised anomaly detection (UAD) detects anomalies by learning the distribution of normal data without labels, and therefore has wide application in medical imaging, where it alleviates the burden of collecting annotated data. Current UAD methods mostly learn the normal data through reconstruction of the original input, but often fail to consider prior information with semantic meaning. In this paper, we first propose a universal unsupervised anomaly detection framework, SSL-AnoVAE, which utilizes a self-supervised learning (SSL) module to provide more fine-grained semantics tailored to the anomalies to be detected in retinal images. We also explore the relationship between the data transformation adopted in the SSL module and the quality of anomaly detection for retinal images. Moreover, to take full advantage of SSL-AnoVAE and apply it to clinical computer-aided diagnosis of retinal diseases, we further propose to stage and segment the anomalies detected by SSL-AnoVAE in an unsupervised manner. Experimental results demonstrate the effectiveness of the proposed method for unsupervised anomaly detection, staging and segmentation on both retinal optical coherence tomography images and color fundus photographs.
Affiliation(s)
- Yiyue Li
- Department of Ophthalmology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, China; West China Biomedical Big Data Center, Med-X Center for Informatics, Sichuan University, Chengdu, Sichuan, 610041, China
- Qicheng Lao
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, 100876, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200030, China.
- Qingbo Kang
- West China Biomedical Big Data Center, Med-X Center for Informatics, Sichuan University, Chengdu, Sichuan, 610041, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200030, China
- Zekun Jiang
- West China Biomedical Big Data Center, Med-X Center for Informatics, Sichuan University, Chengdu, Sichuan, 610041, China
- Shiyi Du
- West China Biomedical Big Data Center, Med-X Center for Informatics, Sichuan University, Chengdu, Sichuan, 610041, China
- Shaoting Zhang
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200030, China
- Kang Li
- West China Biomedical Big Data Center, Med-X Center for Informatics, Sichuan University, Chengdu, Sichuan, 610041, China; Sichuan University Pittsburgh Institute, Chengdu, Sichuan, 610065, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200030, China.
53. Vu QD, Rajpoot K, Raza SEA, Rajpoot N. Handcrafted Histological Transformer (H2T): Unsupervised representation of whole slide images. Med Image Anal 2023; 85:102743. PMID: 36702037. DOI: 10.1016/j.media.2023.102743.
Abstract
Diagnostic, prognostic and therapeutic decision-making for cancer in pathology clinics can now be carried out based on analysis of multi-gigapixel tissue images, also known as whole-slide images (WSIs). Recently, deep convolutional neural networks (CNNs) have been proposed to derive unsupervised WSI representations, which are attractive because they rely less on cumbersome expert annotation. However, a major trade-off is that higher predictive power generally comes at the cost of interpretability, posing a challenge to clinical use, where transparency in decision-making is generally expected. To address this challenge, we present a handcrafted framework based on deep CNNs for constructing holistic WSI-level representations. Building on recent findings about the internal workings of the Transformer in natural language processing, we break down its processes and handcraft them into a more transparent framework that we term the Handcrafted Histological Transformer, or H2T. In our experiments on various datasets comprising a total of 10,042 WSIs, H2T-based holistic WSI-level representations offer competitive performance compared with recent state-of-the-art methods and can be readily utilized for various downstream analysis tasks. Finally, our results demonstrate that the H2T framework can be up to 14 times faster than Transformer models.
Affiliation(s)
- Quoc Dang Vu
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Kashif Rajpoot
- School of Computer Science, University of Birmingham, UK
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, London, UK; Department of Pathology, University Hospitals Coventry & Warwickshire, UK.
54. Bütün E, Uçan M, Kaya M. Automatic detection of cancer metastasis in lymph node using deep learning. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104564.
55. Investigation of semi- and self-supervised learning methods in the histopathological domain. J Pathol Inform 2023; 14:100305. PMID: 37025325. PMCID: PMC10070179. DOI: 10.1016/j.jpi.2023.100305.
Abstract
Training models with semi- or self-supervised learning methods is one way to reduce annotation effort, since these methods rely on unlabeled or sparsely labeled datasets. Such approaches are particularly promising for domains where the annotation process is time-consuming, requires specialized expertise, and high-quality labeled machine learning datasets are scarce, as in computational pathology. Even though some of these methods have been used in the histopathological domain, there is so far no comprehensive study comparing different approaches. Therefore, this work compares feature extractor models trained with the state-of-the-art semi- or self-supervised learning methods PAWS, SimCLR, and SimSiam within a unified framework. We show that such models, across different architectures and network configurations, have a positive impact on histopathological classification performance, even in low-data regimes. Moreover, our observations suggest that features learned from a particular dataset, i.e., tissue type, are only in-domain transferable to a certain extent. Finally, we share our experience using each method in computational pathology and provide recommendations for their use.
56. Cai H, Feng X, Yin R, Zhao Y, Guo L, Fan X, Liao J. MIST: multiple instance learning network based on Swin Transformer for whole slide image classification of colorectal adenomas. J Pathol 2023; 259:125-135. PMID: 36318158. DOI: 10.1002/path.6027.
Abstract
Colorectal adenoma is a recognized precancerous lesion of colorectal cancer (CRC), and at least 80% of colorectal cancers arise from it through malignant transformation. It is therefore essential to distinguish benign from malignant adenomas in early CRC screening. Many deep learning computational pathology studies based on whole slide images (WSIs) have been proposed, but most approaches require manual annotation of lesion regions on WSIs, which is time-consuming and labor-intensive. This study proposes a new approach, MIST, a Multiple Instance learning network based on the Swin Transformer, which can accurately classify colorectal adenoma WSIs using only slide-level labels. MIST uses the Swin Transformer as the backbone to extract image features through self-supervised contrastive learning, and uses a dual-stream multiple instance learning network to predict the slide class. We trained and validated MIST on 666 WSIs collected from 480 colorectal adenoma patients in the Department of Pathology, The Affiliated Drum Tower Hospital of Nanjing University Medical School. These slides contained six common types of colorectal adenomas. The accuracy of external validation on 273 newly collected WSIs from Nanjing First Hospital was 0.784, which was superior to existing methods and comparable to the local pathologist's accuracy of 0.806. Finally, we analyzed the interpretability of MIST and observed that the lesion areas attended to by MIST were generally consistent with those of interest to local pathologists. In conclusion, MIST is a low-burden, interpretable, and effective approach for colorectal cancer screening that may reduce CRC mortality by assisting clinicians in the decision-making process. © 2022 The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Hongbin Cai
- School of Science, China Pharmaceutical University, Nanjing, PR China
- Xiaobing Feng
- College of Electrical and Information Engineering, Hunan University, Changsha, PR China
- Ruomeng Yin
- School of Science, China Pharmaceutical University, Nanjing, PR China
- Youcai Zhao
- Department of Pathology, Nanjing First Hospital, Nanjing, PR China
- Lingchuan Guo
- Department of Pathology, The First Affiliated Hospital of Soochow University, Soochow, PR China
- Xiangshan Fan
- Department of Pathology, The Affiliated Drum Tower Hospital of Nanjing University Medical School, Nanjing, PR China
- Jun Liao
- School of Science, China Pharmaceutical University, Nanjing, PR China
57. Wang Y, Guo J, Yang Y, Kang Y, Xia Y, Li Z, Duan Y, Wang K. CWC-transformer: a visual transformer approach for compressed whole slide image classification. Neural Comput Appl 2023. DOI: 10.1007/s00521-022-07857-3.
58. Chen Y, Dong Y, Si L, Yang W, Du S, Tian X, Li C, Liao Q, Ma H. Dual Polarization Modality Fusion Network for Assisting Pathological Diagnosis. IEEE Trans Med Imaging 2023; 42:304-316. PMID: 36155433. DOI: 10.1109/tmi.2022.3210113.
Abstract
Polarization imaging is sensitive to the sub-wavelength microstructures of various cancer tissues, providing abundant optical characteristics and microstructural information about complex pathological specimens. However, how to reasonably utilize polarization information to strengthen pathological diagnosis remains a challenging issue. To take full advantage of pathological image information and the polarization features of samples, we propose a dual polarization modality fusion network (DPMFNet), which consists of a multi-stream CNN structure and a switched attention fusion module for complementarily aggregating features from different modality images. The proposed switched attention mechanism obtains joint feature embeddings by switching the attention maps of different modality images to improve their semantic relatedness. By including a dual-polarization contrastive training scheme, our method can synthesize and align the interactions and representations of the two polarization features. Experimental evaluations on three cancer datasets show the superiority of our method in assisting pathological diagnosis, especially for small datasets and low imaging resolutions. Grad-CAM visualizes the important regions of the pathological and polarization images, indicating that the two modalities play different roles and allowing insightful explanations and analysis of cancer diagnoses made by DPMFNet. This technique has the potential to improve pathology-assisted diagnosis and to broaden the current boundary of digital pathology based on pathological image features.
59. Wang X, Du Y, Yang S, Zhang J, Wang M, Zhang J, Yang W, Huang J, Han X. RetCCL: Clustering-guided contrastive learning for whole-slide image retrieval. Med Image Anal 2023; 83:102645. PMID: 36270093. DOI: 10.1016/j.media.2022.102645.
Abstract
Benefiting from the large-scale archiving of digitized whole-slide images (WSIs), computer-aided diagnosis has been well developed to assist pathologists in decision-making. Content-based WSI retrieval is a new approach that finds highly correlated WSIs in an archive of historically diagnosed WSIs, with potential uses in assisted clinical diagnosis, medical research, and trainee education. During WSI retrieval, it is particularly challenging to encode the semantic content of histopathological images and to measure the similarity between images for interpretable results, due to the gigapixel size of WSIs. In this work, we propose a Retrieval with Clustering-guided Contrastive Learning (RetCCL) framework for robust and accurate WSI-level image retrieval, which integrates a novel self-supervised feature learning method and a global ranking and aggregation algorithm for much improved performance. The proposed feature learning method makes use of existing large-scale unlabeled histopathological image data to learn universal features that can be used directly for subsequent WSI retrieval tasks without extra fine-tuning. The proposed WSI retrieval method not only returns a set of WSIs similar to a query WSI, but also highlights patches or sub-regions of each WSI that share high similarity with patches of the query WSI, which helps pathologists interpret the search results. Our WSI retrieval framework has been evaluated on the tasks of anatomical site retrieval and cancer subtype retrieval using over 22,000 slides, and its performance exceeds that of other state-of-the-art methods significantly (by around 10% for anatomical site retrieval in terms of average mMV@10). In addition, patch retrieval using our learned feature representation offers a 24% performance improvement on the TissueNet dataset in terms of mMV@5 compared with ImageNet pre-trained features, which further demonstrates the effectiveness of the proposed CCL feature learning method.
Affiliation(s)
- Xiyue Wang
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China; College of Computer Science, Sichuan University, Chengdu 610065, China
- Yuexi Du
- College of Engineering, University of Michigan, Ann Arbor, MI, 48109, United States
- Sen Yang
- Tencent AI Lab, Shenzhen 518057, China
- Jun Zhang
- Tencent AI Lab, Shenzhen 518057, China
- Minghui Wang
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China; College of Computer Science, Sichuan University, Chengdu 610065, China
- Jing Zhang
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China.
- Wei Yang
- Tencent AI Lab, Shenzhen 518057, China
- Xiao Han
- Tencent AI Lab, Shenzhen 518057, China.
60. He Q, He L, Duan H, Sun Q, Zheng R, Guan J, He Y, Huang W, Guan T. Expression site agnostic histopathology image segmentation framework by self supervised domain adaption. Comput Biol Med 2023; 152:106412. PMID: 36516576. DOI: 10.1016/j.compbiomed.2022.106412.
Abstract
MOTIVATION: Because antigens are expressed at different sites, the segmentation of immunohistochemical (IHC) histopathology images is challenging due to large visual variation. Since H&E images highlight tissue structure and cell distribution more broadly, transferring the more salient features of H&E images can achieve considerable performance on expression site agnostic IHC image segmentation.
METHODS: To the best of our knowledge, this is the first work that focuses on domain adaptive segmentation across different expression sites. We propose an expression site agnostic domain adaptive histopathology image semantic segmentation framework (ESASeg). In ESASeg, multi-level feature alignment encodes expression site invariance by learning generic representations of global and multi-scale local features. Moreover, self-supervision enhances domain adaptation to perceive high-level semantics by predicting pseudo-labels.
RESULTS: We construct a dataset with three IHC stains (Her2 with membrane staining, Ki67 with nucleus staining, GPC3 with cytoplasm staining) covering different expression sites in two diseases (breast and liver cancer). Intensive experiments on tumor region segmentation show that ESASeg performs best across all metrics, and each module achieves impressive improvements.
CONCLUSION: The performance of ESASeg on tumor region segmentation demonstrates the efficiency of the proposed framework, which provides a novel solution for expression site agnostic IHC tasks. Moreover, the proposed domain adaptation and self-supervision modules improve feature adaptation and extraction without labels. In addition, ESASeg lays the foundation for joint analysis and information interaction across IHC stains with different expression sites.
Affiliation(s)
- Qiming He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Ling He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Hufei Duan
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Qiehe Sun
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Runliang Zheng
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Jian Guan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China.
- Yonghong He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Wenting Huang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China.
- Tian Guan
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
61. Computer-aided detection and prognosis of colorectal cancer on whole slide images using dual resolution deep learning. J Cancer Res Clin Oncol 2023; 149:91-101. PMID: 36331654. DOI: 10.1007/s00432-022-04435-x.
Abstract
PURPOSE: Rapid diagnosis and risk stratification can provide timely treatment for colorectal cancer (CRC) patients. Deep learning (DL) is not only used to identify tumor regions in histopathological images but is also applied to predict survival and achieve risk stratification. However, most methods depend on regions of interest annotated by pathologists and ignore the global information in the image.
METHODS: A dual resolution DL network based on weakly supervised learning (WDRNet) was proposed for CRC identification and prognosis. The proposed method was trained and validated on the dataset from The Cancer Genome Atlas (TCGA) and tested on the external dataset from the Affiliated Cancer Hospital and Institute of Guangzhou Medical University (ACHIGMU).
RESULTS: In the identification task, WDRNet accurately identified tumor images with an accuracy of 0.977 at the slide level and 0.953 at the patch level. In the prognosis task, WDRNet showed excellent prediction performance on both datasets, with concordance indices (C-index) of 0.716 ± 0.037 and 0.598 ± 0.024, respectively. Moreover, the risk stratification results were statistically significant in univariate analysis (p < 0.001, HR = 7.892 in TCGA-CRC; p = 0.009, HR = 1.718 in ACHIGMU) and multivariate analysis (p < 0.001, HR = 5.914 in TCGA-CRC; p = 0.025, HR = 1.674 in ACHIGMU).
CONCLUSIONS: We developed a weakly supervised dual resolution DL network that achieves precise identification and prognosis for CRC patients, which will assist doctors in diagnosis on histopathological images and stratify patients to select appropriate therapeutic schedules.
62. Couture HD. Deep Learning-Based Prediction of Molecular Tumor Biomarkers from H&E: A Practical Review. J Pers Med 2022; 12:2022. PMID: 36556243. PMCID: PMC9784641. DOI: 10.3390/jpm12122022.
Abstract
Molecular and genomic properties are critical in selecting cancer treatments to target individual tumors, particularly for immunotherapy. However, the methods to assess such properties are expensive, time-consuming, and often not routinely performed. Applying machine learning to H&E images can provide a more cost-effective screening method. Dozens of studies over the last few years have demonstrated that a variety of molecular biomarkers can be predicted from H&E alone using advances in deep learning: molecular alterations, genomic subtypes, protein biomarkers, and even the presence of viruses. This article reviews the diverse applications across cancer types and the methodology to train and validate these models on whole slide images. From bottom-up to pathologist-driven to hybrid approaches, the leading trends include a variety of weakly supervised deep learning-based approaches, as well as mechanisms for training strongly supervised models in select situations. While the results of these algorithms look promising, some challenges still persist, including small training sets, rigorous validation, and model explainability. Biomarker prediction models may yield a screening method for determining when to run molecular tests, or an alternative when molecular tests are not possible. They also create new opportunities in quantifying intratumoral heterogeneity and predicting patient outcomes.
63. Gao E, Jiang H, Zhou Z, Yang C, Chen M, Zhu W, Shi F, Chen X, Zheng J, Bian Y, Xiang D. Automatic multi-tissue segmentation in pancreatic pathological images with selected multi-scale attention network. Comput Biol Med 2022; 151:106228. PMID: 36306579. DOI: 10.1016/j.compbiomed.2022.106228.
Abstract
The morphology of tissues in pathological images is routinely used by pathologists to assess the degree of malignancy of pancreatic ductal adenocarcinoma (PDAC). Automatic and accurate segmentation of tumor cells and their surrounding tissues is often a crucial step in obtaining reliable morphological statistics. Nonetheless, it remains challenging due to the great variation in appearance and morphology. In this paper, a selected multi-scale attention network (SMANet) is proposed to segment tumor cells, blood vessels, nerves, islets and ducts in pancreatic pathological images. The selected multi-scale attention module is proposed to enhance effective information, supplement useful information and suppress redundant information at different scales from the encoder and decoder. It includes a selection unit (SU) module and a multi-scale attention (MA) module. The selection unit module effectively filters features. The multi-scale attention module enhances effective information through spatial and channel attention, and combines features from different levels to supplement useful information. This helps learn information from different receptive fields to improve the segmentation of tumor cells, blood vessels and nerves. An original-feature fusion unit is also proposed to supplement original image information and reduce the under-segmentation of small tissues such as islets and ducts. The proposed method outperforms state-of-the-art deep learning algorithms on our PDAC pathological images and achieves competitive results on the GlaS challenge dataset, with mDice and mIoU reaching 0.769 and 0.665 on our PDAC dataset.
Affiliation(s)
- Enting Gao
- School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China
- Hui Jiang
- Department of Pathology, Changhai Hospital, The Navy Military Medical University, Shanghai, China
- Zhibang Zhou
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Changxing Yang
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Muyang Chen
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Weifang Zhu
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Fei Shi
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Xinjian Chen
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Jian Zheng
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Jiangsu 215163, China
- Yun Bian
- Department of Radiology, Changhai Hospital, The Navy Military Medical University, Shanghai, China
- Dehui Xiang
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China.
64. Yang P, Yin X, Lu H, Hu Z, Zhang X, Jiang R, Lv H. CS-CO: A Hybrid Self-Supervised Visual Representation Learning Method for H&E-stained Histopathological Images. Med Image Anal 2022; 81:102539. DOI: 10.1016/j.media.2022.102539.
65. Zhou W, Deng Z, Liu Y, Shen H, Deng H, Xiao H. Global Research Trends of Artificial Intelligence on Histopathological Images: A 20-Year Bibliometric Analysis. Int J Environ Res Public Health 2022; 19:11597. PMID: 36141871. PMCID: PMC9517580. DOI: 10.3390/ijerph191811597.
Abstract
Cancer has become a major threat to global health care. With the development of computer science, artificial intelligence (AI) has been widely applied to histopathological image (HI) analysis. This study analyzed publications on AI in HI from 2001 to 2021 by bibliometrics, exploring the state of research and potential popular directions for the future. A total of 2844 publications from the Web of Science Core Collection were included in the bibliometric analysis. Country/region, institution, author, journal, keyword, and references were analyzed using VOSviewer and CiteSpace. The results showed that the number of publications has grown rapidly in the last five years. The USA is the most productive and influential country, with 937 publications and 23,010 citations, and most of the authors and institutions with the highest numbers of publications and citations are from the USA. Keyword analysis showed that breast cancer, prostate cancer, colorectal cancer, and lung cancer are the tumor types of greatest concern. Co-citation analysis showed that classification and nucleus segmentation are the main research directions of AI-based HI studies. Transfer learning and self-supervised learning in HI are on the rise. This study performed the first bibliometric analysis of AI in HI across multiple indicators, providing insights for researchers to identify key cancer types and understand research trends in the application of AI to HI.
Affiliation(s)
- Wentong Zhou
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Ziheng Deng
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Yong Liu
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Hui Shen
- Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University, New Orleans, LA 70112, USA
- Hongwen Deng
- Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University, New Orleans, LA 70112, USA
- Hongmei Xiao
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
66
Shmatko A, Ghaffari Laleh N, Gerstung M, Kather JN. Artificial intelligence in histopathology: enhancing cancer research and clinical oncology. Nat Cancer 2022; 3:1026-1038. [PMID: 36138135 DOI: 10.1038/s43018-022-00436-4]
Abstract
Artificial intelligence (AI) methods have multiplied our capabilities to extract quantitative information from digital histopathology images. AI is expected to reduce workload for human experts, improve the objectivity and consistency of pathology reports, and have a clinical impact by extracting hidden information from routinely available data. Here, we describe how AI can be used to predict cancer outcome, treatment response, genetic alterations and gene expression from digitized histopathology slides. We summarize the underlying technologies and emerging approaches, noting limitations, including the need for data sharing and standards. Finally, we discuss the broader implications of AI in cancer research and oncology.
Affiliation(s)
- Artem Shmatko
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
- Moritz Gerstung
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- European Molecular Biology Laboratory, European Bioinformatics Institute, Cambridge, UK
- Jakob Nikolas Kather
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany
- Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
67
Fashi PA, Hemati S, Babaie M, Gonzalez R, Tizhoosh H. A self-supervised contrastive learning approach for whole slide image representation in digital pathology. J Pathol Inform 2022; 13:100133. [PMID: 36605114 PMCID: PMC9808093 DOI: 10.1016/j.jpi.2022.100133]
Abstract
Image analysis in digital pathology has proven to be one of the most challenging fields in medical imaging for AI-driven classification and search tasks. Due to their gigapixel dimensions, whole slide images (WSIs) are difficult to represent for computational pathology. Self-supervised learning (SSL) has recently demonstrated excellent performance in learning effective representations from pretext objectives, which may improve generalization on downstream tasks. Previous self-supervised representation methods rely on patch selection and classification, so the effect of SSL on end-to-end WSI representation has not been investigated. In contrast to existing augmentation-based SSL methods, this paper proposes a novel self-supervised learning scheme based on the available primary-site information. We also design a fully supervised contrastive learning setup to increase the robustness of the representations for WSI classification and search, for both pretext and downstream tasks. We trained and evaluated the model on more than 6000 WSIs from The Cancer Genome Atlas (TCGA) repository provided by the National Cancer Institute. The proposed architecture achieved excellent results on most primary sites and cancer subtypes, as well as the best validation result on a lung cancer classification task.
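The contrastive setup described in this abstract pairs slides by a shared label (here, the primary site) rather than by augmentations of the same image. A minimal pure-Python sketch of such a supervised-contrastive-style loss; the function name and the toy `temperature` default are illustrative, not taken from the paper:

```python
import math

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """SupCon-style loss: embeddings sharing a label (e.g., the same
    primary site) are pulled together; all other pairs are pushed apart."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def normalize(v):
        n = math.sqrt(dot(v, v)) or 1.0
        return [a / n for a in v]

    z = [normalize(e) for e in embeddings]
    n = len(z)
    loss, terms = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors without a positive contribute nothing
        denom = sum(math.exp(dot(z[i], z[k]) / temperature)
                    for k in range(n) if k != i)
        for j in positives:
            loss += -math.log(math.exp(dot(z[i], z[j]) / temperature) / denom)
            terms += 1
    return loss / max(terms, 1)
```

Minimizing this loss drives same-site slide embeddings toward each other, which is the pairing signal the paper exploits in place of image augmentations.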
Affiliation(s)
- Sobhan Hemati
- Kimia Lab, University of Waterloo, Waterloo, ON, Canada; Vector Institute, MaRS Centre, Toronto, ON, Canada
- Morteza Babaie
- Kimia Lab, University of Waterloo, Waterloo, ON, Canada; Vector Institute, MaRS Centre, Toronto, ON, Canada (corresponding author)
- Ricardo Gonzalez
- Kimia Lab, University of Waterloo, Waterloo, ON, Canada; McMaster University, Hamilton, ON, Canada
- H.R. Tizhoosh
- Kimia Lab, University of Waterloo, Waterloo, ON, Canada; Vector Institute, MaRS Centre, Toronto, ON, Canada; Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA
68
Shurrab S, Duwairi R. Self-supervised learning methods and applications in medical imaging analysis: a survey. PeerJ Comput Sci 2022; 8:e1045. [PMID: 36091989 PMCID: PMC9455147 DOI: 10.7717/peerj-cs.1045]
Abstract
The scarcity of high-quality annotated medical imaging datasets is a major problem that hinders machine learning applications in medical imaging analysis and impedes its advancement. Self-supervised learning is a recent training paradigm that enables learning robust representations without the need for human annotation, which can be considered an effective solution to the scarcity of annotated medical data. This article reviews state-of-the-art research directions in self-supervised learning approaches for image data, with a focus on their applications in medical imaging analysis. It covers the most recent self-supervised learning methods from the computer vision field as they apply to medical imaging analysis, categorizing them as predictive, generative, and contrastive approaches. Moreover, the article covers 40 of the most recent research papers on self-supervised learning in medical imaging analysis, aiming to shed light on recent innovation in the field. Finally, the article concludes with possible future research directions.
Affiliation(s)
- Saeed Shurrab
- Department of Computer Information Systems, Jordan University of Science and Technology, Irbid, Jordan
- Rehab Duwairi
- Department of Computer Information Systems, Jordan University of Science and Technology, Irbid, Jordan
69
A multi-view deep learning model for pathology image diagnosis. Appl Intell 2022. [DOI: 10.1007/s10489-022-03918-1]
70
Liu H, Kurc T. Deep learning for survival analysis in breast cancer with whole slide image data. Bioinformatics 2022; 38:3629-3637. [PMID: 35674341 PMCID: PMC9272797 DOI: 10.1093/bioinformatics/btac381]
Abstract
MOTIVATION Whole slide tissue images contain detailed data on the sub-cellular structure of cancer. Quantitative analyses of this data can lead to novel biomarkers for better cancer diagnosis and prognosis and can improve our understanding of cancer mechanisms. Such analyses are challenging to execute because of the sizes and complexity of whole slide image data and relatively limited volume of training data for machine learning methods. RESULTS We propose and experimentally evaluate a multi-resolution deep learning method for breast cancer survival analysis. The proposed method integrates image data at multiple resolutions and tumor, lymphocyte and nuclear segmentation results from deep learning models. Our results show that this approach can significantly improve the deep learning model performance compared to using only the original image data. The proposed approach achieves a c-index value of 0.706 compared to a c-index value of 0.551 from an approach that uses only color image data at the highest image resolution. Furthermore, when clinical features (sex, age and cancer stage) are combined with image data, the proposed approach achieves a c-index of 0.773. AVAILABILITY AND IMPLEMENTATION https://github.com/SBU-BMI/deep_survival_analysis.
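The c-index values reported above (0.706, 0.551, 0.773) measure concordance: among comparable patient pairs, how often the model assigns the higher risk to the patient who fails earlier. A minimal sketch of Harrell's concordance index with right-censoring; the function name and conventions are ours, not from the paper's code:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's c-index. events[i] is 1 if the event was observed for
    patient i, 0 if censored. A pair (i, j) is comparable when i had an
    observed event strictly before time j; ties in risk count 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0       # higher risk failed earlier
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5       # tie in predicted risk
    return concordant / comparable
```

A c-index of 0.5 corresponds to random risk ordering and 1.0 to perfect ordering, which is why the jump from 0.551 to 0.706 in the abstract is substantial.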
Affiliation(s)
- Huidong Liu
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY 11794, USA
71
Wang X, Yang S, Zhang J, Wang M, Zhang J, Yang W, Huang J, Han X. Transformer-based unsupervised contrastive learning for histopathological image classification. Med Image Anal 2022; 81:102559. [DOI: 10.1016/j.media.2022.102559]
72
Amgad M, Atteya LA, Hussein H, Mohammed KH, Hafiz E, Elsebaie MAT, Alhusseiny AM, AlMoslemany MA, Elmatboly AM, Pappalardo PA, Sakr RA, Mobadersany P, Rachid A, Saad AM, Alkashash AM, Ruhban IA, Alrefai A, Elgazar NM, Abdulkarim A, Farag AA, Etman A, Elsaeed AG, Alagha Y, Amer YA, Raslan AM, Nadim MK, Elsebaie MAT, Ayad A, Hanna LE, Gadallah A, Elkady M, Drumheller B, Jaye D, Manthey D, Gutman DA, Elfandy H, Cooper LAD. NuCLS: A scalable crowdsourcing approach and dataset for nucleus classification and segmentation in breast cancer. GigaScience 2022; 11:giac037. [PMID: 35579553 PMCID: PMC9112766 DOI: 10.1093/gigascience/giac037]
Abstract
BACKGROUND Deep learning enables accurate high-resolution mapping of cells and tissue structures that can serve as the foundation of interpretable machine-learning models for computational pathology. However, generating adequate labels for these structures is a critical barrier, given the time and effort required from pathologists. RESULTS This article describes a novel collaborative framework for engaging crowds of medical students and pathologists to produce quality labels for cell nuclei. We used this approach to produce the NuCLS dataset, containing >220,000 annotations of cell nuclei in breast cancers. This builds on prior work labeling tissue regions to produce an integrated tissue region- and cell-level annotation dataset for training that is the largest such resource for multi-scale analysis of breast cancer histology. This article presents data and analysis results for single and multi-rater annotations from both non-experts and pathologists. We present a novel workflow that uses algorithmic suggestions to collect accurate segmentation data without the need for laborious manual tracing of nuclei. Our results indicate that even noisy algorithmic suggestions do not adversely affect pathologist accuracy and can help non-experts improve annotation quality. We also present a new approach for inferring truth from multiple raters and show that non-experts can produce accurate annotations for visually distinctive classes. CONCLUSIONS This study is the most extensive systematic exploration of the large-scale use of wisdom-of-the-crowd approaches to generate data for computational pathology applications.
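The abstract describes inferring truth from multiple raters of differing expertise. As a simple illustration of that idea only (this is not the paper's actual inference procedure, and the weights are made up), a weighted vote over per-nucleus labels might look like:

```python
from collections import defaultdict

def consensus_label(votes, expert_weight=2.0, novice_weight=1.0):
    """votes: list of (label, is_expert) pairs for one nucleus.
    Returns the label with the highest total rater weight; a toy
    stand-in for proper truth inference from multiple raters."""
    totals = defaultdict(float)
    for label, is_expert in votes:
        totals[label] += expert_weight if is_expert else novice_weight
    return max(totals, key=totals.get)
```

In practice the paper's finding is richer: non-experts produce accurate annotations for visually distinctive classes, so a good aggregator can weight raters per class rather than with fixed global weights like these.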
Affiliation(s)
- Mohamed Amgad
- Department of Pathology, Northwestern University, 750 N Lake Shore Dr., Chicago, IL 60611, USA
- Lamees A Atteya
- Cairo Health Care Administration, Egyptian Ministry of Health, 3 Magles El Shaab Street, Cairo, Postal code 222, Egypt
- Hagar Hussein
- Department of Pathology, Nasser Institute for Research and Treatment, 3 Magles El Shaab Street, Cairo, Postal code 222, Egypt
- Kareem Hosny Mohammed
- Department of Pathology and Laboratory Medicine, University of Pennsylvania, 3620 Hamilton Walk M163, Philadelphia, PA 19104, USA
- Ehab Hafiz
- Department of Clinical Laboratory Research, Theodor Bilharz Research Institute, 1 El-Nile Street, Imbaba Warrak El-Hadar, Giza, Postal code 12411, Egypt
- Maha A T Elsebaie
- Department of Medicine, Cook County Hospital, 1969 W Ogden Ave, Chicago, IL 60612, USA
- Ahmed M Alhusseiny
- Department of Pathology, Baystate Medical Center, University of Massachusetts, 759 Chestnut St, Springfield, MA 01199, USA
- Mohamed Atef AlMoslemany
- Faculty of Medicine, Menoufia University, Gamal Abd El-Nasir, Qism Shebeen El-Kom, Shibin el Kom, Menofia Governorate, Postal code: 32511, Egypt
- Abdelmagid M Elmatboly
- Faculty of Medicine, Al-Azhar University, 15 Mohammed Abdou, El-Darb El-Ahmar, Cairo Governorate, Postal code 11651, Egypt
- Philip A Pappalardo
- Consultant for The Center for Applied Proteomics and Molecular Medicine (CAPMM), George Mason University, 10920 George Mason Circle, Institute for Advanced Biomedical Research Room 2008, MS1A9, Manassas, Virginia 20110, USA
- Rokia Adel Sakr
- Department of Pathology, National Liver Institute, Gamal Abd El-Nasir, Qism Shebeen El-Kom, Shibin el Kom, Menofia Governorate, Postal code: 32511, Egypt
- Pooya Mobadersany
- Department of Pathology, Northwestern University, 750 N Lake Shore Dr., Chicago, IL 60611, USA
- Ahmad Rachid
- Faculty of Medicine, Ain Shams University, 38 Abbassia, Next to the Al-Nour Mosque, Cairo, Postal code: 1181, Egypt
- Anas M Saad
- Cleveland Clinic Foundation, 9500 Euclid Ave, Cleveland, Ohio 44195, USA
- Ahmad M Alkashash
- Department of Pathology, Indiana University, 635 Barnhill Drive, Medical Science Building A-128, Indianapolis, IN 46202, USA
- Inas A Ruhban
- Faculty of Medicine, Damascus University, Damascus, PO Box 30621, Syria
- Anas Alrefai
- Faculty of Medicine, Ain Shams University, 38 Abbassia, Next to the Al-Nour Mosque, Cairo, Postal code: 1181, Egypt
- Nada M Elgazar
- Faculty of Medicine, Mansoura University, 1 El Gomhouria St, Dakahlia Governorate 35516, Egypt
- Ali Abdulkarim
- Faculty of Medicine, Cairo University, Kasr Al Ainy Hospitals, Kasr Al Ainy St., Cairo, Postal code: 11562, Egypt
- Abo-Alela Farag
- Faculty of Medicine, Ain Shams University, 38 Abbassia, Next to the Al-Nour Mosque, Cairo, Postal code: 1181, Egypt
- Amira Etman
- Faculty of Medicine, Menoufia University, Gamal Abd El-Nasir, Qism Shebeen El-Kom, Shibin el Kom, Menofia Governorate, Postal code: 32511, Egypt
- Ahmed G Elsaeed
- Faculty of Medicine, Mansoura University, 1 El Gomhouria St, Dakahlia Governorate 35516, Egypt
- Yahya Alagha
- Faculty of Medicine, Cairo University, Kasr Al Ainy Hospitals, Kasr Al Ainy St., Cairo, Postal code: 11562, Egypt
- Yomna A Amer
- Faculty of Medicine, Menoufia University, Gamal Abd El-Nasir, Qism Shebeen El-Kom, Shibin el Kom, Menofia Governorate, Postal code: 32511, Egypt
- Ahmed M Raslan
- Department of Anaesthesia and Critical Care, Menoufia University Hospital, Gamal Abd El-Nasir, Qism Shebeen El-Kom, Shibin el Kom, Menofia Governorate, Postal code: 32511, Egypt
- Menatalla K Nadim
- Department of Clinical Pathology, Ain Shams University, 38 Abbassia, Next to the Al-Nour Mosque, Cairo, Postal code: 1181, Egypt
- Mai A T Elsebaie
- Faculty of Medicine, Ain Shams University, 38 Abbassia, Next to the Al-Nour Mosque, Cairo, Postal code: 1181, Egypt
- Ahmed Ayad
- Research Department, Oncology Consultants, 2130 W. Holcombe Blvd, 10th Floor, Houston, Texas 77030, USA
- Liza E Hanna
- Department of Pathology, Nasser Institute for Research and Treatment, 3 Magles El Shaab Street, Cairo, Postal code 222, Egypt
- Ahmed Gadallah
- Faculty of Medicine, Ain Shams University, 38 Abbassia, Next to the Al-Nour Mosque, Cairo, Postal code: 1181, Egypt
- Mohamed Elkady
- Siparadigm Diagnostic Informatics, 25 Riverside Dr no. 2, Pine Brook, NJ 07058, USA
- Bradley Drumheller
- Department of Pathology and Laboratory Medicine, Emory University School of Medicine, 201 Dowman Dr, Atlanta, GA 30322, USA
- David Jaye
- Department of Pathology and Laboratory Medicine, Emory University School of Medicine, 201 Dowman Dr, Atlanta, GA 30322, USA
- David Manthey
- Kitware Inc., 1712 Route 9, Suite 300, Clifton Park, New York 12065, USA
- David A Gutman
- Department of Neurology, Emory University School of Medicine, 201 Dowman Dr, Atlanta, GA 30322, USA
- Habiba Elfandy
- Department of Pathology, National Cancer Institute, Kasr Al Eini Street, Fom El Khalig, Cairo, Postal code: 11562, Egypt
- Department of Pathology, Children's Cancer Hospital Egypt (CCHE 57357), 1 Seket Al-Emam Street, El-Madbah El-Kadeem Yard, El-Saida Zenab, Cairo, Postal code: 11562, Egypt
- Lee A D Cooper
- Department of Pathology, Northwestern University, 750 N Lake Shore Dr., Chicago, IL 60611, USA
- Lurie Cancer Center, Northwestern University, 675 N St Clair St Fl 21 Ste 100, Chicago, IL 60611, USA
- Center for Computational Imaging and Signal Analytics, Northwestern University Feinberg School of Medicine, 750 N Lake Shore Dr., Chicago, IL 60611, USA
73
Abbet C, Studer L, Fischer A, Dawson H, Zlobec I, Bozorgtabar B, Thiran JP. Self-Rule to Multi-Adapt: Generalized Multi-source Feature Learning Using Unsupervised Domain Adaptation for Colorectal Cancer Tissue Detection. Med Image Anal 2022; 79:102473. [DOI: 10.1016/j.media.2022.102473]
74
Weakly Supervised Segmentation on Neural Compressed Histopathology with Self-Equivariant Regularization. Med Image Anal 2022; 80:102482. [DOI: 10.1016/j.media.2022.102482]
75
Xu Y, Jiang L, Huang S, Liu Z, Zhang J. Dual resolution deep learning network with self-attention mechanism for classification and localisation of colorectal cancer in histopathological images. J Clin Pathol 2022:jclinpath-2021-208042. [PMID: 35273120 DOI: 10.1136/jclinpath-2021-208042]
Abstract
AIMS Microscopic examination is a basic diagnostic technique for colorectal cancer (CRC), but it is very laborious. We developed a dual resolution deep learning network with a self-attention mechanism (DRSANet), which combines context and details for CRC binary classification and localisation in whole slide images (WSIs), serving as a computer-aided diagnosis (CAD) tool to improve the sensitivity and specificity of doctors' diagnoses. METHODS Representative regions of interest (ROIs) of each tissue type were manually delineated in WSIs by pathologists. Based on the same centre coordinates, patches were extracted at different magnification levels from each ROI. Specifically, patches from the low magnification level contain contextual information, while patches from the high magnification level provide important details. A dual-input network was designed to learn context and details simultaneously, and a self-attention mechanism was used to selectively attend to different positions in the images to enhance performance. RESULTS In the classification task, DRSANet outperformed benchmark networks that depended only on the high magnification patches on two test sets. Furthermore, in the localisation task, DRSANet demonstrated better localisation of the tumour area in WSIs, with fewer areas of misidentification. CONCLUSIONS We compared DRSANet with benchmark networks that use only patches from the high magnification level. Experimental results reveal that DRSANet performs better than the benchmark networks. Both context and details should be considered in deep learning methods.
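Extracting co-centred patches at two magnifications, as in the METHODS above, reduces to a coordinate mapping: a fixed-size patch read at lower magnification covers a proportionally larger field of view. A hypothetical sketch (the parameter names are ours; real WSI readers such as OpenSlide express the same idea via per-level downsample factors):

```python
def patch_bounds(center_xy, patch_size, base_mag, target_mag):
    """Top-left and bottom-right corners, in base-magnification pixel
    coordinates, of a patch_size x patch_size patch read at target_mag
    and centred on center_xy."""
    scale = base_mag / target_mag      # e.g. 40x slide read at 10x -> 4
    half = patch_size * scale / 2
    cx, cy = center_xy
    return (cx - half, cy - half), (cx + half, cy + half)
```

For the same centre, a 256-pixel patch at 10x spans a 4x wider region of a 40x slide than a 256-pixel patch at 40x, which is exactly how the low-magnification input supplies context while the high-magnification input supplies detail.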
Affiliation(s)
- Yan Xu
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Liwen Jiang
- Department of Pathology, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou, China
- Shuting Huang
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Zhenyu Liu
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Jiangyu Zhang
- Department of Pathology, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou, China
76
Yan J, Chen H, Li X, Yao J. Deep Contrastive Learning Based Tissue Clustering for Annotation-free Histopathology Image Analysis. Comput Med Imaging Graph 2022; 97:102053. [DOI: 10.1016/j.compmedimag.2022.102053]
77
Li B, Li Y, Eliceiri KW. Dual-stream Multiple Instance Learning Network for Whole Slide Image Classification with Self-supervised Contrastive Learning. Conf Comput Vis Pattern Recognit Workshops 2022; 2021:14318-14328. [PMID: 35047230 DOI: 10.1109/cvpr46437.2021.01409]
Abstract
We address the challenging problem of whole slide image (WSI) classification. WSIs have very high resolutions and usually lack localized annotations. WSI classification can be cast as a multiple instance learning (MIL) problem when only slide-level labels are available. We propose a MIL-based method for WSI classification and tumor detection that does not require localized annotations. Our method has three major components. First, we introduce a novel MIL aggregator that models the relations of the instances in a dual-stream architecture with trainable distance measurement. Second, since WSIs can produce large or unbalanced bags that hinder the training of MIL models, we propose to use self-supervised contrastive learning to extract good representations for MIL and alleviate the issue of prohibitive memory cost for large bags. Third, we adopt a pyramidal fusion mechanism for multiscale WSI features, and further improve the accuracy of classification and localization. Our model is evaluated on two representative WSI datasets. The classification accuracy of our model compares favorably to fully-supervised methods, with less than 2% accuracy gap across datasets. Our results also outperform all previous MIL-based methods. Additional benchmark results on standard MIL datasets further demonstrate the superior performance of our MIL aggregator on general MIL problems.
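The abstract casts WSI classification as multiple instance learning with only slide-level labels: a slide is a bag, its patches are instances. A minimal sketch of the classic max-pooling MIL baseline that aggregators like the paper's dual-stream model improve on (this is explicitly not the paper's aggregator, and the 0.5 threshold is illustrative):

```python
def mil_bag_score(instance_scores):
    """Max-pooling MIL: the bag (slide) score is driven by its single
    most suspicious instance (patch)."""
    return max(instance_scores)

def mil_bag_label(instance_scores, threshold=0.5):
    # Standard MIL assumption: a slide is positive iff at least one
    # of its patches is positive.
    return mil_bag_score(instance_scores) >= threshold
```

Max-pooling discards all but one instance per bag; trainable aggregators such as the paper's dual-stream architecture instead model relations between instances, which is what the reported accuracy gains come from.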
Affiliation(s)
- Bin Li
- Department of Biomedical Engineering, University of Wisconsin-Madison; Morgridge Institute for Research, Madison, WI, USA
- Yin Li
- Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison; Department of Computer Sciences, University of Wisconsin-Madison
- Kevin W Eliceiri
- Department of Biomedical Engineering, University of Wisconsin-Madison; Morgridge Institute for Research, Madison, WI, USA; Department of Medical Physics, University of Wisconsin-Madison
78
Shaban M, Raza SEA, Hassan M, Jamshed A, Mushtaq S, Loya A, Batis N, Brooks J, Nankivell P, Sharma N, Robinson M, Mehanna H, Khurram SA, Rajpoot N. A digital score of tumour-associated stroma infiltrating lymphocytes predicts survival in head and neck squamous cell carcinoma. J Pathol 2021; 256:174-185. [PMID: 34698394 DOI: 10.1002/path.5819]
Abstract
The infiltration of T-lymphocytes in the stroma and tumour is an indication of an effective immune response against the tumour, resulting in better survival. In this study, our aim was to explore the prognostic significance of tumour-associated stroma infiltrating lymphocytes (TASILs) in head and neck squamous cell carcinoma (HNSCC) through an AI-based automated method. A deep learning-based automated method was employed to segment tumour, tumour-associated stroma, and lymphocytes in digitally scanned whole slide images of HNSCC tissue slides. The spatial patterns of lymphocytes and tumour-associated stroma were digitally quantified to compute the tumour-associated stroma infiltrating lymphocytes score (TASIL-score). Finally, the prognostic significance of the TASIL-score for disease-specific and disease-free survival was investigated using Cox proportional hazards analysis. Three different cohorts of haematoxylin and eosin (H&E)-stained tissue slides of HNSCC cases (n = 537 in total) were studied, including publicly available TCGA head and neck cancer cases. The TASIL-score carries prognostic significance (p = 0.002) for disease-specific survival of HNSCC patients. The TASIL-score also shows a better separation between low- and high-risk patients compared with manual tumour-infiltrating lymphocyte (TIL) scoring by pathologists, for both disease-specific and disease-free survival. A positive correlation of the TASIL-score with molecular estimates of CD8+ T cells was also found, in line with existing findings. To the best of our knowledge, this is the first study to automate the quantification of TASILs from routine H&E slides of head and neck cancer. Our TASIL-score-based findings are aligned with clinical knowledge, with the added advantages of objectivity, reproducibility, and strong prognostic value. Although we validated our method on three different cohorts (n = 537 cases in total), a comprehensive evaluation on large multicentric cohorts is required before the proposed digital score can be adopted in clinical practice. © 2021 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Muhammad Shaban
- Department of Computer Science, University of Warwick, Coventry, UK
- Mariam Hassan
- Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
- Arif Jamshed
- Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
- Sajid Mushtaq
- Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
- Asif Loya
- Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
- Nikolaos Batis
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Jill Brooks
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Paul Nankivell
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Neil Sharma
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Max Robinson
- School of Dental Sciences, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, UK
- Hisham Mehanna
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Syed Ali Khurram
- School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, UK; The Alan Turing Institute, London, UK; Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Coventry, UK
79
DiPalma J, Suriawinata AA, Tafe LJ, Torresani L, Hassanpour S. Resolution-based distillation for efficient histology image classification. Artif Intell Med 2021; 119:102136. [PMID: 34531005 PMCID: PMC8449014 DOI: 10.1016/j.artmed.2021.102136]
Abstract
Developing deep learning models to analyze histology images has been computationally challenging, as the massive size of the images causes excessive strain on all parts of the computing pipeline. This paper proposes a novel deep learning-based methodology for improving the computational efficiency of histology image classification. The proposed approach is robust when used with images that have reduced input resolution, and it can be trained effectively with limited labeled data. Moreover, our approach operates at either the tissue or slide level, removing the need for laborious patch-level labeling. Our method uses knowledge distillation to transfer knowledge from a teacher model pre-trained at high resolution to a student model trained on the same images at a considerably lower resolution. Also, to address the lack of large-scale labeled histology image datasets, we perform the knowledge distillation in a self-supervised fashion. We evaluate our approach on three distinct histology image datasets associated with celiac disease, lung adenocarcinoma, and renal cell carcinoma. Our results on these datasets demonstrate that a combination of knowledge distillation and self-supervision allows the student model to approach and, in some cases, surpass the teacher model's classification accuracy while being much more computationally efficient. Additionally, we observe an increase in student classification performance as the size of the unlabeled dataset increases, indicating that there is potential for this method to scale further with additional unlabeled data. Our model outperforms the high-resolution teacher model for celiac disease in accuracy, F1-score, precision, and recall while requiring 4 times fewer computations. For lung adenocarcinoma, our results at 1.25× magnification are within 1.5% of the results for the teacher model at 10× magnification, with a reduction in computational cost by a factor of 64. Our model on renal cell carcinoma at 1.25× magnification performs within 1% of the teacher model at 5× magnification while requiring 16 times fewer computations. Furthermore, our celiac disease outcomes benefit from additional performance scaling with the use of more unlabeled data. In the case of 0.625× magnification, using unlabeled data improves accuracy by 4% over the tissue-level baseline. Therefore, our approach can improve the feasibility of deep learning solutions for digital pathology on standard computational hardware and infrastructures.
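The resolution-based distillation described in this abstract hinges on matching the student's predictions on down-sampled images against the teacher's softened outputs on the full-resolution images. A minimal sketch of such a softened-KL distillation objective is shown below; the function names, the temperature `T`, and the epsilon constant are illustrative conventions from the general knowledge-distillation literature, not details taken from the paper itself:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / T for z in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) between temperature-softened distributions,
    scaled by T^2 as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, T)        # soft targets from the high-resolution teacher
    q = softmax(student_logits, T)        # predictions from the low-resolution student
    kl = sum(pi * (math.log(pi + 1e-12) - math.log(qi + 1e-12))
             for pi, qi in zip(p, q))
    return kl * T * T
```

In practice this term would be computed per image pair (high-resolution input to the teacher, the same image down-sampled for the student) and minimized over the unlabeled dataset, which is what makes the distillation self-supervised: no class labels are needed to form the loss.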
Affiliation(s)
- Joseph DiPalma, Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
- Arief A Suriawinata, Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03756, USA
- Laura J Tafe, Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03756, USA
- Lorenzo Torresani, Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
- Saeed Hassanpour, Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA; Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA; Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA.
|
80
|
Cherian Kurian N, Sethi A, Reddy Konduru A, Mahajan A, Rane SU. A 2021 update on cancer image analytics with deep learning. WIREs Data Mining and Knowledge Discovery 2021; 11. [DOI: 10.1002/widm.1410] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Accepted: 03/09/2021] [Indexed: 02/05/2023]
Abstract
Deep learning (DL)-based interpretation of medical images has reached a critical juncture: it is expanding from research projects into translational ones and is ready to make its way to the clinics. Advances over the last decade in data availability, DL techniques, and computing capabilities have accelerated this journey. Along the way, we have gained a better understanding of the challenges and pitfalls of wider adoption of DL in clinical care, which, in our view, should and will drive advances in this field in the next few years. The most important of these challenges are the lack of an appropriately digitized environment within healthcare institutions, the lack of adequate open and representative datasets on which DL algorithms can be trained and tested, and the lack of robustness of widely used DL training algorithms to certain pervasive pathological characteristics of medical images and repositories. In this review, we provide an overview of the role of imaging in oncology, the different techniques that are shaping the way DL algorithms are being made ready for clinical use, and the problems that DL techniques still need to address before DL can find a home in clinics. Finally, we also summarize how DL can potentially drive the adoption of digital pathology, vendor-neutral archives, and picture archival and communication systems. We caution that researchers may find the coverage of their own fields to be high-level; this is by design, as this review is meant to introduce readers looking in from outside deep learning or medical research to the main concerns and limitations of these two fields, rather than to tell experts something new about their own.
This article is categorized under:
Technologies > Artificial Intelligence
Algorithmic Development > Biological Data Mining
Affiliation(s)
- Nikhil Cherian Kurian, Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Amit Sethi, Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Anil Reddy Konduru, Department of Pathology, Tata Memorial Center-ACTREC, HBNI, Navi Mumbai, India
- Abhishek Mahajan, Department of Radiology, Tata Memorial Hospital, HBNI, Mumbai, India
- Swapnil Ulhas Rane, Department of Pathology, Tata Memorial Center-ACTREC, HBNI, Navi Mumbai, India
|