1. Lahr I, Alfasly S, Nejat P, Khan J, Kottom L, Kumbhar V, Alsaafin A, Shafique A, Hemati S, Alabtah G, Comfere N, Murphree D, Mangold A, Yasir S, Meroueh C, Boardman L, Shah VH, Garcia JJ, Tizhoosh HR. Analysis and Validation of Image Search Engines in Histopathology. IEEE Rev Biomed Eng 2025; 18:350-367. [PMID: 38995713] [DOI: 10.1109/rbme.2024.3425769]
Abstract
Searching for similar images in archives of histology and histopathology images is a crucial task that may aid in patient tissue comparison for various purposes, ranging from triaging and diagnosis to prognosis and prediction. Whole slide images (WSIs) are highly detailed digital representations of tissue specimens mounted on glass slides. Matching WSI to WSI can serve as the critical method for patient tissue comparison. In this paper, we report extensive analysis and validation of four search methods, namely bag of visual words (BoVW), Yottixel, SISH, and RetCCL, along with some of their potential variants. We analyze their algorithms and structures and assess their performance. For this evaluation, we utilized four internal datasets (1269 patients) and three public datasets (1207 patients), totaling more than 200,000 patches from 38 different classes/subtypes across five primary sites. Certain search engines, for example BoVW, exhibit notable efficiency and speed but suffer from low accuracy. Others, such as Yottixel, demonstrate efficiency and speed while providing moderately accurate results. Recent proposals, including SISH, display inefficiency and yield inconsistent outcomes, while alternatives like RetCCL prove inadequate in both accuracy and efficiency. Further research is imperative to address the dual requirements of accuracy and minimal storage in histopathological image search.
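As a rough, generic illustration of the simplest family of engines compared above, the sketch below indexes patches by bag-of-visual-words histograms and retrieves the most similar ones; the descriptors are random stand-ins and the code is not any of the benchmarked implementations.

```python
# Toy bag-of-visual-words (BoVW) indexing and retrieval; descriptors are random stand-ins.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Assume each patch is already described by a set of 128-D local descriptors.
patch_descriptors = [rng.normal(size=(int(rng.integers(50, 120)), 128)) for _ in range(200)]

# 1) Build a visual vocabulary by clustering all local descriptors.
vocab_size = 64
kmeans = MiniBatchKMeans(n_clusters=vocab_size, n_init=3, random_state=0)
kmeans.fit(np.vstack(patch_descriptors))

def bovw_histogram(descriptors: np.ndarray) -> np.ndarray:
    """Quantize descriptors against the vocabulary; return an L1-normalized word histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / max(hist.sum(), 1.0)

# 2) Index every patch by its histogram and search by cosine distance.
index = np.stack([bovw_histogram(d) for d in patch_descriptors])
nn = NearestNeighbors(n_neighbors=5, metric="cosine").fit(index)

# 3) Query with a new patch: its nearest histograms are the retrieved patches.
query = bovw_histogram(rng.normal(size=(80, 128)))
distances, retrieved_ids = nn.kneighbors(query[None, :])
print(retrieved_ids)
```

The deep-learning-based engines replace the hand-crafted vocabulary with learned embeddings and more elaborate indexing.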
2. Tizhoosh H, Pantanowitz L. On image search in histopathology. J Pathol Inform 2024; 15:100375. [PMID: 38645985] [PMCID: PMC11033156] [DOI: 10.1016/j.jpi.2024.100375]
Abstract
Histopathology images can be acquired from camera-mounted microscopes or whole-slide scanners. Utilizing similarity calculations to match patients based on these images holds significant potential in research and clinical contexts. Recent advancements in search technologies allow for implicit quantification of tissue morphology across diverse primary sites, facilitating comparisons and enabling inferences about diagnosis, and potentially prognosis and prediction, for new patients when they are compared against a curated database of diagnosed and treated cases. In this article, we comprehensively review the latest developments in image search technologies for histopathology, offering a concise overview tailored for computational pathology researchers seeking effective, fast, and efficient image search methods in their work.
Affiliation(s)
- H.R. Tizhoosh
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA
- Liron Pantanowitz
- Department of Pathology, School of Medicine, University of Pittsburgh, PA, USA
3. Wang G, Duan Q, Shen T, Zhang S. SenseCare: a research platform for medical image informatics and interactive 3D visualization. Front Radiol 2024; 4:1460889. [PMID: 39639965] [PMCID: PMC11617158] [DOI: 10.3389/fradi.2024.1460889]
Abstract
Introduction: Clinical research on smart health has an increasing demand for intelligent, clinic-oriented medical image computing algorithms and platforms that support various applications. However, existing research platforms for medical image informatics have limited support for Artificial Intelligence (AI) algorithms and clinical applications. Methods: To this end, we have developed the SenseCare research platform, which is designed to facilitate translational research on intelligent diagnosis and treatment planning in various clinical scenarios. It has several appealing functions and features, such as advanced 3D visualization, concurrent and efficient web-based access, fast data synchronization with high data security, multi-center deployment, and support for collaborative research. Results and discussion: SenseCare provides a range of AI toolkits for different tasks, including image segmentation, registration, and lesion and landmark detection, across image modalities ranging from radiology to pathology. It also facilitates data annotation and model training, which makes it easier for clinical researchers to develop and deploy customized AI models. In addition, it is clinic-oriented and supports various clinical applications such as diagnosis and surgical planning for lung cancer, liver tumors, and coronary artery disease. By simplifying AI-based medical image analysis, SenseCare has the potential to promote clinical research across a wide range of disease diagnosis and treatment applications.
Affiliation(s)
- Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- SenseTime Research, Shanghai, China
- Qi Duan
- SenseTime Research, Shanghai, China
- Shaoting Zhang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- SenseTime Research, Shanghai, China

4. Rangaiah PKB, Pradeep Kumar BP, Augustine R. Histopathology-driven prostate cancer identification: A VBIR approach with CLAHE and GLCM insights. Comput Biol Med 2024; 182:109213. [PMID: 39357133] [DOI: 10.1016/j.compbiomed.2024.109213]
Abstract
Efficient extraction and analysis of histopathological images are crucial for accurate medical diagnoses, particularly for prostate cancer. This research enhances histopathological image reclamation by integrating Visual-Based Image Reclamation (VBIR) techniques with Contrast-Limited Adaptive Histogram Equalization (CLAHE) and the Gray-Level Co-occurrence Matrix (GLCM) algorithm. The proposed method leverages CLAHE to improve image contrast and visibility, which is crucial for regions with varying illumination, and employs a non-linear Support Vector Machine (SVM) to incorporate GLCM features. Our approach achieved a notable success rate of 89.6%, demonstrating significant improvement in image analysis. The average execution time for matched tissues was 41.23 s (standard deviation 36.87 s), and for unmatched tissues, 21.22 s (standard deviation 29.18 s). These results underscore the method's efficiency and reliability in processing histopathological images. The findings from this study highlight the potential of our method to enhance image reclamation processes, paving the way for further research and advancements in medical image analysis. The superior performance of our approach signifies its capability to significantly improve histopathological image analysis, contributing to more accurate and efficient diagnostic practices.
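A minimal sketch of the kind of pipeline the abstract describes (CLAHE enhancement, GLCM texture features, a non-linear SVM), written with scikit-image and scikit-learn; the parameter values, toy patches and labels are illustrative assumptions rather than the authors' configuration.

```python
# CLAHE + GLCM texture features + RBF-SVM, on random toy patches (illustrative only).
import numpy as np
from skimage import exposure
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_patch: np.ndarray) -> np.ndarray:
    """CLAHE-enhance an 8-bit grayscale patch and return a small GLCM texture vector."""
    enhanced = exposure.equalize_adapthist(gray_patch, clip_limit=0.02)   # CLAHE, output in [0, 1]
    img8 = (enhanced * 255).astype(np.uint8)
    glcm = graycomatrix(img8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Toy data: random "patches" standing in for prostate histopathology tiles.
rng = np.random.default_rng(1)
patches = rng.integers(0, 256, size=(40, 64, 64)).astype(np.uint8)
labels = rng.integers(0, 2, size=40)          # 0 = benign, 1 = malignant (illustrative)

X = np.stack([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)  # non-linear SVM on GLCM features
print(clf.predict(X[:5]))
```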
Affiliation(s)
- Pramod K B Rangaiah
- Microwaves in Medical Engineering Group, Division of Solid State Electronics, Department of Electrical Engineering, Uppsala University, Box 65 SE-751 03, Uppsala, Sweden
- B P Pradeep Kumar
- Department of Computer Science and Design, Atria Institute of Technology, Bengaluru 560024, India
- Robin Augustine
- Microwaves in Medical Engineering Group, Division of Solid State Electronics, Department of Electrical Engineering, Uppsala University, Box 65 SE-751 03, Uppsala, Sweden

5. Liu Z, Cai Y, Tang Q. Nuclei detection in breast histopathology images with iterative correction. Med Biol Eng Comput 2024; 62:465-478. [PMID: 37914958] [DOI: 10.1007/s11517-023-02947-3]
Abstract
This work presents a deep network architecture to improve nuclei detection performance and achieve high localization accuracy of nuclei in breast cancer histopathology images. The proposed model consists of two parts: a nuclear candidate generation module and a nuclear localization refinement module. We first design a novel patch learning method to obtain high-quality nuclear candidates, where, in addition to categories, location representations are added to the patch information to implement multi-task learning of nuclear classification and localization; meanwhile, a deep supervision mechanism is introduced to obtain coherent contributions from each scale layer. To refine nuclear localization, we propose an iterative correction strategy that makes the prediction progressively closer to the ground truth, which significantly improves the accuracy of nuclear localization and facilitates neighbor size selection in the non-maximum suppression step. Experimental results demonstrate the superior performance of our method for nuclei detection on an H&E-stained histopathological image dataset compared with previous state-of-the-art methods; in particular, it achieves better results than existing techniques in detecting multiple cluttered nuclei.
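The non-maximum suppression step mentioned above can be illustrated in a few lines of Python; the greedy point-wise NMS below is a generic version with an assumed suppression radius, not the paper's exact procedure.

```python
# Greedy non-maximum suppression for predicted nucleus centers (illustrative).
import numpy as np

def nms_points(centers: np.ndarray, scores: np.ndarray, radius: float) -> list[int]:
    """Keep the highest-scoring center, drop all neighbours within `radius`, repeat."""
    order = np.argsort(-scores)
    kept, suppressed = [], np.zeros(len(scores), dtype=bool)
    for i in order:
        if suppressed[i]:
            continue
        kept.append(int(i))
        d = np.linalg.norm(centers - centers[i], axis=1)
        suppressed |= d < radius
    return kept

centers = np.array([[10.0, 10.0], [11.0, 10.5], [40.0, 42.0], [41.0, 41.0]])
scores = np.array([0.9, 0.7, 0.8, 0.95])
print(nms_points(centers, scores, radius=5.0))   # indices of the retained nuclei
```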
Affiliation(s)
- Ziyi Liu
- School of Biomedical Engineering, South Central Minzu University, Wuhan, 430074, People's Republic of China
- Affiliated Yantai Yuhuangding Hospital of Qingdao University, Yantai, 264001, People's Republic of China
- Yu Cai
- School of Biomedical Engineering, South Central Minzu University, Wuhan, 430074, People's Republic of China
- Qiling Tang
- School of Biomedical Engineering, South Central Minzu University, Wuhan, 430074, People's Republic of China

6. Tommasino C, Merolla F, Russo C, Staibano S, Rinaldi AM. Histopathological Image Deep Feature Representation for CBIR in Smart PACS. J Digit Imaging 2023; 36:2194-2209. [PMID: 37296349] [PMCID: PMC10501985] [DOI: 10.1007/s10278-023-00832-x]
Abstract
Pathological anatomy is moving toward computerized processes, mainly owing to the extensive digitization of histology slides, which has resulted in the availability of many Whole Slide Images (WSIs). Their use is essential, especially in cancer diagnosis and research, and raises a pressing need for increasingly capable information archiving and retrieval systems. Picture Archiving and Communication Systems (PACSs) represent a practical means of archiving and organizing this growing amount of data, and designing and implementing robust and accurate methodologies for querying them in the pathology domain is therefore essential. In particular, the Content-Based Image Retrieval (CBIR) methodology can be integrated into PACSs through a query-by-example task. In this context, one of the crucial points of CBIR concerns the representation of images as feature vectors, and the accuracy of retrieval mainly depends on feature extraction. Thus, our study explored different representations of WSI patches using features extracted from pre-trained Convolutional Neural Networks (CNNs). To perform a helpful comparison, we evaluated features extracted from different layers of state-of-the-art CNNs using different dimensionality reduction techniques. Furthermore, we provide a qualitative analysis of the obtained results. The evaluation showed encouraging results for our proposed framework.
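A compact sketch of the query-by-example idea explored above: embed patches with a pre-trained CNN, reduce the features, and rank by cosine similarity. The backbone, layer and dimensions are assumptions for illustration, not the configurations compared in the study.

```python
# Pre-trained CNN features + PCA + cosine-similarity retrieval (illustrative sketch).
import numpy as np
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

weights = models.ResNet18_Weights.IMAGENET1K_V1
backbone = models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()          # keep the 512-D penultimate features
backbone.eval()
preprocess = weights.transforms()          # would be applied to real RGB patches; toy tensors skip it

@torch.no_grad()
def embed(batch_of_images: torch.Tensor) -> np.ndarray:
    """batch_of_images: (N, 3, H, W) tensors already preprocessed."""
    return backbone(batch_of_images).numpy()

patches = torch.randn(32, 3, 224, 224)     # stand-ins for WSI patches
feats = embed(patches)

pca = PCA(n_components=16).fit(feats)      # dimensionality reduction step
db = pca.transform(feats)
db /= np.linalg.norm(db, axis=1, keepdims=True)

query = db[0]
scores = db @ query                        # cosine similarity (vectors are unit-norm)
print(np.argsort(-scores)[:5])             # indices of the five most similar patches
```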
Affiliation(s)
- Cristian Tommasino
- Department of Electrical Engineering and Information Technology, University of Napoli Federico II, Via Claudio 21, Naples, 80125 Italy
- Francesco Merolla
- Department of Advanced Biomedical Sciences, Pathology Section, University of Naples Federico II, Naples, 80131 Italy
- Cristiano Russo
- Department of Electrical Engineering and Information Technology, University of Napoli Federico II, Via Claudio 21, Naples, 80125 Italy
- Stefania Staibano
- Department of Medicine and Health Sciences V. Tiberio, University of Molise, Campobasso, 86100 Italy
- Antonio Maria Rinaldi
- Department of Electrical Engineering and Information Technology, University of Napoli Federico II, Via Claudio 21, Naples, 80125 Italy

7. Tabatabaei Z, Wang Y, Colomer A, Oliver Moll J, Zhao Z, Naranjo V. WWFedCBMIR: World-Wide Federated Content-Based Medical Image Retrieval. Bioengineering (Basel) 2023; 10:1144. [PMID: 37892874] [PMCID: PMC10604333] [DOI: 10.3390/bioengineering10101144]
Abstract
The paper proposes a federated content-based medical image retrieval (FedCBMIR) tool that utilizes federated learning (FL) to address the challenges of acquiring a diverse medical data set for training CBMIR models. CBMIR is a tool to find the most similar cases in a data set to assist pathologists. Training such a tool requires a pool of whole-slide images (WSIs) to train the feature extractor (FE) to extract an optimal embedding vector. The strict regulations surrounding data sharing in hospitals make it difficult to collect a rich data set. FedCBMIR distributes an unsupervised FE to collaborating centers for training without sharing the data set, resulting in shorter training times and higher performance. FedCBMIR was evaluated in two experiments: one with two clients holding two different breast cancer data sets, BreaKHis and Camelyon17 (CAM17), and one with four clients holding the BreaKHis data set at four different magnifications. FedCBMIR increases the F1 score (F1S) of each client from 96% to 98.1% in CAM17 and from 95% to 98.4% in BreaKHis, with 11.44 fewer hours of training time. FedCBMIR provides 98%, 96%, 94%, and 97% F1S in the BreaKHis experiment with a generalized model and accomplishes this in 25.53 fewer hours of training.
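The federated ingredient can be sketched in a few lines: each center trains on its private data and only the model weights are averaged. The toy model and mock loaders below are placeholders, not the FedCBMIR feature extractor.

```python
# Minimal FedAvg-style round: local training per center, then weight averaging (illustrative).
import copy
import torch
import torch.nn as nn

def local_update(global_model: nn.Module, loader, epochs: int = 1) -> dict:
    """Train a copy of the global model on one center's private data; return its weights."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_average(states: list[dict]) -> dict:
    """Element-wise average of client state_dicts (equal client weighting assumed)."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(8, 1)
client_loaders = [[(torch.randn(4, 8), torch.randn(4, 1))] for _ in range(2)]  # two mock centers
for _ in range(3):  # communication rounds
    states = [local_update(global_model, loader) for loader in client_loaders]
    global_model.load_state_dict(federated_average(states))
```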
Affiliation(s)
- Zahra Tabatabaei
- Department of Artificial Intelligence, Tyris Tech S.L., 46021 Valencia, Spain
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-Tech, Universitat Politècnica de València, 46021 Valencia, Spain
- Yuandou Wang
- Multiscale Networked Systems, Universiteit van Amsterdam, 1098XH Amsterdam, The Netherlands
- Adrián Colomer
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-Tech, Universitat Politècnica de València, 46021 Valencia, Spain
- ValgrAI—Valencian Graduate School and Research Network for Artificial Intelligence, 46022 Valencia, Spain
- Javier Oliver Moll
- Department of Artificial Intelligence, Tyris Tech S.L., 46021 Valencia, Spain
- Zhiming Zhao
- Multiscale Networked Systems, Universiteit van Amsterdam, 1098XH Amsterdam, The Netherlands
- Valery Naranjo
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-Tech, Universitat Politècnica de València, 46021 Valencia, Spain

8. Meng X, Zou T. Clinical applications of graph neural networks in computational histopathology: A review. Comput Biol Med 2023; 164:107201. [PMID: 37517325] [DOI: 10.1016/j.compbiomed.2023.107201]
Abstract
Pathological examination is the optimal approach for diagnosing cancer, and with the advancement of digital imaging technologies, it has spurred the emergence of computational histopathology. The objective of computational histopathology is to assist in clinical tasks through image processing and analysis techniques. In the early stages, the technique involved analyzing histopathology images by extracting mathematical features, but the performance of these models was unsatisfactory. With the development of artificial intelligence (AI) technologies, traditional machine learning methods were applied in this field. Although the performance of the models improved, there were issues such as poor model generalization and tedious manual feature extraction. Subsequently, the introduction of deep learning techniques effectively addressed these problems. However, models based on traditional convolutional architectures could not adequately capture the contextual information and deep biological features in histopathology images. Due to the special structure of graphs, they are highly suitable for feature extraction in tissue histopathology images and have achieved promising performance in numerous studies. In this article, we review existing graph-based methods in computational histopathology and propose a novel and more comprehensive graph construction approach. Additionally, we categorize the methods and techniques in computational histopathology according to different learning paradigms. We summarize the common clinical applications of graph-based methods in computational histopathology. Furthermore, we discuss the core concepts in this field and highlight the current challenges and future research directions.
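A recurring construction in this literature is the cell graph, where detected nuclei become nodes and spatial proximity defines edges; the sketch below builds a k-nearest-neighbour cell graph with SciPy, using random centroids as stand-ins for detected nuclei.

```python
# Build a k-NN cell graph from nucleus centroids (illustrative; coordinates are random).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 1000, size=(300, 2))      # (x, y) of detected nuclei in a tile

k = 5
tree = cKDTree(centroids)
_, neighbors = tree.query(centroids, k=k + 1)        # first neighbour is the point itself

edges = set()
for i, row in enumerate(neighbors):
    for j in row[1:]:                                 # skip self
        edges.add((min(i, int(j)), max(i, int(j))))   # undirected edge

print(f"{len(centroids)} nodes, {len(edges)} edges")  # adjacency ready for a GNN library
```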
Affiliation(s)
- Xiangyan Meng
- Xi'an Technological University, Xi'an, Shaanxi, 710021, China
- Tonghui Zou
- Xi'an Technological University, Xi'an, Shaanxi, 710021, China

9. Li S, Zhao Y, Zhang J, Yu T, Zhang J, Gao Y. High-Order Correlation-Guided Slide-Level Histology Retrieval With Self-Supervised Hashing. IEEE Trans Pattern Anal Mach Intell 2023; 45:11008-11023. [PMID: 37097802] [DOI: 10.1109/tpami.2023.3269810]
Abstract
Histopathological Whole Slide Images (WSIs) play a crucial role in cancer diagnosis. It is of significant importance for pathologists to search for images sharing similar content with the query WSI, especially in case-based diagnosis. While slide-level retrieval could be more intuitive and practical in clinical applications, most methods are designed for patch-level retrieval. The few recent unsupervised slide-level methods focus only on integrating patch features directly, without perceiving slide-level information, which severely limits the performance of WSI retrieval. To tackle this issue, we propose a High-Order Correlation-Guided Self-Supervised Hashing-Encoding Retrieval (HSHR) method. Specifically, we train an attention-based hash encoder with slide-level representation in a self-supervised manner, enabling it to generate more representative slide-level hash codes of cluster centers and to assign weights for each. These optimized and weighted codes are leveraged to establish a similarity-based hypergraph, in which a hypergraph-guided retrieval module is adopted to explore high-order correlations in the multi-pairwise manifold to conduct WSI retrieval. Extensive experiments on multiple TCGA datasets with over 24,000 WSIs spanning 30 cancer subtypes demonstrate that HSHR achieves state-of-the-art performance compared with other unsupervised histology WSI retrieval methods.
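The retrieval step common to hashing-based methods of this kind can be illustrated very simply: compact binary codes compared by Hamming distance. The codes below are random; in HSHR they would come from the learned attention-based hash encoder, and the hypergraph-guided re-ranking is not reproduced here.

```python
# Hamming-distance retrieval over binary hash codes (codes are random stand-ins).
import numpy as np

rng = np.random.default_rng(0)
n_bits = 64
database_codes = rng.integers(0, 2, size=(5000, n_bits), dtype=np.uint8)  # one code per slide
query_code = rng.integers(0, 2, size=n_bits, dtype=np.uint8)

# Hamming distance = number of differing bits.
hamming = np.count_nonzero(database_codes != query_code, axis=1)
top_k = np.argsort(hamming)[:10]
print(top_k, hamming[top_k])   # the ten slides whose codes are closest to the query
```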
10. Rout NK, Ahirwal MK, Atulkar M. Content-Based Medical Image Retrieval System for Skin Melanoma Diagnosis Based on Optimized Pair-Wise Comparison Approach. J Digit Imaging 2023; 36:45-58. [PMID: 36253580] [PMCID: PMC9984623] [DOI: 10.1007/s10278-022-00710-y]
Abstract
Medical image analysis for accurate diagnosis of disease has become a very challenging task. Owing to improper diagnosis, required medical treatment may be missed, and suspected lesions could be overlooked by the physician's eye. This problem can be better addressed by investigating similar case studies present in the healthcare database. In this context, this paper presents an assistive system that helps dermatologists accurately identify 23 different kinds of melanoma. For this, 2300 dermoscopic images were used to train the skin-melanoma similar image search system. The proposed system performs feature extraction by assigning dynamic weights to low-level features based on the individual characteristics of the searched images. Optimal weights are obtained by the newly proposed optimized pair-wise comparison (OPWC) approach. The uniqueness of the proposed approach is that it provides dynamic weights for the features of the searched image instead of applying static weights. The approach is supported by the analytic hierarchy process (AHP) and meta-heuristic optimization algorithms such as particle swarm optimization (PSO), JAYA, the genetic algorithm (GA), and gray wolf optimization (GWO). The proposed approach has been tested with images of 23 classes of melanoma and achieved significant precision and recall. Thus, this approach to skin melanoma image search can be used as an expert assistive system to help dermatologists and physicians accurately identify different types of melanoma.
Affiliation(s)
- Mitul Kumar Ahirwal
- Department of Computer Science and Engineering, MANIT, Bhopal, M.P. 462003 India

11. Jiang Y, Sui X, Ding Y, Xiao W, Zheng Y, Zhang Y. A semi-supervised learning approach with consistency regularization for tumor histopathological images analysis. Front Oncol 2023; 12:1044026. [PMID: 36698401] [PMCID: PMC9870542] [DOI: 10.3389/fonc.2022.1044026]
Abstract
Introduction: Manual inspection of histopathological images is important in clinical cancer diagnosis. Pathologists perform pathological diagnosis and prognostic evaluation through microscopic examination of histopathological slides. This process is time-consuming, laborious, and challenging. Analyzing whole-slide images, which digitize histopathology slides, with computer-aided diagnosis is therefore an important problem. Methods: To address the difficulty of labeling histopathological data and to improve the flexibility of histopathological analysis in clinical applications, we propose a semi-supervised learning algorithm coupled with a consistency regularization strategy, called the "Semi-supervised Histopathology Analysis Network" (Semi-His-Net), for automated normal-versus-tumor and subtype classification. Specifically, when given perturbed versions of the same image, the model should predict similar outputs. Based on this, the model can assign artificial labels to unlabeled data for subsequent training, thereby effectively reducing the amount of labeled data required. Results: Semi-His-Net is able to classify patches from breast cancer histopathological images into normal tissue and three different tumor subtypes, achieving an accuracy of 90%. The average AUC of cross-classification between tumors reached 0.893. Discussion: To overcome the limitations of visual inspection of histopathology images by pathologists, such as long turnaround time and low repeatability, we have developed a deep learning-based framework (Semi-His-Net) for automatic classification of the subtypes contained in whole pathological images. This framework has great potential to improve the efficiency and repeatability of histopathological image diagnosis.
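The consistency-regularization idea can be sketched as follows: two perturbed views of the same unlabeled patch should produce similar predictions, and the disagreement is added to the supervised loss. The tiny classifier and noise-based perturbation below are placeholders for Semi-His-Net's actual components.

```python
# Supervised loss + consistency loss between two perturbed views of unlabeled data (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 4))  # 4 tissue classes

def consistency_loss(unlabeled_feats: torch.Tensor) -> torch.Tensor:
    """Mean-squared distance between softmax outputs of two perturbed views."""
    view_a = unlabeled_feats + 0.1 * torch.randn_like(unlabeled_feats)
    view_b = unlabeled_feats + 0.1 * torch.randn_like(unlabeled_feats)
    return F.mse_loss(F.softmax(classifier(view_a), dim=1),
                      F.softmax(classifier(view_b), dim=1))

labeled_x, labeled_y = torch.randn(8, 128), torch.randint(0, 4, (8,))
unlabeled_x = torch.randn(32, 128)

supervised = F.cross_entropy(classifier(labeled_x), labeled_y)
total = supervised + 1.0 * consistency_loss(unlabeled_x)   # the weight is a hyperparameter
total.backward()
```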
Affiliation(s)
- Yanyun Jiang
- School of Mathematics and Statistics, Shandong Normal University, Jinan, China
- Xiaodan Sui
- School of Mathematics and Statistics, Shandong Normal University, Jinan, China
- Yanhui Ding
- School of Mathematics and Statistics, Shandong Normal University, Jinan, China
- Wei Xiao
- Shandong Provincial Hospital, Shandong University, Jinan, China
- Yuanjie Zheng
- School of Mathematics and Statistics, Shandong Normal University, Jinan, China
- Yongxin Zhang
- School of Mathematics and Statistics, Shandong Normal University, Jinan, China

12. Wang X, Du Y, Yang S, Zhang J, Wang M, Zhang J, Yang W, Huang J, Han X. RetCCL: Clustering-guided contrastive learning for whole-slide image retrieval. Med Image Anal 2023; 83:102645. [PMID: 36270093] [DOI: 10.1016/j.media.2022.102645]
Abstract
Benefiting from the large-scale archiving of digitized whole-slide images (WSIs), computer-aided diagnosis has been well developed to assist pathologists in decision-making. Content-based WSI retrieval can be a new approach to find highly correlated WSIs in a historically diagnosed WSI archive, which has potential uses in assisted clinical diagnosis, medical research, and trainee education. During WSI retrieval, it is particularly challenging to encode the semantic content of histopathological images and to measure the similarity between images for interpretable results due to the gigapixel size of WSIs. In this work, we propose a Retrieval with Clustering-guided Contrastive Learning (RetCCL) framework for robust and accurate WSI-level image retrieval, which integrates a novel self-supervised feature learning method and a global ranking and aggregation algorithm for much improved performance. The proposed feature learning method makes use of existing large-scale unlabeled histopathological image data, which helps learn universal features that could be used directly for subsequent WSI retrieval tasks without extra fine-tuning. The proposed WSI retrieval method not only returns a set of WSIs similar to a query WSI, but also highlights patches or sub-regions of each WSI that share high similarity with patches of the query WSI, which helps pathologists interpret the searching results. Our WSI retrieval framework has been evaluated on the tasks of anatomical site retrieval and cancer subtype retrieval using over 22,000 slides, and the performance exceeds other state-of-the-art methods significantly (around 10% for the anatomic site retrieval in terms of average mMV@10). Besides, the patch retrieval using our learned feature representation offers a performance improvement of 24% on the TissueNet dataset in terms of mMV@5 compared with using ImageNet pre-trained features, which further demonstrates the effectiveness of the proposed CCL feature learning method.
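The generic contrastive ingredient behind such frameworks is an InfoNCE-style loss over two augmented views of the same patch; the sketch below shows that loss alone and deliberately omits RetCCL's clustering guidance and ranking/aggregation stages.

```python
# Plain InfoNCE / NT-Xent loss over paired views (embeddings are random stand-ins).
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    """z1[i] and z2[i] are embeddings of two views of patch i; other rows act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))          # the matching view is the positive
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(64, 128), torch.randn(64, 128)
print(info_nce(z1, z2))
```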
Affiliation(s)
- Xiyue Wang
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China; College of Computer Science, Sichuan University, Chengdu 610065, China
- Yuexi Du
- College of Engineering, University of Michigan, Ann Arbor, MI, 48109, United States
- Sen Yang
- Tencent AI Lab, Shenzhen 518057, China
- Jun Zhang
- Tencent AI Lab, Shenzhen 518057, China
- Minghui Wang
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China; College of Computer Science, Sichuan University, Chengdu 610065, China
- Jing Zhang
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China
- Wei Yang
- Tencent AI Lab, Shenzhen 518057, China
- Xiao Han
- Tencent AI Lab, Shenzhen 518057, China

13. Tharwat M, Sakr NA, El-Sappagh S, Soliman H, Kwak KS, Elmogy M. Colon Cancer Diagnosis Based on Machine Learning and Deep Learning: Modalities and Analysis Techniques. Sensors (Basel) 2022; 22:9250. [PMID: 36501951] [PMCID: PMC9739266] [DOI: 10.3390/s22239250]
Abstract
The treatment and diagnosis of colon cancer are considered to be social and economic challenges due to the high mortality rates. Every year, around the world, almost half a million people contract cancer, including colon cancer. Determining the grade of colon cancer mainly depends on analyzing the gland's structure by tissue region, which has led to the existence of various tests for screening that can be utilized to investigate polyp images and colorectal cancer. This article presents a comprehensive survey on the diagnosis of colon cancer. This covers many aspects related to colon cancer, such as its symptoms and grades as well as the available imaging modalities (particularly, histopathology images used for analysis) in addition to common diagnosis systems. Furthermore, the most widely used datasets and performance evaluation metrics are discussed. We provide a comprehensive review of the current studies on colon cancer, classified into deep-learning (DL) and machine-learning (ML) techniques, and we identify their main strengths and limitations. These techniques provide extensive support for identifying the early stages of cancer that lead to early treatment of the disease and produce a lower mortality rate compared with the rate produced after symptoms develop. In addition, these methods can help to prevent colorectal cancer from progressing through the removal of pre-malignant polyps, which can be achieved using screening tests to make the disease easier to diagnose. Finally, the existing challenges and future research directions that open the way for future work in this field are presented.
Affiliation(s)
- Mai Tharwat
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Nehal A. Sakr
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Shaker El-Sappagh
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13512, Egypt
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Hassan Soliman
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Kyung-Sup Kwak
- Department of Information and Communication Engineering, Inha University, Incheon 22212, Republic of Korea
- Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt

14. Da Q, Huang X, Li Z, Zuo Y, Zhang C, Liu J, Chen W, Li J, Xu D, Hu Z, Yi H, Guo Y, Wang Z, Chen L, Zhang L, He X, Zhang X, Mei K, Zhu C, Lu W, Shen L, Shi J, Li J, S S, Krishnamurthi G, Yang J, Lin T, Song Q, Liu X, Graham S, Bashir RMS, Yang C, Qin S, Tian X, Yin B, Zhao J, Metaxas DN, Li H, Wang C, Zhang S. DigestPath: A benchmark dataset with challenge review for the pathological detection and segmentation of digestive-system. Med Image Anal 2022. [DOI: 10.1016/j.media.2022.102485]
15. Approximate Nearest Neighbor Search Using Enhanced Accumulative Quantization. Electronics 2022. [DOI: 10.3390/electronics11142236]
Abstract
Approximate nearest neighbor (ANN) search is fundamental for fast content-based image retrieval, and vector quantization is one key to performing it effectively. To further improve ANN search accuracy, we propose enhanced accumulative quantization (E-AQ). Building on our former work, we introduce the idea of the quarter point into accumulative quantization (AQ). Instead of using the nearest centroid, each vector is quantized with a quarter vector computed from its nearest and second-nearest centroids. This reduces the error produced by codebook training and vector quantization without increasing the number of centroids in each codebook. To evaluate how accurately vectors are approximated by their quantization outputs, we implemented an E-AQ-based exhaustive method for ANN search. Experimental results show that our approach achieves up to 0.996 and 0.776 Recall@100 with eight codebooks of size 256 on the SIFT and GIST datasets, respectively, which is at least 1.6% and 4.9% higher than six other state-of-the-art methods. Moreover, E-AQ needs fewer codebooks while still providing the same ANN search accuracy.
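Plain accumulative (residual) quantization, the baseline that E-AQ refines, can be sketched as follows: each codebook quantizes the residual left by the previous ones, and a vector is approximated by the sum of its selected codewords. The quarter-point refinement itself is not reproduced, and the sizes are toy values.

```python
# Accumulative (residual) quantization: sequential codebooks over residuals + approximate search.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = rng.normal(size=(2000, 32)).astype(np.float32)

n_books, n_centroids = 4, 16
codebooks, residual = [], data.copy()
for _ in range(n_books):                       # train codebooks sequentially on residuals
    km = KMeans(n_clusters=n_centroids, n_init=3, random_state=0).fit(residual)
    codebooks.append(km.cluster_centers_)
    residual = residual - km.cluster_centers_[km.labels_]

def encode(x: np.ndarray) -> np.ndarray:
    """Greedy encoding: pick the nearest centroid in each codebook for the running residual."""
    codes, r = [], x.copy()
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - r, axis=1)))
        codes.append(idx)
        r = r - cb[idx]
    return np.array(codes)

def decode(codes: np.ndarray) -> np.ndarray:
    return sum(cb[c] for cb, c in zip(codebooks, codes))

codes_db = np.stack([encode(x) for x in data[:500]])           # compressed database
query = rng.normal(size=32).astype(np.float32)
approx = np.stack([decode(c) for c in codes_db])
print(np.argsort(np.linalg.norm(approx - query, axis=1))[:5])  # approximate nearest neighbours
```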
16. Baidoo N, Crawley E, Knowles CH, Sanger GJ, Belai A. Total collagen content and distribution is increased in human colon during advancing age. PLoS One 2022; 17:e0269689. [PMID: 35714071] [PMCID: PMC9205511] [DOI: 10.1371/journal.pone.0269689]
Abstract
Background: The effect of ageing on the total collagen content of the human colon has been poorly investigated. The aim of this study was to determine whether ageing alters total collagen content and distribution in the human colon. Methods: Macroscopically normal ascending colon was obtained at surgery from cancer patients (n = 31) without a diagnosis of diverticular disease or inflammatory bowel disease. Masson's trichrome and Picrosirius red stains were employed to identify the total collagen content and distribution within the sublayers of the colonic wall for adult (22–60 years; 6 males, 6 females) and elderly (70–91 years; 6 males, 4 females) patients. A hydroxyproline assay evaluated the total collagen concentration for adult (30–64 years; 9 males, 6 females) and elderly (66–91 years; 8 males, 8 females) patients. Key results: Histological studies showed that the percentage mean intensity of total collagen staining in the mucosa, submucosa and muscularis externa was, respectively, 14(1.9)%, 74(3.2)% and 12(1.5)% in the adult ascending colon. Compared with the adults, the total collagen fibre content was increased in the submucosa (mean intensity 163.1 ± 11.1 vs. 124.5 ± 7.8; P < 0.05) and muscularis externa (42.5 ± 8.0 vs. 20.6 ± 2.8; P < 0.01) of the elderly patients. There was no change in the collagen content of the mucosa. The total collagen concentration was increased in the elderly by 16%. Sex-related differences were not found, and data were combined for analysis. Conclusions: Greater total collagen content was found in the submucosa and muscularis externa of the elderly human male and female colon. These changes may contribute to a possible loss of function with ageing.
Affiliation(s)
- Nicholas Baidoo
- University of Roehampton, School of Life Sciences, London, United Kingdom
- Ellie Crawley
- Faculty of Medicine and Dentistry, Blizard Institute, Queen Mary University of London, London, United Kingdom
- Charles H. Knowles
- Faculty of Medicine and Dentistry, Blizard Institute, Queen Mary University of London, London, United Kingdom
- Gareth J. Sanger
- Faculty of Medicine and Dentistry, Blizard Institute, Queen Mary University of London, London, United Kingdom
- Abi Belai
- University of Roehampton, School of Life Sciences, London, United Kingdom

17. Liu X, Kang X, Nie X, Guo J, Wang S, Yin Y. Learning Binary Semantic Embedding for Large-Scale Breast Histology Image Analysis. IEEE J Biomed Health Inform 2022; PP:3240-3250. [PMID: 35320109] [DOI: 10.1109/jbhi.2022.3161341]
Abstract
With the progress of clinical imaging innovation and machine learning, computer-assisted diagnosis of breast histology images has attracted broad attention. Nonetheless, its adoption has been hindered by the limited interpretability of conventional classification models. To address this issue, we propose a novel method for Learning Binary Semantic Embedding (LBSE). In this study, bit balance and uncorrelation constraints, double supervision, discrete optimization and asymmetric pairwise similarity are seamlessly integrated for learning a binary semantic-preserving embedding. Moreover, a fusion-based strategy is carefully designed to handle the intractable problem of parameter setting, saving substantial time on parameter tuning. Based on the resulting efficient and effective embedding, classification and retrieval are performed simultaneously to provide interpretable, image-based reasoning and model-assisted conclusions for breast histology images. Extensive experiments are conducted on three benchmark datasets to validate the superiority of LBSE in different settings.
18. Classification of Breast Cancer Images by Implementing Improved DCNN with Artificial Fish School Model. Comput Intell Neurosci 2022; 2022:6785707. [PMID: 35242181] [PMCID: PMC8888076] [DOI: 10.1155/2022/6785707]
Abstract
Breast cancer is an important factor affecting human health. Several diagnostic procedures have evolved for it, such as mammography, fine-needle aspiration, and surgical biopsy, and these techniques use pathological breast cancer images for diagnosis. Breast cancer surgery allows the histologist to examine breast tissue at the microscopic level. A conventional approach uses a radial basis neural network optimized with a cuckoo search algorithm, and existing radial basis neural network techniques handle feature extraction and reduction separately. Here, a convolutional neural network is proposed to carry out the entire feature extraction and classification process, reducing time complexity. In the proposed method, the convolutional neural network is combined with an artificial fish school algorithm. The breast cancer image dataset is taken from a cancer imaging archive. In the preprocessing step, the breast cancer images are filtered with a Wiener filter. The convolutional neural network ingests the image data and is used to extract features. After feature extraction, a reduction step is performed to speed up the processing of training and test data. The artificial fish school optimization algorithm is used to supply suitable training data to the deep convolutional neural network, so that feature extraction, reduction, and classification are carried out within a single deep convolutional neural network. In this process, the optimization technique helps to decrease the error rate and increase efficiency by selecting the number of epochs and training images for the deep CNN. The system predicts normal, benign, and malignant tissues. Compared with the existing RBF network with the cuckoo search algorithm, the presented model is evaluated in terms of sensitivity, accuracy, specificity, F1 score, and recall.
19. Abdou MA. Literature review: efficient deep neural networks techniques for medical image analysis. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-06960-9]

20. Fast and scalable search of whole-slide images via self-supervised deep learning. Nat Biomed Eng 2022; 6:1420-1434. [PMID: 36217022] [PMCID: PMC9792371] [DOI: 10.1038/s41551-022-00929-8]
Abstract
The adoption of digital pathology has enabled the curation of large repositories of gigapixel whole-slide images (WSIs). Computationally identifying WSIs with similar morphologic features within large repositories without requiring supervised training can have significant applications. However, the retrieval speeds of algorithms for searching similar WSIs often scale with the repository size, which limits their clinical and research potential. Here we show that self-supervised deep learning can be leveraged to search for and retrieve WSIs at speeds that are independent of repository size. The algorithm, which we named SISH (for self-supervised image search for histology) and provide as an open-source package, requires only slide-level annotations for training, encodes WSIs into meaningful discrete latent representations and leverages a tree data structure for fast searching followed by an uncertainty-based ranking algorithm for WSI retrieval. We evaluated SISH on multiple tasks (including retrieval tasks based on tissue-patch queries) and on datasets spanning over 22,000 patient cases and 56 disease subtypes. SISH can also be used to aid the diagnosis of rare cancer types for which the number of available WSIs is often insufficient to train supervised deep-learning models.
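A highly simplified sketch of the "discrete code plus ordered index" flavour of such search: every image receives one integer code, codes are kept sorted, and a query probes its insertion point to gather candidates before re-ranking. SISH's VQ-VAE encoding, tree structure and uncertainty-based ranking are not reproduced here; the discretization below is a crude assumption.

```python
# Toy "discrete code + ordered index" search; not the SISH algorithm, just its general flavour.
import bisect
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 16))                   # stand-ins for patch embeddings

def to_code(vec: np.ndarray) -> int:
    """Crude discretization: 2 bits per dimension packed into one integer (an assumption)."""
    q = np.clip(((vec + 3.0) / 6.0 * 4).astype(int), 0, 3)
    code = 0
    for v in q:
        code = (code << 2) | int(v)
    return code

codes = np.array([to_code(f) for f in features])
sorted_ids = np.argsort(codes)
code_list = codes[sorted_ids].tolist()                     # sorted integer index

def search(query_vec: np.ndarray, n_candidates: int = 64, k: int = 5) -> np.ndarray:
    pos = bisect.bisect_left(code_list, to_code(query_vec))
    lo, hi = max(0, pos - n_candidates // 2), pos + n_candidates // 2
    cand = sorted_ids[lo:hi]                               # neighbours in code space
    d = np.linalg.norm(features[cand] - query_vec, axis=1)
    return cand[np.argsort(d)[:k]]                         # re-rank candidates by true distance

print(search(features[42]))
```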
21. Rao PMM, Singh SK, Khamparia A, Bhushan B, Podder P. Multi-Class Breast Cancer Classification Using Ensemble of Pretrained models and Transfer Learning. Curr Med Imaging 2022; 18:409-416. [PMID: 33602102] [DOI: 10.2174/1573405617666210218101418]
Abstract
AIMS: Early detection of breast cancer has reduced many deaths. Earlier, CAD systems used to be the second opinion for radiologists and clinicians. Machine learning and deep learning have brought tremendous changes to medical diagnosis and imaging. BACKGROUND: Breast cancer is the most commonly occurring cancer in women and the second most common cancer overall. According to 2018 statistics, there were over 2 million cases worldwide. Belgium and Luxembourg have the highest rates of cancer. OBJECTIVE: A method for breast cancer detection has been proposed using ensemble learning; 2-class and 8-class classification are performed. METHODS: To deal with imbalanced classification, the authors propose an ensemble of pretrained models. RESULTS: 98.5% training accuracy and 89% test accuracy are achieved on 8-class classification. Moreover, 99.1% and 98% train and test accuracy are achieved on 2-class classification. CONCLUSION: It is found that there are high misclassifications in class DC compared with the other classes, which is due to the imbalance in the dataset. In the future, one can increase the size of the datasets or use different methods. To implement this research work, the authors used two Nvidia Tesla V100 GPUs on the Google Cloud Platform.
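Ensembling pretrained backbones can be sketched by averaging their class probabilities after replacing the ImageNet heads; the two-model choice and the 8-class head below are illustrative assumptions, not the paper's exact setup.

```python
# Ensemble of two pretrained backbones by averaging softmax probabilities (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

num_classes = 8

def make_head(backbone: nn.Module, in_features: int) -> nn.Module:
    backbone.fc = nn.Linear(in_features, num_classes)   # replace the ImageNet head (transfer learning)
    return backbone

m1 = make_head(models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1), 512)
m2 = make_head(models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1), 512)

@torch.no_grad()
def ensemble_predict(x: torch.Tensor) -> torch.Tensor:
    m1.eval()
    m2.eval()
    probs = (F.softmax(m1(x), dim=1) + F.softmax(m2(x), dim=1)) / 2   # average the experts
    return probs.argmax(dim=1)

print(ensemble_predict(torch.randn(2, 3, 224, 224)))
```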
Affiliation(s)
- Sanjay Kumar Singh
- School of Computer Science and Engineering, Lovely Professional University, Punjab, India
- Aditya Khamparia
- School of Computer Science and Engineering, Lovely Professional University, Punjab, India
- Bharat Bhushan
- Department of CSE, School of Engineering and Technology, Sharda University, India
- Prajoy Podder
- Department of ICT, Bangladesh University of Engineering & Technology, Dhaka, Bangladesh

22.

23. Zheng Y, Jiang Z, Shi J, Xie F, Zhang H, Luo W, Hu D, Sun S, Jiang Z, Xue C. Encoding histopathology whole slide images with location-aware graphs for diagnostically relevant regions retrieval. Med Image Anal 2021; 76:102308. [PMID: 34856455] [DOI: 10.1016/j.media.2021.102308]
Abstract
Content-based histopathological image retrieval (CBHIR) has become popular in recent years in histopathological image analysis. CBHIR systems provide auxiliary diagnostic information for pathologists by searching for and returning regions that are similar in content to a region of interest (ROI) from a pre-established database. It is challenging and yet significant in clinical applications to retrieve diagnostically relevant regions from a database consisting of histopathological whole slide images (WSIs). In this paper, we propose a novel framework for region retrieval from a WSI database based on location-aware graphs and deep hashing techniques. Compared with existing CBHIR frameworks, both the structural information and the global location information of ROIs in the WSI are preserved by graph convolution and self-attention operations, which makes the retrieval framework more sensitive to regions that are similar in tissue distribution. Moreover, benefiting from the graph structure, the proposed framework has good scalability for both size and shape variation of ROIs: it allows the pathologist to define query regions using free curves according to the appearance of tissue. In addition, the retrieval is achieved with a hashing technique, which ensures the framework is efficient and adequate for a practical large-scale WSI database. The proposed method was evaluated on an in-house endometrium dataset with 2650 WSIs and the public ACDC-LungHP dataset. The experimental results demonstrate that the proposed method achieved a mean average precision above 0.667 on the endometrium dataset and above 0.869 on the ACDC-LungHP dataset in the task of irregular region retrieval, which is superior to the state-of-the-art methods. The average retrieval time from a database containing 1855 WSIs is 0.752 ms. The source code is available at https://github.com/zhengyushan/lagenet.
Affiliation(s)
- Yushan Zheng
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
- Zhiguo Jiang
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Image Processing Center, School of Astronautics, Beihang University, Beijing 102206, China
- Jun Shi
- School of Software, Hefei University of Technology, Hefei 230601, China
- Fengying Xie
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Image Processing Center, School of Astronautics, Beihang University, Beijing 102206, China
- Haopeng Zhang
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Image Processing Center, School of Astronautics, Beihang University, Beijing 102206, China
- Wei Luo
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Image Processing Center, School of Astronautics, Beihang University, Beijing 102206, China
- Dingyi Hu
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Image Processing Center, School of Astronautics, Beihang University, Beijing 102206, China
- Shujiao Sun
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Image Processing Center, School of Astronautics, Beihang University, Beijing 102206, China
- Zhongmin Jiang
- Department of Pathology, Tianjin Fifth Central Hospital, Tianjin 300450, China
- Chenghai Xue
- Wankangyuan Tianjin Gene Technology, Inc, Tianjin 300220, China; Tianjin Institute of Industrial Biotechnology, Chinese Academy of Sciences, Tianjin 300308, China

24. Chen P, Liang Y, Shi X, Yang L, Gader P. Automatic Whole Slide Pathology Image Diagnosis Framework via Unit Stochastic Selection and Attention Fusion. Neurocomputing 2021; 453:312-325. [PMID: 35082453] [PMCID: PMC8786216] [DOI: 10.1016/j.neucom.2020.04.153]
Abstract
Pathology tissue slides are taken as the gold standard for the diagnosis of most cancer diseases. Automatic pathology slide diagnosis is still a challenging task for researchers because of the high resolution, significant morphological variation, and ambiguity between malignant and benign regions in whole slide images (WSIs). In this study, we introduce a general framework to automatically diagnose different types of WSIs via unit stochastic selection and attention fusion. For example, a unit can denote a patch in a histopathology slide or a cell in a cytopathology slide. To be specific, we first train a unit-level convolutional neural network (CNN) to perform two tasks: constructing feature extractors for the units and estimating a unit's non-benign probability. Then we use our novel stochastic selection algorithm to choose a small subset of units that are most likely to be non-benign, referred to as the Units Of Interest (UOI), as determined by the CNN. Next, we use the attention mechanism to fuse the representations of the UOI to form a fixed-length descriptor for the WSI's diagnosis. We evaluate the proposed framework on three datasets: histological thyroid frozen sections, histological colonoscopy tissue slides, and cytological cervical pap smear slides. The framework achieves diagnosis accuracies higher than 0.8 and AUC values higher than 0.85 in all three applications. Experiments demonstrate the generality and effectiveness of the proposed framework and its potential for clinical applications.
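The attention-fusion step can be sketched with an attention-based MIL pooling layer that turns unit embeddings into one fixed-length slide descriptor; the top-k selection below is a simplification of the paper's stochastic UOI selection, and the sizes are assumptions.

```python
# Attention-weighted pooling of unit embeddings into a slide-level descriptor (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    def __init__(self, dim: int = 256, hidden: int = 128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, units: torch.Tensor) -> torch.Tensor:
        """units: (n_units, dim) -> fixed-length slide descriptor (dim,)."""
        weights = F.softmax(self.score(units), dim=0)   # (n_units, 1), learned attention weights
        return (weights * units).sum(dim=0)

unit_feats = torch.randn(500, 256)                      # CNN embeddings of candidate units
non_benign_prob = torch.rand(500)                       # mock per-unit probabilities
uoi = unit_feats[non_benign_prob.topk(64).indices]      # keep the 64 most suspicious units

slide_descriptor = AttentionPool()(uoi)                 # fed to the slide-level classifier
print(slide_descriptor.shape)
```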
Affiliation(s)
- Pingjun Chen
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida
- Yun Liang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida
- Xiaoshuang Shi
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida
- Lin Yang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida
- Paul Gader
- Computer and Information Science and Engineering, University of Florida

25. Sotomayor CG, Mendoza M, Castañeda V, Farías H, Molina G, Pereira G, Härtel S, Solar M, Araya M. Content-Based Medical Image Retrieval and Intelligent Interactive Visual Browser for Medical Education, Research and Care. Diagnostics (Basel) 2021; 11:1470. [PMID: 34441404] [PMCID: PMC8392084] [DOI: 10.3390/diagnostics11081470]
Abstract
Medical imaging is essential nowadays throughout medical education, research, and care. Accordingly, international efforts have been made to set large-scale image repositories for these purposes. Yet, to date, browsing of large-scale medical image repositories has been troublesome, time-consuming, and generally limited by text search engines. A paradigm shift, by means of a query-by-example search engine, would alleviate these constraints and beneficially impact several practical demands throughout the medical field. The current project aims to address this gap in medical imaging consumption by developing a content-based image retrieval (CBIR) system, which combines two image processing architectures based on deep learning. Furthermore, a first-of-its-kind intelligent visual browser was designed that interactively displays a set of imaging examinations with similar visual content on a similarity map, making it possible to search for and efficiently navigate through a large-scale medical imaging repository, even if it has been set with incomplete and curated metadata. Users may, likewise, provide text keywords, in which case the system performs a content- and metadata-based search. The system was fashioned with an anonymizer service and designed to be fully interoperable according to international standards, to stimulate its integration within electronic healthcare systems and its adoption for medical education, research and care. Professionals of the healthcare sector, by means of a self-administered questionnaire, underscored that this CBIR system and intelligent interactive visual browser would be highly useful for these purposes. Further studies are warranted to complete a comprehensive assessment of the performance of the system through case description and protocolized evaluations by medical imaging specialists.
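The "similarity map" behind such a visual browser can be sketched by projecting deep embeddings of the examinations to two dimensions so that visually similar studies land near each other; the embeddings below are random stand-ins and t-SNE is only one possible choice of projection.

```python
# 2-D "similarity map" of examination embeddings via t-SNE (illustrative stand-in data).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 256))        # one feature vector per imaging examination

coords = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(embeddings)
print(coords.shape)                              # (500, 2) positions for the interactive map
# Each point can then be rendered as a thumbnail; clicking it triggers a query-by-example search.
```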
Affiliation(s)
- Camilo G. Sotomayor
- Radiology Department, Clinical Hospital University of Chile, University of Chile, Santiago 8380453, Chile
- Center for Medical Informatics and Telemedicine, Institute of Biomedical Sciences, Faculty of Medicine, University of Chile, Santiago 8380453, Chile
- Department of Electronic Engineering, Federico Santa Maria Technical University, Valparaíso 2340000, Chile
- Marcelo Mendoza
- Department of Informatics, Federico Santa Maria Technical University, Santiago 8380453, Chile
- Víctor Castañeda
- Center for Medical Informatics and Telemedicine, Institute of Biomedical Sciences, Faculty of Medicine, University of Chile, Santiago 8380453, Chile
- Department of Medical Technology, Faculty of Medicine, University of Chile, Santiago 8380453, Chile
- Humberto Farías
- Department of Informatics, Federico Santa Maria Technical University, Santiago 8380453, Chile
- Gabriel Molina
- Department of Informatics, Federico Santa Maria Technical University, Santiago 8380453, Chile
- Gonzalo Pereira
- Radiology Department, Clinical Hospital University of Chile, University of Chile, Santiago 8380453, Chile
- Steffen Härtel
- Center for Medical Informatics and Telemedicine, Institute of Biomedical Sciences, Faculty of Medicine, University of Chile, Santiago 8380453, Chile
- Mauricio Solar
- Department of Informatics, Federico Santa Maria Technical University, Santiago 8380453, Chile
- Mauricio Araya
- Department of Electronic Engineering, Federico Santa Maria Technical University, Valparaíso 2340000, Chile

26. Yu H, Zhang X, Song L, Jiang L, Huang X, Chen W, Zhang C, Li J, Yang J, Hu Z, Duan Q, Chen W, He X, Fan J, Jiang W, Zhang L, Qiu C, Gu M, Sun W, Zhang Y, Peng G, Shen W, Fu G. Large-scale gastric cancer screening and localization using multi-task deep neural network. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.006]
|
27
|
A Security-Enhanced Image Communication Scheme Using Cellular Neural Network. ENTROPY 2021; 23:e23081000. [PMID: 34441140 PMCID: PMC8392563 DOI: 10.3390/e23081000] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Revised: 07/28/2021] [Accepted: 07/28/2021] [Indexed: 11/17/2022]
Abstract
In the current network and big data environment, the secure transmission of digital images is facing huge challenges. The use of some methodologies in artificial intelligence to enhance its security is extremely cutting-edge and also a development trend. To this end, this paper proposes a security-enhanced image communication scheme based on cellular neural network (CNN) under cryptanalysis. First, the complex characteristics of CNN are used to create pseudorandom sequences for image encryption. Then, a plain image is sequentially confused, permuted and diffused to get the cipher image by these CNN-based sequences. Based on cryptanalysis theory, a security-enhanced algorithm structure and relevant steps are detailed. Theoretical analysis and experimental results both demonstrate its safety performance. Moreover, the structure of image cipher can effectively resist various common attacks in cryptography. Therefore, the image communication scheme based on CNN proposed in this paper is a competitive security technology method.
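A simplified sketch of the confusion/permutation/diffusion structure mentioned in the abstract. The paper derives its key streams from a cellular neural network; here a seeded NumPy generator stands in for that key stream purely to illustrate the three stages, so this is not a secure cipher and not the paper's algorithm.

```python
import numpy as np

def encrypt(image, key=42):
    rng = np.random.default_rng(key)                     # stand-in for the CNN key stream
    flat = image.flatten().astype(np.uint8)
    # confusion: XOR each pixel with a key-stream byte
    stream1 = rng.integers(0, 256, size=flat.size, dtype=np.uint8)
    confused = flat ^ stream1
    # permutation: shuffle pixel positions with a key-dependent permutation
    perm = rng.permutation(flat.size)
    permuted = confused[perm]
    # diffusion: chain each cipher pixel with the previous one
    stream2 = rng.integers(0, 256, size=flat.size, dtype=np.uint8)
    diffused = np.empty_like(permuted)
    prev = 0
    for i, p in enumerate(permuted):
        diffused[i] = p ^ stream2[i] ^ prev
        prev = diffused[i]
    return diffused.reshape(image.shape), perm

img = (np.arange(16, dtype=np.uint8).reshape(4, 4) * 7) % 256
cipher, _ = encrypt(img)
print(cipher)
```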
Collapse
|
28
|
Sahasrabudhe M, Sujobert P, Zacharaki EI, Maurin E, Grange B, Jallades L, Paragios N, Vakalopoulou M. Deep Multi-Instance Learning Using Multi-Modal Data for Diagnosis of Lymphocytosis. IEEE J Biomed Health Inform 2021; 25:2125-2136. [PMID: 33206611 DOI: 10.1109/jbhi.2020.3038889] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
We investigate the use of recent advances in deep learning and propose an end-to-end trainable multi-instance convolutional neural network within a mixture-of-experts formulation that combines information from two types of data, images and clinical attributes, for the diagnosis of lymphocytosis. The convolutional network learns to extract meaningful features from images of blood cells using an embedding-level approach and aggregates them. Moreover, the mixture-of-experts model combines information from these images as well as clinical attributes to form an end-to-end trainable pipeline for the diagnosis of lymphocytosis. Our results demonstrate that even the convolutional network by itself is able to discover meaningful associations between the images and the diagnosis, indicating the presence of important unexploited information in the images. The mixture-of-experts formulation is shown to be more robust while maintaining performance via a repeatability study assessing the effect of variability in data acquisition on the predictions. The proposed methods are compared with different methods from the literature, based both on conventional handcrafted features and machine learning and on recent deep learning models with attention mechanisms. Our method reports a balanced accuracy of [Formula: see text] and outperforms the handcrafted feature-based and attention-based approaches, as well as that of biologists, which scored [Formula: see text], [Formula: see text] and [Formula: see text] respectively. These results give insights into the potential applicability of the proposed method in clinical practice. Our code and datasets can be found at https://github.com/msahasrabudhe/lymphoMIL.
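A toy sketch of the two ingredients the abstract combines: embedding-level multi-instance aggregation of per-cell features and a mixture-of-experts gate blending an image expert and a clinical-attribute expert. Dimensions, names, and the gating form are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mil_mixture_of_experts(cell_features, clinical, w_img, w_clin, w_gate):
    bag_embedding = cell_features.mean(axis=0)            # aggregate instance embeddings
    img_score = sigmoid(bag_embedding @ w_img)             # image expert
    clin_score = sigmoid(clinical @ w_clin)                # clinical expert
    gate = sigmoid(np.concatenate([bag_embedding, clinical]) @ w_gate)
    return gate * img_score + (1 - gate) * clin_score      # blended diagnostic score

rng = np.random.default_rng(1)
cells = rng.normal(size=(200, 64))       # 200 blood-cell image embeddings
clinical = rng.normal(size=8)            # e.g., age, lymphocyte count, ...
print(mil_mixture_of_experts(cells, clinical,
                             rng.normal(size=64), rng.normal(size=8),
                             rng.normal(size=72)))
```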
Collapse
|
29
|
Improved bag-of-features using grey relational analysis for classification of histology images. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-021-00275-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
An efficient classification method to categorize histopathological images is a challenging research problem. In this paper, an improved bag-of-features approach is presented as an efficient image classification method. In bag-of-features, a large number of keypoints are extracted from histopathological images, which increases the computational cost of the codebook construction step. Therefore, to select a relevant subset of keypoints, a new keypoint selection method is introduced into the bag-of-features method. To validate the performance of the proposed method, an extensive experimental analysis is conducted on two standard histopathological image datasets, namely the ADL and Blue histology datasets. The proposed keypoint selection method reduces the extracted high-dimensional features by 95% and 68% on the ADL and Blue histology datasets, respectively, with less computational time. Moreover, the enhanced bag-of-features method increases classification accuracy compared with the other classification methods considered.
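A minimal bag-of-features sketch showing where keypoint selection fits in the pipeline. The paper selects keypoints with grey relational analysis; here a simple variance-based relevance score stands in for that step, and descriptor dimensions are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, keep_ratio=0.3, n_words=16, seed=0):
    relevance = descriptors.var(axis=1)                        # stand-in relevance score
    keep = np.argsort(relevance)[::-1][: int(len(descriptors) * keep_ratio)]
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=seed)
    kmeans.fit(descriptors[keep])                              # cheaper codebook step
    return kmeans

def encode(descriptors, kmeans):
    words = kmeans.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(kmeans.n_clusters + 1))
    return hist / hist.sum()                                   # normalized BoF vector

rng = np.random.default_rng(0)
descs = rng.normal(size=(2000, 32))        # keypoint descriptors from training images
codebook = build_codebook(descs)
print(encode(rng.normal(size=(300, 32)), codebook))
```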
Collapse
|
30
|
Sze-To A, Riasatian A, Tizhoosh HR. Searching for pneumothorax in x-ray images using autoencoded deep features. Sci Rep 2021; 11:9817. [PMID: 33972606 PMCID: PMC8111019 DOI: 10.1038/s41598-021-89194-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Accepted: 04/20/2021] [Indexed: 12/02/2022] Open
Abstract
Fast diagnosis and treatment of pneumothorax, a collapsed or dropped lung, is crucial to avoid fatalities. Pneumothorax is typically detected on a chest X-ray image through visual inspection by experienced radiologists. However, the detection rate is quite low due to the complexity of visual inspection for small lung collapses. Therefore, there is an urgent need for automated detection systems to assist radiologists. Although deep learning classifiers generally deliver high accuracy levels in many applications, they may not be useful in clinical practice due to the lack of high-quality and representative labeled image sets. Alternatively, searching the archive of past cases to find matching images may serve as a "virtual second opinion" through accessing the metadata of matched, evidently diagnosed cases. To use image search as a triaging or diagnosis assistant, we must first tag all chest X-ray images with expressive identifiers, i.e., deep features. Then, given a query chest X-ray image, the majority vote among the top k retrieved images can provide a more explainable output. In this study, we searched in a repository with more than 550,000 chest X-ray images. We developed the Autoencoding Thorax Net (AutoThorax-Net for short) for image search in chest radiographs. Experimental results show that image search based on AutoThorax-Net features can achieve high identification performance, providing a path towards real-world deployment. We achieved 92% AUC accuracy for a semi-automated search in 194,608 images (pneumothorax and normal) and 82% AUC accuracy for a fully automated search in 551,383 images (normal, pneumothorax and many other chest diseases).
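A sketch of the search-and-vote idea described above: tag each archived image with a deep feature vector, retrieve the k nearest archive images for a query, and let the majority label of the retrieved cases act as a "virtual second opinion". The feature extraction itself (AutoThorax-Net) is not reproduced; feature size and k are assumptions.

```python
import numpy as np
from collections import Counter

def knn_majority_vote(query_feat, archive_feats, archive_labels, k=11):
    d = np.linalg.norm(archive_feats - query_feat, axis=1)    # Euclidean distances
    top_k = np.argsort(d)[:k]
    votes = Counter(archive_labels[i] for i in top_k)
    return votes.most_common(1)[0][0], top_k

rng = np.random.default_rng(0)
archive = rng.normal(size=(5000, 256))
labels = rng.choice(["normal", "pneumothorax"], size=5000)
label, hits = knn_majority_vote(rng.normal(size=256), archive, labels)
print(label, hits[:5])
```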
Collapse
Affiliation(s)
- Antonio Sze-To
- Kimia Lab, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
| | - Abtin Riasatian
- Kimia Lab, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
| | - H R Tizhoosh
- Kimia Lab, University of Waterloo, Waterloo, ON, N2L 3G1, Canada.
- Vector Institute, MaRS Centre, Toronto, ON, M5G 1M1, Canada.
| |
Collapse
|
31
|
Zheng Y, Jiang Z, Xie F, Shi J, Zhang H, Huai J, Cao M, Yang X. Diagnostic Regions Attention Network (DRA-Net) for Histopathology WSI Recommendation and Retrieval. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1090-1103. [PMID: 33351756 DOI: 10.1109/tmi.2020.3046636] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The development of whole slide imaging techniques and online digital pathology platforms has accelerated the popularization of telepathology for remote tumor diagnosis. During a diagnosis, the behavior of the pathologist can be recorded by the platform and then archived with the digital case. The browsing path of the pathologist on the WSI is one of the most valuable pieces of information in the digital database, because the image content within the path is expected to be highly correlated with the pathologist's diagnosis report. In this article, we propose a novel approach for computer-assisted cancer diagnosis named session-based histopathology image recommendation (SHIR), based on the browsing paths on WSIs. To achieve SHIR, we developed a novel diagnostic regions attention network (DRA-Net) to learn pathology knowledge from the image content associated with the browsing paths. The DRA-Net does not rely on pixel-level or region-level annotations by pathologists. All the data for training can be automatically collected by the digital pathology platform without interrupting the pathologists' diagnoses. The proposed approach was evaluated on a gastric dataset containing 983 cases within 5 categories of gastric lesions. The quantitative and qualitative assessments on the dataset demonstrate that the proposed SHIR framework with the novel DRA-Net is effective in recommending diagnostically relevant cases for auxiliary diagnosis. The MRR and MAP for the recommendation are 0.816 and 0.836, respectively, on the gastric dataset. The source code of the DRA-Net is available at https://github.com/zhengyushan/dpathnet.
Collapse
|
32
|
Qu H, Wu P, Huang Q, Yi J, Yan Z, Li K, Riedlinger GM, De S, Zhang S, Metaxas DN. Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3655-3666. [PMID: 32746112 DOI: 10.1109/tmi.2020.3002244] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Nuclei segmentation is a fundamental task in histopathology image analysis. Typically, such segmentation tasks require significant effort to manually generate accurate pixel-wise annotations for fully supervised training. To alleviate such tedious and manual effort, in this paper we propose a novel weakly supervised segmentation framework based on partial points annotation, i.e., only a small portion of nuclei locations in each image are labeled. The framework consists of two learning stages. In the first stage, we design a semi-supervised strategy to learn a detection model from partially labeled nuclei locations. Specifically, an extended Gaussian mask is designed to train an initial model with partially labeled data. Then, self-training with background propagation is proposed to make use of the unlabeled regions to boost nuclei detection and suppress false positives. In the second stage, a segmentation model is trained from the detected nuclei locations in a weakly-supervised fashion. Two types of coarse labels with complementary information are derived from the detected points and are then utilized to train a deep neural network. The fully-connected conditional random field loss is utilized in training to further refine the model without introducing extra computational complexity during inference. The proposed method is extensively evaluated on two nuclei segmentation datasets. The experimental results demonstrate that our method can achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods while requiring significantly less annotation effort.
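A toy construction of the first-stage training label hinted at above: a soft Gaussian label around each annotated nucleus centre, with an "ignore" ring around it. How far-away unlabeled regions are actually handled (background propagation and self-training) is beyond this sketch, and the radii and sigma are illustrative assumptions.

```python
import numpy as np

def extended_gaussian_mask(shape, points, sigma=4.0, fg_radius=8, ignore_radius=16):
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=np.float32)          # tentative background = 0
    labeled = np.zeros(shape, dtype=bool)
    for (py, px) in points:
        d2 = (yy - py) ** 2 + (xx - px) ** 2
        gauss = np.exp(-d2 / (2 * sigma ** 2))
        mask = np.maximum(mask, np.where(d2 <= fg_radius ** 2, gauss, 0.0))
        labeled |= d2 <= ignore_radius ** 2
    # ring around each point (inside ignore_radius but outside fg_radius): ignore in the loss
    mask[labeled & (mask == 0.0)] = -1.0
    return mask

m = extended_gaussian_mask((64, 64), [(20, 20), (40, 45)])
print(np.unique(np.sign(m)))    # -1 = ignore, 0 = tentative background, >0 = soft foreground
```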
Collapse
|
33
|
Objective Diagnosis for Histopathological Images Based on Machine Learning Techniques: Classical Approaches and New Trends. MATHEMATICS 2020. [DOI: 10.3390/math8111863] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Histopathology refers to the examination by a pathologist of biopsy samples. Histopathology images are captured by a microscope to locate, examine, and classify many diseases, such as different cancer types. They provide a detailed view of different types of diseases and their tissue status. These images are an essential resource with which to define biological compositions or analyze cell and tissue structures. This imaging modality is very important for diagnostic applications. The analysis of histopathology images is a prolific and relevant research area supporting disease diagnosis. In this paper, the challenges of histopathology image analysis are evaluated. An extensive review of conventional and deep learning techniques which have been applied in histological image analyses is presented. This review summarizes many current datasets and highlights important challenges and constraints with recent deep learning techniques, alongside possible future research avenues. Despite the progress made in this research area so far, it is still a significant area of open research because of the variety of imaging techniques and disease-specific characteristics.
Collapse
|
34
|
Haq NF, Moradi M, Wang ZJ. A deep community based approach for large scale content based X-ray image retrieval. Med Image Anal 2020; 68:101847. [PMID: 33249389 DOI: 10.1016/j.media.2020.101847] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2019] [Revised: 07/31/2020] [Accepted: 08/19/2020] [Indexed: 02/01/2023]
Abstract
A computer-assisted system for automatic retrieval of medical images with similar image content can serve as an efficient management tool for handling and mining large-scale data, and can also be used as a tool in clinical decision support systems. In this paper, we propose a deep community-based automated medical image retrieval framework for extracting similar images from a large-scale X-ray database. The framework integrates a deep learning-based image feature generation approach and a network community detection technique to extract similar images. When compared with state-of-the-art medical image retrieval techniques, the proposed approach demonstrated improved performance. We evaluated the performance of the proposed method on two large-scale chest X-ray datasets, where, given a query image, the proposed approach was able to extract images with similar disease labels with a precision of 85%. To the best of our knowledge, this is the first deep community-based image retrieval application on a large-scale chest X-ray database.
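A sketch of the community-based retrieval idea: connect archive images whose deep features are close into a k-NN graph, detect communities, and answer a query with the members of the community its nearest neighbour belongs to. Feature extraction and the specific community-detection algorithm used in the paper are abstracted away; this uses a generic modularity-based method.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def knn_graph(features, k=5):
    g = nx.Graph()
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    for i in range(len(features)):
        for j in np.argsort(d[i])[1:k + 1]:          # skip self at position 0
            g.add_edge(i, int(j), weight=float(d[i, j]))
    return g

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (30, 16)), rng.normal(5, 1, (30, 16))])
graph = knn_graph(feats)
communities = list(greedy_modularity_communities(graph))

query = rng.normal(5, 1, 16)                          # should land near the 2nd cluster
nearest = int(np.argmin(np.linalg.norm(feats - query, axis=1)))
hits = next(c for c in communities if nearest in c)   # retrieve the whole community
print(sorted(hits)[:10])
```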
Collapse
Affiliation(s)
| | - Mehdi Moradi
- IBM Research - Almaden Research Center, San Jose, USA
| | - Z Jane Wang
- The University of British Columbia, Vancouver, Canada
| |
Collapse
|
35
|
Chen P, Shi X, Liang Y, Li Y, Yang L, Gader PD. Interactive thyroid whole slide image diagnostic system using deep representation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 195:105630. [PMID: 32634647 PMCID: PMC7492444 DOI: 10.1016/j.cmpb.2020.105630] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/12/2019] [Accepted: 06/22/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVES The vast size of histopathology whole slide images poses formidable challenges to automatic diagnosis. With the goal of computer-aided diagnosis, and the insight that suspicious regions are generally easy to identify in thyroid whole slide images (WSIs), we develop an interactive whole slide diagnostic system for thyroid frozen sections based on suspicious regions preselected by pathologists. METHODS We propose to generate feature representations for the suspicious regions by extracting and fusing patch features using deep neural networks. We then evaluate region classification and retrieval with four classifiers and three supervised hashing methods based on these feature representations. The code is released at https://github.com/PingjunChen/ThyroidInteractive. RESULTS We evaluate the proposed system on 345 thyroid frozen sections and achieve 96.1% cross-validated classification accuracy and a retrieval mean average precision (MAP) of 0.972. CONCLUSIONS With the participation of pathologists, the system possesses the following four notable advantages compared to directly handling whole slide images: 1) reduced interference from irrelevant regions; 2) alleviated computation and memory cost; 3) fine-grained and precise suspicious region retrieval; and 4) a cooperative relationship between pathologists and the diagnostic system. Additionally, experimental results demonstrate the potential of the proposed system for practical thyroid frozen section diagnosis.
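A short sketch of the region-representation step mentioned in the methods: per-patch deep features from a pathologist-selected suspicious region are fused into a single descriptor, which can then be fed to any classifier or hashing method. The concatenation of mean- and max-pooled features here is an illustrative choice, not necessarily the paper's fusion rule.

```python
import numpy as np

def fuse_patch_features(patch_feats):
    # patch_feats: (num_patches, feature_dim) deep features from one region
    return np.concatenate([patch_feats.mean(axis=0), patch_feats.max(axis=0)])

rng = np.random.default_rng(0)
region_a = fuse_patch_features(rng.normal(size=(40, 128)))   # 40 patches
region_b = fuse_patch_features(rng.normal(size=(25, 128)))
print(region_a.shape, np.linalg.norm(region_a - region_b))
```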
Collapse
Affiliation(s)
- Pingjun Chen
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, United States.
| | - Xiaoshuang Shi
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, United States
| | - Yun Liang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, United States
| | - Yuan Li
- Department of Pathology, Peking Union Medical College Hospital, Beijing, China
| | - Lin Yang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, United States
| | - Paul D Gader
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, United States
| |
Collapse
|
36
|
Kalra S, Tizhoosh HR, Choi C, Shah S, Diamandis P, Campbell CJV, Pantanowitz L. Yottixel - An Image Search Engine for Large Archives of Histopathology Whole Slide Images. Med Image Anal 2020; 65:101757. [PMID: 32623275 DOI: 10.1016/j.media.2020.101757] [Citation(s) in RCA: 54] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Revised: 06/11/2020] [Accepted: 06/16/2020] [Indexed: 01/21/2023]
Abstract
With the emergence of digital pathology, searching for similar images in large archives has gained considerable attention. Image retrieval can provide pathologists with unprecedented access to the evidence embodied in already diagnosed and treated cases from the past. This paper proposes a search engine specialized for digital pathology, called Yottixel, a portmanteau for "one yotta pixel," alluding to the big-data nature of histopathology images. The most impressive characteristic of Yottixel is its ability to represent whole slide images (WSIs) in a compact manner. Yottixel can perform millions of searches in real-time with a high search accuracy and low storage profile. Yottixel uses an intelligent indexing algorithm capable of representing WSIs with a mosaic of patches which are then converted into barcodes, called "Bunch of Barcodes" (BoB), the most prominent performance enabler of Yottixel. The performance of the prototype platform is qualitatively tested using 300 WSIs from the University of Pittsburgh Medical Center (UPMC) and 2,020 WSIs from The Cancer Genome Atlas Program (TCGA) provided by the National Cancer Institute. Both datasets amount to more than 4,000,000 patches of 1000 × 1000 pixels. We report three sets of experiments that show that Yottixel can accurately retrieve organs and malignancies, and its semantic ordering shows good agreement with the subjective evaluation of human observers.
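A simplified sketch in the spirit of Yottixel's indexing: each mosaic patch feature vector becomes a binary barcode by thresholding the sign of consecutive feature differences, and two WSIs (two "bunches of barcodes") are compared via the median of minimum Hamming distances. Mosaic selection, the exact barcoding rule, and the matching protocol are only approximated here and follow the paper.

```python
import numpy as np

def barcode(feature_vec):
    # binarize the discrete derivative of the feature vector
    return (np.diff(feature_vec) >= 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

def wsi_distance(bunch_a, bunch_b):
    # for each patch barcode of WSI A, find its closest patch in WSI B,
    # then summarize with the median (assumed matching rule)
    mins = [min(hamming(a, b) for b in bunch_b) for a in bunch_a]
    return float(np.median(mins))

rng = np.random.default_rng(0)
wsi1 = [barcode(rng.normal(size=1024)) for _ in range(30)]    # 30 mosaic patches
wsi2 = [barcode(rng.normal(size=1024)) for _ in range(30)]
print(wsi_distance(wsi1, wsi2))
```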
Collapse
Affiliation(s)
- Shivam Kalra
- Kimia Lab, University of Waterloo, Ontario, Canada; Huron Digital Pathology, St. Jacobs, ON, Canada
| | - H R Tizhoosh
- Kimia Lab, University of Waterloo, Ontario, Canada; Vector Institute, MaRS Centre, Toronto, Canada.
| | | | | | | | | | - Liron Pantanowitz
- University of Pittsburgh Medical Center, Department of Pathology, PA, USA
| |
Collapse
|
37
|
Cao J, Chen L, Wu C, Zhang Z. CM-supplement network model for reducing the memory consumption during multilabel image annotation. PLoS One 2020; 15:e0234014. [PMID: 32479515 PMCID: PMC7263637 DOI: 10.1371/journal.pone.0234014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Accepted: 05/15/2020] [Indexed: 11/24/2022] Open
Abstract
With the rapid development of the Internet and the increasing popularity of mobile devices, the availability of digital image resources is increasing exponentially. How to rapidly and effectively retrieve and organize image information is a pressing issue that urgently must be solved. In the field of image retrieval, automatic image annotation remains a basic and challenging task. Targeting the low accuracy and high memory consumption of current multilabel annotation methods, this study proposes a CM-supplement network model. The model combines the merits of cavity (dilated) convolutions, Inception modules, and a supplement network. Replacing common convolutions with cavity convolutions enlarges the receptive field without increasing the number of parameters. The incorporation of Inception modules enables the model to extract image features at different scales with less memory consumption than before. The adoption of the supplement network enables the model to obtain the negative features of images. After 100 training iterations on the PASCAL VOC 2012 dataset, the proposed model achieved an overall annotation accuracy of 94.5%, an increase of 10.0 and 1.1 percentage points over the traditional convolutional neural network (CNN) and the double-channel CNN (DCCNN), respectively. After stabilization, the model achieved an accuracy of up to 96.4%. Moreover, the number of parameters in the DCCNN was more than 1.5 times that of the CM-supplement network. Without increasing the amount of memory consumed, the proposed CM-supplement network achieves comparable or even better annotation results than a DCCNN.
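A generic illustration of the receptive-field argument behind cavity (dilated) convolutions: the same kernel weights are reused but inputs are sampled with gaps, so the receptive field grows without adding parameters. This is a 1-D demo for clarity, not the paper's network.

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    span = (len(kernel) - 1) * dilation + 1            # effective receptive field
    out = []
    for start in range(len(signal) - span + 1):
        taps = signal[start:start + span:dilation]      # sample inputs with gaps
        out.append(float(np.dot(taps, kernel)))
    return np.array(out), span

x = np.arange(16, dtype=float)
k = np.array([1.0, 0.0, -1.0])                          # same 3 parameters each time
for d in (1, 2, 4):
    y, rf = dilated_conv1d(x, k, d)
    print(f"dilation={d}: receptive field={rf}, output length={len(y)}")
```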
Collapse
Affiliation(s)
- Jianfang Cao
- Department of Computer Science and Technology, Xinzhou Teachers University, Xinzhou, China
- School of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan, China
| | - Lichao Chen
- School of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan, China
| | - Chenyan Wu
- School of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan, China
| | - Zibang Zhang
- School of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan, China
| |
Collapse
|
38
|
Yang P, Zhai Y, Li L, Lv H, Wang J, Zhu C, Jiang R. A deep metric learning approach for histopathological image retrieval. Methods 2020; 179:14-25. [PMID: 32439386 DOI: 10.1016/j.ymeth.2020.05.015] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2020] [Revised: 05/04/2020] [Accepted: 05/13/2020] [Indexed: 11/30/2022] Open
Abstract
To distinguish ambiguous images during specimen slide viewing, pathologists usually spend a great deal of time seeking guidance from confirmed similar images or cases, which is inefficient. Therefore, several histopathological image retrieval methods have been proposed so that pathologists can easily obtain images sharing similar content with the query images. However, these methods cannot ensure a reasonable similarity metric, and some of them require many annotated images to train a feature extractor to represent images. Motivated by this circumstance, we propose the first deep metric learning-based histopathological image retrieval method in this paper and construct a deep neural network based on a mixed attention mechanism to learn an embedding function under the supervision of image category information. With the learned embedding function, original images are mapped into the predefined metric space where similar images from the same category are close to each other, so that the distance between image pairs in the metric space can be regarded as a reasonable metric of image similarity. We evaluate the proposed method on two histopathological image retrieval datasets: our self-established dataset and a public dataset called Kimia Path24, on which the proposed method achieves recall in top-1 recommendation (Recall@1) of 84.04% and 97.89%, respectively. Moreover, further experiments confirm that the proposed method can achieve performance comparable to several published methods with less training data, which mitigates the shortage of annotated medical image data to some extent. Code is available at https://github.com/easonyang1996/DML_HistoImgRetrieval.
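A minimal sketch of the kind of metric-learning objective behind such retrieval systems: an embedding is trained so that, for an anchor image, a same-category positive lies closer than a different-category negative by at least a margin (a triplet loss). The network itself and the mixed attention mechanism from the paper are not reproduced, and the margin value is an assumption.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = np.linalg.norm(anchor - positive)   # same-category pair should be close
    d_neg = np.linalg.norm(anchor - negative)   # different-category pair should be far
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
a = rng.normal(size=128)
p = a + 0.05 * rng.normal(size=128)             # near-duplicate positive
n = rng.normal(size=128)                        # unrelated negative
print(round(triplet_loss(a, p, n), 4))          # ~0: this triplet is already satisfied
```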
Collapse
Affiliation(s)
- Pengshuai Yang
- Ministry of Education Key Laboratory of Bioinformatics; Bioinformatics Division and Center for Synthetic and Systems Biology, Beijing National Research Center for Information Science and Technology; Department of Automation, Tsinghua University, Beijing 100084, China.
| | - Yupeng Zhai
- Ministry of Education Key Laboratory of Bioinformatics; Bioinformatics Division and Center for Synthetic and Systems Biology, Beijing National Research Center for Information Science and Technology; Department of Automation, Tsinghua University, Beijing 100084, China.
| | - Lin Li
- Ministry of Education Key Laboratory of Bioinformatics; Bioinformatics Division and Center for Synthetic and Systems Biology, Beijing National Research Center for Information Science and Technology; Department of Automation, Tsinghua University, Beijing 100084, China.
| | - Hairong Lv
- Ministry of Education Key Laboratory of Bioinformatics; Bioinformatics Division and Center for Synthetic and Systems Biology, Beijing National Research Center for Information Science and Technology; Department of Automation, Tsinghua University, Beijing 100084, China.
| | - Jigang Wang
- Department of Pathology, The Affiliated Hospital of Qingdao University, Qingdao City 266000, Shandong Province, China.
| | - Chengzhan Zhu
- Department of Hepatobiliary and Pancreatic Surgery, The Affiliated Hospital of Qingdao University, Qingdao City 266000, Shandong Province, China.
| | - Rui Jiang
- Ministry of Education Key Laboratory of Bioinformatics; Bioinformatics Division and Center for Synthetic and Systems Biology, Beijing National Research Center for Information Science and Technology; Department of Automation, Tsinghua University, Beijing 100084, China.
| |
Collapse
|
39
|
Multiple Query Content-Based Image Retrieval Using Relevance Feature Weight Learning. J Imaging 2020; 6:jimaging6010002. [PMID: 34460641 PMCID: PMC8321011 DOI: 10.3390/jimaging6010002] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2019] [Revised: 01/07/2020] [Accepted: 01/13/2020] [Indexed: 11/16/2022] Open
Abstract
We propose a novel multiple query retrieval approach, named weight-learner, which relies on visual feature discrimination to estimate the distances between the query images and images in the database. For each query image, this discrimination consists of learning, in an unsupervised manner, the optimal relevance weight for each visual feature/descriptor. These feature relevance weights are designed to reduce the semantic gap between the extracted visual features and the user’s high-level semantics. We mathematically formulate the proposed solution through the minimization of some objective functions. This optimization aims to produce optimal feature relevance weights with respect to the user query. The proposed approach is assessed using an image collection from the Corel database.
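An illustrative sketch only: the paper learns per-feature relevance weights by minimizing objective functions over the multiple query images, whereas here a common heuristic stands in, weighting each feature inversely to its variance across the queries (features consistent across the queries matter more) and ranking the database by the weighted distance to the query centroid.

```python
import numpy as np

def relevance_weights(query_feats, eps=1e-6):
    w = 1.0 / (query_feats.var(axis=0) + eps)     # stable features get larger weights
    return w / w.sum()

def rank(query_feats, db_feats, top_k=5):
    w = relevance_weights(query_feats)
    centroid = query_feats.mean(axis=0)
    d = np.sqrt(((db_feats - centroid) ** 2 * w).sum(axis=1))   # weighted distance
    return np.argsort(d)[:top_k]

rng = np.random.default_rng(0)
queries = rng.normal(size=(4, 32))                # several example images as one query
database = rng.normal(size=(500, 32))
print(rank(queries, database))
```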
Collapse
|
40
|
Shi X, Su H, Xing F, Liang Y, Qu G, Yang L. Graph temporal ensembling based semi-supervised convolutional neural network with noisy labels for histopathology image analysis. Med Image Anal 2019; 60:101624. [PMID: 31841948 DOI: 10.1016/j.media.2019.101624] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2018] [Revised: 11/22/2019] [Accepted: 11/25/2019] [Indexed: 12/21/2022]
Abstract
Although convolutional neural networks have achieved tremendous success in histopathology image classification, they usually require large-scale, cleanly annotated data and are sensitive to noisy labels. Unfortunately, labeling large-scale image sets is laborious, expensive, and of limited reliability, even for pathologists. To address these problems, in this paper we propose a novel self-ensembling based deep architecture that leverages the semantic information of annotated images and explores the information hidden in unlabeled data, while remaining robust to noisy labels. Specifically, the proposed architecture first creates ensemble targets for the feature and label predictions of training samples by using an exponential moving average (EMA) to aggregate feature and label predictions over multiple previous training epochs. Then, the ensemble targets within the same class are mapped into a cluster so that they are further enhanced. Next, a consistency cost is utilized to form consensus predictions under different configurations. Finally, we validate the proposed method with extensive experiments on lung and breast cancer datasets that contain thousands of images. It achieves 90.5% and 89.5% image classification accuracy using only 20% of labeled patients on the two datasets, respectively. This performance is comparable to that of the baseline method with all labeled patients. Experiments also demonstrate its robustness to a small percentage of noisy labels.
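A sketch of the ensembling mechanics shared by such methods: an exponential moving average (EMA) of the predictions from previous epochs forms the ensemble target, and a mean-squared consistency cost pulls the current (possibly noisy-label) predictions toward it. The feature-level ensembling and the class-wise clustering used in the paper are omitted; alpha and the toy data are assumptions.

```python
import numpy as np

def consistency_cost(current, target):
    return float(np.mean((current - target) ** 2))

rng = np.random.default_rng(0)
alpha, ema = 0.6, np.zeros((8, 2))             # 8 samples, 2 classes
for epoch in range(1, 5):
    preds = rng.dirichlet([1.0, 1.0], size=8)  # this epoch's softmax outputs
    ema = alpha * ema + (1 - alpha) * preds    # accumulate predictions over epochs
    target = ema / (1 - alpha ** epoch)        # bias-corrected ensemble target
    print(epoch, round(consistency_cost(preds, target), 4))
```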
Collapse
Affiliation(s)
- Xiaoshuang Shi
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, United States.
| | - Hai Su
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, United States
| | - Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado, Denver, United States
| | - Yun Liang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, United States
| | - Gang Qu
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, United States
| | - Lin Yang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, United States.
| |
Collapse
|
41
|
Sukhia K, Riaz M, Ghafoor A, Ali S, Iltaf N. Content-based histopathological image retrieval using multi-scale and multichannel decoder based LTP. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101582] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
|
42
|
Gu Y, Yang J. Multi-level magnification correlation hashing for scalable histopathological image retrieval. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.03.050] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
43
|
Schaer R, Otálora S, Jimenez-Del-Toro O, Atzori M, Müller H. Deep Learning-Based Retrieval System for Gigapixel Histopathology Cases and the Open Access Literature. J Pathol Inform 2019; 10:19. [PMID: 31367471 PMCID: PMC6639847 DOI: 10.4103/jpi.jpi_88_18] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2018] [Accepted: 05/17/2019] [Indexed: 11/08/2022] Open
Abstract
Background: The introduction of digital pathology into clinical practice has led to the development of clinical workflows with digital images, in connection with pathology reports. Still, most of the current work is time-consuming manual analysis of image areas at different scales. Links with data in the biomedical literature are rare, and a need exists for search based on visual similarity within whole slide images (WSIs). Objectives: The main objective of the work presented is to integrate content-based visual retrieval with a WSI viewer in a prototype. Another objective is to connect cases analyzed in the viewer with cases or images from the biomedical literature, including search through visual similarity and text. Methods: An innovative retrieval system for digital pathology is integrated with a WSI viewer, allowing users to define regions of interest (ROIs) in images as queries for finding visually similar areas in the same or other images and to zoom in and out to find structures at varying magnification levels. The algorithms are based on a multimodal approach, exploiting both text information and content-based image features. Results: The retrieval system allows viewing WSIs and searching for regions that are visually similar to manually defined ROIs in various data sources (proprietary and public datasets, e.g., the scientific literature). The system was tested by pathologists, highlighting its capabilities and suggesting ways to improve it and make it more usable in clinical practice. Conclusions: The developed system can enhance the practice of pathologists by enabling them to use their experience and knowledge to control artificial intelligence tools for navigating repositories of images for clinical decision support and teaching, where the comparison with visually similar cases can help to avoid misinterpretations. The system is available as open source, allowing the scientific community to test, ideate and develop similar systems for research and clinical practice.
Collapse
Affiliation(s)
- Roger Schaer
- Institute of Information Systems, HES-SO (University of Applied Sciences of Western Switzerland), Sierre, Switzerland
| | - Sebastian Otálora
- Institute of Information Systems, HES-SO (University of Applied Sciences of Western Switzerland), Sierre, Switzerland; Department of Computer Science, University of Geneva (UNIGE), Geneva, Switzerland
| | - Oscar Jimenez-Del-Toro
- Institute of Information Systems, HES-SO (University of Applied Sciences of Western Switzerland), Sierre, Switzerland; Department of Computer Science, University of Geneva (UNIGE), Geneva, Switzerland
| | - Manfredo Atzori
- Institute of Information Systems, HES-SO (University of Applied Sciences of Western Switzerland), Sierre, Switzerland
| | - Henning Müller
- Institute of Information Systems, HES-SO (University of Applied Sciences of Western Switzerland), Sierre, Switzerland; Department of Computer Science, University of Geneva (UNIGE), Geneva, Switzerland
| |
Collapse
|
44
|
A Hybridized ELM for Automatic Micro Calcification Detection in Mammogram Images Based on Multi-Scale Features. J Med Syst 2019; 43:183. [PMID: 31093789 DOI: 10.1007/s10916-019-1316-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2019] [Accepted: 04/25/2019] [Indexed: 01/27/2023]
Abstract
Detection of masses and microcalcifications in digital mammogram images is a challenging task for radiologists. Radiologists use computer-aided detection (CAD) frameworks to find breast lesions, and microcalcification may be an early sign of breast cancer. Different kinds of methods have been used to detect and recognize microcalcifications in mammogram images. This paper presents an extreme learning machine (ELM) algorithm for microcalcification detection in digital mammogram images. Interference in the mammographic image is removed in the pre-processing stage. Multi-scale features are extracted by a feature generation model. Because not all extracted features improve performance, feature selection is performed with a nature-inspired optimization algorithm. Finally, the hybridized ELM classifier takes the selected optimal features to classify malignant from benign microcalcifications. The proposed work is compared with various classifiers and shows better performance in training time, sensitivity, specificity, and accuracy. The existing approaches considered here are the SVM (support vector machine) and NB (naïve Bayes) classifiers. The proposed detection system provides 99.04% accuracy, which is better than the existing approaches. The optimal selection of feature vectors and the efficient classifier improve the performance of the proposed system. Results illustrate that the classification performance is better when compared with several other classification approaches.
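A minimal extreme learning machine (ELM) sketch: a single hidden layer with random, untrained weights and an output layer solved in closed form by least squares. The paper's multi-scale feature generation, nature-inspired feature selection, and hybridization are not reproduced; the toy data and hidden-layer size are assumptions.

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))      # random, fixed input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                           # random hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # closed-form output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
W, b, beta = train_elm(X, y)
acc = np.mean((predict_elm(X, W, b, beta) > 0.5) == y)
print("training accuracy:", round(float(acc), 3))
```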
Collapse
|
45
|
Wei S, Liao L, Li J, Zheng Q, Yang F, Zhao Y. Saliency Inside: Learning Attentive CNNs for Content-based Image Retrieval. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 28:4580-4593. [PMID: 31059441 DOI: 10.1109/tip.2019.2913513] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
In content-based image retrieval (CBIR), one of the most challenging and ambiguous tasks is to correctly understand the human query intention and measure its semantic relevance to images in the database. Given the impressive capability of visual saliency in predicting human visual attention, which is closely related to the query intention, this paper attempts to explicitly discover the essential effect of visual saliency in CBIR via qualitative and quantitative experiments. Toward this end, we first generate fixation density maps of images from a widely used CBIR dataset by using an eye-tracking apparatus. These ground-truth saliency maps are then used to measure the influence of visual saliency on the CBIR task by exploring several probable ways of incorporating such saliency cues into the retrieval process. We find that visual saliency is indeed beneficial to the CBIR task, and that the best saliency-incorporation scheme may differ across image retrieval models. Inspired by these findings, this paper presents two-stream attentive CNNs with saliency embedded inside for CBIR. The proposed network has two streams that simultaneously handle two tasks. The main stream focuses on extracting discriminative visual features that are tightly related to semantic attributes. Meanwhile, the auxiliary stream aims to facilitate the main stream by redirecting feature extraction to the salient image content that humans may pay attention to. By fusing these two streams into the Main and Auxiliary CNNs (MAC), image similarity can be computed as a human would do, by preserving conspicuous content and suppressing irrelevant regions. Extensive experiments show that the proposed model achieves impressive image retrieval performance on four public datasets.
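A generic illustration of letting saliency steer feature pooling: a spatial feature map is aggregated with saliency weights so conspicuous regions dominate the image descriptor, instead of a plain global average. The two-stream network itself is not reproduced; map sizes and the weighting rule are assumptions.

```python
import numpy as np

def saliency_weighted_descriptor(feature_map, saliency):
    w = saliency / (saliency.sum() + 1e-12)            # normalize to spatial weights
    return (feature_map * w[..., None]).sum(axis=(0, 1))

rng = np.random.default_rng(0)
fmap = rng.normal(size=(14, 14, 256))                   # H x W x C conv features
sal = rng.uniform(size=(14, 14)) ** 4                   # peaky saliency map
plain = fmap.mean(axis=(0, 1))                          # saliency-agnostic pooling
weighted = saliency_weighted_descriptor(fmap, sal)
print(np.linalg.norm(plain - weighted))                 # the two descriptors differ
```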
Collapse
|
46
|
Li Z, Butler E, Li K, Lu A, Ji S, Zhang S. Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality. Neuroinformatics 2019; 16:339-349. [PMID: 29435954 DOI: 10.1007/s12021-018-9361-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
Recently released large-scale neuron morphological data have greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning-based feature representation method for neuron morphological data, in which the 3D neurons are first projected into binary images and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with hand-crafted features for a more accurate representation. Considering that exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on augmented reality (AR) techniques, which helps users explore neuron morphologies in an interactive and immersive manner.
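A toy sketch of the preprocessing step: a 3D neuron morphology (a point set of traced coordinates) is projected onto a 2D plane and rasterized into a binary image that could then feed an unsupervised feature learner. The resolution, projection axis, and random-walk "neurite" are illustrative assumptions.

```python
import numpy as np

def project_to_binary_image(points_xyz, size=64):
    xy = points_xyz[:, :2]                                    # drop the z axis
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    scaled = (xy - mins) / (maxs - mins + 1e-12) * (size - 1)
    img = np.zeros((size, size), dtype=np.uint8)
    rows, cols = scaled[:, 1].astype(int), scaled[:, 0].astype(int)
    img[rows, cols] = 1                                       # rasterize traced points
    return img

rng = np.random.default_rng(0)
neuron = np.cumsum(rng.normal(size=(500, 3)), axis=0)         # random-walk "neurite"
binary = project_to_binary_image(neuron)
print(binary.shape, int(binary.sum()), "foreground pixels")
```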
Collapse
Affiliation(s)
- Zhongyu Li
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28223, USA
| | - Erik Butler
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28223, USA
| | - Kang Li
- Department of Industrial and Systems Engineering, The State University of New Jersey, Piscataway, NJ, 08854, USA
| | - Aidong Lu
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28223, USA
| | - Shuiwang Ji
- School of Electrical Engineering and Computer Science, Washington State University, Pullman, WA, 99164, USA
| | - Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28223, USA.
| |
Collapse
|
47
|
Sapkota M, Shi X, Xing F, Yang L. Deep Convolutional Hashing for Low-Dimensional Binary Embedding of Histopathological Images. IEEE J Biomed Health Inform 2019; 23:805-816. [PMID: 29993648 PMCID: PMC6429565 DOI: 10.1109/jbhi.2018.2827703] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Compact binary representations of histopathology images using hashing methods provide efficient approximate nearest neighbor search for direct visual query in large-scale databases. They can be utilized to measure the probability that a query image is abnormal based on the retrieved similar cases, thereby providing support for medical diagnosis. They also allow for efficient management of large-scale image databases because of a low storage requirement. However, the effectiveness of binary representations heavily relies on the visual descriptors that represent the semantic information in the histopathological images. Traditional approaches with hand-crafted visual descriptors might fail due to significant variations in image appearance. Recently, deep learning architectures have provided promising solutions to this problem using effective semantic representations. In this paper, we propose a deep convolutional hashing method that can be trained "point-wise" to simultaneously learn both semantic and binary representations of histopathological images. Specifically, we propose a convolutional neural network that introduces a latent binary encoding (LBE) layer for low-dimensional feature embedding to learn binary codes. We design a joint optimization objective function that encourages the network to learn discriminative representations from the label information and to reduce the gap between the real-valued low-dimensional embedded features and the desired binary values. The binary encoding for new images can be obtained by forward propagating through the network and quantizing the output of the LBE layer. Experimental results on a large-scale histopathological image dataset demonstrate the effectiveness of the proposed method.
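A sketch of the binary-encoding step such hashing networks share: the low-dimensional output of the embedding (LBE-style) layer is pushed toward {-1, +1} by a quantization penalty during training and simply thresholded at retrieval time. The full network and the paper's joint objective are not shown; the toy embeddings are assumptions.

```python
import numpy as np

def quantization_penalty(embeddings):
    # gap between the real-valued embedding and its binarized version
    return float(np.mean((embeddings - np.sign(embeddings)) ** 2))

def to_binary_code(embeddings):
    return (embeddings >= 0).astype(np.uint8)               # 0/1 codes for storage

rng = np.random.default_rng(0)
emb = np.tanh(rng.normal(size=(4, 16)) * 2.0)               # near-saturated outputs
print("quantization penalty:", round(quantization_penalty(emb), 4))
print(to_binary_code(emb)[0])
```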
Collapse
|
48
|
Xu J, Gong L, Wang G, Lu C, Gilmore H, Zhang S, Madabhushi A. Convolutional neural network initialized active contour model with adaptive ellipse fitting for nuclear segmentation on breast histopathological images. J Med Imaging (Bellingham) 2019; 6:017501. [PMID: 30840729 DOI: 10.1117/1.jmi.6.1.017501] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2018] [Accepted: 01/07/2019] [Indexed: 11/14/2022] Open
Abstract
Automated detection and segmentation of nuclei from high-resolution histopathological images is a challenging problem owing to the size and complexity of digitized histopathologic images. In the context of breast cancer, morphological and topological nuclear features are highly correlated with modified Bloom-Richardson grading. Therefore, to develop a computer-aided prognosis system, automated detection and segmentation of nuclei are critical prerequisite steps. We present a method for automated detection and segmentation of breast cancer nuclei named the convolutional neural network initialized active contour model with adaptive ellipse fitting (CoNNACaeF). The CoNNACaeF model detects and segments nuclei simultaneously and consists of three modules: (1) a convolutional neural network (CNN) for accurate nuclei detection, (2) a region-based active contour (RAC) model for subsequent nuclear segmentation based on the initial CNN-based detection of nuclear patches, and (3) adaptive ellipse fitting for resolving overlaps in clumped nuclear regions. The performance of the CoNNACaeF model is evaluated on three different breast histological data sets, comprising a total of 257 H&E-stained images. The model is shown to have improved detection accuracy, with F-measures of 80.18%, 85.71%, and 80.36% and average areas under the precision-recall curve (AveP) of 77%, 82%, and 74% on a total of 3 million nuclei from 204 whole slide images from three different datasets. Additionally, CoNNACaeF yielded F-measures of 74.01% and 85.36%, respectively, for two different breast cancer datasets. The CoNNACaeF model also outperformed three other state-of-the-art nuclear detection and segmentation approaches, namely the blue-ratio-initialized, iterative-radial-voting-initialized, and maximally-stable-extremal-region-initialized local region active contour models.
Collapse
Affiliation(s)
- Jun Xu
- Nanjing University of Information Science and Technology, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing, China
| | - Lei Gong
- Nanjing University of Information Science and Technology, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing, China
| | - Guanhao Wang
- Nanjing University of Information Science and Technology, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing, China
| | - Cheng Lu
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
| | - Hannah Gilmore
- University Hospitals Case Medical Center, Case Western Reserve University, Institute for Pathology, Cleveland, Ohio, United States
| | - Shaoting Zhang
- University of North Carolina at Charlotte, Department of Computer Science, Charlotte, North Carolina, United States
| | - Anant Madabhushi
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio, United States
| |
Collapse
|
49
|
Cheng J, Mo X, Wang X, Parwani A, Feng Q, Huang K. Identification of topological features in renal tumor microenvironment associated with patient survival. Bioinformatics 2019; 34:1024-1030. [PMID: 29136101 PMCID: PMC7263397 DOI: 10.1093/bioinformatics/btx723] [Citation(s) in RCA: 49] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Accepted: 11/07/2017] [Indexed: 11/13/2022] Open
Abstract
Motivation As a highly heterogeneous disease, a tumor progresses not only through unlimited growth of the tumor cells, but also with the support, stimulation, and nurturing of the microenvironment around it. However, traditional qualitative and/or semi-quantitative parameters obtained by a pathologist's visual examination have very limited capability to capture this interaction between a tumor and its microenvironment. With the advent of digital pathology, computerized image analysis may provide better tumor characterization and give new insights into this problem. Results We propose a novel bioimage informatics pipeline for automatically characterizing the topological organization of different cell patterns in the tumor microenvironment. We apply this pipeline to the only publicly available large histopathology image dataset for a cohort of 190 patients with papillary renal cell carcinoma, obtained from The Cancer Genome Atlas project. Experimental results show that the proposed topological features can successfully stratify early- and middle-stage patients with distinct survival, and show superior performance to traditional clinical features and cellular morphological and intensity features. The proposed features not only provide new insights into the topological organization of cancers, but can also be integrated with genomic data in future studies to develop new integrative biomarkers. Availability and implementation https://github.com/chengjun583/KIRP-topological-features Supplementary information Supplementary data are available at Bioinformatics online.
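An illustrative sketch of one way to quantify the topological organization of cell patterns: build a Delaunay graph over detected cell centroids and summarize how often graph neighbours belong to different cell types, a simple tumor/microenvironment mixing statistic. The paper's actual feature set is richer; the point cloud and cell types here are synthetic assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edges(points):
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:                     # each triangle contributes 3 edges
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    return edges

rng = np.random.default_rng(0)
pts = rng.uniform(size=(200, 2))                      # detected cell centroids
types = rng.choice(["tumor", "lymphocyte"], size=200)
mixed = [1 if types[a] != types[b] else 0 for a, b in delaunay_edges(pts)]
print("fraction of cross-type edges:", round(float(np.mean(mixed)), 3))
```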
Collapse
Affiliation(s)
- Jun Cheng
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
| | - Xiaokui Mo
- Center for Biostatistics, The Ohio State University Wexner Medical Center
| | - Xusheng Wang
- Department of Electrical and Computer Engineering
| | | | - Qianjin Feng
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
| | - Kun Huang
- Department of Electrical and Computer Engineering; Department of Biomedical Informatics, The Ohio State University, Columbus, OH 43210, USA; Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
| |
Collapse
|
50
|
Li J, Yang S, Huang X, Da Q, Yang X, Hu Z, Duan Q, Wang C, Li H. Signet Ring Cell Detection with a Semi-supervised Learning Framework. LECTURE NOTES IN COMPUTER SCIENCE 2019. [DOI: 10.1007/978-3-030-20351-1_66] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
|