1
Lahr I, Alfasly S, Nejat P, Khan J, Kottom L, Kumbhar V, Alsaafin A, Shafique A, Hemati S, Alabtah G, Comfere N, Murphree D, Mangold A, Yasir S, Meroueh C, Boardman L, Shah VH, Garcia JJ, Tizhoosh HR. Analysis and Validation of Image Search Engines in Histopathology. IEEE Rev Biomed Eng 2025; 18:350-367. [PMID: 38995713] [DOI: 10.1109/rbme.2024.3425769]
Abstract
Searching for similar images in archives of histology and histopathology images is a crucial task that may aid in patient tissue comparison for various purposes, ranging from triaging and diagnosis to prognosis and prediction. Whole slide images (WSIs) are highly detailed digital representations of tissue specimens mounted on glass slides. Matching WSI to WSI can serve as the critical method for patient tissue comparison. In this paper, we report extensive analysis and validation of four search methods: bag of visual words (BoVW), Yottixel, SISH, and RetCCL, as well as some of their potential variants. We analyze their algorithms and structures and assess their performance. For this evaluation, we utilized four internal datasets (1269 patients) and three public datasets (1207 patients), totaling more than 200,000 patches from 38 different classes/subtypes across five primary sites. Certain search engines, for example BoVW, exhibit notable efficiency and speed but suffer from low accuracy. Conversely, search engines such as Yottixel demonstrate efficiency and speed while providing moderately accurate results. Recent proposals, including SISH, are inefficient and yield inconsistent outcomes, while alternatives such as RetCCL prove inadequate in both accuracy and efficiency. Further research is imperative to address the dual aspects of accuracy and minimal storage requirements in histopathological image search.
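The bag-of-visual-words representation evaluated in this study can be sketched in a few lines: cluster patch descriptors into a "visual vocabulary," then describe each slide as a normalized histogram of visual-word counts, which makes slides directly comparable. The following is a minimal numpy sketch under stated assumptions: the descriptors are random toy vectors standing in for engineered or deep patch features, and the vocabulary size and L1-based similarity are illustrative choices, not those of any specific engine.

```python
import numpy as np

def kmeans(descriptors, k, iters=20, seed=0):
    """Plain k-means to learn a 'visual vocabulary' of k centroids."""
    rng = np.random.default_rng(seed)
    centroids = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest centroid
        d = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = descriptors[labels == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    return centroids

def bovw_histogram(patch_descriptors, vocabulary):
    """Represent one slide as a normalized histogram of visual-word counts."""
    d = np.linalg.norm(patch_descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# toy data: random 8-D "patch descriptors" for a corpus and two query slides
rng = np.random.default_rng(1)
corpus = rng.normal(size=(200, 8))
vocab = kmeans(corpus, k=16)
slide_a = bovw_histogram(rng.normal(size=(50, 8)), vocab)
slide_b = bovw_histogram(rng.normal(size=(60, 8)), vocab)
similarity = 1.0 - 0.5 * np.abs(slide_a - slide_b).sum()  # L1-based similarity in [0, 1]
```

The histogram step is what makes BoVW fast and storage-light, and also what discards spatial detail, which is consistent with the speed/accuracy trade-off the paper reports.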
2
Alsaafin A, Nejat P, Shafique A, Khan J, Alfasly S, Alabtah G, Tizhoosh HR. Sequential Patching Lattice for Image Classification and Enquiry: Streamlining Digital Pathology Image Processing. Am J Pathol 2024; 194:1898-1912. [PMID: 39032601] [DOI: 10.1016/j.ajpath.2024.06.007]
Abstract
Digital pathology and the integration of artificial intelligence (AI) models have revolutionized histopathology, opening new opportunities. With the increasing availability of whole-slide images (WSIs), demand is growing for efficient retrieval, processing, and analysis of relevant images from vast biomedical archives. However, processing WSIs presents challenges due to their large size and content complexity. Fully digesting a WSI computationally is impractical, and processing all patches individually is prohibitively expensive. In this article, we propose an unsupervised patching algorithm, Sequential Patching Lattice for Image Classification and Enquiry (SPLICE). This novel approach condenses a histopathology WSI into a compact set of representative patches, forming a collage of the WSI while minimizing redundancy. SPLICE prioritizes patch quality and uniqueness by sequentially analyzing a WSI and selecting nonredundant representative features. In search and match applications, SPLICE showed improved accuracy and reduced computation time and storage requirements compared with existing state-of-the-art methods. As an unsupervised method, SPLICE reduced the storage requirements for representing tissue images by 50%. This reduction can enable numerous algorithms in computational pathology to operate much more efficiently, paving the way for accelerated adoption of digital pathology.
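The core idea described above, sequentially keeping only patches that add non-redundant information, can be illustrated with a greedy filter. This is a simplified sketch, not the published SPLICE algorithm: the cosine distance measure, the threshold value, and the random toy features are all assumptions made for illustration.

```python
import numpy as np

def splice_like_select(features, threshold=0.2):
    """Scan patches in sequence; keep a patch only if its unit-normalized
    feature vector lies at cosine distance > threshold from every patch
    kept so far, i.e. only if it adds non-redundant information."""
    kept_idx, kept_vecs = [], []
    for i, f in enumerate(features):
        v = f / np.linalg.norm(f)
        if all(1.0 - float(v @ u) > threshold for u in kept_vecs):
            kept_idx.append(i)
            kept_vecs.append(v)
    return kept_idx

# toy features: five distinct patches followed by exact duplicates of them
rng = np.random.default_rng(0)
base = rng.normal(size=(5, 16))
features = np.vstack([base, base])
selected = splice_like_select(features)
```

On this toy input the duplicated second half is always filtered out, which is the mechanism behind the storage reduction the abstract reports.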
Affiliation(s)
- Areej Alsaafin - KIMIA Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota
- Peyman Nejat - KIMIA Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota
- Abubakr Shafique - KIMIA Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota
- Jibran Khan - KIMIA Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota
- Saghir Alfasly - KIMIA Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota
- Ghazal Alabtah - KIMIA Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota
- Hamid R Tizhoosh - KIMIA Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota
3
Frewing A, Gibson AB, Robertson R, Urie PM, Corte DD. Don't Fear the Artificial Intelligence: A Systematic Review of Machine Learning for Prostate Cancer Detection in Pathology. Arch Pathol Lab Med 2024; 148:603-612. [PMID: 37594900] [DOI: 10.5858/arpa.2022-0460-ra]
Abstract
CONTEXT Automated prostate cancer detection using machine learning technology has led to speculation that pathologists will soon be replaced by algorithms. This review covers the development of machine learning algorithms and their reported effectiveness specific to prostate cancer detection and Gleason grading. OBJECTIVE To examine current algorithms regarding their accuracy and classification abilities. We provide a general explanation of the technology and how it is being used in clinical practice. The challenges to the application of machine learning algorithms in clinical practice are also discussed. DATA SOURCES The literature for this review was identified and collected using a systematic search. Criteria were established prior to the sorting process to effectively direct the selection of studies. A 4-point system was implemented to rank the papers according to their relevance. For papers accepted as relevant to our metrics, all cited and citing studies were also reviewed. Studies were then categorized based on whether they implemented binary or multi-class classification methods. Data were extracted from papers that contained accuracy, area under the curve (AUC), or κ values in the context of prostate cancer detection. The results were visually summarized to present accuracy trends between classification abilities. CONCLUSIONS It is more difficult to achieve high accuracy metrics for multi-class classification tasks than for binary tasks. The clinical implementation of an algorithm that can assign a Gleason grade to clinical whole slide images (WSIs) remains elusive. Machine learning technology is currently not able to replace pathologists but can serve as an important safeguard against misdiagnosis.
Affiliation(s)
- Aaryn Frewing - Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Alexander B Gibson - Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Richard Robertson - Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Paul M Urie - Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Dennis Della Corte - Department of Physics and Astronomy, Brigham Young University, Provo, Utah
4
Qureshi HA, Chetty R, Kuklyte J, Ratcliff K, Morrissey M, Lyons C, Rafferty M. Synergies and Challenges in the Preclinical and Clinical Implementation of Pathology Artificial Intelligence Applications. Mayo Clin Proc Digit Health 2023; 1:601-613. [PMID: 40206312] [PMCID: PMC11975742] [DOI: 10.1016/j.mcpdig.2023.08.007]
Abstract
The recent introduction of digitization in pathology has disrupted the field, creating the potential to apply advanced quantitative analysis and artificial intelligence (AI) to the domain. In this study, we present an overview of the pathology AI applications with the greatest potential for widespread adoption in the preclinical domain and, subsequently, in the clinical setting. We also discuss the major challenges to AI adoption faced by digital and computational pathology. We review the research literature in the domain, present a detailed analysis of the most promising areas of digital and computational pathology AI research, and identify applications that are likely to see the first adoptions of AI technology. Our analysis shows that certain areas and fields of application have received more attention and can affect digital and computational pathology more favorably, advancing the field. We also present the main challenges faced by the field and provide a comparative analysis of the aspects likely to influence it over the long term.
5
Tommasino C, Merolla F, Russo C, Staibano S, Rinaldi AM. Histopathological Image Deep Feature Representation for CBIR in Smart PACS. J Digit Imaging 2023; 36:2194-2209. [PMID: 37296349] [PMCID: PMC10501985] [DOI: 10.1007/s10278-023-00832-x]
Abstract
Pathological anatomy is moving toward computerized processes, mainly due to the extensive digitization of histology slides, which has resulted in the availability of many Whole Slide Images (WSIs). Their use is essential, especially in cancer diagnosis and research, and raises a pressing need for increasingly effective information archiving and retrieval systems. Picture Archiving and Communication Systems (PACSs) offer a concrete way to archive and organize this growing amount of data, and robust, accurate methodologies for querying them in the pathology domain are needed. In particular, the Content-Based Image Retrieval (CBIR) methodology can be employed in PACSs through a query-by-example task. In this context, a crucial aspect of CBIR is the representation of images as feature vectors, and retrieval accuracy depends mainly on feature extraction. Thus, our study explored different representations of WSI patches using features extracted from pre-trained Convolutional Neural Networks (CNNs). To enable a useful comparison, we evaluated features extracted from different layers of state-of-the-art CNNs using different dimensionality reduction techniques. Furthermore, we provide a qualitative analysis of the obtained results. The evaluation showed encouraging results for our proposed framework.
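The query-by-example pipeline studied above, extract features, reduce their dimensionality, and rank archive items by similarity, can be outlined with the CNN stage replaced by toy vectors. This is a sketch under assumptions: SVD-based PCA stands in for whichever dimensionality reduction technique is used, cosine similarity is one of several possible ranking measures, and the random features are placeholders for real CNN activations.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project mean-centred feature vectors onto the top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def retrieve(query, gallery, top_k=3):
    """Rank gallery vectors by cosine similarity to the query vector."""
    q = query / np.linalg.norm(query)
    G = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(G @ q))[:top_k]

# stand-in for CNN features of 20 archived WSI patches
rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 64))
reduced = pca_reduce(feats, n_components=8)
hits = retrieve(reduced[7], reduced)  # query with item 7 itself
```

Querying with an archived item should return that item first, a quick sanity check for any CBIR index.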
Affiliation(s)
- Cristian Tommasino - Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, Naples, 80125 Italy
- Francesco Merolla - Department of Advanced Biomedical Sciences, Pathology Section, University of Naples Federico II, Naples, 80131 Italy
- Cristiano Russo - Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, Naples, 80125 Italy
- Stefania Staibano - Department of Medicine and Health Sciences V. Tiberio, University of Molise, Campobasso, 86100 Italy
- Antonio Maria Rinaldi - Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, Naples, 80125 Italy
6
Dehkharghanian T, Bidgoli AA, Riasatian A, Mazaheri P, Campbell CJV, Pantanowitz L, Tizhoosh HR, Rahnamayan S. Biased data, biased AI: deep networks predict the acquisition site of TCGA images. Diagn Pathol 2023; 18:67. [PMID: 37198691] [DOI: 10.1186/s13000-023-01355-3]
Abstract
BACKGROUND Deep learning models applied to healthcare applications, including digital pathology, have been increasing in scope and importance in recent years. Many of these models have been trained on The Cancer Genome Atlas (TCGA) atlas of digital images or use it as a validation source. One crucial factor that seems to have been widely ignored is the internal bias that originates from the institutions that contributed WSIs to the TCGA dataset, and its effects on models trained on this dataset. METHODS 8,579 paraffin-embedded, hematoxylin and eosin stained, digital slides were selected from the TCGA dataset. More than 140 medical institutions (acquisition sites) contributed to this dataset. Two deep neural networks (DenseNet121 and KimiaNet) were used to extract deep features at 20× magnification. DenseNet was pre-trained on non-medical objects. KimiaNet has the same structure but was trained for cancer type classification on TCGA images. The extracted deep features were later used to detect each slide's acquisition site, and also for slide representation in image search. RESULTS DenseNet's deep features could distinguish acquisition sites with 70% accuracy, whereas KimiaNet's deep features could reveal acquisition sites with more than 86% accuracy. These findings suggest that there are acquisition site specific patterns that could be picked up by deep neural networks. It has also been shown that these medically irrelevant patterns can interfere with other applications of deep learning in digital pathology, namely image search. This study shows that there are acquisition site specific patterns that can be used to identify tissue acquisition sites without any explicit training. Furthermore, it was observed that a model trained for cancer subtype classification has exploited such medically irrelevant patterns to classify cancer types. Digital scanner configuration and noise, tissue stain variation and artifacts, and source site patient demographics are among the factors that likely account for the observed bias. Therefore, researchers should be cautious of such bias when using histopathology datasets for developing and training deep networks.
Affiliation(s)
- Taher Dehkharghanian - University Health Network, Toronto, ON, Canada; Department of Pathology and Molecular Medicine, Faculty of Health Science, McMaster University, Hamilton, ON, Canada
- Azam Asilian Bidgoli - Nature Inspired Computational Intelligence (NICI), Ontario Tech University, Oshawa, ON, Canada; Nature Inspired Computational Intelligence (NICI) Lab, Department of Engineering, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON, L2S 3A1, Canada; Bharti School of Engineering and Computer Science, Laurentian University, Sudbury, ON, Canada
- Pooria Mazaheri - Nature Inspired Computational Intelligence (NICI), Ontario Tech University, Oshawa, ON, Canada
- Clinton J V Campbell - Department of Pathology and Molecular Medicine, Faculty of Health Science, McMaster University, Hamilton, ON, Canada; William Osler Health System, Brampton, ON, Canada
- H R Tizhoosh - KIMIA Lab, University of Waterloo, Waterloo, ON, Canada; Rhazes Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA
- Shahryar Rahnamayan - Nature Inspired Computational Intelligence (NICI), Ontario Tech University, Oshawa, ON, Canada; Nature Inspired Computational Intelligence (NICI) Lab, Department of Engineering, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON, L2S 3A1, Canada
7
Hashimoto N, Takagi Y, Masuda H, Miyoshi H, Kohno K, Nagaishi M, Sato K, Takeuchi M, Furuta T, Kawamoto K, Yamada K, Moritsubo M, Inoue K, Shimasaki Y, Ogura Y, Imamoto T, Mishina T, Tanaka K, Kawaguchi Y, Nakamura S, Ohshima K, Hontani H, Takeuchi I. Case-based similar image retrieval for weakly annotated large histopathological images of malignant lymphoma using deep metric learning. Med Image Anal 2023; 85:102752. [PMID: 36716701] [DOI: 10.1016/j.media.2023.102752]
Abstract
In the present study, we propose a novel case-based similar image retrieval (SIR) method for hematoxylin and eosin (H&E) stained histopathological images of malignant lymphoma. When a whole slide image (WSI) is used as an input query, it is desirable to be able to retrieve similar cases by focusing on image patches in pathologically important regions such as tumor cells. To address this problem, we employ attention-based multiple instance learning, which enables us to focus on tumor-specific regions when the similarity between cases is computed. Moreover, we employ contrastive distance metric learning to incorporate immunohistochemical (IHC) staining patterns as useful supervised information for defining appropriate similarity between heterogeneous malignant lymphoma cases. In the experiment with 249 malignant lymphoma patients, we confirmed that the proposed method exhibited higher evaluation measures than the baseline case-based SIR methods. Furthermore, the subjective evaluation by pathologists revealed that our similarity measure using IHC staining patterns is appropriate for representing the similarity of H&E stained tissue images for malignant lymphoma.
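The attention-based multiple instance learning step described above, pooling patch embeddings into a case-level embedding with learned attention so that tumor-specific regions can dominate the similarity computation, reduces to a weighted sum. Below is a numpy sketch in the style of standard attention MIL pooling; the weight matrices V and w are random stand-ins here, whereas the paper's model learns them, and all shapes are illustrative.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(patch_feats, V, w):
    """Score each patch with a small attention network tanh(h V) w,
    normalize the scores with softmax, and return the attention-weighted
    sum of patch features as the case-level embedding."""
    scores = np.tanh(patch_feats @ V) @ w   # one scalar score per patch
    attn = softmax(scores)                  # weights sum to 1 over patches
    return attn @ patch_feats, attn

rng = np.random.default_rng(0)
patches = rng.normal(size=(12, 32))         # 12 patch embeddings of size 32
V = rng.normal(size=(32, 8))
w = rng.normal(size=8)
case_embedding, attn = attention_pool(patches, V, w)
```

Because the weights are learned end to end, patches that matter for the task (e.g. tumor cells) receive higher attention, which is the mechanism that lets case-to-case similarity focus on pathologically important regions.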
Affiliation(s)
- Noriaki Hashimoto - RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Yusuke Takagi - Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
- Hiroki Masuda - Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
- Hiroaki Miyoshi, Kei Kohno, Miharu Nagaishi, Kensaku Sato, Mai Takeuchi, Takuya Furuta, Keisuke Kawamoto, Kyohei Yamada, Mayuko Moritsubo, Kanako Inoue, Yasumasa Shimasaki, Yusuke Ogura, Teppei Imamoto, Tatsuzo Mishina, Ken Tanaka, Yoshino Kawaguchi, Koichi Ohshima - Department of Pathology, Kurume University School of Medicine, 67 Asahimachi, Kurume 830-0011, Japan
- Shigeo Nakamura - Department of Pathology and Laboratory Medicine, Nagoya University Hospital, 65 Tsurumai-cho, Showa-ku, Nagoya 466-8560, Japan
- Hidekata Hontani - Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
- Ichiro Takeuchi - RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Department of Mechanical Systems Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan
8
Bidgoli AA, Rahnamayan S, Dehkharghanian T, Riasatian A, Kalra S, Zaveri M, Campbell CJ, Parwani A, Pantanowitz L, Tizhoosh H. Evolutionary deep feature selection for compact representation of gigapixel images in digital pathology. Artif Intell Med 2022; 132:102368. [DOI: 10.1016/j.artmed.2022.102368]
9
Rasoolijaberi M, Babaei M, Riasatian A, Hemati S, Ashrafi P, Gonzalez R, Tizhoosh HR. Multi-Magnification Image Search in Digital Pathology. IEEE J Biomed Health Inform 2022; 26:4611-4622. [PMID: 35687644] [DOI: 10.1109/jbhi.2022.3181531]
Abstract
This paper investigates the effect of magnification on content-based image search in digital pathology archives and proposes to use multi-magnification image representation. Image search in large archives of digital pathology slides provides researchers and medical professionals with an opportunity to match records of current and past patients and learn from previously diagnosed and treated cases. When working with microscopes, pathologists switch between different magnification levels while examining tissue specimens to find and evaluate various morphological features. Inspired by the conventional pathology workflow, we have investigated several magnification levels in digital pathology and their combinations to minimize the gap between AI-enabled image search methods and clinical settings. The proposed search framework does not rely on any regional annotation and potentially applies to millions of unlabelled (raw) whole slide images. This paper suggests two approaches for combining magnification levels and compares their performance. The first approach obtains a single-vector deep feature representation for a digital slide, whereas the second approach works with a multi-vector deep feature representation. We report the search results of 20×, 10×, and 5× magnifications and their combinations on a subset of The Cancer Genome Atlas (TCGA) repository. The experiments verify that cell-level information at the highest magnification is essential for searching for diagnostic purposes, while low-magnification information may improve this assessment depending on the tumor type. Our multi-magnification approach achieved up to 11% F1-score improvement in searching among the urinary tract and brain tumor subtypes compared to the single-magnification image search.
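The two combination strategies compared above can be contrasted directly: one fuses magnifications into a single slide vector, the other keeps one vector per magnification level. The sketch below uses mean-pooled toy features; the pooling operator, fusion by concatenation, and feature sizes are illustrative assumptions rather than the paper's exact operators.

```python
import numpy as np

def single_vector(feats_by_mag):
    """Approach 1: one vector per slide, built by concatenating a pooled
    summary of the patch features at each magnification level."""
    return np.concatenate([feats_by_mag[m].mean(axis=0)
                           for m in sorted(feats_by_mag)])

def multi_vector(feats_by_mag):
    """Approach 2: keep a separate summary vector per magnification,
    so 20x, 10x, and 5x can be matched independently and then merged."""
    return {m: f.mean(axis=0) for m, f in feats_by_mag.items()}

# toy slide: patch features extracted at three magnification levels
rng = np.random.default_rng(0)
slide = {20: rng.normal(size=(40, 16)),   # 40 patches at 20x
         10: rng.normal(size=(15, 16)),   # 15 patches at 10x
         5:  rng.normal(size=(6, 16))}    # 6 patches at 5x
fused = single_vector(slide)
per_mag = multi_vector(slide)
```

The single-vector form is cheaper to store and index; the multi-vector form preserves per-magnification detail, which matches the paper's finding that high-magnification (cell-level) information carries most of the diagnostic signal.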
10
Escobar Díaz Guerrero R, Carvalho L, Bocklitz T, Popp J, Oliveira JL. Software tools and platforms in Digital Pathology: a review for clinicians and computer scientists. J Pathol Inform 2022; 13:100103. [PMID: 36268075] [PMCID: PMC9576980] [DOI: 10.1016/j.jpi.2022.100103]
Abstract
At the end of the twentieth century, a new technology was developed that allowed an entire tissue section to be scanned from a glass slide. Originally called virtual microscopy, this technology is now known as Whole Slide Imaging (WSI). WSI presents new challenges for reading, visualization, storage, and analysis. For this reason, several technologies have been developed to facilitate the handling of these images. In this paper, we analyze the most widely used technologies in the field of digital pathology, ranging from specialized libraries for reading these images to complete platforms that support reading, visualization, and analysis. Our aim is to provide the reader, whether a pathologist or a computer scientist, with the knowledge needed to choose the right technologies for new studies, development, or research.
Affiliation(s)
- Rodrigo Escobar Díaz Guerrero - BMD Software, PCI - Creative Science Park, 3830-352 Ilhavo, Portugal; DETI/IEETA, University of Aveiro, 3810-193 Aveiro, Portugal
- Lina Carvalho - Institute of Anatomical and Molecular Pathology, Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal
- Thomas Bocklitz - Leibniz Institute of Photonic Technology Jena, Member of Leibniz research alliance ‘Health technologies’, Albert-Einstein-Straße 9, 07745 Jena, Germany; Institute of Physical Chemistry and Abbe Center of Photonics (IPC), Friedrich-Schiller-University, Jena, Germany
- Juergen Popp - Leibniz Institute of Photonic Technology Jena, Member of Leibniz research alliance ‘Health technologies’, Albert-Einstein-Straße 9, 07745 Jena, Germany; Institute of Physical Chemistry and Abbe Center of Photonics (IPC), Friedrich-Schiller-University, Jena, Germany
11
Fast and scalable search of whole-slide images via self-supervised deep learning. Nat Biomed Eng 2022; 6:1420-1434. [PMID: 36217022] [PMCID: PMC9792371] [DOI: 10.1038/s41551-022-00929-8]
Abstract
The adoption of digital pathology has enabled the curation of large repositories of gigapixel whole-slide images (WSIs). Computationally identifying WSIs with similar morphologic features within large repositories without requiring supervised training can have significant applications. However, the retrieval speeds of algorithms for searching similar WSIs often scale with the repository size, which limits their clinical and research potential. Here we show that self-supervised deep learning can be leveraged to search for and retrieve WSIs at speeds that are independent of repository size. The algorithm, which we named SISH (for self-supervised image search for histology) and provide as an open-source package, requires only slide-level annotations for training, encodes WSIs into meaningful discrete latent representations and leverages a tree data structure for fast searching followed by an uncertainty-based ranking algorithm for WSI retrieval. We evaluated SISH on multiple tasks (including retrieval tasks based on tissue-patch queries) and on datasets spanning over 22,000 patient cases and 56 disease subtypes. SISH can also be used to aid the diagnosis of rare cancer types for which the number of available WSIs is often insufficient to train supervised deep-learning models.
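The speed claim above rests on indexing discrete latent codes so that lookup cost depends on the query's neighbourhood rather than on archive size; SISH itself uses a tree data structure with an uncertainty-based ranking stage. The stand-in below is deliberately simplified: a sorted integer index with binary search, with toy one-integer-per-slide codes, to show only the scaling idea.

```python
import bisect

def build_index(codes):
    """Sort (code, slide_id) pairs once; a query then touches only the
    neighbourhood of its own code, not the whole archive."""
    pairs = sorted((c, i) for i, c in enumerate(codes))
    return [c for c, _ in pairs], [i for _, i in pairs]

def search(keys, ids, query_code, window=2):
    """Return ids of slides whose discrete code lies within +/- window
    of the query's code, using binary search for the range bounds."""
    lo = bisect.bisect_left(keys, query_code - window)
    hi = bisect.bisect_right(keys, query_code + window)
    return ids[lo:hi]

# toy archive: each slide reduced to a single integer latent code
archive_codes = [5, 42, 7, 40, 100]
keys, ids = build_index(archive_codes)
neighbours = search(keys, ids, query_code=41)  # slides with codes 40 and 42
```

Each `search` call costs O(log n) for the bounds plus the size of the returned neighbourhood, which is the property that keeps retrieval time flat as the repository grows.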
12
Lee HN, Seo HD, Kim EM, Han BS, Kang JS. Classification of Mouse Lung Metastatic Tumor with Deep Learning. Biomol Ther (Seoul) 2021; 30:179-183. [PMID: 34725310] [PMCID: PMC8902456] [DOI: 10.4062/biomolther.2021.130]
Abstract
Traditionally, pathologists microscopically examine tissue sections to detect pathological lesions; the many slides that must be evaluated impose severe work burdens. Also, diagnostic accuracy varies with pathologist training and experience; better diagnostic tools are required. Given the rapid development of computer vision, automated deep learning is now used to classify microscopic images, including medical images. Here, we used an Inception-v3 deep learning model to detect mouse lung metastatic tumors via whole slide imaging (WSI); we cropped the images to 151 by 151 pixels. The images were divided into training (53.8%) and test (46.2%) sets (21,017 and 18,016 images, respectively). When images from lung tissue containing tumor tissues were evaluated, the model accuracy was 98.76%. When images from normal lung tissue were evaluated, the model accuracy (“no tumor”) was 99.87%. Thus, the deep learning model distinguished metastatic lesions from normal lung tissue. Our approach will allow the rapid and accurate analysis of various tissues.
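The preprocessing step described above, cropping a scanned image into fixed 151 by 151 pixel tiles for the classifier, is a simple sliding grid. A minimal sketch follows; discarding partial tiles at the borders is one common convention and an assumption here, since the paper's exact border handling is not stated in the abstract.

```python
import numpy as np

def crop_patches(image, size=151):
    """Tile an H x W x C image into non-overlapping size x size patches,
    dropping partial tiles at the right and bottom borders."""
    h, w = image.shape[:2]
    return [image[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

# toy "slide region" that fits exactly 3 tiles vertically and 2 horizontally
region = np.zeros((453, 302, 3), dtype=np.uint8)
patches = crop_patches(region)
```

Each patch would then be fed to the classifier independently, and per-patch predictions aggregated to the tissue level.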
Affiliation(s)
- Ha Neul Lee - Department of Biomedical Laboratory Science, Namseoul University, Cheonan 31020, Republic of Korea
- Hong-Deok Seo - Department of Industrial Promotion, Spatial Information Industry Promotion Agency, Seongnam 13487, Republic of Korea
- Eui-Myoung Kim - Department of Spatial Information Engineering, Namseoul University, Cheonan 31020, Republic of Korea
- Beom Seok Han - Department of Pharmaceutical Engineering, Hoseo University, Asan 31499, Republic of Korea
- Jin Seok Kang - Department of Biomedical Laboratory Science, Namseoul University, Cheonan 31020, Republic of Korea
13
Otálora S, Marini N, Müller H, Atzori M. Combining weakly and strongly supervised learning improves strong supervision in Gleason pattern classification. BMC Med Imaging 2021; 21:77. [PMID: 33964886] [PMCID: PMC8105943] [DOI: 10.1186/s12880-021-00609-0]
Abstract
BACKGROUND One challenge in training deep convolutional neural network (CNN) models with whole slide images (WSIs) is providing the required large number of costly, manually annotated image regions. Strategies to alleviate the scarcity of annotated data include using transfer learning, data augmentation, and training the models with less expensive image-level annotations (weakly-supervised learning). However, it is not clear how to combine the use of transfer learning in a CNN model when different data sources are available for training, or how to leverage the combination of large amounts of weakly annotated images with a set of local region annotations. This paper aims to evaluate CNN training strategies based on transfer learning to leverage the combination of weak and strong annotations in heterogeneous data sources. The trade-off between classification performance and annotation effort is explored by evaluating a CNN that learns from strong labels (region annotations) and is later fine-tuned on a dataset with less expensive weak (image-level) labels. RESULTS As expected, the model performance on strongly annotated data steadily increases as the percentage of strong annotations used increases, reaching a performance comparable to pathologists ([Formula: see text]). Nevertheless, the performance sharply decreases when applied to the WSI classification scenario with [Formula: see text], and it remains lower regardless of the number of annotations used. The model performance increases when fine-tuning the model for the task of Gleason scoring with the weak WSI labels [Formula: see text]. CONCLUSION Combining weak and strong supervision improves strong supervision in classification of Gleason patterns using tissue microarrays (TMAs) and WSI regions. Our results suggest effective strategies for training CNN models that combine limited annotated data and heterogeneous data sources. In the controlled TMA scenario, performance increases with the number of annotations used to train the model; nevertheless, performance is hindered when the trained TMA model is applied directly to the more challenging WSI classification problem. This demonstrates that a good pre-trained model for prostate cancer TMA image classification may lead to the best downstream model if fine-tuned on the WSI target dataset. The source code for reproducing the experiments is available at: https://github.com/ilmaro8/Digital_Pathology_Transfer_Learning.
Collapse
Affiliation(s)
- Sebastian Otálora
- HES-SO Valais, Technopôle 3, 3960 Sierre, Switzerland
- Computer Science Centre (CUI), University of Geneva, Route de Drize 7, Battelle A, Carouge, Switzerland
| | - Niccolò Marini
- HES-SO Valais, Technopôle 3, 3960 Sierre, Switzerland
- Computer Science Centre (CUI), University of Geneva, Route de Drize 7, Battelle A, Carouge, Switzerland
| | - Henning Müller
- HES-SO Valais, Technopôle 3, 3960 Sierre, Switzerland
- Faculty of Medicine, University of Geneva, 1 rue Michel-Servet, 1211 Geneva, Switzerland
| | - Manfredo Atzori
- HES-SO Valais, Technopôle 3, 3960 Sierre, Switzerland
- Department of Neuroscience, University of Padova, via Belzoni 160, 35121 Padova, Italy
| |
Collapse
|
14
|
Burlingame EA, McDonnell M, Schau GF, Thibault G, Lanciault C, Morgan T, Johnson BE, Corless C, Gray JW, Chang YH. SHIFT: speedy histological-to-immunofluorescent translation of a tumor signature enabled by deep learning. Sci Rep 2020; 10:17507. [PMID: 33060677 PMCID: PMC7566625 DOI: 10.1038/s41598-020-74500-3] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Accepted: 09/28/2020] [Indexed: 02/07/2023] Open
Abstract
Spatially resolved molecular profiling by immunostaining tissue sections is a key feature in cancer diagnosis, subtyping, and treatment, where it complements routine histopathological evaluation by clarifying tumor phenotypes. In this work, we present a deep learning-based method called speedy histological-to-immunofluorescent translation (SHIFT), which takes histologic images of hematoxylin and eosin (H&E)-stained tissue as input and, in near-real time, returns inferred virtual immunofluorescence (IF) images that estimate the underlying distribution of the tumor cell marker pan-cytokeratin (panCK). To build a dataset suitable for learning this task, we developed a serial staining protocol that allows IF and H&E images from the same tissue to be spatially registered. We show that deep learning-extracted morphological feature representations of histological images can guide representative sample selection, which improved SHIFT generalizability in a small but heterogeneous set of human pancreatic cancer samples. With validation in larger cohorts, SHIFT could serve as an efficient preliminary, auxiliary, or substitute for panCK IF by delivering virtual panCK IF images for a fraction of the cost and in a fraction of the time required by traditional IF.
Collapse
Affiliation(s)
- Erik A Burlingame
- Computational Biology Program, Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, USA
- OHSU Center for Spatial Systems Biomedicine, Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, USA
| | - Mary McDonnell
- OHSU Center for Spatial Systems Biomedicine, Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, USA
| | - Geoffrey F Schau
- Computational Biology Program, Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, USA
- OHSU Center for Spatial Systems Biomedicine, Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, USA
| | - Guillaume Thibault
- OHSU Center for Spatial Systems Biomedicine, Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, USA
| | - Christian Lanciault
- Department of Pathology, Oregon Health and Science University, Portland, OR, USA
| | - Terry Morgan
- Department of Pathology, Oregon Health and Science University, Portland, OR, USA
| | - Brett E Johnson
- OHSU Center for Spatial Systems Biomedicine, Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, USA
| | - Christopher Corless
- Knight Diagnostic Laboratories, Oregon Health and Science University, Portland, OR, USA
- Knight Cancer Institute, Oregon Health and Science University, Portland, OR, USA
| | - Joe W Gray
- OHSU Center for Spatial Systems Biomedicine, Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, USA
- Knight Cancer Institute, Oregon Health and Science University, Portland, OR, USA
- Brenden-Colson Center for Pancreatic Care, Oregon Health and Science University, Portland, OR, USA
| | - Young Hwan Chang
- Computational Biology Program, Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, USA.
- OHSU Center for Spatial Systems Biomedicine, Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, USA.
- Brenden-Colson Center for Pancreatic Care, Oregon Health and Science University, Portland, OR, USA.
| |
Collapse
|
15
|
Menter T, Nicolet S, Baumhoer D, Tolnay M, Tzankov A. Intraoperative frozen section consultation by remote whole-slide imaging analysis -validation and comparison to robotic remote microscopy. J Clin Pathol 2019; 73:350-352. [PMID: 31719106 PMCID: PMC7279565 DOI: 10.1136/jclinpath-2019-206261] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2019] [Revised: 11/01/2019] [Accepted: 11/01/2019] [Indexed: 11/10/2022]
Abstract
Digital pathology, including whole slide image (WSI) acquisition, is a promising tool for histopathologic teleconsultation. To test and validate the use of WSI in comparison with robotic microscopy for intraoperative frozen section consultation of peripheral hospitals serviced by our department, we compared the VENTANA DP 200 slide scanner with an established remote-controlled digital microscope. Thirty cases were retrospectively analysed. Compared with a median specimen handling time of 19 min using remote-controlled microscopy, WSI handling was significantly shorter (11 min, p=0.0089) and offered better image quality, for example, allowing the detection of a positive resection margin of a malignant melanoma that had been missed using the former system. Prospectively assessed on 12 cases, the median handling time was 6 min. Here, we demonstrate the applicability and advantages of WSI for intraoperative frozen section teleconsultation. WSI-based telepathology proves to be an efficient and reliable tool, providing superior turnaround time and image resolution.
Collapse
Affiliation(s)
- Thomas Menter
- Institute of Pathology and Medical Genetics, University Hospital Basel, Basel, Switzerland
| | - Stefan Nicolet
- Institute of Pathology and Medical Genetics, University Hospital Basel, Basel, Switzerland
| | - Daniel Baumhoer
- Institute of Pathology and Medical Genetics, University Hospital Basel, Basel, Switzerland
| | - Markus Tolnay
- Institute of Pathology and Medical Genetics, University Hospital Basel, Basel, Switzerland
| | - Alexandar Tzankov
- Institute of Pathology and Medical Genetics, University Hospital Basel, Basel, Switzerland
| |
Collapse
|