1
Karz A, Coudray N, Bayraktar E, Galbraith K, Jour G, Shadaloey AAS, Eskow N, Rubanov A, Navarro M, Moubarak R, Baptiste G, Levinson G, Mezzano V, Alu M, Loomis C, Lima D, Rubens A, Jilaveanu L, Tsirigos A, Hernando E. MetFinder: A Tool for Automated Quantitation of Metastatic Burden in Histological Sections From Preclinical Models. Pigment Cell Melanoma Res 2025; 38:e13195. PMID: 39254030; PMCID: PMC11948878; DOI: 10.1111/pcmr.13195. Received: 02/08/2024; Revised: 07/31/2024; Accepted: 08/06/2024.
Abstract
As efforts to study the mechanisms of melanoma metastasis and novel therapeutic approaches multiply, researchers need accurate, high-throughput methods to evaluate how specific interventions affect tumor burden. We show that automated quantification of tumor content from whole-slide images is a compelling solution for assessing in vivo experiments. To increase the throughput of data collection from preclinical studies, we assembled a large annotated dataset and trained a deep neural network for quantitative analysis of melanoma tumor content in histopathological sections from murine models. After assessing its performance in segmenting these images, we found that the tool produced results consistent with an orthogonal method of measuring metastasis (bioluminescence) in an experimental setting. This AI-based algorithm, made freely available to academic laboratories through a web interface called MetFinder, promises to become an asset for melanoma researchers and pathologists interested in accurate, quantitative assessment of metastatic burden.
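The core quantity a tool like MetFinder reports, percent tumor area per section, can be illustrated with a short sketch. This is not the published MetFinder code; the function name, mask representation, and toy data below are assumptions for illustration only, assuming the network's per-pixel predictions have already been thresholded into boolean masks:

```python
import numpy as np

def metastatic_burden(tumor_mask: np.ndarray, tissue_mask: np.ndarray) -> float:
    """Percent of tissue area classified as tumor, given two boolean masks
    of the same shape (e.g. thresholded per-pixel network predictions)."""
    tissue_px = tissue_mask.sum()
    if tissue_px == 0:
        return 0.0
    # Count only tumor pixels that fall inside detected tissue.
    tumor_px = np.logical_and(tumor_mask, tissue_mask).sum()
    return 100.0 * tumor_px / tissue_px

# Toy 4x4 section: the left half is tissue, and 2 of those 8 pixels are tumor.
tissue = np.array([[1, 1, 0, 0]] * 4, dtype=bool)
tumor = np.zeros((4, 4), dtype=bool)
tumor[0, 0] = tumor[1, 0] = True
print(metastatic_burden(tumor, tissue))  # 25.0
```

Restricting the tumor count to pixels inside the tissue mask keeps slide background and artifacts from inflating the denominator or the numerator.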
Affiliation(s)
- Alcida Karz
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
  - Interdisciplinary Melanoma Cooperative Group, Perlmutter Cancer Center, NYU Langone Health, New York, NY 10016
- Nicolas Coudray
  - Applied Bioinformatics Laboratories, NYU Langone Health, New York, NY 10016
  - Department of Cell Biology, NYU School of Medicine, New York, NY, USA
- Erol Bayraktar
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
  - Interdisciplinary Melanoma Cooperative Group, Perlmutter Cancer Center, NYU Langone Health, New York, NY 10016
- Kristyn Galbraith
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
- George Jour
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
- Arman Alberto Sorin Shadaloey
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
  - Interdisciplinary Melanoma Cooperative Group, Perlmutter Cancer Center, NYU Langone Health, New York, NY 10016
- Nicole Eskow
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
  - Interdisciplinary Melanoma Cooperative Group, Perlmutter Cancer Center, NYU Langone Health, New York, NY 10016
- Andrey Rubanov
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
  - Interdisciplinary Melanoma Cooperative Group, Perlmutter Cancer Center, NYU Langone Health, New York, NY 10016
- Maya Navarro
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
  - Interdisciplinary Melanoma Cooperative Group, Perlmutter Cancer Center, NYU Langone Health, New York, NY 10016
- Rana Moubarak
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
  - Interdisciplinary Melanoma Cooperative Group, Perlmutter Cancer Center, NYU Langone Health, New York, NY 10016
- Gillian Baptiste
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
  - Interdisciplinary Melanoma Cooperative Group, Perlmutter Cancer Center, NYU Langone Health, New York, NY 10016
- Grace Levinson
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
  - Interdisciplinary Melanoma Cooperative Group, Perlmutter Cancer Center, NYU Langone Health, New York, NY 10016
- Valeria Mezzano
  - Experimental Pathology Research Laboratory, Division of Advanced Research Technologies, NYU Grossman School of Medicine, New York, NY 10016
- Mark Alu
  - Experimental Pathology Research Laboratory, Division of Advanced Research Technologies, NYU Grossman School of Medicine, New York, NY 10016
- Cynthia Loomis
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
  - Experimental Pathology Research Laboratory, Division of Advanced Research Technologies, NYU Grossman School of Medicine, New York, NY 10016
- Daniel Lima
  - Research Software Engineering Core, Medical Center Information Technology Department, NYU Langone Health, New York, NY 10016
- Adam Rubens
  - Research Software Engineering Core, Medical Center Information Technology Department, NYU Langone Health, New York, NY 10016
- Lucia Jilaveanu
  - Department of Medicine, Yale University, New Haven, CT 06510
- Aristotelis Tsirigos
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
  - Applied Bioinformatics Laboratories, NYU Langone Health, New York, NY 10016
- Eva Hernando
  - Department of Pathology, NYU Grossman School of Medicine, New York, NY 10016
  - Interdisciplinary Melanoma Cooperative Group, Perlmutter Cancer Center, NYU Langone Health, New York, NY 10016
2
Walker C, Talawalla T, Toth R, Ambekar A, Rea K, Chamian O, Fan F, Berezowska S, Rottenberg S, Madabhushi A, Maillard M, Barisoni L, Horlings HM, Janowczyk A. PatchSorter: a high throughput deep learning digital pathology tool for object labeling. NPJ Digit Med 2024; 7:164. PMID: 38902336; PMCID: PMC11190251; DOI: 10.1038/s41746-024-01150-4. Received: 07/24/2023; Accepted: 05/31/2024.
Abstract
Discovering patterns associated with diagnosis, prognosis, and therapy response in digital pathology images often requires labeling large quantities of histological objects, a task that is frequently intractable. Here we release an open-source labeling tool, PatchSorter, which integrates deep learning with an intuitive web interface. Using >100,000 objects, we demonstrate a >7x improvement in labels per second over unaided labeling, with minimal impact on labeling accuracy, thus enabling high-throughput labeling of large datasets.
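The labels-per-second gain PatchSorter reports comes from presenting groups of similar objects for labeling together rather than one at a time. A minimal sketch of that grouping idea, not the tool's actual implementation: cluster object embeddings with a small k-means-style loop. The function name and the deterministic farthest-point initialization are my choices for this sketch:

```python
import numpy as np

def group_for_batch_labeling(embeddings, n_groups, n_iter=10):
    """Group object embeddings with a k-means-style loop so that similar
    objects can be shown and labeled together rather than one at a time."""
    emb = np.asarray(embeddings, dtype=float)
    # Farthest-point initialization: start from the first object, then
    # repeatedly add the object farthest from all centroids chosen so far.
    centroids = [emb[0]]
    for _ in range(n_groups - 1):
        d = np.min([np.linalg.norm(emb - c, axis=1) for c in centroids], axis=0)
        centroids.append(emb[d.argmax()])
    centroids = np.stack(centroids)
    for _ in range(n_iter):
        # Assign each object to its nearest centroid, then recompute centroids.
        assign = np.linalg.norm(emb[:, None] - centroids[None], axis=-1).argmin(axis=1)
        for k in range(n_groups):
            if (assign == k).any():
                centroids[k] = emb[assign == k].mean(axis=0)
    return assign

# Two well-separated synthetic "object" populations in embedding space.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.1, size=(20, 8))
b = rng.normal(5.0, 0.1, size=(20, 8))
groups = group_for_batch_labeling(np.vstack([a, b]), n_groups=2)
```

A user can then assign one class label per group and only correct the stragglers, which is where the order-of-magnitude throughput gain comes from.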
Affiliation(s)
- Cédric Walker
  - Institute of Animal Pathology, Vetsuisse Faculty, University of Bern, Bern, Switzerland
  - Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
- Tasneem Talawalla
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Robert Toth
  - Toth Technology LLC, New Brunswick, NJ, USA
- Akhil Ambekar
  - Department of Pathology, Division of AI & Computational Pathology, Duke University, Durham, NC, USA
  - AI Health, Duke University, Durham, NC, USA
- Kien Rea
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Oswin Chamian
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Fan Fan
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Sabina Berezowska
  - Institute of Pathology, Lausanne University Hospital, Lausanne, Switzerland
- Sven Rottenberg
  - Institute of Animal Pathology, Vetsuisse Faculty, University of Bern, Bern, Switzerland
  - Bern Center for Precision Medicine, University of Bern, Bern, Switzerland
- Anant Madabhushi
  - Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
  - Atlanta Veterans Medical Center, Atlanta, GA, USA
- Marie Maillard
  - Institute of Pathology, Lausanne University Hospital, Lausanne, Switzerland
- Laura Barisoni
  - Department of Pathology, Division of AI & Computational Pathology, Duke University, Durham, NC, USA
  - Department of Medicine, Division of Nephrology, Duke University, Durham, NC, USA
- Hugo Mark Horlings
  - Department of Pathology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Andrew Janowczyk
  - Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
  - Department of Oncology, Division of Precision Oncology, Geneva University Hospitals, Geneva, Switzerland
  - Department of Diagnostics, Division of Clinical Pathology, Geneva University Hospitals, Geneva, Switzerland
3
Chelebian E, Avenel C, Ciompi F, Wählby C. DEPICTER: Deep representation clustering for histology annotation. Comput Biol Med 2024; 170:108026. PMID: 38308865; DOI: 10.1016/j.compbiomed.2024.108026. Received: 09/19/2023; Revised: 01/24/2024; Accepted: 01/24/2024.
Abstract
Automatic segmentation of histopathology whole-slide images (WSIs) usually involves supervised training of deep learning models with pixel-level labels that classify each pixel of the WSI into tissue regions such as benign or cancerous. However, fully supervised segmentation requires large-scale data manually annotated by experts, which can be expensive and time-consuming to obtain. Non-fully supervised methods, ranging from semi-supervised to unsupervised, have been proposed to address this issue and have been successful in WSI segmentation tasks. These methods, however, have mainly focused on algorithmic performance rather than on practical tools that pathologists or researchers could use in real-world scenarios. In contrast, we present DEPICTER (Deep rEPresentatIon ClusTERing), an interactive segmentation tool for histopathology annotation that produces a patch-wise dense segmentation map at the WSI level. DEPICTER's interactive design leverages self- and semi-supervised learning to let the user participate in the segmentation, producing reliable results while reducing the workload. It consists of three steps: first, a pretrained model computes embeddings from image patches; next, the user selects a number of benign and cancerous patches from the multi-resolution image; finally, guided by the deep representations, labels are propagated using our novel seeded iterative clustering method or by directly interacting with the embedding space via feature-space gating. We report real-time interaction results with three pathologists and evaluate performance on three public cancer-classification benchmark datasets through simulations. The code and demos of DEPICTER are publicly available at https://github.com/eduardchelebian/depicter.
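The three steps the abstract describes (pretrained embeddings, user-selected seed patches, label propagation) can be sketched as a seeded nearest-centroid iteration. This is an illustrative approximation under my own naming, not DEPICTER's actual seeded iterative clustering code:

```python
import numpy as np

def seeded_propagation(emb, seed_idx, seed_labels, n_iter=5):
    """Spread user-given seed labels (e.g. 0 = benign, 1 = cancerous) to all
    patch embeddings by iterated nearest-centroid assignment; seed patches
    keep their user-given labels at every iteration."""
    emb = np.asarray(emb, dtype=float)
    seed_idx = np.asarray(seed_idx)
    seed_labels = np.asarray(seed_labels)
    classes = np.unique(seed_labels)
    labels = np.full(len(emb), -1)      # -1 = not yet labeled
    labels[seed_idx] = seed_labels      # initial centroids come from seeds only
    for _ in range(n_iter):
        centroids = np.stack([emb[labels == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(emb[:, None] - centroids[None], axis=-1)
        labels = classes[d.argmin(axis=1)]
        labels[seed_idx] = seed_labels  # clamp the seeds
    return labels

# Synthetic patch embeddings: one "benign" and one "cancerous" cluster,
# with a single user seed in each.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(15, 4))
cancer = rng.normal(3.0, 0.1, size=(15, 4))
emb = np.vstack([benign, cancer])
lab = seeded_propagation(emb, seed_idx=[0, 15], seed_labels=[0, 1])
```

Clamping the seeds each iteration is what makes this "seeded" rather than plain clustering: the user's choices anchor the classes while the centroids adapt to the rest of the data.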
Affiliation(s)
- Eduard Chelebian
  - Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden
- Christophe Avenel
  - Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden
- Francesco Ciompi
  - Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Carolina Wählby
  - Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden
4
Miao R, Toth R, Zhou Y, Madabhushi A, Janowczyk A. Quick Annotator: an open-source digital pathology based rapid image annotation tool. J Pathol Clin Res 2021; 7:542-547. PMID: 34288586; PMCID: PMC8503896; DOI: 10.1002/cjp2.229. Received: 02/23/2021; Revised: 05/16/2021; Accepted: 05/22/2021.
Abstract
Image-based biomarker discovery typically requires accurate segmentation of histologic structures (e.g. cell nuclei, tubules, and epithelial regions) in digital pathology whole-slide images (WSIs). Unfortunately, annotating each structure of interest is laborious and often intractable even in moderately sized cohorts. Here, we present an open-source tool, Quick Annotator (QA), designed to improve the annotation efficiency of histologic structures by orders of magnitude. While the user annotates regions of interest (ROIs) via an intuitive web interface, a deep learning (DL) model is concurrently optimized using these annotations and applied to the ROI. The user iteratively reviews DL results to either (1) accept accurately annotated regions or (2) correct erroneously segmented structures to improve subsequent model suggestions, before transitioning to other ROIs. We demonstrate the effectiveness of QA over comparable manual efforts via three use cases: annotating (1) 337,386 nuclei in 5 pancreatic WSIs, (2) 5,692 tubules in 10 colorectal WSIs, and (3) 14,187 regions of epithelium in 10 breast WSIs. Efficiency gains of 102×, 9×, and 39× in annotations per second, respectively, were observed while retaining F-scores >0.95, suggesting that QA may be a valuable tool for efficiently and fully annotating WSIs employed in downstream biomarker studies.
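The F-scores quoted above measure the overlap between DL-suggested and final annotations. A minimal sketch of that metric, assuming binary masks (this is the standard F1/Dice definition, not code from the QA repository):

```python
import numpy as np

def f_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """F1 (equivalently Dice) overlap between a predicted and a ground-truth
    binary mask: 2*TP / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * tp / denom if denom else 1.0

# Example: 1 true positive, 1 false positive -> F1 = 2/3.
print(f_score(np.array([1, 1, 0, 0]), np.array([1, 0, 0, 0])))
```

An F-score above 0.95, as reported for all three use cases, means the assisted annotations were nearly pixel-identical to unaided manual ones.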
Affiliation(s)
- Runtian Miao
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Robert Toth
- Yu Zhou
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Anant Madabhushi
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
  - Louis Stokes Veterans Administration Medical Center, Cleveland, OH, USA
- Andrew Janowczyk
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
  - Precision Oncology Center, Lausanne University Hospital, Lausanne, Switzerland
5
Stadler CB, Lindvall M, Lundström C, Bodén A, Lindman K, Rose J, Treanor D, Blomma J, Stacke K, Pinchaud N, Hedlund M, Landgren F, Woisetschläger M, Forsberg D. Proactive Construction of an Annotated Imaging Database for Artificial Intelligence Training. J Digit Imaging 2021; 34:105-115. PMID: 33169211; PMCID: PMC7887127; DOI: 10.1007/s10278-020-00384-4.
Abstract
Artificial intelligence (AI) holds much promise for enabling highly desired improvements in imaging diagnostics. One of the most limiting bottlenecks for the development of useful clinical-grade AI models is the lack of training data: both the large number of cases needed and the necessity of high-quality ground-truth annotation. The aim of this project was to establish and describe the construction of a database with substantial amounts of detail-annotated oncology imaging data from pathology and radiology. A specific objective was to be proactive, that is, to support as-yet-undefined subsequent AI training across a wide range of tasks, such as detection, quantification, segmentation, and classification, which puts particular focus on the quality and generality of the annotations. The main outcome of the project was the database itself, a collection of labeled image data from breast, ovary, skin, colon, skeleton, and liver. In addition, the effort served as an exploration of best practices for further scaling of high-quality image collections, and a main contribution of the study was the generic lessons learned about how to successfully organize the construction of medical imaging databases for AI training, summarized as eight guiding principles covering team, process, and execution aspects.
Affiliation(s)
- Caroline Bivik Stadler
  - Center for Medical Image Science and Visualization (CMIV), Linköping University Hospital, Linköping University, SE-581 85, Linköping, Sweden
  - Department of Health, Medicine and Caring Sciences (HMV), Linköping University, SE-581 85, Linköping, Sweden
- Martin Lindvall
  - Sectra AB, Teknikringen 20, SE-583 30, Linköping, Sweden
  - Department of Science and Technology (ITN), Linköping University, Campus Norrköping, SE-601 74, Norrköping, Sweden
- Claes Lundström
  - Center for Medical Image Science and Visualization (CMIV), Linköping University Hospital, Linköping University, SE-581 85, Linköping, Sweden
  - Sectra AB, Teknikringen 20, SE-583 30, Linköping, Sweden
  - Department of Science and Technology (ITN), Linköping University, Campus Norrköping, SE-601 74, Norrköping, Sweden
- Anna Bodén
  - Center for Medical Image Science and Visualization (CMIV), Linköping University Hospital, Linköping University, SE-581 85, Linköping, Sweden
  - Department of Clinical Pathology, Region Östergötland, Linköping University Hospital, SE-581 85, Linköping, Sweden
  - Department of Biomedical and Clinical Sciences (BKV), Linköping University, SE-581 85, Linköping, Sweden
- Karin Lindman
  - Center for Medical Image Science and Visualization (CMIV), Linköping University Hospital, Linköping University, SE-581 85, Linköping, Sweden
  - Department of Clinical Pathology, Region Östergötland, Linköping University Hospital, SE-581 85, Linköping, Sweden
  - Department of Biomedical and Clinical Sciences (BKV), Linköping University, SE-581 85, Linköping, Sweden
- Jeronimo Rose
  - Center for Medical Image Science and Visualization (CMIV), Linköping University Hospital, Linköping University, SE-581 85, Linköping, Sweden
- Darren Treanor
  - Center for Medical Image Science and Visualization (CMIV), Linköping University Hospital, Linköping University, SE-581 85, Linköping, Sweden
  - Department of Clinical Pathology, Region Östergötland, Linköping University Hospital, SE-581 85, Linköping, Sweden
  - Department of Biomedical and Clinical Sciences (BKV), Linköping University, SE-581 85, Linköping, Sweden
  - Department of Cellular Pathology, Leeds Teaching Hospital NHS Trust, Beckett St, Leeds, LS9 7TF, UK
  - University of Leeds, Leeds, LS2 9JT, UK
- Johan Blomma
  - Department of Radiology, Region Östergötland, Linköping University Hospital, SE-581 85, Linköping, Sweden
- Karin Stacke
  - Sectra AB, Teknikringen 20, SE-583 30, Linköping, Sweden
  - Department of Science and Technology (ITN), Linköping University, Campus Norrköping, SE-601 74, Norrköping, Sweden
- Nicolas Pinchaud
  - ContextVision AB, Klara Norra Kyrkogata 31, SE-111 22, Stockholm, Sweden
- Martin Hedlund
  - ContextVision AB, Klara Norra Kyrkogata 31, SE-111 22, Stockholm, Sweden
- Filip Landgren
  - Department of Radiology, Region Östergötland, Linköping University Hospital, SE-581 85, Linköping, Sweden
- Mischa Woisetschläger
  - Center for Medical Image Science and Visualization (CMIV), Linköping University Hospital, Linköping University, SE-581 85, Linköping, Sweden
  - Department of Radiology, Region Östergötland, Linköping University Hospital, SE-581 85, Linköping, Sweden
- Daniel Forsberg