1
Ahmad Z, Al-Thelaya K, Alzubaidi M, Joad F, Gilal NU, Mifsud W, Boughorbel S, Pintore G, Gobbetti E, Schneider J, Agus M. HistoMSC: Density and topology analysis for AI-based visual annotation of histopathology whole slide images. Comput Biol Med 2025;190:109991. [PMID: 40120181] [DOI: 10.1016/j.compbiomed.2025.109991]
Abstract
We introduce an end-to-end framework for the automated visual annotation of histopathology whole slide images. Our method integrates deep learning models to achieve precise localization and classification of cell nuclei with spatial data aggregation to extend classes of sparsely distributed nuclei across the entire slide. We introduce a novel and cost-effective approach to localization, leveraging a U-Net architecture and a ResNet-50 backbone. The performance is boosted through color normalization techniques, helping achieve robustness under color variations resulting from diverse scanners and staining reagents. The framework is complemented by a YOLO detection architecture, augmented with generative methods. For classification, we use context patches around each nucleus, fed to various deep architectures. Sparse nuclei-level annotations are then aggregated using kernel density estimation, followed by color-coding and isocontouring. This reduces visual clutter and provides per-pixel probabilities with respect to pathology taxonomies. Finally, we use Morse-Smale theory to generate abstract annotations, highlighting extrema in the density functions and potential spatial interactions in the form of abstract graphs. Thus, our visualization allows for exploration at scales ranging from individual nuclei to the macro-scale. We tested the effectiveness of our framework in an assessment by six pathologists using various neoplastic cases. Our results demonstrate the robustness and usefulness of the proposed framework in aiding histopathologists in their analysis and interpretation of whole slide images.
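The aggregation step described in this abstract (sparse nuclei-level annotations turned into a smooth density field with isocontours) can be illustrated with a minimal sketch. This is not the authors' implementation; the nuclei coordinates, grid extents, and quantile levels below are all hypothetical, and SciPy's `gaussian_kde` stands in for whatever kernel density estimator the framework uses.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical nuclei centroids (x, y) for one class, in slide coordinates.
rng = np.random.default_rng(0)
nuclei = rng.normal(loc=[500.0, 300.0], scale=40.0, size=(200, 2))

# Kernel density estimate over the slide plane.
kde = gaussian_kde(nuclei.T)

# Evaluate on a coarse grid; isocontours of this field give the
# aggregated, slide-level annotation sketched in the abstract.
xs = np.linspace(350, 650, 60)
ys = np.linspace(150, 450, 60)
xx, yy = np.meshgrid(xs, ys)
density = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

# Normalize to per-pixel probabilities and pick isocontour levels.
prob = density / density.sum()
levels = np.quantile(prob[prob > 0], [0.5, 0.75, 0.9])
```

Color-coding the regions between successive `levels` would then yield the clutter-reduced visual annotation the paper describes; the Morse-Smale analysis operates on extrema of the same density field.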
Affiliation(s)
- Zahoor Ahmad
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Khaled Al-Thelaya
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Mahmood Alzubaidi
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Faaiz Joad
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Nauman Ullah Gilal
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Sabri Boughorbel
- Qatar Computing Research Institute, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Jens Schneider
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Marco Agus
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
2
Lange D, Judson-Torres R, Zangle TA, Lex A. Aardvark: Composite Visualizations of Trees, Time-Series, and Images. IEEE Trans Vis Comput Graph 2025;31:1290-1300. [PMID: 39255114] [DOI: 10.1109/tvcg.2024.3456193]
Abstract
How do cancer cells grow, divide, proliferate, and die? How do drugs influence these processes? These are difficult questions that we can attempt to answer with a combination of time-series microscopy experiments, classification algorithms, and data visualization. However, collecting this type of data and applying algorithms to segment and track cells and construct lineages of proliferation is error-prone, and identifying the errors can be challenging since it often requires cross-checking multiple data types. Similarly, analyzing and communicating the results necessitates synthesizing different data types into a single narrative. State-of-the-art visualization methods for such data use independent line charts, tree diagrams, and images in separate views. However, this spatial separation requires the viewer of these charts to combine the relevant pieces of data in memory. To simplify this challenging task, we describe design principles for weaving cell images, time-series data, and tree data into a cohesive visualization. Our design principles are based on choosing a primary data type that drives the layout and integrating the other data types into that layout. We then introduce Aardvark, a system that uses these principles to implement novel visualization techniques. Based on Aardvark, we demonstrate the utility of each of these approaches for discovery, communication, and data debugging in a series of case studies.
3
Rudinskiy M, Morone D, Molinari M. Fluorescent Reporters, Imaging, and Artificial Intelligence Toolkits to Monitor and Quantify Autophagy, Heterophagy, and Lysosomal Trafficking Fluxes. Traffic 2024;25:e12957. [PMID: 39450581] [DOI: 10.1111/tra.12957]
Abstract
Lysosomal compartments control the clearance of cell-own material (autophagy) or of material that cells endocytose from the external environment (heterophagy) to warrant supply of nutrients, to eliminate macromolecules or parts of organelles present in excess, aged, or containing toxic material. Inherited or sporadic mutations in lysosomal proteins and enzymes may hamper their folding in the endoplasmic reticulum (ER) and their lysosomal transport via the Golgi compartment, resulting in lysosomal dysfunction and storage disorders. Defective cargo delivery to lysosomal compartments is harmful to cells and organs since it causes accumulation of toxic compounds and defective organellar homeostasis. Assessment of resident proteins and cargo fluxes to the lysosomal compartments is crucial for the mechanistic dissection of intracellular transport and catabolic events. It might be combined with high-throughput screenings to identify cellular, chemical, or pharmacological modulators of these events that may find therapeutic use for autophagy-related and lysosomal storage disorders. Here, we discuss qualitative, quantitative and chronologic monitoring of autophagic, heterophagic and lysosomal protein trafficking in fixed and live cells, which relies on fluorescent single and tandem reporters used in combination with biochemical, flow cytometry, light and electron microscopy approaches implemented by artificial intelligence-based technology.
Affiliation(s)
- Mikhail Rudinskiy
- Università della Svizzera italiana, Lugano, Switzerland
- Institute for Research in Biomedicine, Bellinzona, Switzerland
- Department of Biology, Swiss Federal Institute of Technology, Zurich, Switzerland
- Diego Morone
- Università della Svizzera italiana, Lugano, Switzerland
- Institute for Research in Biomedicine, Bellinzona, Switzerland
- Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
- Maurizio Molinari
- Università della Svizzera italiana, Lugano, Switzerland
- Institute for Research in Biomedicine, Bellinzona, Switzerland
- École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
4
Warchol S, Troidl J, Muhlich J, Krueger R, Hoffer J, Lin T, Beyer J, Glassman E, Sorger PK, Pfister H. psudo: Exploring Multi-Channel Biomedical Image Data with Spatially and Perceptually Optimized Pseudocoloring. bioRxiv 2024:2024.04.11.589087. [PMID: 38659870] [PMCID: PMC11042212] [DOI: 10.1101/2024.04.11.589087]
Abstract
Over the past century, multichannel fluorescence imaging has been pivotal in myriad scientific breakthroughs by enabling the spatial visualization of proteins within a biological sample. With the shift to digital methods and visualization software, experts can now flexibly pseudocolor and combine image channels, each corresponding to a different protein, to explore their spatial relationships. We thus propose psudo, an interactive system that allows users to create optimal color palettes for multichannel spatial data. In psudo, a novel optimization method generates palettes that maximize the perceptual differences between channels while mitigating confusing color blending in overlapping channels. We integrate this method into a system that allows users to explore multi-channel image data and compare and evaluate color palettes for their data. An interactive lensing approach provides on-demand feedback on channel overlap and a color confusion metric while giving context to the underlying channel values. Color palettes can be applied globally or, using the lens, to local regions of interest. We evaluate our palette optimization approach using three graphical perception tasks in a crowdsourced user study with 150 participants, showing that users are more accurate at discerning and comparing the underlying data using our approach. Additionally, we showcase psudo in a case study exploring the complex immune responses in cancer tissue data with a biologist.
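The core of the palette optimization described here (maximizing perceptual differences between channel colors) can be sketched with a toy random search. This is a stand-in, not psudo's optimizer: it uses plain RGB Euclidean distance where a perceptual color space (e.g. CAM02-UCS) would be appropriate, and it ignores the paper's color-blending term; the function names and iteration counts are hypothetical.

```python
import itertools
import random

def min_pairwise_dist(palette):
    """Smallest Euclidean distance between any two colors in the palette."""
    return min(
        sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5
        for c1, c2 in itertools.combinations(palette, 2)
    )

def random_search_palette(n_channels, iters=2000, seed=0):
    """Pick n_channels RGB colors maximizing the worst-case channel
    separation by random search — a toy stand-in for the paper's
    palette optimization."""
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for _ in range(iters):
        cand = [tuple(rng.random() for _ in range(3)) for _ in range(n_channels)]
        score = min_pairwise_dist(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

palette, score = random_search_palette(4)
```

Maximizing the *minimum* pairwise distance (rather than the average) is what keeps every channel pair distinguishable, which matches the discrimination tasks evaluated in the user study.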
Affiliation(s)
- Simon Warchol
- Harvard John A. Paulson School Of Engineering And Applied Sciences
- Visual Computing Group, Harvard University
- Laboratory of Systems Pharmacology, Harvard Medical School
- Jakob Troidl
- Harvard John A. Paulson School Of Engineering And Applied Sciences
- Visual Computing Group, Harvard University
- Jeremy Muhlich
- Department of Systems Biology, Harvard Medical School
- Visual Computing Group, Harvard University
- Robert Krueger
- Laboratory of Systems Pharmacology, Harvard Medical School
- John Hoffer
- Department of Systems Biology, Harvard Medical School
- Laboratory of Systems Pharmacology, Harvard Medical School
- Tica Lin
- Harvard John A. Paulson School Of Engineering And Applied Sciences
- Visual Computing Group, Harvard University
- Johanna Beyer
- Harvard John A. Paulson School Of Engineering And Applied Sciences
- Visual Computing Group, Harvard University
- Elena Glassman
- Harvard John A. Paulson School Of Engineering And Applied Sciences
- Peter K Sorger
- Department of Systems Biology, Harvard Medical School
- Laboratory of Systems Pharmacology, Harvard Medical School
- Hanspeter Pfister
- Harvard John A. Paulson School Of Engineering And Applied Sciences
- Visual Computing Group, Harvard University
- Laboratory of Systems Pharmacology, Harvard Medical School
5
Guo G, Deng L, Tandon A, Endert A, Kwon BC. MiMICRI: Towards Domain-centered Counterfactual Explanations of Cardiovascular Image Classification Models. Proc ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), Rio de Janeiro, Brazil; 2024:1861-1874. [PMID: 39877054] [PMCID: PMC11774553] [DOI: 10.1145/3630106.3659011]
Abstract
The recent prevalence of publicly accessible, large medical imaging datasets has led to a proliferation of artificial intelligence (AI) models for cardiovascular image classification and analysis. At the same time, the potentially significant impacts of these models have motivated the development of a range of explainable AI (XAI) methods that aim to explain model predictions given certain image inputs. However, many of these methods are not developed or evaluated with domain experts, and explanations are not contextualized in terms of medical expertise or domain knowledge. In this paper, we propose a novel framework and Python library, MiMICRI, that provides domain-centered counterfactual explanations of cardiovascular image classification models. MiMICRI helps users interactively select and replace segments of medical images that correspond to morphological structures. From the counterfactuals generated, users can then assess the influence of each segment on model predictions, and validate the model against known medical facts. We evaluate this library with two medical experts. Our evaluation demonstrates that a domain-centered XAI approach can enhance the interpretability of model explanations, and help experts reason about models in terms of relevant domain knowledge. However, concerns were also surfaced about the clinical plausibility of the counterfactuals generated. We conclude with a discussion on the generalizability and trustworthiness of the MiMICRI framework, as well as the implications of our findings on the development of domain-centered XAI methods for model interpretability in healthcare contexts.
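The segment-replacement idea behind these counterfactuals can be reduced to a toy array operation: swap one morphological segment of an image for the matching region of a donor image, then re-score the classifier on the result. This sketch is not MiMICRI's API; the image shapes, the mask, and the function name are all hypothetical.

```python
import numpy as np

def segment_counterfactual(image, segment_mask, donor_image):
    """Replace one morphological segment of `image` with the matching
    region from `donor_image` — the segment-swap idea behind the
    counterfactuals described in the abstract, as a toy operation."""
    counterfactual = image.copy()
    counterfactual[segment_mask] = donor_image[segment_mask]
    return counterfactual

rng = np.random.default_rng(3)
img = rng.random((64, 64))      # hypothetical cardiac image slice
donor = rng.random((64, 64))    # corresponding slice from another subject
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True       # hypothetical segmented structure
cf = segment_counterfactual(img, mask, donor)
```

Comparing the model's prediction on `img` and `cf` would then attribute any change to the swapped structure, which is what lets experts check the model against known medical facts.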
Affiliation(s)
- Grace Guo
- Georgia Institute of Technology, Atlanta, Georgia, USA
- Lifu Deng
- Cleveland Clinic, Cleveland, Ohio, USA
- Alex Endert
- Georgia Institute of Technology, Atlanta, Georgia, USA
6
Herzberger L, Hadwiger M, Kruger R, Sorger P, Pfister H, Groller E, Beyer J. Residency Octree: A Hybrid Approach for Scalable Web-Based Multi-Volume Rendering. IEEE Trans Vis Comput Graph 2024;30:1380-1390. [PMID: 37889813] [PMCID: PMC10840607] [DOI: 10.1109/tvcg.2023.3327193]
Abstract
We present a hybrid multi-volume rendering approach based on a novel Residency Octree that combines the advantages of out-of-core volume rendering using page tables with those of standard octrees. Octree approaches work by performing hierarchical tree traversal. However, in octree volume rendering, tree traversal and the selection of data resolution are intrinsically coupled. This makes fine-grained empty-space skipping costly. Page tables, on the other hand, allow access to any cached brick from any resolution. However, they do not offer a clear and efficient strategy for substituting missing high-resolution data with lower-resolution data. We enable flexible mixed-resolution out-of-core multi-volume rendering by decoupling the cache residency of multi-resolution data from a resolution-independent spatial subdivision determined by the tree. Instead of one-to-one node-to-brick correspondences, each residency octree node is mapped to a set of bricks from different resolution levels. This makes it possible to efficiently and adaptively choose and mix resolutions, adapt sampling rates, and compensate for cache misses. At the same time, residency octrees support fine-grained empty-space skipping, independent of the data subdivision used for caching. Finally, to facilitate collaboration and outreach, and to eliminate local data storage, our implementation is a web-based, pure client-side renderer using WebGPU and WebAssembly. Our method is faster than prior approaches and efficient for many data channels with a flexible and adaptive choice of data resolution.
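The key structural idea in this abstract (each octree node maps to a *set* of cached bricks from different resolution levels, rather than one-to-one) can be sketched as a toy data structure. This is an illustration of the mapping only, under the assumption stated in the abstract; the class and field names are hypothetical and none of the renderer's traversal or caching logic is shown.

```python
from dataclasses import dataclass, field

@dataclass
class ResidencyNode:
    """One node of a residency octree: a spatial region mapped to the
    set of cached bricks (at any resolution level) that overlap it —
    the decoupling of spatial subdivision from cache residency that
    the paper describes, reduced to a toy structure."""
    bounds: tuple                                        # (min_xyz, max_xyz) of the region
    resident_bricks: set = field(default_factory=set)    # (level, brick_id) pairs
    children: list = field(default_factory=list)         # up to 8 child ResidencyNodes

    def best_resident_level(self):
        """Finest resolution level with any cached brick, or None on a
        cache miss — where a renderer would fall back to coarser data."""
        return max((lvl for lvl, _ in self.resident_bricks), default=None)

root = ResidencyNode(bounds=((0, 0, 0), (1, 1, 1)))
root.resident_bricks.update({(0, 7), (2, 42)})   # level-0 and level-2 bricks cached
```

Because residency is a per-node set rather than a fixed node-to-brick correspondence, a traversal can mix resolutions and compensate for misses without re-subdividing, which is the flexibility the paper attributes to the approach.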
7
Way GP, Sailem H, Shave S, Kasprowicz R, Carragher NO. Evolution and impact of high content imaging. SLAS Discov 2023;28:292-305. [PMID: 37666456] [DOI: 10.1016/j.slasd.2023.08.009]
Abstract
The field of high content imaging has steadily evolved and expanded substantially across many industry and academic research institutions since it was first described in the early 1990s. High content imaging refers to the automated acquisition and analysis of microscopic images from a variety of biological sample types. Integration of high content imaging microscopes with multiwell plate handling robotics enables high content imaging to be performed at scale and supports medium- to high-throughput screening of pharmacological, genetic and diverse environmental perturbations upon complex biological systems ranging from 2D cell cultures to 3D tissue organoids to small model organisms. In this perspective article, the authors provide a collective view on the following key discussion points relevant to the evolution of high content imaging:
- Evolution and impact of high content imaging: an academic perspective
- Evolution and impact of high content imaging: an industry perspective
- Evolution of high content image analysis
- Evolution of high content data analysis pipelines towards multiparametric and phenotypic profiling applications
- The role of data integration and multiomics
- The role and evolution of image data repositories and sharing standards
- Future perspective of high content imaging hardware and software
Affiliation(s)
- Gregory P Way
- Department of Biomedical Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Heba Sailem
- School of Cancer and Pharmaceutical Sciences, King's College London, UK
- Steven Shave
- GlaxoSmithKline Medicines Research Centre, Gunnels Wood Rd, Stevenage SG1 2NY, UK; Edinburgh Cancer Research, Cancer Research UK Scotland Centre, Institute of Genetics and Cancer, University of Edinburgh, UK
- Richard Kasprowicz
- GlaxoSmithKline Medicines Research Centre, Gunnels Wood Rd, Stevenage SG1 2NY, UK
- Neil O Carragher
- Edinburgh Cancer Research, Cancer Research UK Scotland Centre, Institute of Genetics and Cancer, University of Edinburgh, UK
8
Xu C, Neuroth T, Fujiwara T, Liang R, Ma KL. A Predictive Visual Analytics System for Studying Neurodegenerative Disease Based on DTI Fiber Tracts. IEEE Trans Vis Comput Graph 2023;29:2020-2035. [PMID: 34965212] [DOI: 10.1109/tvcg.2021.3137174]
Abstract
Diffusion tensor imaging (DTI) has been used to study the effects of neurodegenerative diseases on neural pathways, which may lead to more reliable and early diagnosis of these diseases as well as a better understanding of how they affect the brain. We introduce a predictive visual analytics system for studying patient groups based on their labeled DTI fiber tract data and corresponding statistics. The system's machine-learning-augmented interface guides the user through an organized and holistic analysis space, including the statistical feature space, the physical space, and the space of patients over different groups. We use a custom machine learning pipeline to help narrow down this large analysis space and then explore it pragmatically through a range of linked visualizations. We conduct several case studies using DTI and T1-weighted images from the research database of the Parkinson's Progression Markers Initiative.
9
Choi J, Lee SE, Lee Y, Cho E, Chang S, Jeong WK. DXplorer: A Unified Visualization Framework for Interactive Dendritic Spine Analysis Using 3D Morphological Features. IEEE Trans Vis Comput Graph 2023;29:1424-1437. [PMID: 34591770] [DOI: 10.1109/tvcg.2021.3116656]
Abstract
Dendritic spines are dynamic, submicron-scale protrusions on neuronal dendrites that receive neuronal inputs. Morphological changes in the dendritic spine often reflect alterations in physiological conditions and are indicators of various neuropsychiatric conditions. However, owing to the highly dynamic and heterogeneous nature of spines, accurate measurement and objective analysis of spine morphology are major challenges in neuroscience research. Most conventional approaches for analyzing dendritic spines are based on two-dimensional (2D) images, which barely reflect the actual three-dimensional (3D) shapes. Although some recent studies have attempted to analyze spines with various 3D-based features, it is still difficult to objectively categorize and analyze spines based on 3D morphology. Here, we propose a unified visualization framework for an interactive 3D dendritic spine analysis system, DXplorer, that displays 3D rendering of spines and plots the high-dimensional features extracted from the 3D mesh of spines. With this system, users can perform the clustering of spines interactively and explore and analyze dendritic spines based on high-dimensional features. We propose a series of high-dimensional morphological features extracted from a 3D mesh of dendritic spines. In addition, an interactive machine learning classifier with visual exploration and user feedback using an interactive 3D mesh grid view ensures a more precise classification based on the spine phenotype. A user study and two case studies were conducted to quantitatively verify the performance and usability of DXplorer. We demonstrate that the system performs the entire analytic process effectively and provides high-quality, accurate, and objective analysis.
10
Afzal S, Ghani S, Hittawe MM, Rashid SF, Knio OM, Hadwiger M, Hoteit I. Visualization and Visual Analytics Approaches for Image and Video Datasets: A Survey. ACM Trans Interact Intell Syst 2023. [DOI: 10.1145/3576935]
Abstract
Image and video data analysis has become an increasingly important research area with applications in different domains such as security surveillance, healthcare, augmented and virtual reality, video and image editing, activity analysis and recognition, synthetic content generation, distance education, telepresence, remote sensing, sports analytics, art, non-photorealistic rendering, search engines, and social media. Recent advances in Artificial Intelligence (AI) and particularly deep learning have sparked new research challenges and led to significant advancements, especially in image and video analysis. These advancements have also resulted in significant research and development in other areas such as visualization and visual analytics, and have created new opportunities for future lines of research. In this survey paper, we present the current state of the art at the intersection of visualization and visual analytics, and image and video data analysis. We categorize the visualization papers included in our survey based on different taxonomies used in visualization and visual analytics research. We review these papers in terms of task requirements, tools, datasets, and application areas. We also discuss insights based on our survey results, trends and patterns, the current focus of visualization research, and opportunities for future research.
Affiliation(s)
- Shehzad Afzal
- King Abdullah University of Science & Technology, Saudi Arabia
- Sohaib Ghani
- King Abdullah University of Science & Technology, Saudi Arabia
- Omar M Knio
- King Abdullah University of Science & Technology, Saudi Arabia
- Markus Hadwiger
- King Abdullah University of Science & Technology, Saudi Arabia
- Ibrahim Hoteit
- King Abdullah University of Science & Technology, Saudi Arabia
11
Cheng F, Keller MS, Qu H, Gehlenborg N, Wang Q. Polyphony: an Interactive Transfer Learning Framework for Single-Cell Data Analysis. IEEE Trans Vis Comput Graph 2023;29:591-601. [PMID: 36155452] [PMCID: PMC10039961] [DOI: 10.1109/tvcg.2022.3209408]
Abstract
Reference-based cell-type annotation can significantly reduce time and effort in single-cell analysis by transferring labels from a previously annotated dataset to a new dataset. However, label transfer by end-to-end computational methods is challenging due to the entanglement of technical (e.g., from different sequencing batches or techniques) and biological (e.g., from different cellular microenvironments) variations, only the first of which must be removed. To address this issue, we propose Polyphony, an interactive transfer learning (ITL) framework, to complement biologists' knowledge with advanced computational methods. Polyphony is motivated and guided by domain experts' needs for a controllable, interactive, and algorithm-assisted annotation process, identified through interviews with seven biologists. We introduce anchors, i.e., analogous cell populations across datasets, as a paradigm to explain the computational process and collect user feedback for model improvement. We further design a set of visualizations and interactions to empower users to add, delete, or modify anchors, resulting in refined cell type annotations. The effectiveness of this approach is demonstrated through quantitative experiments, two hypothetical use cases, and interviews with two biologists. The results show that our anchor-based ITL method takes advantage of both human and machine intelligence in annotating massive single-cell datasets.
12
Warchol S, Krueger R, Nirmal AJ, Gaglia G, Jessup J, Ritch CC, Hoffer J, Muhlich J, Burger ML, Jacks T, Santagata S, Sorger PK, Pfister H. Visinity: Visual Spatial Neighborhood Analysis for Multiplexed Tissue Imaging Data. IEEE Trans Vis Comput Graph 2023;29:106-116. [PMID: 36170403] [PMCID: PMC10043053] [DOI: 10.1109/tvcg.2022.3209378]
Abstract
New highly multiplexed imaging technologies have enabled the study of tissues in unprecedented detail. These methods are increasingly being applied to understand how cancer cells and the immune response change during tumor development, progression, and metastasis, as well as following treatment. Yet, existing analysis approaches focus on investigating small tissue samples on a per-cell basis, not taking into account the spatial proximity of cells, which indicates cell-cell interaction and specific biological processes in the larger cancer microenvironment. We present Visinity, a scalable visual analytics system to analyze cell interaction patterns across cohorts of whole-slide multiplexed tissue images. Our approach is based on a fast regional neighborhood computation, leveraging unsupervised learning to quantify, compare, and group cells by their surrounding cellular neighborhood. These neighborhoods can be visually analyzed in an exploratory and confirmatory workflow. Users can explore spatial patterns present across tissues through a scalable image viewer and coordinated views highlighting the neighborhood composition and spatial arrangements of cells. To verify or refine existing hypotheses, users can query for specific patterns to determine their presence and statistical significance. Findings can be interactively annotated, ranked, and compared in the form of small multiples. In two case studies with biomedical experts, we demonstrate that Visinity can identify common biological processes within a human tonsil and uncover novel white-blood cell networks and immune-tumor interactions.
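The "regional neighborhood computation" at the heart of this abstract (quantifying each cell by the cell-type composition of its spatial surroundings) can be sketched with a KD-tree. This is a generic illustration of the idea, not Visinity's implementation; the cell table, radius, and type count below are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical cell table: positions (microns) and integer cell-type labels.
rng = np.random.default_rng(2)
positions = rng.uniform(0, 1000, size=(500, 2))
cell_types = rng.integers(0, 3, size=500)   # 3 hypothetical cell types

def neighborhood_vectors(positions, cell_types, radius=50.0, n_types=3):
    """For each cell, the fractional composition of cell types within
    `radius` — one simple form of regional neighborhood quantification.
    These vectors are what unsupervised methods would then cluster to
    group cells by their surroundings."""
    tree = cKDTree(positions)
    vectors = np.zeros((len(positions), n_types))
    for i, neighbors in enumerate(tree.query_ball_point(positions, r=radius)):
        counts = np.bincount(cell_types[neighbors], minlength=n_types)
        vectors[i] = counts / counts.sum()   # each cell is its own neighbor, so sum > 0
    return vectors

vecs = neighborhood_vectors(positions, cell_types)
```

Clustering `vecs` (rather than per-cell marker values) is what shifts the analysis from individual cells to spatial neighborhoods, which is the distinction the abstract draws against per-cell approaches.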
13
Wrobel J, Harris C, Vandekar S. Statistical Analysis of Multiplex Immunofluorescence and Immunohistochemistry Imaging Data. Methods Mol Biol 2023;2629:141-168. [PMID: 36929077] [DOI: 10.1007/978-1-0716-2986-4_8]
Abstract
Advances in multiplexed single-cell immunofluorescence (mIF) and multiplex immunohistochemistry (mIHC) imaging technologies have enabled the analysis of cell-to-cell spatial relationships that promise to revolutionize our understanding of tissue-based diseases and autoimmune disorders. Multiplex images are collected as multichannel TIFF files; the images are then denoised, segmented to identify cells and nuclei, normalized across slides to correct for batch effects in protein markers, and phenotyped; finally, tissue composition and spatial context are analyzed at the cellular level. This chapter discusses methods and software infrastructure for image processing and statistical analysis of mIF/mIHC data.
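The cross-slide normalization step in the pipeline above can be illustrated with a minimal sketch. This is a generic robust-quantile rescaling, labeled as an assumption: the chapter surveys several normalization methods, and the marker distributions, quantile cutoffs, and function name here are hypothetical.

```python
import numpy as np

def normalize_marker(intensities, quantiles=(0.01, 0.99)):
    """Rescale one slide's marker intensities to [0, 1] between robust
    quantiles — a simple stand-in for the batch-effect normalization
    step in an mIF/mIHC processing pipeline."""
    lo, hi = np.quantile(intensities, quantiles)
    return np.clip((intensities - lo) / (hi - lo), 0.0, 1.0)

rng = np.random.default_rng(1)
slide_a = rng.lognormal(mean=1.0, sigma=0.5, size=1000)   # hypothetical marker, slide A
slide_b = rng.lognormal(mean=2.0, sigma=0.5, size=1000)   # same marker, different batch
norm_a = normalize_marker(slide_a)
norm_b = normalize_marker(slide_b)
```

After rescaling, a fixed phenotyping threshold (e.g. "positive above 0.5") means the same thing on both slides, which is why normalization precedes phenotyping in the pipeline order given in the abstract.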
Affiliation(s)
- Julia Wrobel
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO, USA.
- Coleman Harris
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
- Simon Vandekar
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
14
Lou P, Wang C, Guo R, Yao L, Zhang G, Yang J, Yuan Y, Dong Y, Gao Z, Gong T, Li C. HistoML, a markup language for representation and exchange of histopathological features in pathology images. Sci Data 2022;9:387. [PMID: 35803960] [PMCID: PMC9270329] [DOI: 10.1038/s41597-022-01505-0]
Abstract
The study of histopathological phenotypes is vital for cancer research and medicine as it links molecular mechanisms to disease prognosis. It typically involves integration of heterogeneous histopathological features in whole-slide images (WSI) to objectively characterize a histopathological phenotype. However, the large-scale implementation of phenotype characterization has been hindered by the fragmentation of histopathological features, resulting from the lack of a standardized format and a controlled vocabulary for structured and unambiguous representation of semantics in WSIs. To fill this gap, we propose the Histopathology Markup Language (HistoML), a representation language along with a controlled vocabulary (Histopathology Ontology) based on Semantic Web technologies. Multiscale features within a WSI, from single-cell features to mesoscopic features, could be represented using HistoML, which is a crucial step towards the goal of making WSIs findable, accessible, interoperable and reusable (FAIR). We pilot HistoML in representing WSIs of kidney cancer as well as thyroid carcinoma and exemplify the uses of HistoML representations in semantic queries to demonstrate the potential of HistoML-powered applications for phenotype characterization.
Affiliation(s)
- Peiliang Lou
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Chunbao Wang
- Department of Pathology, The First Affiliated Hospital of Xi'an Jiaotong University, 277 West Yanta Road, Xi'an, Shaanxi, China
- Ruifeng Guo
- Division of Anatomic Pathology, Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota, USA
- Lixia Yao
- Department of Health Services Administration and Policy, Temple University, Philadelphia, PA, USA
- Guanjun Zhang
- Department of Pathology, The First Affiliated Hospital of Xi'an Jiaotong University, 277 West Yanta Road, Xi'an, Shaanxi, China
- Jun Yang
- Department of Pathology, The Second Affiliated Hospital of Xi'an Jiaotong University, No. 3, Shang Qin Road, Xi'an, Shaanxi, China
- Yong Yuan
- Department of Pathology, Shaanxi Provincial Tumor Hospital, Xi'an Jiaotong University, 309 Yanta West Road, Xi'an, Shaanxi, China
- Yuxin Dong
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Zeyu Gao
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Tieliang Gong
- Key Laboratory of Intelligent Networks and Network Security (Xi'an Jiaotong University), Ministry of Education, Xi'an, Shaanxi, 710049, China
- Chen Li
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
15
Khawatmi M, Steux Y, Zourob S, Sailem HZ. ShapoGraphy: A User-Friendly Web Application for Creating Bespoke and Intuitive Visualisation of Biomedical Data. Front Bioinform 2022; 2:788607. [PMID: 36304310] [PMCID: PMC9580894] [DOI: 10.3389/fbinf.2022.788607] [Received: 10/02/2021] [Accepted: 05/23/2022] [Indexed: 12/05/2022]
Abstract
Effective visualisation of quantitative microscopy data is crucial for interpreting and discovering new patterns in complex bioimage data. Existing visualisation approaches, such as bar charts, scatter plots and heat maps, do not accommodate the complexity of visual information present in microscopy data. Here we develop ShapoGraphy, a first-of-its-kind method accompanied by an interactive web-based application for creating customisable quantitative pictorial representations to facilitate the understanding and analysis of image datasets (www.shapography.com). ShapoGraphy enables the user to create a structure of interest as a set of shapes. Each shape can encode different variables that are mapped to the shape dimensions, colours, symbols, or outline. We illustrate the utility of ShapoGraphy using various image data, including high-dimensional multiplexed data. Our results show that ShapoGraphy allows a better understanding of cellular phenotypes and the relationships between variables. In conclusion, ShapoGraphy supports scientific discovery and communication by providing a rich vocabulary for creating engaging and intuitive representations of diverse data types.
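The core mapping idea, encoding measured variables as visual attributes of a glyph, can be sketched as follows. The attribute names and scaling ranges are invented for illustration and are not ShapoGraphy's API.

```python
# Sketch: map per-cell measurements onto glyph attributes (size, opacity,
# roundness). Ranges and attribute names are assumptions for illustration.

def scale(value, lo, hi, out_lo=0.0, out_hi=1.0):
    """Linearly map value from [lo, hi] to [out_lo, out_hi]."""
    t = (value - lo) / (hi - lo)
    return out_lo + t * (out_hi - out_lo)

def cell_glyph(area, intensity, roundness):
    """Encode three per-cell measurements as glyph attributes."""
    return {
        "size": scale(area, 0, 400, 5, 50),              # area -> glyph size
        "opacity": scale(intensity, 0, 255),             # intensity -> opacity
        "corner_radius": scale(roundness, 0, 1, 0, 25),  # roundness -> corners
    }

print(cell_glyph(area=200, intensity=127.5, roundness=0.5))
# {'size': 27.5, 'opacity': 0.5, 'corner_radius': 12.5}
```

Rendering such glyph specifications (as SVG shapes, for instance) is what the web application then takes care of.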
Affiliation(s)
- Heba Z. Sailem
- Institute of Biomedical Engineering, Department of Engineering, University of Oxford, Oxford, United Kingdom
16
Rashid R, Chen YA, Hoffer J, Muhlich JL, Lin JR, Krueger R, Pfister H, Mitchell R, Santagata S, Sorger PK. Narrative online guides for the interpretation of digital-pathology images and tissue-atlas data. Nat Biomed Eng 2022; 6:515-526. [PMID: 34750536] [PMCID: PMC9079188] [DOI: 10.1038/s41551-021-00789-8] [Received: 03/13/2020] [Accepted: 06/02/2021] [Indexed: 01/20/2023]
Abstract
Multiplexed tissue imaging facilitates the diagnosis and understanding of complex disease traits. However, the analysis of such digital images heavily relies on the experience of anatomical pathologists for the review, annotation and description of tissue features. In addition, the wider use of data from tissue atlases in basic and translational research and in classrooms would benefit from software that facilitates the easy visualization and sharing of the images and the results of their analyses. In this Perspective, we describe the ecosystem of software available for the analysis of tissue images and discuss the need for interactive online guides that help histopathologists make complex images comprehensible to non-specialists. We illustrate this idea via a software interface (Minerva), accessible via web browsers, that integrates multi-omic and tissue-atlas features. We argue that such interactive narrative guides can effectively disseminate digital histology data and aid their interpretation.
Affiliation(s)
- Rumana Rashid
- Laboratory of Systems Pharmacology, Harvard Medical School, Boston, MA, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Yu-An Chen
- Laboratory of Systems Pharmacology, Harvard Medical School, Boston, MA, USA
- John Hoffer
- Laboratory of Systems Pharmacology, Harvard Medical School, Boston, MA, USA
- Jeremy L Muhlich
- Laboratory of Systems Pharmacology, Harvard Medical School, Boston, MA, USA
- Jia-Ren Lin
- Laboratory of Systems Pharmacology, Harvard Medical School, Boston, MA, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, USA
- Robert Krueger
- Laboratory of Systems Pharmacology, Harvard Medical School, Boston, MA, USA
- School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Hanspeter Pfister
- School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Richard Mitchell
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Sandro Santagata
- Laboratory of Systems Pharmacology, Harvard Medical School, Boston, MA, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Peter K Sorger
- Laboratory of Systems Pharmacology, Harvard Medical School, Boston, MA, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, USA
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
17
Melanthota SK, Gopal D, Chakrabarti S, Kashyap AA, Radhakrishnan R, Mazumder N. Deep learning-based image processing in optical microscopy. Biophys Rev 2022; 14:463-481. [PMID: 35528030] [PMCID: PMC9043085] [DOI: 10.1007/s12551-022-00949-3] [Received: 10/12/2021] [Accepted: 03/14/2022] [Indexed: 12/19/2022]
Abstract
Optical microscopy has emerged as a key driver of fundamental research since it provides the ability to probe into imperceptible structures in the biomedical world. For the detailed investigation of samples, a high-resolution image with enhanced contrast and minimal damage is preferred. To achieve this, an automated image analysis method is preferable over manual analysis in terms of both speed of acquisition and reduced error accumulation. In this regard, deep learning (DL)-based image processing can be highly beneficial. This review summarises and critiques the use of DL in image processing for data collected using various optical microscopic techniques. In tandem with optical microscopy, DL has already found applications in various problems related to image classification and segmentation. It has also performed well in enhancing image resolution in smartphone-based microscopy, which in turn enables crucial medical assistance in remote places.
Affiliation(s)
- Sindhoora Kaniyala Melanthota
- Department of Biophysics, Manipal School of Life Sciences, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Dharshini Gopal
- Department of Bioinformatics, Manipal School of Life Sciences, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Shweta Chakrabarti
- Department of Bioinformatics, Manipal School of Life Sciences, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Anirudh Ameya Kashyap
- Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Raghu Radhakrishnan
- Department of Oral Pathology, Manipal College of Dental Sciences, Manipal, Manipal Academy of Higher Education, Manipal, 576104, India
- Nirmal Mazumder
- Department of Biophysics, Manipal School of Life Sciences, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
18
Jessup J, Krueger R, Warchol S, Hoffer J, Muhlich J, Ritch CC, Gaglia G, Coy S, Chen YA, Lin JR, Santagata S, Sorger PK, Pfister H. Scope2Screen: Focus+Context Techniques for Pathology Tumor Assessment in Multivariate Image Data. IEEE Trans Vis Comput Graph 2022; 28:259-269. [PMID: 34606456] [PMCID: PMC8805697] [DOI: 10.1109/tvcg.2021.3114786] [Indexed: 05/20/2023]
Abstract
Inspection of tissues using a light microscope is the primary method of diagnosing many diseases, notably cancer. Highly multiplexed tissue imaging builds on this foundation, enabling the collection of up to 60 channels of molecular information plus cell and tissue morphology using antibody staining. This provides unique insight into disease biology and promises to help with the design of patient-specific therapies. However, a substantial gap remains with respect to visualizing the resulting multivariate image data and effectively supporting pathology workflows in digital environments on screen. We therefore developed Scope2Screen, a scalable software system for focus+context exploration and annotation of whole-slide, high-plex tissue images. Our approach scales to analyzing 100 GB images of 10⁹ or more pixels per channel, containing millions of individual cells. A multidisciplinary team of visualization experts, microscopists, and pathologists identified key image exploration and annotation tasks involving finding, magnifying, quantifying, and organizing regions of interest (ROIs) in an intuitive and cohesive manner. Building on a scope-to-screen metaphor, we present interactive lensing techniques that operate at single-cell and tissue levels. Lenses are equipped with task-specific functionality and descriptive statistics, making it possible to analyze image features, cell types, and spatial arrangements (neighborhoods) across image channels and scales. A fast sliding-window search guides users to regions similar to those under the lens; these regions can be analyzed and considered either separately or as part of a larger image collection. A novel snapshot method enables linked lens configurations and image statistics to be saved, restored, and shared with these regions. We validate our designs with domain experts and apply Scope2Screen in two case studies involving lung and colorectal cancers to discover cancer-relevant image features.
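The sliding-window search idea can be sketched in miniature: scan an image for the window with the lowest mean squared difference from a template region (the region under the lens). This is a pure-Python toy on a tiny grayscale grid, not Scope2Screen's implementation, which works on multi-channel gigapixel data.

```python
# Sketch: brute-force sliding-window search for the region most similar
# to a template patch, using mean squared error as the similarity score.

def mse(a, b):
    """Mean squared difference between two equally sized 2D patches."""
    n = len(a) * len(a[0])
    return sum((a[i][j] - b[i][j]) ** 2
               for i in range(len(a)) for j in range(len(a[0]))) / n

def best_match(image, template):
    """Return (row, col, score) of the window most similar to template."""
    th, tw = len(template), len(template[0])
    best = None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            window = [row[c:c + tw] for row in image[r:r + th]]
            score = mse(window, template)
            if best is None or score < best[2]:
                best = (r, c, score)
    return best

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 8],
            [7, 9]]
print(best_match(image, template))  # (1, 1, 0.0)
```

At whole-slide scale this brute-force scan would be replaced by optimized, often GPU-backed, search over image pyramids.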
19
Lange D, Polanco E, Judson-Torres R, Zangle T, Lex A. Loon: Using Exemplars to Visualize Large-Scale Microscopy Data. IEEE Trans Vis Comput Graph 2022; 28:248-258. [PMID: 34587022] [DOI: 10.1109/tvcg.2021.3114766] [Indexed: 06/13/2023]
Abstract
Which drug is most promising for a cancer patient? A new microscopy-based approach for measuring the mass of individual cancer cells treated with different drugs promises to answer this question in only a few hours. However, the analysis pipeline for extracting data from these images is still far from complete automation: human intervention is necessary for quality control of preprocessing steps such as segmentation, for adjusting filters, removing noise, and analyzing the result. To address this workflow, we developed Loon, a visualization tool for analyzing drug screening data based on quantitative phase microscopy imaging. Loon visualizes both derived data, such as growth rates, and imaging data. Since the images are collected automatically at a large scale, manual inspection of all images and segmentations is infeasible. However, reviewing representative samples of cells is essential, both for quality control and for data analysis. We introduce a new approach for choosing and visualizing representative exemplar cells that retain a close connection to the low-level data. By tightly integrating the derived data visualization capabilities with the novel exemplar visualization and providing selection and filtering capabilities, Loon is well suited for making decisions about which drugs are suitable for a specific patient.
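The exemplar idea can be sketched as picking the cells nearest chosen quantiles of a derived value, so a handful of representatives spans the distribution while each pick stays linked to its underlying record. The field names and quantile choices below are assumptions for illustration, not Loon's method.

```python
# Sketch: choose exemplar cells at chosen quantiles of a derived value
# (e.g. growth rate), keeping a link back to the underlying cell record.

def exemplars(cells, key, fractions=(0.0, 0.5, 1.0)):
    """Pick the cell nearest each requested quantile of `key`."""
    ranked = sorted(cells, key=lambda c: c[key])
    return [ranked[round(f * (len(ranked) - 1))] for f in fractions]

cells = [
    {"id": "c1", "growth_rate": 0.02},
    {"id": "c2", "growth_rate": 0.11},
    {"id": "c3", "growth_rate": 0.05},
    {"id": "c4", "growth_rate": 0.30},
    {"id": "c5", "growth_rate": 0.08},
]

# Slowest, median, and fastest-growing cells as exemplars:
print([c["id"] for c in exemplars(cells, "growth_rate")])  # ['c1', 'c5', 'c4']
```

Because each exemplar is a full record rather than an aggregate, its image and segmentation can still be pulled up for inspection.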
20
AlQuraishi M, Sorger PK. Differentiable biology: using deep learning for biophysics-based and data-driven modeling of molecular mechanisms. Nat Methods 2021; 18:1169-1180. [PMID: 34608321] [PMCID: PMC8793939] [DOI: 10.1038/s41592-021-01283-4] [Received: 01/13/2021] [Accepted: 08/27/2021] [Indexed: 02/08/2023]
Abstract
Deep learning using neural networks relies on a class of machine-learnable models constructed using 'differentiable programs'. These programs can combine mathematical equations specific to a particular domain of natural science with general-purpose, machine-learnable components trained on experimental data. Such programs are having a growing impact on molecular and cellular biology. In this Perspective, we describe an emerging 'differentiable biology' in which phenomena ranging from the small and specific (for example, one experimental assay) to the broad and complex (for example, protein folding) can be modeled effectively and efficiently, often by exploiting knowledge about basic natural phenomena to overcome the limitations of sparse, incomplete and noisy data. By distilling differentiable biology into a small set of conceptual primitives and illustrative vignettes, we show how it can help to address long-standing challenges in integrating multimodal data from diverse experiments across biological scales. This promises to benefit fields as diverse as biophysics and functional genomics.
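One primitive behind "differentiable programs" can be illustrated with forward-mode automatic differentiation via dual numbers: a hand-written mechanistic model then yields exact derivatives alongside its value, which is what makes it trainable by gradient methods. The toy saturation model below is an invented example, not one from the paper.

```python
# Sketch: forward-mode automatic differentiation with dual numbers.
# Any model written with +, * and / on Dual values propagates exact
# derivatives, the building block of a differentiable program.

class Dual:
    """Number carrying a value and its derivative d/dx."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

    def __truediv__(self, o):  # quotient rule
        o = self._lift(o)
        return Dual(self.val / o.val,
                    (self.dot * o.val - self.val * o.dot) / (o.val ** 2))
    def __rtruediv__(self, o):
        return self._lift(o).__truediv__(self)

def binding(x, kd=2.0):
    """Toy saturation curve f(x) = x / (kd + x) (invented example)."""
    return x / (kd + x)

x = Dual(2.0, dot=1.0)   # seed derivative dx/dx = 1
y = binding(x)
print(y.val, y.dot)      # 0.5 0.125  (f(2) and f'(2) = kd/(kd+x)^2)
```

Production systems use reverse-mode frameworks (e.g. JAX or PyTorch) for the same effect at scale, but the principle of carrying derivatives through domain equations is identical.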
Affiliation(s)
- Mohammed AlQuraishi
- Department of Systems Biology, Columbia University, New York, NY, USA
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, USA
- Peter K Sorger
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, USA
21
Kalra J, Baker J, Song J, Kyle A, Minchinton A, Bally M. Inter-Metastatic Heterogeneity of Tumor Marker Expression and Microenvironment Architecture in a Preclinical Cancer Model. Int J Mol Sci 2021; 22:6336. [PMID: 34199298] [PMCID: PMC8231937] [DOI: 10.3390/ijms22126336] [Received: 03/24/2021] [Revised: 05/25/2021] [Accepted: 06/09/2021] [Indexed: 11/30/2022]
Abstract
BACKGROUND Preclinical drug development studies rarely consider the impact of a candidate drug on established metastatic disease. This may explain why agents that are successful in subcutaneous and even orthotopic preclinical models often fail to demonstrate efficacy in clinical trials. It is reasonable to anticipate that sites of metastasis will be phenotypically unique, as each tumor will have evolved heterogeneously with respect to gene expression as well as the associated phenotypic outcome of that expression. The objective for the studies described here was to gain an understanding of the tumor heterogeneity that exists in established metastatic disease and use this information to define a preclinical model that is more predictive of treatment outcome when testing novel drug candidates clinically. METHODS Female NCr nude mice were inoculated with fluorescent (mKate), Her2/neu-positive human breast cancer cells (JIMT-1mKate), either in the mammary fat pad (orthotopic; OT) to replicate a primary tumor, or directly into the left ventricle (intracardiac; IC), where cells eventually localize in multiple sites to create a model of established metastasis. Tumor development was monitored by in vivo fluorescence imaging (IVFI). Subsequently, animals were sacrificed, and tumor tissues were isolated and imaged ex vivo. Tumors within organ tissues were further analyzed via multiplex immunohistochemistry (mIHC) for Her2/neu expression, blood vessels (CD31), as well as a nuclear marker (Hoechst) and the fluorescence (mKate) expressed by the tumor cells. RESULTS Following IC injection, JIMT-1mKate cells consistently formed tumors in the lung, liver, brain, kidney, ovaries, and adrenal glands. Disseminated tumors were highly variable when assessing vessel density (CD31) and tumor marker expression (mKate, Her2/neu). Interestingly, tumors which developed within an organ did not adopt a vessel microarchitecture mimicking that of the host organ, nor did the vessel microarchitecture appear comparable to that of the primary tumor. Rather, metastatic lesions showed considerable variability, suggesting that each secondary tumor is a distinct disease entity from a microenvironmental perspective. CONCLUSIONS The data indicate that more phenotypic heterogeneity exists in the tumor microenvironment of metastatic disease models than has been previously appreciated, and this heterogeneity may better reflect the metastatic cancer in patients typically enrolled in early-stage Phase I/II clinical trials. As others have suggested, models of established metastasis should be required preclinically as part of the anticancer drug candidate development process, and this may be particularly important for targeted therapeutics and/or nanotherapeutics.
Affiliation(s)
- Jessica Kalra
- Experimental Therapeutics, BC Cancer Agency, Vancouver, BC V5Z 1L3, Canada
- Applied Research Centre, Langara, Vancouver, BC V5Y 2Z6, Canada
- Department of Anesthesia, Pharmacology and Therapeutics, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Jennifer Baker
- Integrative Oncology, BC Cancer Agency, Vancouver, BC V5Z 1L3, Canada
- Justin Song
- Chemical and Biomolecular Engineering Department, Vanderbilt University, Nashville, TN 37235, USA
- Alastair Kyle
- Integrative Oncology, BC Cancer Agency, Vancouver, BC V5Z 1L3, Canada
- Andrew Minchinton
- Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Integrative Oncology, BC Cancer Agency, Vancouver, BC V5Z 1L3, Canada
- Marcel Bally
- Experimental Therapeutics, BC Cancer Agency, Vancouver, BC V5Z 1L3, Canada
- Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Nanomedicine Innovation Network, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
22
Somarakis A, Ijsselsteijn ME, Luk SJ, Kenkhuis B, de Miranda NFCC, Lelieveldt BPF, Höllt T. Visual cohort comparison for spatial single-cell omics-data. IEEE Trans Vis Comput Graph 2021; 27:733-743. [PMID: 33112747] [DOI: 10.1109/tvcg.2020.3030336] [Indexed: 06/11/2023]
Abstract
Spatially-resolved omics-data enable researchers to precisely distinguish cell types in tissue and explore their spatial interactions, enabling a deep understanding of tissue functionality. To understand what causes or exacerbates a disease and to identify related biomarkers, clinical researchers regularly perform large-scale cohort studies, requiring the comparison of such data at the cellular level. In such studies, with little a priori knowledge of what to expect in the data, explorative data analysis is a necessity. Here, we present an interactive visual analysis workflow for the comparison of cohorts of spatially-resolved omics-data. Our workflow allows the comparative analysis of two cohorts at multiple levels of detail, ranging from the simple abundance of the contained cell types, through complex co-localization patterns, to the individual comparison of complete tissue images. As a result, the workflow enables the identification of cohort-differentiating features, as well as of outlier samples, at any stage of the workflow. During the development of the workflow, we continuously consulted with domain experts. To show the effectiveness of the workflow, we conducted multiple case studies with domain experts from different application areas and with different data modalities.
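The first level of detail, comparing cell-type abundance between two cohorts, can be sketched as below. The sample data and the difference-of-means summary are invented for illustration and are not the paper's workflow.

```python
# Sketch: per-cohort mean fraction of each cell type, then the
# difference between cohorts as a simple cohort-differentiating summary.

def mean_abundance(cohort):
    """Average per-sample fraction of each cell type across a cohort."""
    totals, counts = {}, {}
    for sample in cohort:
        n = sum(sample.values())
        for cell_type, k in sample.items():
            totals[cell_type] = totals.get(cell_type, 0.0) + k / n
            counts[cell_type] = counts.get(cell_type, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

# Each sample: cell-type -> cell count (toy numbers).
healthy = [{"Tcell": 30, "Tumor": 10}, {"Tcell": 50, "Tumor": 10}]
disease = [{"Tcell": 10, "Tumor": 30}, {"Tcell": 10, "Tumor": 50}]

a, b = mean_abundance(healthy), mean_abundance(disease)
diff = {t: round(b[t] - a[t], 3) for t in a}
print(diff)  # {'Tcell': -0.583, 'Tumor': 0.583}
```

In the actual workflow such summaries are starting points; large differences or outlier samples would then be drilled into via co-localization patterns and the images themselves.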
23
Fassler DJ, Abousamra S, Gupta R, Chen C, Zhao M, Paredes D, Batool SA, Knudsen BS, Escobar-Hoyos L, Shroyer KR, Samaras D, Kurc T, Saltz J. Deep learning-based image analysis methods for brightfield-acquired multiplex immunohistochemistry images. Diagn Pathol 2020; 15:100. [PMID: 32723384] [PMCID: PMC7385962] [DOI: 10.1186/s13000-020-01003-0] [Received: 11/16/2019] [Accepted: 07/12/2020] [Indexed: 02/06/2023]
Abstract
BACKGROUND Multiplex immunohistochemistry (mIHC) permits the labeling of six or more distinct cell types within a single histologic tissue section. The classification of each cell type requires detection of the unique colored chromogens localized to cells expressing biomarkers of interest. The most comprehensive and reproducible method to evaluate such slides is to apply digital pathology and image analysis pipelines to whole-slide images (WSIs). Our suite of deep learning tools quantitatively evaluates the expression of six biomarkers in mIHC WSIs. These methods address the current lack of readily available methods to evaluate more than four biomarkers and circumvent the need for specialized instrumentation to spectrally separate different colors. The use case application for our methods is a study that investigates tumor-immune interactions in pancreatic ductal adenocarcinoma (PDAC) with a customized mIHC panel. METHODS Six different colored chromogens were utilized to label T-cells (CD3, CD4, CD8), B-cells (CD20), macrophages (CD16), and tumor cells (K17) in formalin-fixed paraffin-embedded (FFPE) PDAC tissue sections. We leveraged pathologist annotations to develop complementary deep learning-based methods: (1) ColorAE, a deep autoencoder which segments stained objects based on color; (2) U-Net, a convolutional neural network (CNN) trained to segment cells based on color, texture and shape; and (3) ColorAE:U-Net, ensemble methods that employ both ColorAE and U-Net. We assessed the performance of our methods using structural similarity and DICE score to evaluate segmentation results of ColorAE against traditional color deconvolution, and F1 score, sensitivity, positive predictive value, and DICE score to evaluate the predictions from ColorAE, U-Net, and the ColorAE:U-Net ensemble methods against pathologist-generated ground truth. We then used prediction results for spatial analysis (nearest neighbor).
RESULTS We observed that (1) the performance of ColorAE is comparable to traditional color deconvolution for single-stain IHC images (note: traditional color deconvolution cannot be used for mIHC); (2) ColorAE and U-Net are complementary methods that detect six different classes of cells with comparable performance; (3) combining ColorAE and U-Net into ensemble methods outperforms using either ColorAE or U-Net alone; and (4) the ColorAE:U-Net ensemble methods can be employed for detailed analysis of the tumor microenvironment (TME). We developed a suite of scalable deep learning methods to analyze six distinctly labeled cell populations in mIHC WSIs. We evaluated our methods and found that they reliably detected and classified cells in the PDAC tumor microenvironment. We also present a use case, wherein we apply the ColorAE:U-Net ensemble method across three mIHC WSIs and use the predictions to quantify all stained cell populations and perform nearest neighbor spatial analysis. Thus, we provide proof of concept that these methods can be employed to quantitatively describe the spatial distribution of immune cells within the tumor microenvironment. These complementary deep learning methods are readily deployable for use in clinical research studies.
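The nearest-neighbor spatial analysis step can be sketched as a brute-force distance computation between two labeled cell populations. The coordinates below are invented, and real pipelines would use a spatial index (e.g. a k-d tree) rather than the quadratic scan shown here.

```python
# Sketch: for each cell of one class, find the distance to the closest
# cell of another class (brute force, toy coordinates).

from math import dist

def nearest_neighbor_distances(from_cells, to_cells):
    """For each point in from_cells, distance to its closest to_cells point."""
    return [min(dist(p, q) for q in to_cells) for p in from_cells]

tumor_cells = [(0.0, 0.0), (10.0, 0.0)]
t_cells = [(3.0, 4.0), (10.0, 1.0), (50.0, 50.0)]

print(nearest_neighbor_distances(tumor_cells, t_cells))  # [5.0, 1.0]
```

Summary statistics over these distances (means, histograms per cell-type pair) are what characterize immune-cell proximity to tumor cells in the TME.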
Affiliation(s)
- Danielle J Fassler
- Department of Pathology, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Shahira Abousamra
- Department of Computer Science, Stony Brook University, 100 Nicolls Rd, Stony Brook, 11794, USA
- Rajarsi Gupta
- Department of Biomedical Informatics, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Chao Chen
- Department of Biomedical Informatics, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Maozheng Zhao
- Department of Computer Science, Stony Brook University, 100 Nicolls Rd, Stony Brook, 11794, USA
- David Paredes
- Department of Computer Science, Stony Brook University, 100 Nicolls Rd, Stony Brook, 11794, USA
- Syeda Areeha Batool
- Department of Biomedical Informatics, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Beatrice S Knudsen
- Department of Pathology, University of Utah, 2000 Circle of Hope, Salt Lake City, UT, 84112, USA
- Luisa Escobar-Hoyos
- Department of Pathology, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Department of Therapeutic Radiology, Yale University, 15 York Street, New Haven, CT, 06513, USA
- Kenneth R Shroyer
- Department of Pathology, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, 100 Nicolls Rd, Stony Brook, 11794, USA
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Joel Saltz
- Department of Biomedical Informatics, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
24
Rozenblatt-Rosen O, Regev A, Oberdoerffer P, Nawy T, Hupalowska A, Rood JE, Ashenberg O, Cerami E, Coffey RJ, Demir E, Ding L, Esplin ED, Ford JM, Goecks J, Ghosh S, Gray JW, Guinney J, Hanlon SE, Hughes SK, Hwang ES, Iacobuzio-Donahue CA, Jané-Valbuena J, Johnson BE, Lau KS, Lively T, Mazzilli SA, Pe'er D, Santagata S, Shalek AK, Schapiro D, Snyder MP, Sorger PK, Spira AE, Srivastava S, Tan K, West RB, Williams EH. The Human Tumor Atlas Network: Charting Tumor Transitions across Space and Time at Single-Cell Resolution. Cell 2020; 181:236-249. [PMID: 32302568] [PMCID: PMC7376497] [DOI: 10.1016/j.cell.2020.03.053] [Received: 12/18/2019] [Revised: 03/24/2020] [Accepted: 03/24/2020] [Indexed: 12/22/2022]
Abstract
Crucial transitions in cancer, including tumor initiation, local expansion, metastasis, and therapeutic resistance, involve complex interactions between cells within the dynamic tumor ecosystem. Transformative single-cell genomics technologies and spatial multiplex in situ methods now provide an opportunity to interrogate this complexity at unprecedented resolution. The Human Tumor Atlas Network (HTAN), part of the National Cancer Institute (NCI) Cancer Moonshot Initiative, will establish a clinical, experimental, computational, and organizational framework to generate informative and accessible three-dimensional atlases of cancer transitions for a diverse set of tumor types. This effort complements both ongoing efforts to map healthy organs and previous large-scale cancer genomics approaches focused on bulk sequencing at a single point in time. Generating single-cell, multiparametric, longitudinal atlases and integrating them with clinical outcomes should help identify novel predictive biomarkers and features as well as therapeutically relevant cell types, cell states, and cellular interactions across transitions. The resulting tumor atlases should have a profound impact on our understanding of cancer biology and have the potential to improve cancer detection, prevention, and therapeutic discovery for better precision-medicine treatments of cancer patients and those at risk for cancer.
Collapse
Affiliation(s)
- Aviv Regev
- Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA; Howard Hughes Medical Institute, Chevy Chase, MD 20815, USA; Koch Institute for Integrative Cancer Research, Department of Biology, MIT, Cambridge, MA 02139, USA.
- Philipp Oberdoerffer
- Division of Cancer Biology, National Cancer Institute, NIH, Rockville, MD 20850, USA
- Tal Nawy
- Computational and Systems Biology Program, Sloan Kettering Institute, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Anna Hupalowska
- Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Jennifer E Rood
- Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Orr Ashenberg
- Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Ethan Cerami
- Department of Data Sciences, Dana-Farber Cancer Institute, Boston, MA 02215, USA
- Robert J Coffey
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Emek Demir
- Department of Molecular and Medical Genetics, School of Medicine, Oregon Health & Science University, Portland, OR 97239, USA
- Li Ding
- Department of Medicine, McDonnell Genome Institute, and Siteman Cancer Center, Washington University in St. Louis, Saint Louis, MO 63108, USA
- Edward D Esplin
- Department of Genetics, Stanford School of Medicine, Stanford, CA 94305, USA
- James M Ford
- Department of Genetics, Stanford School of Medicine, Stanford, CA 94305, USA; Department of Medicine, Oncology Division, Stanford University School of Medicine, Stanford, CA 94305, USA
- Jeremy Goecks
- Computational Biology Program, Oregon Health and Science University, OR 97201, USA
- Sharmistha Ghosh
- Division of Cancer Prevention, National Cancer Institute, NIH, Rockville, MD 20850, USA
- Joe W Gray
- Center for Spatial Systems Biomedicine, Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97201, USA
- Justin Guinney
- Sage Bionetworks, Seattle, WA 98121, USA; Biomedical Informatics and Medical Education, University of Washington, Seattle, WA 98195, USA
- Sean E Hanlon
- Center for Strategic Scientific Initiatives, National Cancer Institute, NIH, Bethesda, MD 20892, USA
- Shannon K Hughes
- Division of Cancer Biology, National Cancer Institute, NIH, Rockville, MD 20850, USA
- E Shelley Hwang
- Department of Surgery, Duke University School of Medicine, Durham, NC 27710, USA; Women's Cancer Program, Duke Cancer Institute, Duke University, Durham, NC 27710, USA
- Christine A Iacobuzio-Donahue
- David M. Rubenstein Center for Pancreatic Cancer Research, Human Oncology and Pathogenesis Program, and Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Bruce E Johnson
- Department of Medical Oncology and Department of Medicine, Dana-Farber Cancer Institute and Brigham and Women's Hospital, 450 Brookline Avenue, Boston, MA 02215, USA
- Ken S Lau
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Tracy Lively
- Division of Cancer Treatment and Diagnosis, National Cancer Institute, NIH, Rockville, MD 20850, USA
- Sarah A Mazzilli
- Department of Medicine, Division of Computational Biomedicine, Boston University School of Medicine, Boston, MA 02118, USA
- Dana Pe'er
- Computational and Systems Biology Program, Sloan Kettering Institute, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Sandro Santagata
- Ludwig Center for Cancer Research and Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA 02115, USA; Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Alex K Shalek
- Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA; Institute for Medical Engineering and Science, Department of Chemistry, and Koch Institute for Integrative Cancer Research, MIT, Cambridge, MA 02139, USA; Ragon Institute of Massachusetts General Hospital, MIT and Harvard University, Cambridge, MA 02139, USA; Division of Health Sciences and Technology, Harvard Medical School, Boston, MA 02115, USA; Department of Immunology, Massachusetts General Hospital, Boston, MA 02114, USA
- Denis Schapiro
- Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA; Ludwig Center for Cancer Research and Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA 02115, USA
- Michael P Snyder
- Department of Genetics, Stanford School of Medicine, Stanford, CA 94305, USA
- Peter K Sorger
- Ludwig Center for Cancer Research and Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA 02115, USA
- Avrum E Spira
- Department of Medicine, Division of Computational Biomedicine, Boston University School of Medicine, Boston, MA 02118, USA; Johnson & Johnson, Cambridge, MA 02142, USA
- Sudhir Srivastava
- Division of Cancer Prevention, National Cancer Institute, NIH, Rockville, MD 20850, USA
- Kai Tan
- Division of Oncology and Center for Childhood Cancer Research, 4004 CTRB, Children's Hospital of Philadelphia, 3501 Civic Center Boulevard, Philadelphia, PA 19104, USA; Department of Pediatrics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Robert B West
- Department of Pathology, Stanford School of Medicine, Stanford, CA 94305, USA
- Elizabeth H Williams
- Department of Data Sciences, Dana-Farber Cancer Institute, Boston, MA 02215, USA; Present address: Foundation Medicine, Cambridge, MA 02141, USA